Analytical expressions for thermophysical properties of solid and liquid tungsten relevant for fusion applications

The status of the literature is reviewed for several thermophysical properties of pure solid and liquid tungsten which constitute input for the modelling of intense plasma-surface interaction phenomena that are important for fusion applications. Reliable experimental data are analyzed for the latent heat of fusion, the electrical resistivity, the specific isobaric heat capacity, the thermal conductivity and the mass density from room temperature up to the boiling point of tungsten, as well as for the surface tension and the dynamic viscosity across the liquid state. Analytical expressions of high accuracy are recommended for these thermophysical properties, involving a minimum degree of extrapolation. In particular, extrapolations were only required for the surface tension and the viscosity.

I. INTRODUCTION

The survivability of the divertor during prolonged repetitive exposures to harsh edge plasma conditions as well as its longevity deep into the nuclear phase are essential for the success of the ITER project and impose stringent requirements on material selection [1]. After the decision that ITER will begin operations with a full tungsten divertor [2], R&D activities worldwide have focused on assessing various sources of mechanical and structural degradation of tungsten plasma-facing components in the hostile fusion reactor environment [3][4][5]: neutron irradiation effects on mechanical properties, helium irradiation and hydrogen retention effects on the microstructure, thermal shock resistance and thermal fatigue resistance. The dependence of key mechanical properties (ductile to brittle transition temperature, yield strength, fracture toughness) on the fabrication history, the alloying or impurity elements, the metallurgical process and the grain structure is indicative of the complex nature of such investigations [6,7]. As a consequence, the ITER Materials Properties Handbook puts strong emphasis on documenting the mechanical properties of tungsten [8].

Another phenomenon that is crucial for the lifetime of the tungsten divertor is melt layer motion during off-normal or transient events, namely unmitigated edge localized modes, vertical displacement events and major disruptions [2]. Melt layer motion leads to strong modifications of the local surface topology and thus to degradation of power-handling capabilities, but can also lead to plasma contamination by high-Z droplets in the case of splashing [9,10]. The numerical modelling of melt motion is based on coupling the Navier-Stokes equations for the liquid metal with the heat conduction equation as well as the current continuity equation and supplementing the system with appropriate boundary conditions dictated by the incident plasma [11][12][13]. These are the fundamental equations solved in codes such as MEMOS [11,12], where the temperature dependence of the viscosity, surface tension and other thermophysical properties of liquid tungsten constitutes a necessary input. In addition, since re-solidification determines the onset but also the arrest of macroscopic motion, the thermophysical properties of solid tungsten at elevated temperatures and their behavior at the solid-liquid phase transition are also necessary input.
Unfortunately, the ITER Materials Properties Handbook does not provide any information on the thermophysical properties of liquid tungsten and its recommended description of some thermophysical properties of solid tungsten at elevated temperatures is not based on state-of-the-art experimental data [8]. It should also be mentioned that these properties are also essential input for the modelling of tungsten dust transport with codes such as DUSTT [14] and MIGRAINe [15] (since tungsten dust should promptly melt in ITER-like edge plasmas and thermionic emission at the liquid phase plays a dominant role in its energy budget) and for the modelling of the interaction of transient plasmas with adhered tungsten dust [16] (since wetting is determined by the competition between the spreading and re-solidification rates).

This work is focused on reviewing state-of-the-art measurements of thermophysical properties of pure tungsten from room temperature up to the boiling point. Complications arising in fusion devices due to strong magnetic fields, intense plasma fluxes, impurity alloying and neutron irradiation are also discussed. The thermophysical properties of interest are the latent heat of fusion, the electrical resistivity, the specific isobaric heat capacity, the thermal conductivity and the mass density (solid and liquid phase) as well as the surface tension and the dynamic viscosity (liquid phase). The objective is to identify and critically evaluate reliable experimental datasets in order to propose accurate analytical expressions for the temperature dependence of these quantities that will standardize their description in the multiple heating, melt layer motion and dust transport codes developed by the fusion community. It has been possible to provide accurate analytical expressions for most properties based solely on experimental data and without the need for any extrapolations. The only exceptions are the surface tension and viscosity of liquid tungsten, where wide extrapolations had to be carried out beyond the melting point, since the only experimental sources on the temperature dependence referred to the under-cooled phase. These extrapolations are based on established empirical expressions that are accurate for non-refractory liquid metals and were cross-checked with rigorous constraints imposed by statistical mechanics. Considering that temperature gradients of the surface tension can drive thermo-capillary flows and that viscosity is responsible for melt motion damping, measurements need to be carried out in the unexplored temperature range, e.g. with levitating drop methods in ground-based laboratories [17] or in microgravity [18].

II. THERMOPHYSICAL PROPERTIES OF TUNGSTEN

A. The latent heat of fusion

The difference between the specific enthalpy of the liquid and solid state at the melting phase transition yields the latent heat of fusion. In Table I, the W latent molar heat of fusion is provided as measured by dedicated experiments [19][20][21][22][23][24][25][26][27][28][29][30][31] or as recommended by theoretical investigations [32][33][34] and material handbooks [35][36][37][38][39]. We point out that the measurement uncertainties in the determination of the heat of fusion with the resistive pulse heating (or dynamic pulse calorimetry) technique are around 10% [40]. Overall, given these uncertainties, the measurements are well clustered around ∆h f ≃ 50 kJ/mol, with 52.3 kJ/mol nearly exclusively cited in material handbooks and modelling works.
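As a practical aside, heat transfer codes typically require the latent heat per unit mass rather than per mole; a minimal conversion sketch is given below, assuming the standard molar mass of tungsten M = 183.84 g/mol (the 52.3 kJ/mol figure is the handbook value quoted above).

```python
# Minimal sketch: convert the molar latent heat of fusion to a specific
# (per-mass) value, as typically required by heat transfer codes.
M_W = 183.84e-3         # molar mass of tungsten [kg/mol], assumed standard value
dh_f_molar = 52.3e3     # latent heat of fusion [J/mol], handbook value quoted above

dh_f_specific = dh_f_molar / M_W   # [J/kg]
print(f"Latent heat of fusion: {dh_f_specific/1e3:.1f} kJ/kg")  # ~284 kJ/kg
```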
Some older literature sources recommend a very small value ∆h f ≃ 35.3 kJ/mol, see for instance Refs. [41,42], which does not seem to be supported by any measurements. Most probably, this value stems from a semi-empirical relation known as Richard's rule [43,44]. By equating the liquid state with the solid state specific Gibbs free energy g = h − T s at the melting point, we acquire ∆h f = T m ∆s f . Richard's rule is based on positional disorder arguments and empirical observations; it states that the entropy of fusion is a quasi-universal constant for all metals with an approximate value ∆s f ≃ R, where R = N A k B is the ideal gas constant whose numerical value is R = 8.314 J/(mol·K). This rule allows the calculation of ∆h f with knowledge of the melting temperature T m only. For tungsten, we have T m = 3695 K, which translates to 30.72 kJ/mol. Modified versions of Richard's rule take into account the average value of the entropy of fusion for bcc and fcc metals, which is ∆s f ≃ 1.15R [33] and leads to the inaccurate prediction 35.3 kJ/mol. In fact, tungsten is a well-known exception to this entropy rule along with some semi-metals (antimony, bismuth).

B. The electrical resistivity

Significance. The role of the electrical resistivity in heat transfer and especially melt motion problems is indirect but can be crucial: (i) It is a key quantity in the determination of the bulk replacement current density J , i.e. the current that flows through the conductors as a response to thermionic currents emitted or non-ambipolar plasma currents incident at the surface, which leads to a J × B force density that is believed to drive macroscopic melt layer motion. This is better illustrated by considering the simplest stationary unmagnetized case, where the replacement current is fully described by the steady state continuity equation ∇ · J = 0 and the electrostatic condition ∇ × E = 0 [45]. For the isotropic tungsten, Ohm's law becomes E = ρ el J and the irrotational equation can be rewritten as ρ el (∇ × J ) + (∇ρ el ) × J = 0 or, by using the chain rule, as ρ el (∇ × J ) + (∂ρ el /∂T )(∇T × J ) = 0. Thus, the temperature dependence of the electrical resistivity is responsible for the second term that can have a significant effect, since sharp temperature gradients are generated by the localized intra-ELM heat fluxes. (ii) It is proportional to the volumetric resistive heating caused by the replacement current that is described by the Joule expression ρ el |J | 2 . (iii) For metals, it is inversely proportional to the thermal conductivity, see subsection II D for details.

Solid tungsten. In 1984, Desai and collaborators performed the analysis of all 201 experimental datasets then available for the resistivity of tungsten [46]. A complete dataset covering the temperature range from the neighbourhood of the absolute zero up to 5000 K was synthesized from the most reliable measurements over different temperature intervals. For temperatures below the melting point, we shall rely entirely on their analysis. In particular, we shall focus on the temperature range 100 < T (K) < 3695. For the purpose of numerical manipulation, polynomial expressions were employed to acquire analytical fits for the electrical resistivity. The Desai fit reads as [46] ρ el = where ρ el is measured in 10 −8 Ωm or in µΩcm. Note that the fitting expression proposed by Desai is continuous at its branch points.
The following remarks should be explicitly pointed out: (i) The uncertainty in the recommended values employed for the fit is estimated to be ±5% below 100 K, ±3% from 100 to 300 K, ±2% from 300 to 2500 K, ±3% from 2500 up to 3600 K and ∼ ±5% in the liquid region. (ii) The recommended polynomial fits do not necessarily imply a recommendation for the temperature derivative of the electrical resistivity. (iii) A large portion of the experimental datasets analyzed by Desai concern mono-crystalline specimens and often the orientation of the single crystal is not even mentioned. It can be theoretically expected that the resistivity differences between mono-crystalline and polycrystalline tungsten are insignificant, because of the bcc tungsten structure. In fact, this has been observed by Desai by inspecting the data.

The synthesized Desai dataset was revisited by White and Minges in 1997 [47]. These authors fitted a fourth-order polynomial to the recommended values (corrected for thermal expansion) in the range 100 < T (K) < 3600. The White-Minges fit reads as [47]

ρ el = −0.9680 + 1.9274 × 10 −2 T + 7.8260 × 10 −6 T 2 − 1.8517 × 10 −9 T 3 + 2.0790 × 10 −13 T 4 , 100 K ≤ T ≤ 3600 K ,

where ρ el is measured in 10 −8 Ωm or in µΩcm. This polynomial fit is characterized by a 0.2% rms deviation as well as a maximum deviation of +0.6% at 150 K and −0.5% at 400 K. Finally, in the MIGRAINe dust dynamics code, the resistivity is also an input, since it is needed for the permittivity model that is employed in the Mie calculation of the emissivity [15]. A polynomial fit has been employed in the MIGRAINe code that is similar to the White-Minges expression. The MIGRAINe fit reads as [48] ρ el = +0.000015 + 1.52 where again ρ el is measured in 10 −8 Ωm or in µΩcm. A comparison between the resistivities and the resistivity temperature derivatives stemming from the three different fits can be found in figure 1. The deviations between the different fits are very small, also for the temperature derivative. It is preferable though that the White-Minges fit is employed in future applications and extrapolated up to the actual melting point of 3695 K. The justification for the choice of this fit will be provided in the following paragraph.

Discontinuity at the melting point. The electrical resistivity of all elements has a discontinuity at the melting point. For most liquid metals ρ l el > ρ s el but there are a few exceptions [49]. Reliable measurements of the W electrical resistivity asymptotically before and after the melting point have been outlined in Table II. Their mean values are ρ s el ≃ 121 µΩcm and ρ l el ≃ 136 µΩcm. They are very close to the measurements of Seydel & Fucke [50], whose measurements we are going to adopt not only for the discontinuity but also for the liquid state. The extrapolated value of the Desai fit is ρ s el ≃ 119 µΩcm, the extrapolated value of the White-Minges fit is ρ s el ≃ 122 µΩcm and the extrapolated value of the MIGRAINe fit is ρ s el ≃ 122 µΩcm. Therefore, we conclude that the White-Minges fit (but also the MIGRAINe fit) can be extrapolated from 3600 K to 3695 K with a negligible error.

Liquid tungsten. The electrical resistivity of elemental liquid metals generally exhibits two tendencies [46,49,50]: (i) a monotonic increase beyond the melting point at a much slower pace than the solid state increase (e.g.
refractory metals such as Ti, V, Mo), (ii) a very slow decrease right after the melting point followed by an increase again at a much slower pace than the solid state increase (e.g. the low melting point Zn). Tungsten belongs to the second group [50]. The experimental results have been fitted with a second-order polynomial. The Seydel-Fucke fit reads as where ρ el is measured in 10 −8 Ωm. We point out that there are some uncertainties in the temperature measurements due to the lack of data for the temperature dependence of the liquid tungsten emissivity. A constant emissivity has been assumed across the liquid phase, which can be expected to translate to a 5% T-uncertainty near the melting point and a 10% T-uncertainty close to 6000 K. On the other hand, the uncertainties in the resistivity measurements should be 5 − 6%. To our knowledge, the only alternative analytical expression for the resistivity of liquid tungsten has been provided by Wilthan et al. [51], see also Refs. [52,53]. The experiments were performed from 423 K to 5400 K and a polynomial fit was employed (including expansion effects). The Wilthan-Cagran-Pottlacher fit reads as [51,52]

ρ el = 231.3 − 4.585 × 10 −2 T + 5.650 × 10 −6 T 2 , T ≥ 3695 K ,

where again ρ el is measured in 10 −8 Ωm or in µΩcm. From figure 2, it is evident that the analytical fits are nearly identical.

TABLE II: The W electrical resistivity at the solid-liquid phase transition; the values from the solid side ρ s el = ρ el (T − m ) and the liquid side ρ l el = ρ el (T + m ), as well as the discontinuity magnitude ∆ρ el = ρ l el − ρ s el . The first two datasets [19,20] have been corrected for thermal expansion effects following Ref. [46].

Recommended description. The analytical description of the W electrical resistivity consists of employing the White-Minges fit in the temperature range 100 < T (K) < 3695 and the Seydel-Fucke fit in the temperature range 3695 < T (K) < 6000. The W electrical resistivity is illustrated in figure 3. It is worth pointing out that the relative magnitude of the discontinuity of the resistivity at the liquid-solid phase transition is very small, whereas the relative magnitude of the discontinuity of the resistivity temperature derivative is very large (notice also the sign reversal). Finally, we also note that, in the ITER database, a cubic polynomial expression is recommended for the temperature range from 300 to 3300 K [8]. This expression is nearly identical to the Desai, White-Minges and MIGRAINe fits for solid tungsten.

C. The specific isobaric heat capacity

Solid tungsten. The fourth and last edition of the NIST-JANAF Thermochemical Tables was published in 1998, but the tungsten data were last reviewed in June 1966 [54]. Measurements from 12 different sources were employed that were published from 1924 up to 1964. Only four of these datasets extend to temperatures beyond 2000 K, whereas five datasets are exclusively focused below the room temperature. The NIST webpage provides a Shomate equation fit in the temperature intervals 298 < T (K) < 1900 and 1900 < T (K) < 3680. The NIST fit reads as [55] where c p is measured in J/(mol K). In 1997, White and Minges [47] revisited an earlier synthetic dataset of recommended values [56]. In the range above room temperature, eleven datasets (dating up to 1994) were selected.
The White-Minges fit reads as [47]

c p = 21.868372 + 8.068661 × 10 −3 T − 3.756196 × 10 −6 T 2 + 1.075862 × 10 −9 T 3 + 1.406637 × 10 4 /T 2 ,

where c p is measured in J/(mol K). This fit is characterized by a 1.1% rms deviation; the deviation from the mean is generally less than 1% below 1000 K and less than 2.5% above 1000 K. We point out that there are two misprints in the fitting expression as quoted in the original work [47]. As illustrated in figure 4a, the two expressions begin to strongly diverge above 2900 K; the high temperature measurements employed in the NIST fit are far less reliable.

Liquid tungsten. Measurements on free-electron-like elemental metals with low melting points [43] as well as recent experiments on elemental transition metals [30,53] indicate that the enthalpy of liquid metals increases nearly linearly with the temperature over a wide range. In the case of liquid tungsten, the literature consensus is also that the enthalpy at constant pressure is a linear function of the temperature. This implies a constant isobaric heat capacity, since (∂H/∂T ) P = C p . Thus, also the specific isobaric heat capacity c p = ∂C p /∂m should be constant. However, there is a disagreement concerning the exact value: (i) The NIST-JANAF recommended value is c p = 35.564 J/(mol K). It is very outdated, being based on experiments that were carried out prior to 1961, i.e. many years before the dynamic pulse calorimetry or levitation calorimetry methods were developed. Unfortunately, this value is quoted in material property handbooks [35]. (ii) More reliable measurements provide values that are clustered around c p = 52 J/(mol K). We have c p = 51.8 J/(mol K) [21], c p = 57.0 J/(mol K) [22], c p = 55.1 J/(mol K) [25], c p = 48.2 J/(mol K) [27], c p = 56.1 J/(mol K) [28], c p = 52.9 J/(mol K) [30], c p = 53.7 J/(mol K) [31]. Such deviations are justified in view of the fact that c p is not directly obtained by the measurements but after post-processing (graphical determination from the slope of the enthalpy versus the temperature trace) and thus is subject to an uncertainty of around 10% [40]. (iii) To our knowledge, the most contemporary experiments are those performed by Wilthan et al. [51] in 2005, who performed measurements up to 5400 K and found a constant liquid W value c p = 51.3 J/(mol K), which we shall adopt.

Recommended description. (i) A complete analytical description of the tungsten specific isobaric heat capacity can be constructed by combining the White-Minges fit in the temperature range 300 < T (K) < 3695 and the constant value c p = 51.3 J/(mol K) in the temperature range 3695 < T (K) < 6000. See also figure 4b. This implies that the White-Minges fit needs to be extrapolated in the temperature range 3400 < T (K) < 3695. This leads to c s p ≃ 54.7 J/(mol K) and thus to ∆c p ≃ 3.4 J/(mol K). However, as can be observed in figure 4a, the heat capacity starts rapidly increasing at high temperatures, which implies that any extrapolation can lead to significant errors. (ii) Wilthan et al. have also provided an analytical fit for the tungsten specific enthalpy in the range 2300 < T (K) < 3687 [51]. Their fit reads as h(T ) = 83.342 + 0.011T + 3.576 × 10 −5 T 2 (kJ/kg). It would certainly be preferable for the heat capacity to be calculated from the local slopes of the experimental data, but here we have to differentiate the above fitting expression, which yields c p = 11 + 7.152 × 10 −2 T [J/(kg K)] or c p = 2.022 + 1.315 × 10 −2 T [J/(mol K)].
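As a quick consistency check of the above unit conversion, the sketch below differentiates the quoted Wilthan specific enthalpy fit and converts the result to molar units, assuming the standard molar mass of tungsten M = 183.84 g/mol.

```python
# Minimal sketch: differentiate the Wilthan et al. specific enthalpy fit,
# h(T) = 83.342 + 0.011*T + 3.576e-5*T**2 [kJ/kg], and convert the resulting
# specific heat to molar units. M_W = 183.84 g/mol is assumed here.
M_W = 183.84e-3   # molar mass of tungsten [kg/mol]

def cp_specific(T):
    """dh/dT in J/(kg K), from the quoted enthalpy fit (valid ~2300-3687 K)."""
    return 1e3 * (0.011 + 2.0 * 3.576e-5 * T)   # kJ/(kg K) -> J/(kg K)

def cp_molar(T):
    """Specific isobaric heat capacity in J/(mol K)."""
    return cp_specific(T) * M_W

T_m = 3695.0
print(cp_molar(T_m))   # ~50.6 J/(mol K), matching the value quoted in the text
```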
Therefore, we have c s p ≃ 50.6 J/(mol K) and thus ∆c p ≃ −0.7 J/(mol K). (iii) Both results are physically acceptable; at the melting point, the difference in the heat capacity of metals between the solid and the liquid phases is rather small and it can be of either sign [43,57]. (iv) In their common range of validity, the fits agree exceptionally well, see figure 5a, but they start diverging at both interval endpoints. It is preferable to avoid any extrapolations and employ both fits. We shall first calculate their highest temperature intersection point, which is T ≃ 3080 K. This allows us to connect the two fitting expressions in a continuous manner. The recommended description has the form where c p is again measured in J/(mol K). The recommended analytical description is illustrated in figure 5b.

Comparison with the fusion literature. In the ITER database, a quadratic polynomial expression is recommended which is valid in the range 273 − 3100 K [8]. It originates from fitting to a synthetic dataset, whose high temperature part is heavily based on measurements provided in the classic 1971 compendium by Touloukian [58]. As illustrated in figure 6a, the ITER recommendation is outdated. The underestimations of the heat capacity start from 2200 K and monotonically increase up to 3100 K. In the extrapolated range 3100 − 3695 K, the situation becomes progressively worse with the underestimation reaching 40% close to the phase transition. In the MEMOS code, a dataset based on Touloukian's compendium is implemented for interpolations in the solid state, whereas the constant NIST-JANAF value of c p = 35.564 J/(mol K) is employed for the liquid state [11]. As evident from figure 6b, in MEMOS, the heat capacity is underestimated from 1700 K with the deviations approaching ∼ 40% above ∼ 3000 K and across the entire liquid state. Consequences: underestimation of the heat capacity translates to overestimation of the temperature in the MEMOS simulations compared to the experiments, which could be erroneously attributed to a decreased heat flux incidence from the inter-ELM and intra-ELM plasma. Furthermore, this implies an overestimation of the melt layer depth and a premature initiation of bulk melting during prolonged exposures.

D. The thermal conductivity

Preliminaries. (i) In condensed matter, heat transfer is mediated by the collisional transport of valence electrons and lattice waves. In metals, the electron contribution dominates over the phonon contribution (which is limited by Umklapp processes) with the exception of samples with high impurity concentration at very low temperatures [59,60]. Due to the fact that the valence electrons are responsible for both charge and heat transfer in metals, a proportionality between the thermal conductivity and the electrical conductivity can be expected. This is expressed by the so-called Wiedemann-Franz law that can be derived within Sommerfeld's free-electron theory and the Lorentz gas approximation; it reads as k = [π 2 k B 2 /(3e 2 )] (T /ρ el ) [59][60][61]. The term in brackets is known as the Lorenz number and its nominal value is L 0 = 2.443 × 10 −8 WΩK −2 . (ii) The resistance to heat transfer by electrons originates from collisions with phonons and collisions with atomic impurities, crystal boundaries and lattice imperfections. The coupling between these collisional contributions is limited, which implies an additivity that is expressed by Matthiessen's rule.
However, with the exception of extreme cases where the impurity/imperfection concentration is very large, electron-phonon collisions dominate already from room temperature [60]. Thus, for our temperature range of interest, the thermal conductivity can be expected to be weakly dependent on crystalline structure details.

Solid tungsten. In 1972, Ho, Powell and Liley provided recommended and estimated thermal conductivity values for all elements with atomic numbers up to Z = 105 [62,63]. These recommended datasets were synthesized for 82 elements after the careful analysis of 5200 different sets of experimental measurements. Their recommended dataset for tungsten will not be employed for the determination of the fitting expression but will be employed for comparison with our recommended treatment. In 1984, Hust and Lankford critically analyzed all literature data on the thermal conductivity of four reference metals (aluminium, copper, iron, tungsten) for temperatures up to melting as well as provided analytical fits based on theoretical descriptions [64]. Their analysis was later closely followed by White and Minges [47]. Intricate details of their analysis and, in particular, their utilization of the residual resistivity ratio will not be discussed here, since they are important for the low temperature part of the thermal conductivity (≲ 100 K), which is not relevant for fusion applications. They utilized 13 datasets for their fit, which contain experimental results from 2 K up to 3000 K (only four datasets contained measurements above 2000 K). The basic ingredients of the Hust-Lankford fit for tungsten are the electron-defect interaction term W o (∝ T −1 ), the electron-phonon interaction term W i (approximately ∝ T 2 ), the interaction coupling term W io (nearly zero for tungsten) and the mathematical residual deviation term W c . These terms are combined to provide the thermal conductivity in a manner reminiscent of Matthiessen's rule. The analytical expressions and their connection to the thermal conductivity read as [64] The constant β has been chosen to correspond to a residual resistivity ratio of 300, whereas the P i parameters were determined by least square fits of the combined dataset. Their numerical values are [64]

β = 0.006626 , P 1 = 31.70 × 10 −8 , P 2 = 2.29 , P 3 = 541.3 , P 4 = −0.22 , P 5 = 69.94 , P 6 = 3.557 .

Surprisingly, a comparison of the fit with the tabulated values reveals deviations below 90 K. This can either originate from misprints in the residual deviation W c or from improper rounding-off of the least square coefficients. Since these deviations lie well below our temperature range of interest, we have not pursued this issue further. For completeness, the functional form of the tungsten thermal conductivity according to the Hust-Lankford fit is illustrated in figure 7. The plot covers the full temperature range of validity, 2 < T (K) < 3000, but the fit will only be utilized in the temperature range 300 < T (K) < 3000. In the latter range, the comparison with the Ho-Powell-Liley recommended dataset reveals a remarkable agreement. On the other hand, in the low temperature range, there are very strong deviations below 40 K (exceeding by far the selected plot scale). The emergence of these deviations is theoretically expected; they are a direct consequence of the electron-defect interaction term, which becomes dominant at very low temperatures and is a very sensitive function of the sample purity [59].
The Hust-Lankford fitting function is relatively cumbersome for numerical simulations. Its complexity stems from the low temperature maximum of the thermal conductivity, whose position lies well below fusion regimes of interest. An alternative empirical expression, referred to in the following as the modified Hust-Lankford fit, has been found by digitizing the Hust-Lankford fitting function with sampling steps of 50 K from 300 K to 3700 K and by least squares fitting the emerging dataset to the Shomate equation.

Liquid tungsten. (i) The direct measurement of the thermal conductivity of liquid metals such as tungsten is limited due to the increasing importance of convective and radiative heat transfer [65]. (ii) Experimental techniques that measure the thermal diffusivity α = k/(ρ m c p ) can clearly lead to the evaluation of the thermal conductivity [65]. However, post-processing requires the simultaneous knowledge of the mass density and the heat capacity and the measurement uncertainty can be large. (iii) Experimental techniques that measure the electrical resistivity ρ el can also lead to the evaluation of the thermal conductivity [40,49,65,66]. The connecting relation is the Wiedemann-Franz law, k = L 0 T /ρ el with L 0 = 2.443 × 10 −8 WΩK −2 the nominal Lorenz number. In this manner, the abundance of liquid tungsten resistivity data, that have been acquired by dynamic pulse calorimetry, can be translated to thermal conductivity data. The use of the ρ el (T ) fitting expressions with the Wiedemann-Franz law can lead to the propagation of numerical errors. Therefore, when possible, it is preferable that first each resistivity data point is translated to thermal conductivity and that afterwards curve fitting takes place. This procedure has been followed for the Seydel and Fucke measurements [50]. In the original publication, the authors only provide the fitting expression for the resistivity, but their resistivity data have been presented in graphical form in Ref. [51]. The data have been extracted with the aid of software; they are represented by the average of three different extractions in order to avoid errors due to axis mismatch. The measurements consist of 13 datapoints from the melting temperature up to 6000 K and have been fitted with a quadratic polynomial. The Seydel-Fucke fit reads as where k is measured in W/(m K). The mean value of the absolute relative fitting error is 0.25%, see also figure 8a. Let us compare with the measurements of Pottlacher from melting up to 5000 K [67]. The Pottlacher fit reads as [67]

k = 6.24242 + 0.01515 T , 3695 K ≤ T ≤ 5000 K ,

where k is measured in W/(m K). We point out that typical uncertainties in the indirect determination of the thermal conductivity with dynamic pulse calorimetry are ∼ 12% [40,66]. The two fitting functions are plotted in figure 8b, in their common domain of definition. The deviations are acceptable, being < 7%. Moreover, we note that the Seydel-Fucke experiments are in better agreement with other recent measurements [68]. Finally, it is worth mentioning that Ho-Powell-Liley provide provisional values for the thermal conductivity of tungsten over its entire liquid range, from the melting up to the critical point [62,63]. These values were estimated with the phenomenological theory of Grosse, which is based on an empirical hyperbolic relation for the electrical conductivity of liquid metals [69,70]. As illustrated in figure 8a and expected due to the oversimplified theoretical analysis, these provisional values are not accurate.
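To illustrate the Wiedemann-Franz route numerically, the sketch below converts a liquid-phase resistivity fit to a thermal conductivity and compares it against the quoted Pottlacher expression. Since the Seydel-Fucke resistivity polynomial is not reproduced in this text, the Wilthan-Cagran-Pottlacher liquid resistivity fit from Sec. II B is used as a stand-in; the choice of fit and the comparison temperatures are illustrative only.

```python
# Minimal sketch: thermal conductivity of liquid W from the Wiedemann-Franz law,
# k = L0 * T / rho_el, using the Wilthan-Cagran-Pottlacher liquid resistivity fit
# as a stand-in for the Seydel-Fucke data (whose polynomial is not quoted here).
L0 = 2.443e-8   # nominal Lorenz number [W Ohm K^-2]

def rho_el_liquid(T):
    """Liquid W resistivity [Ohm m], Wilthan et al. fit (T >= 3695 K), quoted in 1e-8 Ohm m."""
    return (231.3 - 4.585e-2 * T + 5.650e-6 * T**2) * 1e-8

def k_wiedemann_franz(T):
    """Thermal conductivity [W/(m K)] from the Wiedemann-Franz law."""
    return L0 * T / rho_el_liquid(T)

def k_pottlacher(T):
    """Pottlacher fit for liquid W [W/(m K)], 3695 K <= T <= 5000 K."""
    return 6.24242 + 0.01515 * T

for T in (3695.0, 4500.0, 5000.0):
    print(T, round(k_wiedemann_franz(T), 1), round(k_pottlacher(T), 1))
```

At these temperatures the two estimates differ by only a few percent, consistent with the deviations quoted above.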
Recommended description. In order to complete the description, it is necessary to verify that the extrapolation of the modified Hust-Lankford fit in the temperature range 3000 < T (K) < 3695 is viable. (i) We have confirmed that the extrapolated values lie very close to the Ho-Powell-Liley recommended dataset [62,63], which features seven data-points in this range, see figure 9a. (ii) We have performed a comparison with the thermal conductivity resulting from the combination of the White-Minges fit for the electrical resistivity [47] with the Wiedemann-Franz law. The agreement was satisfactory. We also note that the two curves overlap when employing L eff = 1.

Comparison with the fusion literature. In the ITER database, a cubic polynomial expression is recommended which is valid in the range 273 − 3653 K [8]. It originates from fitting to a synthetic dataset, whose high temperature part is heavily based on the recommended dataset provided in the classic 1970 compendium by Touloukian [71]. As illustrated in figure 10a, the ITER fit agrees well with our recommended description from room temperature up to the melting point. However, the ITER fit is characterized by two rather unphysical inflection points; the local sign switching of ∂k/∂T might influence thermal modelling due to the ∇ · (k∇T ) term in the heat conduction equation. In the MEMOS code, Touloukian's recommended dataset is implemented for interpolations in the solid state, whereas the Ho-Powell-Liley provisional dataset is implemented for interpolations in the liquid state [11]. As evident from figure 10b and previous comparisons, the thermal conductivity of solid tungsten is accurately described, while it is mainly underestimated for liquid tungsten. The deviations increase towards the boiling point but never exceed 20%.

E. The mass density

Solid tungsten. The analysis of White and Minges is based on a synthetic dataset constructed from eleven sets of measurements above 300 K and three sets of measurements below 300 K [47]. They provide a least-squares polynomial fit for the linear expansion coefficient α l = (1/l 0 )(dl/dT ), where l 0 is the length measured at room temperature (T 0 = 293.15 K), that is valid from 300 K up to 3500 K. The linear expansion coefficient fit reads as The normalized linear dimension fit reads as In the case of isotropic thermal expansion for a cubic metal such as tungsten, we have V /V 0 = (l/l 0 ) 3 for the volume expansion. The specific volume fit reads as Finally, the dependence of the mass density of solid tungsten on the temperature can be evaluated by employing ρ m0 = 19.25 g cm −3 for the room temperature mass density and V /V 0 = ρ m0 /ρ m as imposed by mass conservation. The White-Minges fit reads as where ρ m is measured in g cm −3 .

Liquid tungsten. So far we have employed the Seydel and Fucke measurements [50] for the electrical resistivity and the thermal conductivity of liquid tungsten. It would be consistent to employ the volume expansion data originating from the same experimental group, provided of course that they are reliable. (i) Seydel and Kitzel have provided thermal volume expansion data for five refractory metals (Ti, V, Mo, Pd, W) from their melting up to their boiling point [72]. They have successfully fitted the specific volume of tungsten to a quadratic polynomial. The Seydel-Kitzel fit reads as where V 0 is the tungsten specific volume at room temperature. It is worth noting that the Seydel-Kitzel fit has been singled out as the recommended expression in specialized reviews [73].
(ii) Hixson and Winkler have measured the specific volume of liquid tungsten in the range 3695 ≤ T (K) ≤ 5700 [27]. They have provided linear expressions for the specific volume as a function of the enthalpy and for the enthalpy as a function of the temperature. Combining their expressions, we acquire the Hixson-Winkler fit that reads as where V 0 is the tungsten specific volume at room temperature. (iii) Kaschnitz, Pottlacher and Windholz have carried out similar measurements without providing fitting expressions [28]. However, the analytical fit of the specific volume as a function of the temperature has been plotted in a figure. We digitized this figure in the temperature range 3695 ≤ T (K) ≤ 6000 with steps of 100 K and we least-square fitted the resulting dataset to a quadratic polynomial. The Kaschnitz-Pottlacher-Windholz fit reads as where again V 0 is the tungsten specific volume at room temperature. The mean value of the absolute relative fitting error is 0.05%. (iv) Hüpf et al. have also measured the volume expansion of five refractory liquid metals (V, Nb, Ta, Mo, W) [52]. We note that the authors provided a fit for the quantity D 2 /D 2 0 as a function of the temperature, where D denotes the wire diameter. Under rapid heating the melted wire expands solely in the radial direction, which implies that its volume is proportional to the cross-section and thus V /V 0 = D 2 /D 2 0 [66,74]. The fitting expression consists of two polynomial branches, but it is continuous at the branch point. The Hüpf fit reads as where again V 0 is the tungsten specific volume at room temperature.

The four fits are illustrated in figure 11a. It is evident that the Seydel-Kitzel fit greatly overestimates the volume expansion for very high temperatures, with the deviations from the other curves starting from 4500 K. The cause of this overestimation was investigated in a seminal paper by Ivanov, Lebedev and Savvatimskii [74]; all the aforementioned experiments were based on the resistive pulse heating technique and the volume expansion measurements were carried out by recording the temporal evolution of the shadow that the sample produced after illumination with a radiation source either in a dense gas or in a liquid. Only Seydel and Kitzel performed their experiments in water [72]. In that case, a layer of vapour surrounded the sample with its thickness determined by the sample temperature and its rapid evolution. Since vapour possesses a refractive index smaller than that of water, the vapour layer caused the shadow image to expand and was responsible for the overestimation. The correctness of the other fits was confirmed by the same authors by measurements of the thermal expansion of liquid tungsten with two alternative independent techniques, the capillary method and the probe method [72]. From figure 11a, it is also evident that, close to the melting point, the Hüpf fit deviates from the other curves. Combining the above and considering the more limited temperature range of the Hixson-Winkler fit, we conclude that the Kaschnitz-Pottlacher-Windholz fit is the most appropriate. It is preferable to convert this fit to an analytical expression for the mass density. Using ρ m0 = 19.25 g cm −3 for the room temperature mass density of tungsten and V /V 0 = ρ m0 /ρ m , we acquire the Kaschnitz-Pottlacher-Windholz fit for the mass density, where ρ m is measured in g cm −3 . This fit is illustrated in figure 11.
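The conversion chain described above (linear or volume expansion to mass density via mass conservation) can be summarized in a few lines. In the sketch below the expansion polynomials themselves (White-Minges l/l0 for the solid, Kaschnitz-Pottlacher-Windholz V/V0 for the liquid) are left as user-supplied placeholders, since their coefficients are not reproduced in this text; only the quoted room-temperature density and the stated relations are used.

```python
# Minimal sketch of the density conversion chain. rel_length and rel_volume
# stand for the expansion fits (not reproduced here) and must be supplied.
RHO_M0 = 19.25   # room-temperature mass density of tungsten [g/cm^3]

def density_solid(T, rel_length):
    """rho_m(T) for solid W: isotropic expansion, V/V0 = (l/l0)^3, rho = rho0*V0/V."""
    return RHO_M0 / rel_length(T) ** 3

def density_liquid(T, rel_volume):
    """rho_m(T) for liquid W from a specific volume fit V(T)/V0."""
    return RHO_M0 / rel_volume(T)

# Trivial check: with no expansion (l/l0 = 1) the room-temperature value is recovered.
print(density_solid(293.15, lambda T: 1.0))   # 19.25 g/cm^3
```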
The density of liquid tungsten at the melting point is ρ l m = 16.267 g cm −3 , which is very close to typical values recommended in handbooks.

Recommended description. In order to complete the description, it is necessary to verify that the White-Minges fit is reliable at high temperatures close to the melting point. Miiller and Cezairliyan had employed a precise high-speed interferometric technique for the measurement of the thermal expansion of tungsten from 1500 K up to the melting point [75]. The maximum uncertainty in the measured linear expansion was estimated to be ∼ 1% at 2000 K and ∼ 2% at 3600 K. From figure 12a, it is clear that their experimental results are nearly indistinguishable from the White-Minges fit. Overall, the recommended description consists of the White-Minges fit in the temperature range 300 < T (K) < 3695 and the Kaschnitz-Pottlacher-Windholz fit in the temperature range 3695 < T (K) < 6000. See figure 12b for an illustration. From the above, we have ρ s m = 17.934 g cm −3 and ρ l m = 16.267 g cm −3 . The resulting discontinuity at the liquid-solid phase transition is ∆ρ m = 1.667 g cm −3 . As expected, we have ρ s m > ρ l m similar to most metals [49]. It is worth noting that the large relative magnitude of the discontinuity implies a rather large volume expansion during melting compared to other bcc metals [49].

FIG. 12: (b) The complete recommended analytical description of the tungsten mass density from 300 to 6000 K.

F. The surface tension

Significance. The surface tension is a fundamental physical quantity in various plasma-material interaction phenomena that are important for fusion devices: (i) Droplet generation. The velocity difference at the interface between the edge plasma and the melt layer leads to the development of the Kelvin-Helmholtz instability and the growth of surface waves whose subsequent breakup can result in metallic droplet ejection into the plasma [76,77]. The surface tension impedes the growth of the K-H instability by providing the restoring force that stabilizes short wavelength perturbations [78]. (ii) Droplet disintegration. The shape of charged spherical droplets is subject to distortions due to electrostatic pressure [79]. The surface tension counteracts the electrostatic pressure which tends to rip the droplets apart. The application of the classical Rayleigh linear analysis for metallic droplets embedded in fusion plasmas leads to a threshold radius below which electrostatic disruption occurs and whose value is inversely proportional to the surface tension [80]. (iii) Melt-layer motion. Surface tension gradients stemming from surface temperature gradients naturally result in thermo-capillary flows that can influence macroscopic melt-layer motion. Since surface tension enters the mathematical description through the boundary condition that expresses the balance between the tangential hydrodynamic stress and the surface tension gradient, its effect is more transparent when inspecting the Navier-Stokes system within the shallow water approximation, where it contributes a source term proportional to (∂σ/∂T ) (∇T ) to the non-normal liquid metal velocity components [11].

Liquid metals. Conventional techniques can be utilized for the measurement of the surface tension of liquid metals, such as the maximum bubble pressure method, the sessile drop method and the pendant drop - drop weight method [81].
For melts of refractory metals, container-less (or non-contact) methods and particularly levitating drop methods are required in order to eliminate the possibility of chemical reactions between the melt and crucibles or substrates [82][83][84]. Different variants of the levitating drop method have been developed such as aerodynamic, optical, electrostatic and electromagnetic levitation [83,84]. The experimental results originating from electrostatic levitation measurements are generally considered to be more accurate [82] due to the inherent advantages of this method [84,85]. The electrostatic levitation method is based on lifting a small charged material sample with the aid of electrostatic fields, melting the sample with the aid of lasers, inducing shape oscillations by applying a small amplitude ac modulation to the field, and recording the oscillating frequency as well as the amplitude damping of the drop shape profile, which provide the surface tension and the viscosity [85]. It is worth noting that the temperature dependence of the liquid metal surface tension has also been extensively studied because of the aforementioned thermo-capillary Marangoni flows. In general, it is assumed that the dependence of the surface tension of pure liquid metals on the temperature is linear [81,82,84,86]. This linearity is not imposed by generic theoretical arguments, but more likely stems from the limited temperature range of the experiments and the insufficient accuracy of the measurements. The basic constraint imposed by thermodynamics is that the surface tension reduces to zero at the critical point [49]. These remarks imply that the temperature coefficient is always negative; positive values have been measured but, most of the time, they can be attributed to impurity effects or non-equilibrium conditions [49].

Liquid tungsten. Numerous reviews dedicated to experimental measurements of the surface tension of liquid metals can be encountered in the literature [81,82,87,88]. In these compilations, very few data can be found for the surface tension of tungsten at the melting point and no measurements can be found for the temperature dependence of the tungsten surface tension. Fortunately, very recent experiments were carried out by Paradis et al. with the electrostatic levitation method [89]. The surface tension was measured for liquid tungsten barely above the melting point and in the under-cooled phase, 3360 < T (K) < 3700. The temperature interval of 350 K can be considered as adequate for the determination of the temperature coefficient. A linear fit of the form σ = σ m − β(T − T m ) provided an accurate description of the data, which, in the absence of other measurements, needs to be extrapolated in the entire liquid phase. The Paradis fit reads as [89] where σ is measured in N/m. The uncertainties in the least square fit coefficients are ∼ 10% (σ m ) and ∼ 25% (β). The surface tension at the melting point σ m displays a strong agreement with previous measurements, as seen in Table III.

TABLE III: The W surface tension at the melting point as reported by different investigators. The entry of Ref. [91] has been corrected for the liquid mass density following Ref. [93], since the exact experimental output in the pendant drop - drop weight method is the ratio σ/ρ m and the room-temperature tungsten density was employed in the original work.

We shall check how physical the experimental value of the linear coefficient β is by extrapolating to very high temperatures and determining the critical point temperature from σ = 0. The result is T c ≃ 11700 K.
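A minimal sketch of this consistency check is given below; the Paradis fit coefficients σ m and β are not reproduced in this text, so they are left as inputs to be taken from Ref. [89].

```python
# Minimal sketch: critical temperature estimate from a linear surface tension fit,
# sigma(T) = sigma_m - beta*(T - T_m), by solving sigma(T_c) = 0.
# sigma_m and beta must be taken from the Paradis fit [89]; they are not quoted here.
T_M = 3695.0   # melting point of tungsten [K]

def critical_temperature(sigma_m, beta, T_m=T_M):
    """T_c such that sigma_m - beta*(T_c - T_m) = 0."""
    return T_m + sigma_m / beta

# With the Paradis coefficients this evaluates to roughly 11700 K, as stated above.
```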
There is a remarkable agreement with numerous estimates of the tungsten critical point. In particular, the Guldberg rule leads to 12277 K, the Likalter equation of state leads to 12466 K, the Goldstein scaling leads to 11852 K and dynamic experiments using exploding wires lead on average to 12195 K [94].

G. The dynamic viscosity

Liquid metals. Conventional experimental techniques can be employed for the measurement of the dynamic viscosity of liquid metals such as the capillary method, the oscillating vessel method and the rotating cylinder method [49,95,96]. For melts of refractory metals, non-contact techniques such as the electrostatic levitation method are preferred due to the high melting temperatures and the enhanced reactivity at elevated temperatures [97,98]. In general, it is assumed that the dependence of the dynamic viscosity of pure liquid metals on the temperature is of the Arrhenius form [49,86], i.e. µ(T ) = µ 0 exp [E a /(RT )] with E a the activation energy for viscous flow, µ 0 the pre-exponential viscosity and R = 8.314 J/(mol·K) the ideal gas constant. It should also be emphasized that, within some limitations, the dynamic viscosity and the surface tension are connected by a rigorous statistical mechanics relation. The Fowler formula for the surface tension of liquids reads as σ(T ) = (πn 2 /8) ∫ 0 ∞ r 4 g(r, T )[dφ(r, T )/dr]dr, where g(r, T ) is the pair correlation function, φ(r, T ) is the effective pair interaction potential and n is the particle number density [99,100]. The Born-Green formula for the viscosity of liquids reads as µ(T ) = √[m/(k B T )] (2πn 2 /15) ∫ 0 ∞ r 4 g(r, T )[dφ(r, T )/dr]dr, with m the particle atomic mass [101]. Dividing by parts, the Fowler-Born-Green formula emerges, µ(T ) = (16/15) √[m/(k B T )] σ(T ) [102][103][104]. The fundamental assumptions behind the Fowler formula and the Born-Green formula determine the applicability range of this simple and elegant formula [104], which has proved to be very accurate for elemental liquid metals [49].

Liquid tungsten. Numerous works that have reviewed experimental data for the viscosity of liquid metals, elemental but also alloys, can be encountered in the literature [42,86][105][106][107]. Similar to the case of the surface tension, in these compilations, very few data can be found for the viscosity of tungsten at the melting point and nearly no measurements for its temperature dependence. The only exceptions are very recent experiments that were carried out by Ishikawa et al. with the electrostatic levitation method [108,109]. The measurements reported in Ref. [109] will be considered in greater detail, since it has been concluded that the measurements of Ref. [108] were affected by sample positioning forces. In Ref. [109], the viscosity was measured for liquid tungsten in the under-cooled phase, 3155 < T (K) < 3634. The temperature interval of 480 K can be considered as adequate for the determination of the temperature dependence. An Arrhenius fit of the form µ = µ 0 exp [E a /(RT )] provided an accurate description of the data, which, in the absence of other measurements, needs to be extrapolated in the entire liquid phase. The Ishikawa fit reads as [109] where µ is measured in Pa s. This expression corresponds to an activation energy E a = 122 × 10 3 J/mol which has been determined by least square fitting with a 20% uncertainty. The extrapolated value of the viscosity at the melting point is µ(T m ) = 8.5 × 10 −3 Pa s, close to the experimental value µ(T m ) = 7.0 × 10 −3 Pa s provided in the literature [86].
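Since the coefficients of the Ishikawa fit are not quoted above, the sketch below reconstructs an equivalent Arrhenius expression from the two values that are quoted, namely the activation energy E a = 122 kJ/mol and the extrapolated melting-point viscosity µ(T m ) = 8.5 mPa s; it is an illustrative reconstruction rather than the published fit.

```python
import math

# Minimal sketch: Arrhenius viscosity mu(T) = mu0 * exp(Ea / (R*T)), with mu0
# reconstructed from the quoted activation energy and melting-point value.
# Illustrative reconstruction, not the published Ishikawa fit [109].
R = 8.314        # ideal gas constant [J/(mol K)]
T_M = 3695.0     # melting point [K]
E_A = 122e3      # activation energy for viscous flow [J/mol]
MU_TM = 8.5e-3   # extrapolated viscosity at the melting point [Pa s]

MU0 = MU_TM * math.exp(-E_A / (R * T_M))   # pre-exponential viscosity [Pa s]

def viscosity(T):
    """Dynamic viscosity of liquid W [Pa s] under the Arrhenius assumption."""
    return MU0 * math.exp(E_A / (R * T))

print(viscosity(T_M))     # recovers 8.5e-3 Pa s by construction
print(viscosity(6000.0))  # extrapolation towards the boiling point
```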
Both measurements of Ishikawa et al. [108,109] are illustrated in figure 13a together with the least square fitted Arrhenius expressions. In the absence of other experimental results, the Fowler-Born-Green formula provides the only way to cross-check the adopted measurements. In figure 13b, the ratio σ(T )/µ(T ), where σ(T ) follows the Paradis fit and µ(T ) follows the Ishikawa fit, is expressed in units of (15/16) √[k B T /m] and plotted as a function of the temperature. The quantity does not diverge strongly from unity, especially taking into account the experimental uncertainties and the wide extrapolations in the viscosity as well as the surface tension. It is worth investigating whether fitting expressions other than the pure Arrhenius form can fit the experimental data equally well while providing a better agreement with the Fowler-Born-Green formula. A cubic Arrhenius fit, where µ is measured in Pa s, fulfills these two criteria. It is impossible to determine whether the extrapolated Arrhenius fit for the viscosity or the extrapolated linear fit for the surface tension is responsible for the deviations from the Fowler-Born-Green formula, not to mention that this formula should not be exactly obeyed across the liquid phase. Therefore, we still recommend the use of the pure Arrhenius fit. The aim of this comparison was to highlight the need for tungsten surface tension and viscosity measurements in larger temperature ranges and for temperatures exceeding the melting point.

A. Complications in burning fusion plasma environments

The recommended analytical expressions are nearly exclusively based on experimental results for pure polycrystalline tungsten. Nevertheless, unless rather extreme cases are considered, microstructural details and impurity concentrations should have a negligible influence on the thermophysical properties of interest. Even in the case of pure surface quantities that are very sensitive to adsorbates, such as the surface tension in the liquid phase, the volatility of low-Z contaminants at elevated temperatures guarantees a limited effect. Considering the hostile edge plasma environment of magnetic fusion reactors, it is inevitable that complications arise which should be discussed in further detail. These mainly concern the possible impact of external magnetic fields, plasma contamination, beryllium-tungsten alloying and neutron irradiation.

(a) Magnetic field effects. The prominent role of the de-localized valence electrons in charge and heat transport implies that strong external magnetic fields could influence the magnitude and alter the isotropic nature of thermophysical properties such as the thermal conductivity or the electrical resistivity. However, even for high field strengths, magnetic field effects can be expected to be very weak for tungsten, since the mean free paths are much smaller than the Larmor radii due to the enormous density of the scattering centers. Order of magnitude estimates can be performed with the aid of the elementary Drude model, i.e. a single particle description with friction described by the relaxation time approximation [59]. The valence electron density is n e ≃ 1.3 × 10 29 m −3 and the mean time between collisions is given by τ e = m e /(n e e 2 ρ el ), which leads to ω ce τ e = B/(en e ρ el ), where ω ce denotes the cyclotron frequency of the valence electrons. For B = 6 T and room temperature, we have ω ce τ e ∼ 5 × 10 −3 .
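This order-of-magnitude estimate is easily reproduced; the sketch below evaluates the room-temperature resistivity from the White-Minges fit quoted in Sec. II B and then forms ω ce τ e = B/(e n e ρ el ).

```python
# Minimal sketch of the Drude-model estimate for magnetic field effects on W.
E_CHARGE = 1.602e-19   # elementary charge [C]
N_E = 1.3e29           # valence electron density of tungsten [m^-3]
B = 6.0                # magnetic field strength [T]

def rho_el_solid(T):
    """Solid W resistivity [Ohm m] from the White-Minges fit (100-3600 K)."""
    rho_microohm_cm = (-0.9680 + 1.9274e-2 * T + 7.8260e-6 * T**2
                       - 1.8517e-9 * T**3 + 2.0790e-13 * T**4)
    return rho_microohm_cm * 1e-8

omega_ce_tau_e = B / (E_CHARGE * N_E * rho_el_solid(300.0))
print(omega_ce_tau_e)        # ~5e-3, as quoted above
print(omega_ce_tau_e ** 2)   # relative resistivity increase within the Drude model
```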
Within the Drude model, the relative electrical resistivity increase is ∆ρ el /ρ el = (ω ce τ e ) 2 [59], which is clearly negligible.

(b) Plasma contaminants. As analyzed in the previous section, for high impurity or imperfection concentrations, interaction with defects may dominate the valence electron transport and thus drastically modify quantities such as the thermal conductivity or the electrical resistivity. High hydrogen and helium atom concentrations are unavoidable in the surface proximity of plasma-facing components, owing to the implantation and trapping of the incident plasma ions. It has been documented in the literature that helium bubble and tungsten fuzz formation lead to the degradation of the local thermal properties [110,111], as expected from the porous or fiber-like surface morphology. In particular, recent thermal conductivity measurements for tungsten damaged by high flux, low energy helium plasma revealed an 80% reduction [112]. Such phenomena may have an impact on the PFC power-handling capabilities and more systematic measurements need to be carried out in order to document the extent of the thermally degraded near-surface region. However, they are most likely not relevant for repetitive transient melting events. At elevated temperatures (still well below the melting temperature), trapped gas desorption accompanied by nano-structure annealing [111,113] can be expected to strongly limit the effect of plasma contamination from the beginning of the ELM cycle.

(c) Be-W alloying effects. Beryllium erosion from the first wall and its transport to the divertor has been well understood and documented in JET [114,115]. Moreover, the ITER divertor surface is expected to be covered by a thin beryllium layer [111]. Under the appropriate plasma conditions (so that significant concentrations of beryllium remain locally deposited) and surface temperatures (so that element inter-diffusion is significant), beryllium-tungsten alloys can form, for instance Be 2 W with T m ∼ 2520 K or Be 12 W with T m ∼ 1780 K [111][116][117][118]. As exhibited by the much lower melting points, mixed beryllium-tungsten materials are characterized by thermophysical properties that strongly depend on the alloy stoichiometry. Optimized conditions for the growth of Be-W alloys might occur near the strike point [119]. Further R&D is necessary to quantify the local extent of Be-W alloy formation. It is evident though that, unless the thickness of the alloy layer is significant, its presence should not be important for thermal or hydrodynamic modelling in spite of the totally different thermophysical properties compared to tungsten.

(d) Neutron irradiation effects. The penetration depth of neutrons in condensed matter is several orders of magnitude larger than the penetration depth of electrons, ions or photons of comparable incident energy due to the absence of Coulomb interactions with bound electrons and the smallness of the nuclear cross-sections [120]. The penetration depth of fusion ions / electrons in tungsten is ∼ 1 − 10 nm, whereas the penetration depth of D-T fusion neutrons in tungsten is ∼ 1 − 10 cm [121]. Thus, neutron-induced damage is much more extended in volume than plasma-induced damage, even when accounting for bulk diffusion.
Neutron irradiation can significantly modify the thermophysical properties of tungsten and particularly the thermal conductivity [121][122][123][124][125][126], as a consequence of atomic displacements (electron-defect interaction term) and nuclear transmutation (electron-phonon interaction term). The strength of this modification depends on the neutron spectrum, the neutron fluence and the irradiation temperature [121]. Unfortunately, experimental works on the subject are still limited [127,128]. The effect of atomic displacements is hard to quantify, especially because of mitigation by annealing at high temperatures. However, a further investigation of the effect of transmutation on the tungsten power-handling capabilities is viable. The principal tungsten transmutation products due to bombardment with D-T fusion generated neutrons are rhenium and osmium [129]. At 300 K, the thermal conductivities of W, Re and Os are 174, 47.9 and 87.6 W/(m K), respectively [63]. We shall focus on rhenium owing to its smaller thermal conductivity and its larger solubility limit in tungsten but also because tungsten-rhenium alloys have been extensively studied due to their applications in high temperature thermocouples. The solid solubility limit of Re in W increases with the temperature (∼ 28% at 2000 K and ∼ 37% at 3300 K) and, apart from the solid solutions, two homogeneous intermetallic phases exist (broad WRe σ-phase, narrow WRe 3 χ-phase) [130]. Numerous works have measured the temperature dependence of the electrical resistivity and thermal conductivity of pure Re [131][132][133][134] as well as W-Re alloys [135][136][137][138][139][140][141] in the solid and liquid phase. Some selected datasets are illustrated in figure 14. The generic picture emerging is summarized in the following: (i) As the Re concentration increases, the thermal conductivity of the alloy monotonically decreases. The rate of decrease is rapid up to roughly 10% Re but it saturates around 20%. (ii) In contrast to pure W, the thermal conductivity of solid W-Re alloys is monotonically increasing at elevated temperatures. Consequently, the thermal conductivity deviation from pure W is strongly reduced compared to room temperature. (iii) In the liquid phase, the thermal conductivity of W-Re alloys is very close to that of W. In fact, even the differences between pure W and pure Re are very small above the melting points. Overall, W transmutation to Re alone can lead to a drastic reduction of the room temperature thermal conductivity of up to ∼ 70%, which becomes progressively lower as the temperature increases and eventually vanishes at the W melting point. It is worth pointing out that transmutation is estimated to be very limited in ITER but is a primary concern for DEMO [142].

FIG. 14: (a-insert) The room temperature electrical resistivity as a function of the rhenium content for typical W-Re alloys. The measurements are adopted from Refs. [38,139] and the solid curve is drawn to guide the eye. (a-main) The thermal conductivity as a function of the temperature in the 1200 − 2500 K range for pure W and Re as well as several W-Re alloys. Pure rhenium: the recommended dataset of Ho, Powell and Liley [63] in the range 1200 − 2600 K has been employed for quadratic polynomial fits. Rhenium alloys: the measurements of Vertogradskii and Chekhovskoi [135] in the range 1200 − 3000 K have been extracted from plots and fitted to quadratic polynomials. Pure tungsten: the modified Hust-Lankford fit has been employed.
(b) The thermal conductivity as a function of the temperature in the 2500-4500 K range for pure W and Re as well as several W-Re alloys. Pure rhenium: the tabulated experimental data of Thévenin, Arlés, Boivineau and Vermeulen [134] for the uncorrected electrical resistivity and the thermal volume expansion have been employed for the determination of the electrical resistivity, which was then converted to the thermal conductivity with the aid of the Wiedemann-Franz law. The resulting dataset has been fitted with quadratic polynomials in the temperature ranges 2500-3453 K (solid) and 3453-4500 K (liquid). Rhenium alloys: linear fits to the thermal conductivity measurements of Seifter, Didoukh and Pottlacher [138] in the temperature range 2500-4400 K were employed. The alloy melting ranges are 3325-3395 K for W-4% Re and 3319-3421 K for W-31% Re. Pure tungsten: the recommended analytical description (the modified Hust-Lankford fit for the solid state and the Seydel-Fucke fit for the liquid state) has been employed.

To sum up, the thermophysical properties of tungsten are barely affected even by strong fusion-relevant magnetic fields. Plasma contaminants and beryllium-tungsten alloying can substantially alter the W thermophysical properties but only in a relatively thin "unstable" surface layer, which implies that they can be neglected in the modelling of bulk PFCs. It should be pointed out though that the degradation of the thermal properties of such thin layers needs to be considered in the analysis of IR camera measurements [143][144][145][146], which might otherwise strongly overestimate the incident plasma heat flux [143]. On the other hand, neutron irradiation can substantially modify the thermophysical properties of W in an extended volume, but it only becomes important for the high neutron fluences relevant for DEMO and not for ITER. B. Status of the experimental datasets As a consequence of its extensive use in high temperature technological applications as well as due to its high melting point and very extended liquid range, pure polycrystalline tungsten can be considered as a standard reference material in the metrology of thermophysical quantities. The development of dynamic pulse calorimetry (starting from the 70s) has allowed for accurate measurements of the latent heat of fusion, the electrical resistivity, the specific isobaric heat capacity, the thermal conductivity and the mass density across the solid and liquid state. The development of levitation calorimetry (starting from the 80s) has allowed for accurate measurements of the surface tension and the dynamic viscosity at the beginning of the liquid state. Hence, it has been possible to provide accurate analytical expressions for the temperature dependence of most properties of interest based solely on experimental data and without the need for any extrapolations. The only exceptions are the surface tension and dynamic viscosity of liquid tungsten, where wide extrapolations had to be carried out beyond the melting point, since the only experimental sources on the temperature dependence referred to under-cooled liquid tungsten specimens. In spite of these limitations, the extrapolated analytical expressions performed very well against constraints imposed by rigorous statistical mechanics relations. Further measurements of the surface tension and viscosity in the unexplored temperature range are certainly desirable, but the proposed expressions are expected to be fairly accurate.
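Referring back to the procedure quoted in the caption of figure 14(b), the conversion from measured electrical resistivity to thermal conductivity and the subsequent polynomial fitting amount to a few lines of code. The sketch below uses placeholder (T, ρ) pairs, not the tabulated data of Ref. [134], and the Sommerfeld value of the Lorenz number.

```python
import numpy as np

# Sketch of the figure 14(b) procedure: convert electrical resistivity to thermal conductivity
# with the Wiedemann-Franz law, k = L0*T/rho, then fit a quadratic polynomial in T.
# The (T, rho) pairs below are placeholders, not the data of Ref. [134].
L0 = 2.44e-8                                          # Lorenz number [W Ohm K^-2]
T = np.array([2500.0, 2800.0, 3100.0, 3400.0])        # temperature [K]
rho = np.array([1.05e-6, 1.13e-6, 1.21e-6, 1.30e-6])  # electrical resistivity [Ohm m]

k = L0 * T / rho                                      # electronic thermal conductivity [W/(m K)]
c2, c1, c0 = np.polyfit(T, k, deg=2)                  # k(T) ~ c0 + c1*T + c2*T^2
print("k [W/(m K)]:", np.round(k, 1))
print(f"fit: k(T) = {c0:.4g} + {c1:.4g} T + {c2:.4g} T^2")
# Only the electronic contribution is retained, which is the assumption behind the
# Wiedemann-Franz conversion at these very high temperatures.
```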
Finally, the effects of plasma contamination, impurity alloying and neutron irradiation on the thermophysical properties of tungsten have been relatively poorly investigated. The sparse measurements available only account for a small part of the vast fusion-relevant parameter space, since not only do these effects depend strongly on the incident plasma / impurity / neutron energies and fluences, but they also most probably operate synergistically. However, recent experiments have demonstrated that these effects can severely degrade the power handling capabilities of tungsten. Further experiments in material testing facilities are certainly required in order to evaluate the consequences for ITER and to assess the suitability of tungsten as a plasma-facing component in future fusion reactors. C. Recommended analytical expressions The thermophysical properties analyzed in this work constitute input for simulations of the thermal and hydrodynamic response of tungsten plasma-facing components, dust and droplets to incident plasma particle and heat fluxes. For this reason, in this concluding paragraph, we judged it more practical and convenient for the specialized reader to simply gather the recommended analytical expressions for the temperature dependence of the thermophysical properties of pure solid and liquid tungsten. Before proceeding, it is worth mentioning that the melt layer motion code MEMOS [12,45] and the dust dynamics code MIGRAINe [15,147] have already been updated following the present recommendations. For the latent heat of fusion, we recommend the typical literature value of Δh_f ≃ 52.3 kJ/mol; for the remaining properties, the analytical fits identified in the preceding sections are recommended, with the dynamic viscosity µ measured in Pa s.
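As an illustration of how such recommendations can be consumed by heating, melt-motion or dust-transport codes, the minimal sketch below wraps separate solid- and liquid-phase fits behind a single property call. The lambdas and their constant return values are placeholders only, not the recommended expressions; the actual fit coefficients must be supplied by the user.

```python
T_MELT = 3695.0  # melting temperature of tungsten [K]

def piecewise_property(T, solid_fit, liquid_fit, T_m=T_MELT):
    """Evaluate a thermophysical property with separate solid/liquid analytical fits."""
    return solid_fit(T) if T < T_m else liquid_fit(T)

# Usage sketch with placeholder lambdas (substitute the recommended analytical expressions):
k_W = piecewise_property(3000.0,
                         solid_fit=lambda T: 100.0,   # placeholder value, W/(m K)
                         liquid_fit=lambda T: 65.0)   # placeholder value, W/(m K)
print(k_W)
```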
14,445.2
2017-03-18T00:00:00.000
[ "Chemistry" ]
Origin of the Solar Rotation Harmonics Seen in the EUV and UV Irradiance Long-term periodicities in the solar irradiance are often observed with periods proportional to the solar rotational period of 27 days. These periods are linked either to some internal mechanism in the Sun or said to be higher harmonics of the rotation without further discussion of their origin. In this article, the origin of the peaks in periodicities seen in the solar extreme ultraviolet (EUV) and ultraviolet (UV) irradiance around the 7-, 9-, and 14-day periods is discussed. Maps of the active regions and coronal holes are produced from six images per day using the Spatial Possibilistic Clustering Algorithm (SPoCA), a segmentation algorithm. Spectral irradiance at coronal, transition-region/chromospheric, and photospheric levels is extracted for each feature as well as for the full disk by applying the maps to full-disk images (at 19.3, 30.4, and 170 nm, sampling the corona/hot flare plasma, the chromosphere/transition region, and the photosphere, respectively) from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) from January 2011 to December 2018. The peaks in periodicities at 7, 9, and 14 days as well as the solar rotation around 27 days can be seen in almost all of the solar irradiance time series. The segmentation also provided time series of the visible area of the active regions and coronal holes (i.e. the area observed in the AIA images, not corrected for the line-of-sight effect with respect to the solar surface), which also show similar peaks in periodicities, indicating that the periodicities are due to the change in area of the features on the solar disk rather than to their absolute irradiance. A simple model was created to reproduce the power spectral density of the area covered by active regions, which also shows the same peaks in periodicities. Segmentation of solar images allows us to determine that the peaks in periodicities seen in solar EUV/UV irradiance from a few days to a month are due to the change in area of the solar features, in particular, active regions, as they are the main contributors to the total full-disk irradiance variability. The higher harmonics of the solar rotation are caused by the clipping of the area signal as the regions rotate behind the solar limb. Introduction Investigation of periodicities in solar observational data from both ground-based and satellite space missions has long been of high interest to understand the solar variability. The existence of a long-term cycle of about 11 years and a short-term period of 27 days has been well established using numerous solar indices. These periods are associated with the solar magnetic activity and the modulation of solar features on the surface due to the solar rotation. It is important to search for other possible periodicities in solar data, since the detection of any periodicity in active phenomena would have fundamental significance for the understanding of solar activity. Further, solar irradiance is assumed to be the main driver of the energy budget affecting the Earth's climate and space weather; thus these investigations may improve the understanding of solar-terrestrial effects and relationships.
Using time-series analysis of various solar indices, many authors have found the 27day period in relative sunspot numbers, 10.7 cm radio flux data, solar UV irradiance, and geomagnetic indices (e.g., Simon, 1982;Donnelly, Heath, and Lean, 1982;Donnelly et al., 1983;Rottman and London, 1984;Simon et al., 1987;Lean and Brueckner, 1988;Tobiska and Bouwer, 1989;Barth, Tobiska, and Rottman, 1990). In addition, it has been reported by several authors that the ultraviolet spectral irradiance shows a prominent period around 13.5 days, but it has not been shown in the 10.7 cm radio flux (Donnelly, Heath, and Lean, 1982;Donnelly et al., 1983). However, the real physical origin of these periods needs still to be known and it requires further detailed investigations using long based time-series data. The previous results were based on data sets that are not long enough to get any statistically significant periods, and these long periods may originate from the computational techniques used, such as the way of data detrending and smoothing. The existence of these periods depends strongly on the solar activity and the time interval that has been investigated. This would suggest a quasi-periodic or time-varying behavior rather than a real cyclic one of various solar data. The various periods have been derived from the integrated irradiance observations but not from spatially resolved full-disk segmented features from the images. As it is important to show the real origin of the longer periodicities, further studies are required using spatially resolved images of the Sun obtained at different wavelengths and the corresponding different segmented magnetic features, particularly, for longer-period observations. Recently, we have published several papers on the segmentation of coronal and photospheric features and their time series using spatially resolved full-disk images of the Sun observed from the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI), both on board the Solar Dynamics Observatory spacecraft (SDO), as well as the Sun Watcher using APS and Image Processing (SWAP) and the Large Yield Radiometer (LYRA), both on board the PROBA2 spacecraft to understand the extreme ultraviolet (EUV) and ultraviolet (UV) irradiance variability (Kumara et al., 2012Zender et al., 2017). In this article, using SDO/AIA data, we discuss the existence of different periodicities observed in the EUV and UV irradiance from the different atmospheric layers, from the photosphere to the corona. Section 2 describes the data processing and segmentation method applied on full-disk images used for this analysis. The spectral analysis presented in Section 3 shows similar peaks of periodicities at 7, 9, and 14 days in the solar irradiance. However, Section 4 provides evidence that the periodicities are induced by the variability in the total area covered by features on the solar surface (mainly active regions but also coronal holes). A wavelet analysis presented in Section 5 also reveals that the periodicities are not always present at all times, and a second spectral analysis did not actually reveal any particular underlying periodic pattern, indicating that the solar features are generated following a random process (on time-scales up to a month). 
Finally, using simple models in Section 6, we demonstrate that most of the observed peaks in periodicities are naturally derived as higher-order harmonics of the solar rotation introduced by the clipped signal of the visible area of the features, as they go outside the observer's field-of-view (i.e. behind the Sun). Data Processing Full-disk images from the Atmospheric Imaging Assembly (AIA, Lemen et al., 2012) on board the Solar Dynamics Observatory (SDO, Pesnell, Thompson, and Chamberlin, 2011) were used for the analysis. Three spectral pass-bands were selected in the EUV and UV: 19.3, 30.4, and 170 nm, to sample the corona/hot flare plasma (1.2 × 10⁶ and 2 × 10⁷ K), the chromosphere/transition region (5 × 10⁴ K), and the photosphere (5 × 10³ K), respectively. Six images in each band were taken per day (i.e. 4 h cadence) from 1 January 2011 to 31 December 2018. The Spatial Possibilistic Clustering Algorithm (SPoCA, Barra, Delouille, and Hochedez, 2008; Barra et al., 2009; Verbeeck et al., 2014) was applied to obtain maps of the active regions and coronal holes. SPoCA uses AIA images from two coronal emission lines for the segmentation: 17.1 nm, where the active regions are prominently seen, and 19.3 nm, where the coronal holes can be clearly observed. The level 1.5 AIA images of 4096 by 4096 pixels were used as input for the SPoCA algorithm in our own data pipeline (executed daily on our local server at the European Space Research and Technology Center (ESTEC) since 2012, using the version of the SPoCA classification algorithm from May 2012). The maps were then scaled down to 1024 by 1024 pixels at 19.3, 30.4 and 170 nm for the analysis. The reader is invited to look through the paper by Kumara et al. (2014) for a visual example of the segmented images. A quiet-Sun region was defined as all pixels inside the solar disk but not belonging to either the active region or the coronal hole maps, and the full disk was taken as the entire integrated image. The total number of counts extracted can then be converted into photon flux by dividing by the AIA peak channel response taken from the IDL AIA SolarSoft (SSW) package (aia_get_response routine) and by the image exposure time taken from the FITS header. The signal was further corrected for the degradation of the throughput over time, also tabulated in the IDL AIA SSW package (aia_bp_get_corrections routine). The resulting time-series were also corrected for discontinuities, which occurred several times during the 7-year period due to changes in the flat-field (AIA team, private communication). These discontinuities are mostly seen in the 30.4 and 170 nm channels, for which they were corrected by imposing continuity of the signal. Finally, to account for the changing distance between the Earth and the Sun during the year, the measured flux was normalized to a distance of 1 AU using the Horizons Ephemeris Service. Figure 1 shows the obtained time-series for the total flux of the full disk and of the segmented features (active regions, coronal holes, and quiet Sun) for three different atmospheric layers. It is important to point out that the AIA images fed to the SPoCA algorithm were not corrected for the degradation, which might have slightly affected the segmentation towards the end of the 2011 - 2018 period. Periodicity in the Solar Irradiance A spectral analysis was performed on each of the time-series shown in Figure 1.
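The per-image normalization chain described above (counts to photon flux, degradation correction, rescaling to 1 AU) amounts to simple arithmetic. The sketch below uses placeholder numbers and generic variables; it does not call the actual SolarSoft routines, whose outputs would replace the values marked as placeholders.

```python
# Sketch of the per-image normalization: counts -> photon flux -> 1 AU.
# 'response' and 'degradation' stand for the channel response and throughput correction
# tabulated in the AIA SolarSoft package; all numbers here are placeholders.
counts = 1.8e9          # total DN summed over a feature mask (placeholder)
exptime = 2.0           # image exposure time [s] (from the FITS header)
response = 2.1e-6       # peak channel response [DN per photon] (placeholder)
degradation = 0.85      # throughput degradation factor at this date (placeholder)
d_sun = 0.9921          # Sun-observer distance [AU] from the ephemeris

flux = counts / (response * exptime * degradation)  # photon flux for this image
flux_1AU = flux * d_sun**2                           # rescale to 1 AU (inverse-square law)
print(f"{flux_1AU:.3e} photons/s (normalized to 1 AU)")
```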
The power spectral density (PSD) was computed by the Welch method: the entire timeline was subdivided into half-overlapping segments ≈200 days long (1/8 of the total timeline), a Fourier transform was performed on each of the segments after detrending (i.e. subtraction of a linear fit), and a Hann window was applied to the time series. Using the Hann window reduces the artifacts due to discontinuities of the signal at the edges of the window, while the detrending helps remove artificial power at 0 Hz (i.e. offset) from the spectral analysis. The final PSD is obtained by combining the power spectral densities of all segments. Although this method reduces the longest periodicities accessible, it provides a better estimation of the PSD. To compare different atmospheric layers, the time series were normalized before performing the spectral analysis. This was achieved by removing the mean of each time series and dividing by its standard deviation. The power spectral densities for the full disk, active regions, coronal holes, and quiet Sun are shown in Figure 2. A significant amplitude for periods at 7, 9, and 27 days can be observed in the full-disk irradiance of the corona, while peaks at 9, 14, and 27 days are more prominent in the full-disk irradiance of the chromosphere and the photosphere. On one hand, apart from the peak at 27 days seen in all layers, only the peaks coming from active regions in the corona at 7 and 9 days can be seen in the irradiance, while at 14 days the power increases in both the chromosphere and the photosphere. On the other hand, peaks at 7, 9, 14, and 27 days are seen in the irradiance from coronal holes and the quiet Sun in all three layers. Notice that the broadening of the peaks is partially a side effect of using the Hann window. Such periodicities are commonly seen in the literature: the broad peak around 27 days corresponds to the solar rotation (between 24 days at the equator and up to 38 days at the poles, averaging at around 27 days), and the lower periods are equal to its higher harmonics (27/2 = 13.5 days, 27/3 = 9 days, and 27/4 = 6.75 days). Some authors (Emery et al., 2011; Efremov, Parfinenko, and Solov'ev, 2018; Pap, Tobiska, and Bouwer, 1990; Nikonova, Klocheck, and Palamarchuk, 1998) interpreted these lower periods and related them to the solar interior and to possible gravity modes of the Sun (g-modes).

Figure 2 Normalized power spectral density from the total irradiance time series of each of the three atmospheric layers (corona in red, chromosphere in orange, and photosphere in yellow) for the full disk, active regions, coronal holes, and the quiet Sun from top to bottom, respectively. The error bars show the 95% confidence level (i.e. 2σ). The error bars of the different layers are staggered to avoid visual overlap. The vertical dashed lines indicate the periods of 7, 9, 14, and 27 days.

Figure 3 Top panel: Fraction of the full disk taken by the active regions (red shaded area), the coronal holes (magenta shaded area), and the quiet Sun (blue shaded area). Note that the y-axis is discontinuous, as the quiet Sun always covers more than 80% of the disk total area. Bottom panel: Normalized power spectral density from the area time-series of active regions (red), coronal holes (magenta), and the quiet Sun (blue). Note that the periodicities in the quiet-Sun area are by design introduced by the periodicities in the active regions and coronal holes. The vertical dashed lines indicate the periods of 7, 9, 14, and 27 days.
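For reference, the spectral-analysis settings described above can be sketched with standard tools; the input below is a synthetic placeholder signal rather than one of the flux time series of Figure 1.

```python
import numpy as np
from scipy.signal import welch

# Welch PSD with ~200-day half-overlapping segments, linear detrending and a Hann window,
# applied to a normalized 4 h cadence time series (here a synthetic 27-day sine plus noise).
fs = 6.0                                       # 6 samples per day -> sampling frequency [day^-1]
t = np.arange(0.0, 8 * 365.0, 1.0 / fs)        # ~8 years at 4 h cadence
series = np.sin(2 * np.pi * t / 27.0) + 0.5 * np.random.randn(t.size)

series = (series - series.mean()) / series.std()   # normalization used before the analysis
nperseg = int(200 * fs)                             # ~200-day segments
f, psd = welch(series, fs=fs, window="hann", nperseg=nperseg,
               noverlap=nperseg // 2, detrend="linear")
peak_period = 1.0 / f[1:][np.argmax(psd[1:])]       # skip f = 0 before inverting
print(f"strongest peak near {peak_period:.1f} days")  # close to the 27-day input period
```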
The segmentation of the images provides a much simpler explanation to these high-order harmonics as demonstrated hereafter. Investigating the Higher-Order Harmonics The segmentation analysis also provides time series of the area covered by active regions and coronal holes. The top panel of Figure 3 shows the fraction of the total solar disk covered by the two, while the remaining part belongs to the quiet Sun. Using the same spectral analysis on these time series reveals similar peaks in each periodicity as shown in the bottom panel of Figure 3. This suggests that the periodicities are not so much linked to the irradiance output of the Sun per se but rather to the change in area of the feature on its surface. This would directly explain the periodicities seen in the irradiance of the active regions and coronal holes, and indirectly of the quiet Sun (as the total area of all three are linked by construction in our analysis). Furthermore, the connection between the periodicities of the active region areas and the full disk irradiance can be explained by the correlation between the total irradiance time series of the different features. Figure 4 shows the correlation coefficients for the three different atmospheric layers. One can see the relatively high correlations between the active regions and the full disk (0.84, 0.75, and 0.55 for the corona, chromosphere and photosphere, respectively). This can be interpreted by the variability of the active regions as the main contributors to the variability of the full disk. Hence, one can say that observing the full disk is a relatively good proxy for assessing the variability of the active regions, which are directly affected by their area on the disk. Note that the quiet Sun also appears to have a relatively high correlation with the full disk at coronal and chromospheric levels (0.93 and 0.73, respectively). This can be a result of the uncertainty in the segmentation, not classifying all ephemeral regions/bright points as active regions (thus being added to the quiet Sun), which could also contribute to the irradiance variability of the full disk. The low correlation coefficient between the full disk and the quiet Sun in the photosphere supports this assumption, as the ephemeral regions/bright points are only prominently seen above the photosphere. In summary, it appears that the periodicities seen in the full disk total irradiance can be attributed to the change in area of active regions on the solar surface. This is supported by the fact that the same periodicities are seen in the time series of the total area covered by active regions and that the total irradiance of active regions is correlated to the total irradiance of the full disk. Looking for Underlying Periodicites of the Periodicity Peaks The nature of the periodicity peaks seen in the bottom panel of Figure 3 is investigated further by performing a Morse wavelet analysis of the two time series (continuous wavelet transform (cwt) MATLAB package, Lilly and Olhede, 2012) for the area of active regions and coronal holes, respectively. The wavelet analysis shown in Figure 5 reveal that the peaks in periodicities at 27, 14, 9, and 7 days are not continuously present. Extracting a time series of the power for each peak and performing a spectral analysis reveal a power spectral density without particular peaks as shown in Figure 6. 
This is very similar to a random signal; thus, it can be concluded that there are no underlying periodicities governing the occurrence of a periodicity peak in the areas of active regions or coronal holes. Modeling the Periodicities A simple toy-model was created to reproduce the periodicities seen in the area variation of the active regions. An active region was represented as a dot on a rotating circle, and the change in area as the dot moves in front of the observer was calculated as the cosine of the line-of-sight angle. The area of the dot is set to zero when it is not on the observer's side. This is a very simple mathematical approximation for active regions, but it is quite close to reality. The only difference is that all dots have the same size in the toy model, whereas, in practice, the area of active regions can vary considerably. Figure 7 shows the area signals and power spectral densities for three different cases of rotating active regions: one single active region, two active regions separated by 180°, and three active regions separated by 120°. The higher-frequency harmonics of the rotation are clearly seen and are due to the clipped signal of the visible area. The visible area of a given active region goes to zero when it goes out of the observer's field of view, thus cropping the perfect sine wave of the area signal for this active region. This cropped sine-wave gives rise to the higher-order harmonics. Adding additional active regions around the disk removes the lower frequency peak, starting from the rotation frequency itself, but maintains the higher order peaks. Interestingly, only even and not odd harmonics are seen (i.e. at 14 days, 7 days, etc., but not 9 days, 5.4 days, etc.). However, the odd harmonics can be induced by changing the exponent of the cosine, i.e. the projection function of the active region. This is shown in Figure 8, where a single spot is simulated rotating around the circle with various powers of the cosine function from 1 (i.e. same as Figure 7) down to 0.3. Lower exponents also seem to increase the power of the peaks at higher frequencies. In practice, this exponent can be related to the line-of-sight and limb brightening/darkening effects, affecting the segmentation methods from which the area of the active regions is obtained. A particular example is the vertically extending features of the active region in the corona, such as loops and prominences, which are included in the segmentation. Such features do not increase the area at the disk center, where they are seen from above, but can contribute to a larger segmented area as the active region moves towards the limb (until a cut-off at the limb, as regions beyond the solar disk are not considered in the analysis). This simple toy-model was further refined in an attempt to reproduce the time series of the active region areas observed by AIA.

Figure 6 Normalized power spectral density from the periodicity time-series at 14, 9, and 7 days extracted from Figure 5. The vertical dashed lines indicate the periods of 7, 9, 14, and 27 days. No particular periodicity peaks are seen, which indicates that the underlying mechanism producing them is likely to be random. The error bars show the 95% confidence level (i.e. 2σ). The error bars of the different layers are staggered to avoid visual overlap.
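Before turning to the refined model, a minimal numerical sketch of the basic clipped-cosine construction described above illustrates how the even harmonics of the rotation emerge; all parameters are illustrative.

```python
import numpy as np

# Single-spot toy model: the visible area is the cosine of the line-of-sight angle,
# clipped to zero on the far side of the rotating disk. The simulation length is chosen
# as an integer number of rotations so that the harmonics fall exactly on FFT bins.
P_rot = 27.0                                    # rotation period [days]
t = np.arange(0.0, 108 * P_rot, 1.0 / 6.0)      # ~8 years at 4 h cadence
area = np.clip(np.cos(2 * np.pi * t / P_rot), 0.0, None)

area = area - area.mean()
psd = np.abs(np.fft.rfft(area))**2
freq = np.fft.rfftfreq(t.size, d=1.0 / 6.0)     # frequency in day^-1
top = np.argsort(psd[1:])[::-1][:4] + 1         # four strongest non-zero-frequency bins
print("dominant periods [days]:", np.round(1.0 / freq[top], 2))
# -> 27, 13.5, 6.75 and 4.5 days: only the even harmonics of the rotation appear,
#    as stated in the text for a pure (exponent 1) cosine projection.
```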
In order to reproduce the solar cycle variation seen from 2011 to 2018, the number of active regions around the disk was adjusted to match the variation of the total sunspot number during this period (daily sunspot number taken from the Sunspot Index and Long-term Solar Observations (SILSO) World Data Center (2021), smoothed with a 20-day moving average filter). A factor of 1/6 was applied, as the active regions segmented by SPoCA in the corona often include multiple sunspots. The model was initialized by adding a number of active regions matching the sunspot number on 1 January 2011, randomly distributed around the disk. Then, for each consecutive day, new active regions were added if the number of active regions still present on the disk was less than the observed sunspot number for that day. This way, the number of simulated active regions always followed the actual observed sunspot number over the 2011 - 2018 period. The bottom panel of Figure 9 shows the number of simulated active regions; approximately 25 of them were present around the full simulated disk during solar maximum, a number comparable to the number of active regions observed in the segmented maps. The lifetime of each of the generated active regions was randomly drawn from a probability distribution based on Figure 3 from van Driel-Gesztelyi and Green (2015), where the daily probability of observing an active region with a given magnetic flux is presented. Table 1 in van Driel-Gesztelyi and Green (2015) indicates that small active regions with magnetic flux between 1 × 10²⁰ and 5 × 10²¹ Mx have lifetimes from days to weeks, while large active regions with flux from 5 × 10²¹ to 3 × 10²² Mx have lifetimes from weeks to months. Considering "day" as 1 day, "weeks" as 21 days, and "months" as 90 days, a logarithmic relationship between the active region lifetimes and their magnetic flux was estimated (shown in the top-right panel of Figure 9). This relationship was used to convert the daily number of active regions with a given magnetic flux into the daily number of active regions with a given lifetime. The probability of an active region to have a given lifetime is then calculated by dividing by the total daily number of active regions around the disk. The daily number of active regions varies during the solar cycle (see bottom panel of Figure 9), but, here, for simplicity its average value of ≈12 for the 2011 - 2018 period was considered. A simple approach to obtain this probability distribution is to fit a log-log linear relationship directly between the upper and lower limits observed in Figure 3 in van Driel-Gesztelyi and Green (2015). However, it was observed that the power spectral density resulting from using such a probability distribution does not match the observed power spectral density for the area of active regions. Indeed, in that case a higher power is seen at shorter periods (≤5 days) while less power is observed at longer periods (≥20 days).

Figure 9 Top-right panel: lifetime-flux relationship estimated from Table 1 in van Driel-Gesztelyi and Green (2015). The red dots and dashed lines show the three points considered for the fitting: 1, 21, and 90 days, corresponding to fluxes of 1 × 10²⁰, 5 × 10²¹, and 3 × 10²² Mx, respectively. Bottom panel: Number of simulated active regions around the disk (blue line), compared to the total daily sunspot number observed from 2011 to 2018 (red line, ×1/6, taken from the Sunspot Index and Long-term Solar Observations (SILSO) World Data Center (2021) and smoothed with a 20-day moving average filter).
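The lifetime-flux relationship anchored on the three quoted points can be realized in several ways; the sketch below assumes a log-log (power-law) form, which is one simple reading of the "logarithmic relationship" above and happens to reproduce the three points closely.

```python
import numpy as np

# Fit of the active-region lifetime versus magnetic flux through the three quoted anchor
# points (1, 21 and 90 days at 1e20, 5e21 and 3e22 Mx), assuming a log-log linear form.
flux = np.array([1e20, 5e21, 3e22])       # magnetic flux [Mx]
lifetime = np.array([1.0, 21.0, 90.0])    # lifetime [days]

slope, intercept = np.polyfit(np.log10(flux), np.log10(lifetime), deg=1)
fitted = 10**(slope * np.log10(flux) + intercept)
print(f"lifetime ~ 10^({intercept:.2f}) * flux^{slope:.2f} days")
print("fitted lifetimes [days]:", np.round(fitted, 1))   # ~1, ~22, ~89
```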
The better match was found with a log-logistic distribution with α = 1 and β = 4/3. This probability distribution function has a less inclined tail for periods ≥10 days and also tapers off for periods below 5 days compared to the log-log linear probability distribution. This consequently increases the number of active regions with longer lifetimes thus contributing to higher power at periods ≥20 days, reducing the number of short-living active regions and decreasing the power at periods ≤5 days. The top-left panel of Figure 9 shows the used log-logistic distribution compared to the observed upper and lower log-log linear limits from Figure 3 in van Driel-Gesztelyi and Green (2015). Notice that the probability distribution function was truncated for the values below 2 days, as active regions with shorter lifetime were not considered here. It is important to point out that there are no particular physical motivations for using a loglogistic distribution besides its fitting shape. However, its simple parametrization makes it easy to use and reproduce. Finally, an exponent of 0.5 on the cosine for the line-of-sight projection was used to obtain the contribution of the active regions to the simulated area signal. The resulting time series was then normalized to the same mean as the observed active region areas from 2011 to 2018. Notice that in order to keep the model simple, the size/weight of the active regions was kept constant regardless of their lifetime. In practice, this is probably different, as longer living active regions have larger magnetic flux and therefore likely larger area. However, a clear relationship between magnetic field and active region areas is difficult to estimate because active regions in the corona might be complex (e.g., they might have multiple loops), and they would also evolve and merge during their lifetime. The simulation was repeated N = 10 3 times, each time a different time series for the area of active regions was generated, for which a spectral analysis was conducted. In conclusion, a similar power spectral density as observed for the active region areas could be reproduced as shown in Figure 10. The model reproduces periodicity peaks at 7 and 9 days within a 3σ overlap with the observed peaks and also indicates that such largepower peaks are unlikely to appear over longer periods of time. Indeed, apart from the main peak at 27 days, the signature of the other peaks in periodicities is very weak in the mean power spectral density out of the N = 10 3 simulations. Although the model is by no means a perfectly accurate description of how active regions emerge and evolve, it provides additional insights on the nature of power spectral densities of active regions. Firstly, the presence of short living active regions (≤7 days) appeared to be important to account for the overall power at low periods. Secondly, although the longliving active regions (≥14 days) are the main contributors to the higher harmonics of the solar rotation, it was observed that a too large number of them around the disk actually damps the power of these peaks, i.e. dilutes the effect. Conclusions Long-term periodicities in the solar irradiance are often observed with periods proportional to the solar rotational period of 27 days. 
The origin of these periods has been linked to the solar interior and possibly to gravity modes of the Sun by some authors (Emery et al., 2011; Efremov, Parfinenko, and Solov'ev, 2018; Pap, Tobiska, and Bouwer, 1990; Nikonova, Klocheck, and Palamarchuk, 1998). However, these periods are also direct higher harmonics of the rotation, but their origin, as such, is rarely discussed further. These periodicities at 27, 14, 9, and 7 days were observed in the various bands of SDO/AIA from the corona to the chromosphere and photosphere (19.3, 30.4, and 170 nm, respectively) when integrating full-disk images for a 7-year period from 2011 to 2018 (6 images per day). For each of these images, the SPoCA segmentation software was used to produce maps of the active regions and coronal holes. This provided a way to extract the irradiance emitted from these two classes of features and to obtain time series of the observed area covered by both classes during the 7-year period. A spectral analysis of these area time-series revealed similar peaks in periodicities, indicating that the higher harmonics seen in the full-disk irradiance are introduced by the change in area of the rotating features on the solar surface. This explanation is further supported by the relatively high correlation between the total flux of active regions and the full-disk total flux, in particular at chromospheric and coronal heights. This can be interpreted as the active-region variability being the main contributor to the full-disk variability, as also reported previously by Kumara et al. (2014). A simple toy-model was made to investigate the periodicities seen in the area time-series. Active regions were represented as a spot on a rotating circle and generated based on the total sunspot number from 2011 to 2018, with a lifetime randomly drawn from a probability distribution (taken as a log-logistic distribution with α = 1, β = 4/3, truncated for values below 2 days), which matched the distribution of active region sizes observed by other authors. The change in area for each active region, i.e. the projection function, was calculated as the cosine of the angle to the fixed observer taken to the power of 0.5 and set to zero when outside the field-of-view. Adjusting the projection function with an exponent was observed to be important, as the odd harmonics (e.g., 9 days) would not otherwise appear. In practice, this exponent can be understood as representing line-of-sight and limb brightening/darkening effects, affecting the segmentation method from which the area of the active regions is obtained. Using this simplified model for the emergence and evolution of active regions, similar peaks in the periodicities as seen in the observations could be reproduced.

Figure 10 Top panel: Example of area signal generated with the refined model (red, time series with the best matching power spectral density) compared to the observed active region areas (blue). The average area out of the N = 10³ time series is shown in green. Bottom panel: Normalized power spectral density of the observed active region areas from 2011 to 2018 (blue) compared to the best matching power spectral density (red) and the average power spectral density (green) obtained from the N = 10³ simulated time series. The shaded areas show the 3σ error interval (i.e. 99.97%). The vertical dashed lines indicate the periods of 7, 9, 14, and 27 days.
In summary, the periodicities on the order of days seen in the solar irradiance are higher harmonics of the solar rotation, which are induced by the change in area of the features on the solar disk; these harmonics are detected in the area of the active regions from 2011 to 2018 and are reproduced with a simple modeling of the active region lifetimes around the Sun. The harmonics arise due to the clipping of the signal as active regions disappear behind the Sun. Moreover, we did not detect any underlying periodicity driving the occurrence of a peak, indicating that for periods below one solar rotation the formation of active regions follows a random process. It therefore seems unlikely that these periods are related to processes in the interior of the Sun.
7,214.2
2021-11-01T00:00:00.000
[ "Physics", "Environmental Science" ]
An approach to develop a green building technology database for residential buildings Buildings consume approximately 39% of the total energy used in US, of which 53% is consumed by residential buildings. Besides, indoor air quality (IAQ) have significant impacts on occupant health since people spend on average around 90% of their time indoors. Nowadays, a great number of green building technologies (GBTs) have been developed and implemented in buildings for reducing energy consumption and improving IAQ. This paper proposes an approach to develop a green building technology database for residential buildings including their the technology’s feature and performance for energy saving and IAQ improvement under different building configuration and climate conditions. The GBTs are collected from case study buildings. For each study case, the GBTs are classified by the Virtual Design Studio (VDS) building assessment method. A local reference building is first defined for the region where the case building is constructed. Both forward-step evaluation of a proposed GBT to a reference building and backward-step tracking of the contribution of the technology to the case building are conducted. A scalability analysis is also conducted to understand the practical application of the performance parameters to other cases with different building design. EnergyPlus and CHAMPS-Multizone are used to analyse the energy and IAQ performance for each technology. The approach is verified by a case study of two single-family houses in US. Introduction Building energy consumption contributes approximately 39% of the total energy used in US, of which 53% consumed by residential buildings and 47% by commercial buildings [1]. Large energy saving potential exists for residential buildings. Besides, indoor air quality (IAQ) can have significant impacts on occupant health as people spend average around 90% of their time indoors [2]. Human exposure to indoor pollutants like particulate matters (PM), ozone and VOCs, has been proved to be associated with the increases in respiratory-related morbidity, cardiovascular morbidity and premature mortality as well as sick building syndrome and huge loss in productivity [3][4][5][6][7]. Nowadays, a great number of green building technologies (GBTs) like super insulation, enclosure air-tightening, energy recovery, green roof, ground source heat pump, solar panel, demand-based ventilation has been developed and implemented in buildings for reducing energy consumption and improving IAQ. Several performance assessment systems have been developed by different countries/institutions to support the design of high-performance buildings, including LEED, ASHRAE 189.1, BERRAM, WELL, DGNB and WBDG. While these assessment systems provide criteria and pathways to achieve a certain level of performance, they do not provide guidance on how each GBT would improve or contribute to the overall performance directly. A systematic method for evaluating for the performance potential of different GBTs for energy saving and IAQ improvement is needed. Such a method can then be applied to establish a green building technology database by studying adequate green building cases, which can work as a guideline for the green design of new buildings and green retrofit of existing buildings, and further used in modular-based green building design strategy. This paper proposes an approach to develop a GBT database for residential buildings based on the systematic analysis of green building case studies. 
For each study case, the GBTs are classified by the Virtual Design Studio (VDS) building assessment method [8,9]. Then, both forward step evaluation of a proposed GBT to a reference building and backward track of the contribution of the technology to the case building are conducted. EnergyPlus and CHAMPS-Multizone are used to analyse the performance for energy consumption and IAQ, respectively. The approach is verified by a case study of two single-family houses in US. Local reference building GBTs may have quite different performance in different locations, e.g. the shading system used in cold region may not be as efficient as the one used in hot region. The local climate condition and local design strategy may have significant impacts on building performance. Therefore, the performance of each GBT on the studied building will be evaluated by comparing to a local reference building. A local reference building is defined as a building with the same design as the studied building but with the GBTs being replaced by the typical technologies/features commonly used in the local practice per local building code requirement. The local reference building should be defined in accordance with the typical local design strategy as well as the local criteria in each region and climate zone. Different regions usually have different design strategies and criteria for each type of building. For single-family houses in US, the National Renewable Energy Laboratory (NREL) has developed the benchmark for each climate zone (Building America B10 Benchmark), which is generally consistent with the 2009 International Energy Conservation Code. ASHRAE 90.2 and 62.2 defined the design criteria like envelope construction and ventilation requirement for single-family house in each climate zone, which can be used to define the local reference house as discussed by Liu et al. [10]. Green building technology collection and classification GBTs in this study are collected from the existing green buildings. In this study, GBTs are defined as the building technologies and features which can improve the building performance compared to the local reference building. GBTs are collected and classified using the building performance analysis methodology defined in our previous work of the Virtual Design Studio (VDS) assessment method [8,9], which focuses on 10 design factors (Table 1), i.e. site and climate (SC), form and massing (FM), internal configuration (IC), external enclosure (EE), environment system (ENV), energy system (ENG), water system (WS), material and embodied energy (ME), lighting and daylighting (LD), as well as system interdependence (SI). Each technology can be related to more than one design factor. Green performance assessment approach A performance analysis approach is developed to assess the physical performance of the collected GBTs, i.e. energy consumption and indoor air quality (IAQ). As shown in Figure 1, the forward-step evaluation is applied to understand the potential of an individual technology for performance improvement over the reference house, and the backward-step tracking analysis is to assess the contribution of an individual technology to the performance of the case building in which all identified GBTs are used. In addition, a scalability analysis is conducted to get the practical application of the individual performance potential of each technology to other cases with different building design from the reference building. 
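One way to keep the collected technologies and their VDS classification in a common database format is sketched below; the record structure and the design-factor assignment of the example entry are illustrative assumptions, not part of the published method, although the high-insulation exterior wall itself is one of the technologies analyzed in the case study.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical record for a collected GBT; a technology may map to several VDS design factors.
@dataclass
class GreenBuildingTechnology:
    name: str
    design_factors: List[str] = field(default_factory=list)       # e.g. "EE", "ENG", "SI"
    performance_potential: Dict[str, float] = field(default_factory=dict)      # P_p,i per metric
    performance_contribution: Dict[str, float] = field(default_factory=dict)   # per metric

high_insulation_wall = GreenBuildingTechnology(
    name="high-insulation exterior wall",
    design_factors=["EE", "ENG"],   # illustrative assignment (external enclosure, energy system)
)
print(high_insulation_wall)
```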
A forward-step evaluation of an individual GBT relative to a reference building would be conducted first to obtain the technology's performance improvement potential (Pp,i) over the reference performance effect. In this study, Pp,i indicates the potential of performance improvement using a specific technology i in the reference building, which can be calculated as Pp,i = Ei / Eref = (Eref - Eref+i) / Eref, where Eref is the performance effect (energy consumption or IAQ performance) of the local reference building, Eref+i is the effect of a local reference building with the studied technology i, Ei (= Eref - Eref+i) indicates the individual effect potential of the studied technology i and Pp,i is the performance improvement potential of the individual technology over the reference building. If Pp,i > 0, it means that the studied technology i can improve the building performance, while Pp,i < 0 means that the technology would even deteriorate the performance. When combining different technologies together in a building, the performance contribution of each technology may differ from the performance improvement potential over the reference building. Therefore, a backward-step tracking analysis is needed to evaluate the contribution of the individual technology to a target case building in which all the technologies are applied. This analysis would help understand the performance contribution of the studied technology when combined with other technologies. The performance contribution (αi) of the studied technology can be calculated from Pc,i = Ei' / Eref+Σ-i = (Eref+Σ-i - Eref+Σ) / Eref+Σ-i, where Pc,i is the performance contribution of technology i over the target building with combined technologies, Eref+Σ is the effect of the target building with all the technologies applied, Eref+Σ-i is the effect of the target building with all the technologies except the studied technology i and Ei' is the individual effect of the technology i. The ratio between Pc,i and Pp,i represents how much the potential of technology i is amplified (if larger than 1) or discounted (if less than 1) due to the synergistic effects with the other technologies applied concurrently: αi = Pc,i / Pp,i. The individual performance potential of the studied technology is the one obtained from the analysis relative to the local reference building, as described above. However, when the studied technology is implemented in a target building with different area, geometry and configuration from the reference building, the performance potential may vary as well. A scalability evaluation is therefore conducted for the studied technology. The scaling factor (βi) can be derived as βi = Pi' / Pi, where βi is the scaling factor, Pi' is the performance potential of the studied technology implemented in the target building and Pi is the performance potential of the studied technology in the reference building. Each GBT should have its own scaling factor. The overall scaling factor of a technology can be calculated by multiplying all the scaling factors of the technology. For some technologies, the scaling factor is an internal integrated parameter of the technology. The combined performance potential of multiple technologies to a target building can then be obtained by combining the individual potentials with the corresponding contribution and scaling factors (equation (6)). It should be mentioned that the performance contribution αi and the scaling factor βi in this study could be either constant or variable. Performance simulation tools The performance of the studied building technology is obtained by simulation. EnergyPlus is used to analyze the energy performance and CHAMPS-Multizone is used to analyze the IAQ performance.
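A short numerical sketch of these metrics is given below; it assumes the fractional forms written above (which are a reconstruction from the surrounding definitions, since the original equations were not recoverable) and uses hypothetical annual heating energies rather than the case-study results.

```python
# Sketch of the forward/backward evaluation metrics, assuming
#   P_p,i = (E_ref - E_ref+i) / E_ref,   P_c,i = (E_ref+S-i - E_ref+S) / E_ref+S-i,
#   alpha_i = P_c,i / P_p,i.
def performance_potential(E_ref, E_ref_plus_i):
    """Forward-step potential of technology i relative to the reference building."""
    return (E_ref - E_ref_plus_i) / E_ref

def performance_contribution(E_all, E_all_minus_i):
    """Backward-step contribution of technology i in the building with all GBTs applied."""
    return (E_all_minus_i - E_all) / E_all_minus_i

# Example with hypothetical annual heating energies [kWh]:
P_p = performance_potential(E_ref=12000.0, E_ref_plus_i=10500.0)     # 0.125
P_c = performance_contribution(E_all=9000.0, E_all_minus_i=10100.0)  # ~0.109
alpha = P_c / P_p                                                     # <1: slightly discounted
print(f"P_p,i = {P_p:.3f}, P_c,i = {P_c:.3f}, alpha_i = {alpha:.2f}")
```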
The energy and IAQ performance parameters such as heating energy consumption, cooling energy consumption, lighting consumption, water heating consumption, ventilation consumption, water use, and indoor pollutant level, would be analyzed using the simulation tools. EnergyPlus is a widely used simulation tool to model both energy consumption and water use in buildings, which is released by DOE. CHAMPS-Multizone is a multizone indoor pollutant transport simulation tool developed by BEESL, Syracuse University. Case study A case study is implemented to verify the proposed approach. Two single-family houses in New York City (climate zone: 4A) and Miami (climate zone: 2A) are analyzed. The local reference buildings are defined following the protocols presented by NREL [11], which is a two-story house with pitched roof and unconditioned attic. Three technologies are analyzed on the reference buildings in these two locations using the presented approach, i.e. high-insulation exterior wall, controlled shading system and air-tight wall assembly. The total building area is 334.61 m 2 with 223.07 m 2 conditioned area. Results and discussion The annual heating and cooling energy consumptions of the study cases are simulated by EnergyPlus. The annual heating and cooling consumption of the local reference buildings in two cities are shown in Table 2. Then the performance potential Pi and performance contribution αi of each technology is obtained ( Table 3). The scalability evaluation is not conducted in this paper. The performance potential shows that airtight wall contribute more to the performance improvement compared to the other two technologies, particularly for heating consumption. The performance contribution to combined technologies indicates that these three technologies are basically independent because the contribution coefficients for each technology are very close to 1. But the air-tight wall may not work well when combined with the other technologies for cooling condition in New York City. Another reason for this situation may be the elevated calculation error caused by the very small value of the performance potential for air-tight wall in cooling condition (0.0004). Using the obtained performance potential and contribution coefficients, the combined performance potential estimated by equation (4) is consistent with the performance potential simulated by EnergyPlus since the estimation error is basically within 5%. Therefore, in this paper, the GBT performance assessment approach can perform well to present the performance potential and contribution of different technologies. A GBT database built by this approach would be a good tool to understand the performance potential of different technologies.
2,765
2019-10-23T00:00:00.000
[ "Environmental Science", "Engineering" ]
General considerations for the miniaturization of radiating antennae The small size of plasmonic nanostructures compared to the wavelength of light is one of their most distinct and defining characteristics. It results in the strong compression of an incident wave to intense hot spots which have been used most remarkably for molecular sensing and nanoscale lasers. But another important direction for research is to use this ability to design miniaturized interconnects and modulators between fast, loss-less photonic components. In this situation one is looking for the smallest optical nanostructure possible while trying to mitigate losses. Here we show that despite their high absorption, conductors are still the best materials to reach the sub-wavelength regime for optical antennae when compared to polar crystals and high-index dielectrics, two classes of material which have shown a lot of potential recently for nanophotonic applications. It is demonstrated through both Mie theory and numerical calculations that the smallest possible, efficient, radiating antenna has a length L > λres/20 in all cases (this length is typically L = λres/2 in microwave engineering), including the redshifting mechanism induced by a background or substrate refractive index, the effect of material loss and that of shape. In addition, we show that although the assembly of individual particles can further increase the miniaturization factor, it strongly increases the size-mismatch in detriment of the overall efficiency, thus making this method unfit for radiating antennae. By identifying the relevant dimensionless properties for conductors, polar materials and high index dielectrics, we present an unified understanding of the behaviour of sub-wavelength nanostructures which are at the heart of current nanophotonic research and cast the upper achievable limits for optical antennae crucial to the development of real-life implementation. © 2015 Optical Society of America OCIS codes: (350.4238) Nanophotonics and photonic crystals, (160.4670) Optical materials, (290.4020) Mie theory. References and links 1. E. Ozbay, “Plasmonics: Merging photonics and electronics at nanoscale dimensions,” Science 311, 189–193 (2006). 2. W. L. Barnes, A. Dereux, and T. W. Ebbesen, “Surface plasmon subwavelength optics,” Nature 424, 824–830 (2003). 3. A. Alù and N. Engheta, “Wireless at the nanoscale: Optical interconnects using matched nanoantennas,” Phys. Rev. Lett. 104, 213902 (2010). 4. P. Biagioni, J.-S. Huang, and B. Hecht, “Nanoantennas for visible and infrared radiation,” Reports on Progress in Physics 75, 024402 (2012). 5. M. Agio, “Optical antennas as nanoscale resonators,” Nanoscale 4, 692–706 (2012). 6. L. Novotny and N. van Hulst, “Antennas for light,” Nature Photonics 5, 83–90 (2011). 7. V. Giannini, A. I. Fernández-Domı́nguez, S. C. Heck, and S. A. Maier, “Plasmonic nanoantennas: Fundamentals and their use in controlling the radiative properties of nanoemitters,” Chemical Reviews 111, 3888–3912 (2011). 8. F. Neubrech, A. Pucci, T. W. Cornelius, S. Karim, A. Garcia-Etxarri, and J. Aizpurua, “Resonant plasmonic and vibrational coupling in a tailored nanoantenna for infrared detection,” Physical Review Letters 101 (2008). 9. M. D. Sonntag, J. M. Klingsporn, A. B. Zrimsek, B. Sharma, L. K. Ruvuna, and R. P. Van Duyne, “Molecular plasmonics for nanoscale spectroscopy,” Chem. Soc. Rev. 43, 1230–1247 (2014). 10. R. Bardhan, S. Lal, A. Joshi, and N. J. 
Halas, “Theranostic Nanoshells: From Probe Design to Imaging and Treatment of Cancer,” Accounts of Chemical Research 44, 936–946 (2011). 11. J. M. Luther, P. K. Jain, T. Ewers, and A. P. Alivisatos, “Localized surface plasmon resonances arising from free carriers in doped quantum dots,” Nature Materials 10, 361–366 (2011). 12. G. Georgiou, H. K. Tyagi, P. Mulder, G. J. Bauhuis, J. J. Schermer, and J. G. Rivas, “Photo-generated THz antennas,” Scientific Reports 4 (2014). 13. F. H. L. Koppens, D. E. Chang, and F. J. Garcı́a de Abajo, “Graphene plasmonics: A platform for strong lightmatter interactions,” Nano Letters 11, 3370–3377 (2011). 14. J. Chen, M. Badioli, P. Alonso-Gonzalez, S. Thongrattanasiri, F. Huth, J. Osmond, M. Spasenovic, A. Centeno, A. Pesquera, P. Godignon, A. Zurutuza Elorza, N. Camara, F. Javier Garcı́a de Abajo, R. Hillenbrand, and F. H. L. Koppens, “Optical nano-imaging of gate-tunable graphene plasmons,” Nature 487, 77–81 (2012). 15. Z. Fei, A. S. Rodin, G. O. Andreev, W. Bao, A. S. McLeod, M. Wagner, L. M. Zhang, Z. Zhao, M. Thiemens, G. Dominguez, M. M. Fogler, A. H. C. Neto, C. N. Lau, F. Keilmann, and D. N. Basov, “Gate-tuning of graphene plasmons revealed by infrared nano-imaging,” Nature 487, 82–85 (2012). 16. J. A. Schuller, T. Taubner, and M. L. Brongersma, “Optical antenna thermal emitters,” Nature Photonics 3, 658– 661 (2009). 17. Y. Chen, Y. Francescato, J. D. Caldwell, V. Giannini, T. W. W. Ma, O. J. Glembocki, F. J. Bezares, T. Taubner, R. Kasica, M. Hong, and S. A. Maier, “Spectral tuning of localized surface phonon polariton resonators for low-loss mid-ir applications,” ACS Photonics 1, 718–724 (2014). 18. J. A. Schuller and M. L. Brongersma, “General properties of dielectric optical antennas,” Opt. Express 17, 24084– 24095 (2009). 19. A. Garcı́a-Etxarri, R. Gómez-Medina, L. S. Froufe-Pérez, C. López, L. Chantada, F. Scheffold, J. Aizpurua, M. Nieto-Vesperinas, and J. J. Sáenz, “Strong magnetic response of submicron silicon particles in the infrared,” Opt. Express 19, 4815–4826 (2011). 20. A. E. Krasnok, A. E. Miroshnichenko, P. A. Belov, and Y. S. Kivshar, “All-dielectric optical nanoantennas,” Optics Express 20, 20599–20604 (2012). 21. Y. H. Fu, A. I. Kuznetsov, A. E. Miroshnichenko, Y. F. Yu, and B. Luk’yanchuk, “Directional visible light scattering by silicon nanoparticles,” Nature Communications 4 (2013). 22. L. Novotny, “Effective wavelength scaling for optical antennas,” Phys. Rev. Lett. 98, 266802 (2007). 23. L. Cao, P. Fan, E. S. Barnard, A. M. Brown, and M. L. Brongersma, “Tuning the color of silicon nanostructures,” Nano Letters 10, 2649–2654 (2010). PMID: 20507083. 24. J. M. Geffrin, B. Garcia-Camara, R. Gomez-Medina, P. Albella, L. S. Froufe-Perez, C. Eyraud, A. Litman, R. Vaillon, F. Gonzalez, M. Nieto-Vesperinas, J. J. Saenz, and F. Moreno, “Magnetic and electric coherence in forwardand back-scattered electromagnetic waves by a single dielectric subwavelength sphere,” Nature Communications 3 (2012). 25. P. Albella, M. A. Poyli, M. K. Schmidt, S. A. Maier, F. Moreno, J. J. Senz, and J. Aizpurua, “Low-loss electric and magnetic field-enhanced spectroscopy with subwavelength silicon dimers,” The Journal of Physical Chemistry C 117, 13573–13584 (2013). 26. T. G. Habteyes, I. Staude, K. E. Chong, J. Dominguez, M. Decker, A. Miroshnichenko, Y. Kivshar, and I. Brener, “Near-field mapping of optical modes on all-dielectric silicon nanodisks,” ACS Photonics 1, 794–798 (2014). 27. A. E. Krasnok, C. R. Simovski, P. A. Belov, and Y. S. 
Kivshar, “Superdirective dielectric nanoantennas,” Nanoscale 6, 7354–7361 (2014). 28. Y. Yang, W. Wang, P. Moitra, I. I. Kravchenko, D. P. Briggs, and J. Valentine, “Dielectric meta-reflectarray for broadband linear polarization conversion and optical vortex generation,” Nano Letters 14, 1394–1399 (2014). PMID: 24547692. 29. U. Zywietz, A. B. Evlyukhin, C. Reinhardt, and B. N. Chichkov, “Laser printing of silicon nanoparticles with resonant optical electric and magnetic responses,” Nature Communications 5 (2014). 30. P. Albella, R. Alcaraz de la Osa, F. Moreno, and S. A. Maier, “Electric and magnetic field enhancement with ultralow heat radiation dielectric nanoantennas: Considerations for surface-enhanced spectroscopies,” ACS Photonics 1, 524–529 (2014). 31. I. Staude, A. E. Miroshnichenko, M. Decker, N. T. Fofang, S. Liu, E. Gonzales, J. Dominguez, T. S. Luk, D. N. Neshev, I. Brener, and Y. Kivshar, “Tailoring directional scattering through magnetic and electric resonances in subwavelength silicon nanodisks,” ACS Nano 7, 7824–7832 (2013). 32. R. Hillenbrand, T. Taubner, and F. Keilmann, “Phonon-enhanced light-matter interaction at the nanometre scale,” Nature(London) 418, 159–162 (2002). 33. C. Kittel, Introduction to solid state physics (John Wiley & Sons, Inc., New York, 1996). 34. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley-VCH Verlag GmbH, 1998). 35. A. von Hippel, R. G. Breckenridge, F. G. Chesley, and L. Tisza, “High dielectric constant ceramics,” Industrial and Engineering Chemistry 38, 1097–1109 (1946). 36. T. Hamano, D. J. Towner, and B. W. Wessels, “Relative dielectric constant of epitaxial batio3 thin films in the ghz frequency range,” Applied Physics Letters 83 (2003). 37. A. Demetriadou and O. Hess, “Analytic theory of optical nanoplasmonic metamaterials,” Phys. Rev. B 87, 161101 (2013). 38. J. van de Groep and A. Polman, “Designing dielectric resonators on substrates: Combining magnetic and electric resonances,” Opt. Express 21, 26285–26302 (2013). 39. J. Dorfmuller, R. Vogelgesang, W. Khunsin, C. Rockstuhl, C. Etrich, and K. Kern, “Plasmonic nanowire antennas: Experiment, simulation, and theory,” Nano Letters 10, 3596–3603 (2010). 40. N. Verellen, F. Lpez-Tejeira, R. Paniagua-Domnguez, D. Vercruysse, D. Denkova, L. Lagae, P. Van Dorpe, V. V. Moshchalkov, and J. A. Snchez-Gil, “Mode parity-controlled fanoand lorentz-like line shapes arising in plasmonic nanorods,” Nano Letters 14, 2322–2329 (2014). 41. Z. Li, S. Butun, and K. Aydin, “Touching gold nanoparticle chain based plasmonic antenna arrays and optical metamaterials,” ACS Photonics 1, 228–234 (2014). 42. V. Giannini, A. Berrier, S. M. Maier, J. Antonio Sanchez-Gil, and J. G. Rivas, “Scattering efficiency and near field enhancement of active semiconductor plasmonic antennas at terahertz frequencies,” Optics Express 18, 2797–2807 (2010). Introduction Plasmonics is a relatively new and striving field of research of which one important goal is that of merging photonic technology to electronic components [1]. This is traditionally achieved by using the field confinement capability of nanostructured metal where the coupling between the charge density fluctuations of the conduction electrons and electromagnetic waves results in bound modes called surface plasmon polaritons [2]. These extremely appealing excitations combine the properties of both photons and electric currents and are therefore foreseen as the most promising entities to achieve the aforementioned merging [3]. 
This has led to the concept of "optical" antennae which represents a scaling of the radio-frequency antennae down to submicrometer sizes in order to react to the wavelength of visible light [4,5]. There is a strong deviation from microwave antenna theory as the driving frequency is increased caused by the so-called skin effect, which is an increased complex character of the conductivity of a metal [6]. This allows for an additional miniaturization which cannot take place in a near-perfect conductor and is the source of some of the most striking plasmonic applications, such as emitters engineering [7], surface-enhanced spectroscopy and molecular sensing [8,9] or photothermal therapy [10], where optical antennae serve as an interface between the wavelength of light and the size of molecules. However, as the size-mismatch increases between the wavelength and the antenna size, the efficiency of the latter decreases in proportion; it results in smaller components but at the cost of additional dissipation. In the present article, we will explore the trade-off regime across a wide range of frequencies, looking for the smallest possible antenna with a workable efficiency which we define by dominating radiating properties compared to the absorption. In recent years, researchers have identified alternative avenues to metals as a way to achieve light confinement. For instance, similarly to noble metals, semiconductors exhibit plasmonic properties which can be tuned across a wide spectral range, from the near-infrared down to the terahertz regime, simply by varying their carrier concentration [11,12]. This is also the case for the promising 2D material graphene, where the atomic thickness can produce an unprecedented compression of the field [13][14][15]. But polaritons, these surface modes binding light to interfaces, can arise in polar crystals alike thanks to the stimulation of charged transverse optical phonons [16,17]. Finally, the use of high refractive index dielectrics can give rise to gigantic sub-wavelength hot spots [18][19][20][21]. Because the compression of light in these systems originates from different physical mechanisms, we can expect different figures of merit for the miniaturization of antennae and therefore we will explore those in detail in the following. We will start by using Mie theory, investigating the first order resonance supported by small spheres made out of these materials. This will allow to analyze in detail the effect of the material properties as well as the achievable radiation efficiency and miniaturization of these systems. Next, we will consider the case of elongated particle through finite-difference time domain (FDTD) calculations and making use of the effective wavelength theory developed by Novotny [22]. Last, we will study the influence of the background refractive index and the redshift caused by the assembling of particles. Materials under investigation We choose to investigate here the performance of three categories of materials in which the compression of free-space light is achieved through completely different physical effects in order to cover as vast a ground as possible. First of all, we consider the modes supported by high-index dielectrics (HID) which have experienced recently a surge in popularity within the nanophotonic community [23][24][25][26][27][28][29]. This stems in large part because of their extremely small losses in the visible range making them attractive alternative to the predominant noble metals [30]. 
However, as we will see, dissipation in dielectrics can become relatively high as the wavelength of radiation increases, although it is typically accompanied by a large rise in the dielectric constant as well. The lowest order mode of a spherical dielectric particle is named the magnetic dipole (MD) and is found at the wavelength λ_res = nD, where n is the index of the dielectric and D its diameter. This means that HID resonators can never beat the diffraction limit; however, extremely high indices can allow deeply sub-wavelength elements. Note that for low aspect ratio particles the electric dipole can even be located at higher frequency than the corresponding magnetic dipole [31]. More generally, the following relation describes the complex index ñ of these materials,
ñ = √(ε (1 + i tan δ)), (1)
where the dielectric constant ε and the loss tangent tan δ are used in a similar fashion as for high-k ceramics in microelectronics. By far the most widely used materials for nanophotonics, conductors have risen as a powerful ingredient for sub-diffraction applications [2]. This originates from the stimulation of surface plasmons below the plasma frequency ω_p² = N q_e²/(ε_0 m*), with N the charge carrier concentration, q_e and m* the charge of the electron and the effective mass of the charge carrier, and ε_0 the permittivity of free space, which causes a formidable confinement of the incident light. The behaviour of conductors is well reproduced by a Drude model in the absence of interband transitions,
ñ² = ε(ω) = ε_∞ − ω_p²/(ω² + iΓω), (2)
with ε_∞ the permittivity due to the ion background and Γ = 2π/τ the charge carrier scattering rate (τ being the scattering time). While the performance of conductors decreases rapidly away from ℜ{ε} = −2, i.e. above 600 nm for noble metals, the possibility to tune the carrier concentration N in semiconductors allows them to span a large frequency range, from the near-infrared down to the terahertz regime. Last, another class of emerging materials is that of polar crystals, in which photons can couple to surface phonons rather than surface plasmons thanks to the charge carried by their transverse lattice oscillations [32]. These are foreseen as promising components for plasmonic-like capabilities within the terahertz to mid-infrared ranges, in which the optical phonons can be stimulated. They exhibit reduced absorption compared to conductors because of the absence of Joule heating, leading to scattering times two orders of magnitude longer; it is therefore important to realize that those materials exceed the performance of noble metals only when their operating frequency is more than a hundredth of that of the visible range. A Drude-Lorentz model is used to describe their dielectric function,
ñ² = ε(ω) = ε_∞ [1 + (ω_LO² − ω_TO²)/(ω_TO² − ω² − iΓω)], (3)
with ω_LO and ω_TO the longitudinal and transverse optical phonon frequencies. We see that the permittivity is negative, i.e. with a plasmonic-like behaviour, only in between these two phonon modes, which constitutes the most severe limitation of polar crystals. Indeed, that frequency range is fixed for each material and rather narrow, as attested by the Lyddane-Sachs-Teller relation, ε(0)/ε_∞ = ω_LO²/ω_TO². Although this ratio can be as high as 9 for very ionic materials such as fluorides, bromides or chlorides, it is well below 2 for most practical cases like silicon carbide or boron nitride. Given the strong absorption band at the TO frequency, we can already see that any redshifting mechanism will be detrimental to this category of material, and it is therefore most useful for deeply sub-wavelength resonators.
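To make the comparison between these three material classes concrete, the short Python sketch below evaluates the dielectric-function models of Eqs. (1)-(3) with the parameter values quoted later in the text (λ_p = 0.25 µm and Γ = ω_p/500 for the conductor, 10/12 THz TO/LO phonons for the polar crystal); the phonon damping rate used in the example is an arbitrary illustrative value, as none is specified in the text.

import numpy as np

# Sketch (not from the paper): the three permittivity models of Eqs. (1)-(3).

def eps_hid(eps_r, tan_delta):
    """High-index dielectric, Eq. (1): eps*(1 + i*tan(delta))."""
    return eps_r * (1.0 + 1j * tan_delta)

def eps_drude(omega, omega_p, gamma, eps_inf=1.0):
    """Drude conductor, Eq. (2): eps_inf - wp^2/(w^2 + i*Gamma*w)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

def eps_phonon(omega, omega_to, omega_lo, gamma, eps_inf=1.0):
    """Drude-Lorentz polar crystal, Eq. (3)."""
    return eps_inf * (1.0 + (omega_lo**2 - omega_to**2) /
                      (omega_to**2 - omega**2 - 1j * gamma * omega))

if __name__ == "__main__":
    omega_p = 2 * np.pi * 1200e12                     # conductor of the text, lambda_p = 0.25 um
    w_600nm = 2 * np.pi * 3e8 / 600e-9
    print("Drude eps at 600 nm:", eps_drude(w_600nm, omega_p, omega_p / 500))

    w_to, w_lo = 2 * np.pi * 10e12, 2 * np.pi * 12e12  # polar crystal of the text
    print("Phonon eps at 11 THz:", eps_phonon(2*np.pi*11e12, w_to, w_lo, 2*np.pi*0.1e12))
    print("LST ratio eps(0)/eps_inf:", (w_lo / w_to) ** 2)

The printed values illustrate the statements above: the Drude permittivity is already well below ℜ{ε} = −2 at 600 nm, the phonon permittivity is negative only inside the narrow TO-LO window, and the Lyddane-Sachs-Teller ratio of this example stays below 2.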
Results and discussion The total amount of light interacting with a particle is termed the extinction cross-section and is given by C ext = C abs + C sca , with C abs and C sca the absorption and scattering contributions. One also refers to the optical efficiencies Q i = C i /A which quantify the relative capture area of a particle relative to its physical cross-section A [7]. While it is well-known that in very small size "plasmonic" nanoparticles, most of the captured light is absorbed, we will focus here on antenna applications. This implies that the radiative contribution C sca should dominate the optical process, i.e. that the radiative efficiency η = C sca /C ext ≥ 50%, which puts a limit on the miniaturization. We note however that small particles remain crucial for schemes such as molecular sensing or photothermal therapy. Spherical particles We consider first the case of spherical particles as it allows us to make use of Mie theory [34]. As we will see, this analytical framework provides us with a good general understanding of the effect of losses, size and the behaviour of the dielectric function. We will turn to the case of elongated particles, namely rods, in the subsequent subsection. Fig. 1a reports the lowest energy resonance wavelength of dielectric spheres with radius R = 10 µm and permittivity given by equation 1 with the real and imaginary part varying along the horizontal and vertical axis respectively. More precisely, the colormap represents the dimensionless quantity λ res /2R which defines the miniaturization factor. These results are therefore scalable across the whole electromagnetic spectrum as long as the value of the complex permittivity is similar. Furthermore, the benchmark value for this factor is λ res /L = 2 corresponding to the resonance condition of the conventional half-wave dipole antenna used in microwave engineering. We note that the most common dielectrics and semiconductors have a dielectric constant hardly in excess of 10 giving rise to confinement at resonance smaller than 10. Similarly, although λ res /D > 50 is theoretically possible for ε > 3000 no such natural material is available to the best of our knowledge. However, very high indices are accessible in the microwave regime close to polar resonances in ferroelectrics or the Reststrahlen band in polar crystals [35,36]. As stated earlier, we are interested in those resonances for which η ≥ 50%, a condition which modifies considerably the picture as shown in Fig. 1b. Note that this corresponds to a ratio Q scat /Q abs ∼ 1. One can see that the best miniaturization is achieved at low losses, for the magnetic dipole mode, and is maximum at λ res /D ∼ 16. At this maximum value, the extinction efficiency Q ext can be as high as 80. A more efficient system with η ≥ 90% (Q scat /Q abs ∼ 10) is also shown in Fig. 1c which reduces even further the achievable miniaturization, to about λ res /D ∼ 7 (Q ext ∼ 50). For lower tolerances on loss, such as η ≥ 99%, there is close to no acceptable tan δ for the parameter space probed and one is limited by the quality of materials. This condition brings also the miniaturization closer to the microwave limit of λ res /D ∼ 2 so that there is little gain compared to using the sophisticated designs which have been developed for perfect electric conductor (PEC) at low frequencies. 
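The efficiencies discussed above can be reproduced, at least near the lowest-order resonance, with a dipole-only (n = 1) truncation of the Mie series of Bohren and Huffman [34]. The sketch below is such a minimal implementation; the sphere permittivity, size and wavelength range in the example are illustrative choices rather than values read from Fig. 1, and higher multipoles would be needed away from the first resonance.

import numpy as np

# Dipole-only Mie sketch: a_1 and b_1 coefficients with closed-form Riccati-Bessel
# functions, valid for complex refractive indices; adequate near the magnetic dipole mode.

def _psi1(z):  return np.sin(z)/z - np.cos(z)
def _dpsi1(z): return np.cos(z)/z - np.sin(z)/z**2 + np.sin(z)
def _chi1(z):  return np.cos(z)/z + np.sin(z)
def _dchi1(z): return -np.sin(z)/z - np.cos(z)/z**2 + np.cos(z)

def dipole_efficiencies(eps, D, lam, n_bg=1.0):
    """Return (Q_ext, Q_sca, eta) from the n = 1 Mie coefficients of a sphere."""
    x = np.pi * n_bg * D / lam                 # size parameter
    m = np.sqrt(eps + 0j) / n_bg               # relative complex index
    mx = m * x
    xi1, dxi1 = _psi1(x) - 1j*_chi1(x), _dpsi1(x) - 1j*_dchi1(x)
    a1 = ((m*_psi1(mx)*_dpsi1(x) - _psi1(x)*_dpsi1(mx)) /
          (m*_psi1(mx)*dxi1      - xi1*_dpsi1(mx)))
    b1 = ((_psi1(mx)*_dpsi1(x) - m*_psi1(x)*_dpsi1(mx)) /
          (_psi1(mx)*dxi1      - m*xi1*_dpsi1(mx)))
    q_sca = (2.0/x**2) * 3.0 * (abs(a1)**2 + abs(b1)**2)
    q_ext = (2.0/x**2) * 3.0 * np.real(a1 + b1)
    return q_ext, q_sca, q_sca / q_ext

# Example: locate the MD resonance of a sphere with eps = 100*(1 + 0.01i), D = 20 um
D = 20e-6
lams = np.linspace(150e-6, 300e-6, 600)
q_ext = np.array([dipole_efficiencies(100*(1 + 0.01j), D, lam)[0] for lam in lams])
lam_res = lams[q_ext.argmax()]
qe, qs, eta = dipole_efficiencies(100*(1 + 0.01j), D, lam_res)
print(f"lambda_res/D = {lam_res/D:.1f}, Q_ext = {qe:.1f}, eta = {eta:.2f}")

Scanning the loss tangent with this function reproduces the qualitative trend of Fig. 1: the achievable miniaturization for a given η threshold shrinks rapidly as tan δ grows, and once η ≥ 99 % is demanded there is essentially no acceptable loss tangent left.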
This conclusion also stands for conductors and polar materials but there the range of acceptable parameters is much more extended or non-existent making them quite robust or unsuitable, respectively. Next we study the potential of conductors according to equation 2 with ε ∞ = 1 and ω p = 2π · 1200 THz (λ p = 0.25 µm) varying the size of the particle, see Fig. 2a. One can see that the maximum confinement is much more limited than for HID, nonetheless the global result when imposing η ≥ 50% and 90%, see Fig. 2b and c, is very comparable at about λ res /D ∼ 16 for D = 0.12λ p with Q ext ∼ 80 and λ res /D ∼ 7 for D = 0.26λ p with Q ext ∼ 40 respectively. When ε ∞ is increased to 10, the resonances are markedly redshifted, although this improves the miniaturization factor for the first resonance, the increased size mismatch with the wavelength of light translates into a reduced Q sca , see Fig. 7. This leads to λ res /D, Q ext and λ p /D being halved compared to the free-electron case (ε ∞ = 1) for η ≥ 50% and 90%. Last, let us look at a polar crystal with ε ∞ = 1, ω T O = 2π · 10 THz and ω LO = 2π · 12 THz (λ LO = 25 µm), see equation 3. Thanks to the strong dispersion towards the TO phonon, light compression is an order of magnitude stronger than for conductors with a 10 times larger particle, see Fig. 3a. The trade-off for a dominating scattering contribution is comparable to that of a conductor though, at about D = 0.12λ LO and 0.27λ LO for η ≥ 50% and 90%. Unfortunately, the narrow spectral region for which ℜ{ε} < 0 together with the absorption line of the TO mode reduce dramatically λ res /D to 8 and 3.5 and Q ext to 25 and 8 respectively. Because of the restricted permittivity range available in a polar crystals, the effect of ε ∞ or the backgroud index is rather limited. High aspect ratio particles We now turn to elongated rods as this is, after an increase in size, the most typical way of redshifting further the optical response of scatterers [7]. In the following calculations, the polarization is always directed along the largest dimension of the particles, which we refer to as length. One can clearly see that as the aspect ratio of the particle is increased from the sphere (black curve) to thin bars with cross-sections 800 × 800 (blue), 400 × 400 (red) and 200 × 200 µm 2 (green), the lowest energy resonance narrows down and strengthens, however, its spectral position blueshifts, rather than redshifts, if anything. This is because the resonance wavelength of a dielectric cavity is mostly defined by the index of the material and its size. More, we note that the relative radiation efficiency Q sca /Q abs does not improve with a change of the particle shape for that lowest energy mode, see Fig. 4b. The picture can become somewhat complicated if one studies materials with a negative permittivity such as conductors and polar dielectrics, because they support localized modes which are highly dependent on the shape of the particle [7]. Nonetheless, if we consider rods with a circular cross-section, it is possible to make use of Novotny's approach [22] which was recently extended to absorbing materials by Demetriadou and Hess [37] where we use the z 4 solution which is the only one to bear physical sense. 
This effective wavelength theory consists of a scaling of the microwave half-wave antenna formula λ_0 = 2L: one determines the velocity factor k_0/γ which satisfies λ_eff = 2L, where λ_eff = λ_0 (k_0/γ) − 4R, with k_0 = 2π/λ_0 the wavevector of light in free space and R the radius of the rod. In that framework, one is looking for the highest possible λ_0/λ_eff ratio, which translates into a strong miniaturization factor. Fig. 5a reports the expected confinement for conducting rods with the same permittivity considered before, for different radii R = 5-200 nm. Because of the skin depth effect, thinner wires allow a stronger compression of the light, as is well known from plasmonics [22]. More specifically, a conducting rod with R = 5 nm leads to a miniaturization λ_0/L = 8 (i.e. λ_0/λ_eff = 4). One can therefore conclude that rods cannot achieve as high a scaling as that offered by the small spheres presented in Fig. 2. Note that the asymptotes at small wavelengths arise from the breakdown of the assumption of wires with R << L. Fig. 5. a) Effective wavelength scaling λ_0/λ_eff for a conducting nanorod with a permittivity given by equation 2 with ε_∞ = 1, ω_p = 2π · 1200 THz (λ_p = 0.25 µm) and Γ = ω_p/500, as a function of its radius R in a background with index a) n_bg = 1 and b) n_bg = 2. Note the asymptotes towards short wavelengths, which originate from the breakdown of the assumption of a nanorod, i.e. D << L. Effect of a background index To conclude this overview of the miniaturization capabilities of radiating antennae, we investigate the effect of the background index and the redshift accompanying the coupling between multiple elements for the case of conductors. Dielectric resonances depend on the index contrast; as such, an increase in the background index or the presence of a substrate will only lead to a reduced wavelength compression [38]. Furthermore, as we showed earlier, the resonance wavelength in these systems is given by the total length; therefore the resonance of assemblies can only be at equal or higher energy than that of the spheres discussed before. On the other hand, polar materials are hampered by the strong TO absorption line and hence redshifting mechanisms are either detrimental or of little effect. Back to conductors, we see that an increase in the background index redshifts most strongly the resonance of the smallest particles, as expected (see Fig. 8a), resulting in up to a factor ∼ n_bg increase for those. As the particle size increases, the effect weakens and almost disappears for the largest spheres. The presence of a substrate rather than a homogeneous background would have an even more limited effect. When one imposes the conditions η ≥ 50% and 90%, the miniaturization is not improved by the background index, nor is it much undermined (see Fig. 8b and c). The optimum size is simply shifted to D = 0.25λ_p and 0.6λ_p while Q_ext ∼ 60 and 25, respectively. This stems from the exponentially decaying dependence on the background index as the size of the particle increases. In the case of the conducting rods (see Fig. 5b), the shift is largest for the thinnest particles, again as expected, with a shift also comparable to ∼ n_bg for R = 5 nm. As a side note, let us highlight the interesting fact that a conductor with increased loss (larger Γ) exhibits a stronger plasmonic behaviour, leading to an improved wavelength compression (see Fig. 9).
However, one should pay attention to the radiation efficiency η in that case, as it is not considered in our simple effective wavelength analysis. This can be fully taken into account by the model developed by Dorfmüller et al. [39,40], to which we direct the interested reader, but it is beyond the scope of the present article. On the assembly of particles It has been shown recently by Li and co-workers [41] that an alternative route to the miniaturization of antennae can be achieved by placing small elements in conductive contact with each other; see Fig. 10 for an example. This can give rise to an additional 200% wavelength scaling under optimal circumstances, and up to 300% if one uses core-shell instead of plain particles. The individual particles are also more sensitive to the background index than the equivalent rod, enabling strong redshifting mechanisms, as shown in Fig. 10. However, this method poses two important problems. The first and most obvious issue is that the fabrication and assembly of touching spheres is long and tedious. The second aspect is more treacherous and is related to the radiation efficiency η. To illustrate this issue, we calculated by FDTD the cross-sections of five touching spheres and of a rod of equal diameter and equivalent total length (L = 5 × D), made out of a conductor with a permittivity close to that of indium antimonide (m* = 0.014 · m_e, ε_∞ = 16, N = 10^16 cm^−3 and Γ = ω_p/10) [42]; see Fig. 6a. Note that the polarization is aligned along the bars and particle chains. As one can see, although the extinction efficiency is smaller for the sphere assembly, its first resonance is located at lower energy than that of the equivalent rod. But more importantly, because of the greater size mismatch between the individual spheres and free-space light than between the latter and the rod, the relative scattering contribution is much smaller than the absorption for the assembly; see Fig. 6b. Furthermore, by considering a size of the individual nanoparticles which leads to Q_sca/Q_abs ∼ 1 (D = 100 µm) or Q_sca/Q_abs < 1 (D = 60 µm), we conclude that the assembly cannot notably improve or modify the radiation efficiency. Indeed, we see that in both cases Q_sca/Q_abs is largely unaffected by the assembling compared with the single sphere. By contrast, the rods are much more efficient, with a strong scattering contribution. This means that, as far as radiating antennae are concerned, the method of assembly is not favourable. Conclusion In this contribution, we identify the dimensionless physical quantities which allow us to describe and compare optical antennae made out of three categories of materials: polar crystals, high index dielectrics and conductors. Although the physical principles at the origin of their strong optical properties differ, comparable behaviours are observed for all three cases. This is particularly true when one considers antennae for which the radiative contribution dominates. Furthermore, we show that in this situation the ratio between the wavelength of light and the antenna size is below 20 in all cases (it is 2 in the microwave regime), including all possible redshifting mechanisms such as those of shape, background index or coupling (assembly). Strikingly, and contrary to expectations, we see that conductors are still the best optical materials for the fabrication of radiative miniaturized antennae when all factors are taken into account.
These conclusions bring an interesting perspective on the current trends in state-of-the-art nanophotonics and provides us with the upper achievable limits in antenna miniaturization. Appendix A In this appendix, we present additional figures showing the variation of the miniaturization factor caused by an increase of the high frequency permittivity ε ∞ (Fig. 7) or the background index ( Fig. 8) for conducting spheres or the losses (Fig. 9) for conducting wires. We also reproduce in Fig. 10 and expand on the results of Li and co-workers [41] highlighting the additional shifts induced by replacing a gold bar (full lines) by an assembly of five touching spheres (dashed lines) with equivalent total length and light polarised along it. These two systems are placed in air (black curves), on glass (blue curves) and in glass (red curves). Note furthermore, that this figure considers periodic arrays from which only the transmittance is extracted thus overlooking changes in the ratio between radiative and absorbing contributions which are discussed in detail in Fig. 6 for a similar system. Fig. 7. a) Resonance wavelength λ res of the first order mode for a conducting sphere with a permittivity given by equation 2 with ε ∞ = 10 and ω p = 2π · 1200 THz (λ p = 0.25 µm) in function of its diameter D and scattering rate Γ, b) resonance wavelength of the lowest energy mode for which η ≥ 50% and c) η ≥ 90% for the same conducting sphere. λ res /D c) First resonance for which =90% Fig. 8. a) Resonance wavelength λ res of the first order mode for a conducting sphere with a permittivity given by equation 2 with ε ∞ = 1 and ω p = 2π · 1200 THz (λ p = 0.25 µm) in a background index n bg = 2 in function of its diameter D and scattering rate Γ, b) resonance wavelength of the lowest energy mode for which η ≥ 50% and c) η ≥ 90% for the same conducting sphere. Fig. 9. a) Effective wavelength scaling λ 0 /λ e f f for a conducting nanorod with a permittivity given by equation 2 with ε ∞ = 1, ω p = 2π · 1200 THz (λ p = 0.25 µm) and Γ = ω p /3 in function of its radius R in a background with index a) n bg = 1 and b) n bg = 2. Note the asymptotes towards short wavelengths which originate from the breakdown of the assumption of a nanorod, i.e. D << L.
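As a quick order-of-magnitude check of the semiconductor parameters used for the assembly comparison of Fig. 6 (m* = 0.014 m_e, N = 10^16 cm^-3, ε_∞ = 16), the snippet below evaluates the plasma frequency entering Eq. (2); the physical constants are standard values, and the further redshift due to screening by ε_∞ is only noted in a comment, not modelled.

import numpy as np

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
N = 1e16 * 1e6                     # carrier density, cm^-3 -> m^-3
m_eff = 0.014 * m_e
omega_p = np.sqrt(N * e**2 / (eps0 * m_eff))
print(f"f_p = {omega_p/2/np.pi/1e12:.1f} THz, lambda_p = {2*np.pi*c/omega_p*1e6:.0f} um")
# A few THz / tens of microns (further redshifted by eps_inf = 16), consistent with
# the 60-100 um particles and terahertz operation considered above.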
Wave propagation in fractal-inspired self-similar beam lattices We combine numerical analysis and experiments to investigate the effect of hierarchy on the propagation of elastic waves in triangular beam lattices. While the response of the triangular lattice is characterized by a locally resonant band gap, both Bragg-type and locally resonant gaps are found for the hierarchical lattice. Therefore, our results demonstrate that structural hierarchy can be exploited to introduce an additional type of band gap, providing a robust strategy for the design of lattice-based metamaterials with hybrid band gap properties (i.e., possessing band gaps that arise from both Bragg scattering and localized resonance). © 2015 AIP Publishing LLC. Phononic crystals 1,2 and acoustic metamaterials 3-7 have attracted significant attention in recent years 8,9 both because of their rich physics and because of their broad range of applications. These include wave guiding, [10][11][12] frequency modulation, 13,14 noise/vibration reduction, 15,16 acoustic imaging, [17][18][19][20] and thermal management. 21,22 An important characteristic of these composite structures is their ability to manipulate the propagation of elastic waves through band gaps, i.e., frequency ranges of strong wave attenuation. In phononic crystals, band gaps are generated by Bragg-type scattering, whereas in acoustic metamaterials, localized resonances within the medium are exploited to attenuate the propagation of waves. Materials with structural hierarchy are ubiquitous in natural and man-made systems [23][24][25][26] and have recently received considerable interest because of their superior properties. [27][28][29][30][31][32] It has also been shown that structural hierarchy can be exploited to manipulate the propagation of elastic waves. [33][34][35] However, while all previous studies have focused on hierarchical phononic crystals, the effect of hierarchy on lattice-based acoustic metamaterials has not been explored yet. In this letter, we focus on the dynamic response of fractal-like triangular beam lattices and investigate both numerically and experimentally the effect of hierarchy on the propagation of small-amplitude elastic waves. While a simple triangular lattice is characterized by a locally resonant band gap, 36 we find that fractal-like triangular lattices exhibit two types of gaps: (i) locally resonant band gaps and (ii) Bragg-type band gaps due to scattering. Locally resonant gaps are found in correspondence with the natural frequencies of the beams, whereas the stiffer regions introduced into the lattice by the hierarchical refinement are responsible for the Bragg-type gaps. Our analysis reveals that, by introducing structural hierarchy into the lattice, not only can higher frequency band gaps be created, but the mechanism responsible for such band gaps can also be tuned. To generate the hierarchical triangular lattice considered in this study, we start with a hexagonal unit cell comprising 24 equilateral triangles of edge L (see Fig. 1(a)) and create 24 smaller triangles by connecting the edge centers of the 6 central triangles (see Fig. 1(b)). Clearly, this process can be repeated to create triangular lattices of higher hierarchical order, and after k iterations each original unit comprises 24 triangles of edge L/2^k and 18 triangles of edge L/2^(k−j) (with j = 1, ..., k). Therefore, a structure with k orders of hierarchy comprises beams of slenderness λ_j = L/(2^j b) (with j = 0, ..., k), where b denotes the width of the beam.
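A minimal sketch of the refinement rule just described may help: each iteration connects the edge midpoints of a triangle, halving the edge length, so that a lattice with k orders of hierarchy contains beams of slenderness λ_j = L/(2^j b) for j = 0, ..., k. The numerical values of L and b below are assumptions (they are not stated explicitly in the letter), chosen only to reproduce the slenderness values used in the study.

import numpy as np

def subdivide(tri):
    """Split one triangle (3x2 array of vertices) into 4 by connecting edge midpoints."""
    a, b, c = tri
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [np.array(t) for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca))]

def slenderness(L, b, k):
    """Beam slenderness per hierarchical level, lambda_j = L/(2^j * b)."""
    return [L / (2**j * b) for j in range(k + 1)]

L, b = 0.05, 0.001                           # assumed 50 mm edge, 1 mm beam width
print(slenderness(L, b, k=1))                # -> [50.0, 25.0], as used in the study
tri0 = np.array([[0, 0], [L, 0], [L/2, L*np.sqrt(3)/2]])
print(len(subdivide(tri0)), "sub-triangles per refinement step")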
We start with a triangular lattice comprising beams of slenderness λ_0 = 50 and then introduce one order of hierarchy by adding beams of slenderness λ_1 = 25. All models are constructed using planar Euler-Bernoulli beams with width b, mass per unit length m, and made of a linear elastic isotropic material with Young's modulus E. We further assume that all joints are welded. The Finite Element (FE) commercial package Abaqus/Standard is used to investigate numerically the propagation of elastic waves both in infinite and finite-size lattices. The dynamic response of the infinite lattice is studied by considering a unit cell with Bloch-type boundary conditions and performing frequency-domain wave propagation analysis. 7 Note that we simplify the computational implementation by using rhombic unit cells in Figs. 1(c) and 1(d) instead of the hexagonal ones shown in Figs. 1(a) and 1(b). Moreover, steady-state analyses are conducted to calculate the transmission of finite-size lattices comprising different numbers of unit cells. In these simulations, a harmonic displacement is applied at the central node on the left edge of the model. The displacement of the central node on the right edge of the model is then monitored and the transmission is calculated as the ratio between the amplitudes of the output and input displacements (see supplementary material for more details 38). In addition to the numerical analysis, acrylic samples of the simple triangular lattice and the triangular lattice with one order of hierarchy are fabricated and tested. These samples are cut from a sheet of acrylic material of thickness 0.5 cm (with Young's modulus E = 2.8 GPa and density ρ = 1190 kg/m³) with a VLS6.60 laser cutter machine (equipped with a 60 W CO₂ laser). Each specimen comprises an array of 3 × 1 unit cells and measures 39.5 cm × 20.5 cm (see Figs. 1(e) and 1(f)). Wave propagation in each sample is excited by an electrodynamic shaker (Brüel & Kjær, model LDS V406) attached to the left edge, and the dynamic response is recorded using miniature piezoelectric accelerometers (DJB Instruments, model A/25/E) attached at both ends of the sample. Finally, the transmittance is computed as the ratio between the output and input acceleration signals. We start by investigating numerically the propagation of elastic waves in the triangular lattice. Fig. 2(a) shows the band structure in terms of the dimensionless frequency Ω = ω/ω₁, where ω₁ is the first natural frequency of a single beam of length L with both ends fixed (welded). As recently noticed, 36 the structure is characterized by a band gap generated by local resonance. This finding is clearly supported by the fact that the band at the lower edge of the band gap is completely flat (see the red band in Fig. 2(a)) and that it is located at the first natural frequency of the beams (i.e., Ω = 1). Furthermore, the Bloch mode shapes of the flat band at the high-symmetry points Γ, X, and M reported in Fig. 2(a) confirm that each beam vibrates independently according to its natural mode. A similar flat band can be observed at Ω = 2.7, in correspondence with the second natural frequency of the single beam. However, this second flat band does not give rise to a band gap, in agreement with previous studies. 11,36 Next, we simulate the dynamic response of the triangular lattice with one order of hierarchy. The results reported in Fig. 2(b) clearly show that this system has a very different band dispersion behaviour.
While the gap at Ω = 1 is retained, three additional band gaps appear at Ω = 2.98-3.00, Ω = 3.13-3.35, and Ω = 3.43-3.87. Importantly, all three of these band gaps are located at frequencies far from the natural frequencies of elastic beams of length L and L/2, and the bands at their edges are not flat, suggesting they are not generated by local resonance but by Bragg scattering (note that no locally resonant band gap is found at Ω = 4; see supplementary material for more details). In particular, focusing on the band gap at Ω = 3.43-3.87, we can see that the bands at the lower (highlighted in purple) and upper (highlighted in orange) edges of the gap are not flat close to the X point (see Fig. 2(c)). Therefore, for certain wave vectors k the group velocity of the propagating wave is not zero, resulting in non-localised eigenmodes (as shown in Fig. 2, right). Finally, we note that these Bragg-type band gaps are generated because of the contrast in effective properties introduced by the hierarchical refinement within the structure. In fact, since for a triangular lattice the effective stiffness, E, and density, ρ, are given by the homogenized expressions of Eq. (1) (see Ref. 39), it is easy to see that the shorter beams introduced into the lattice by the hierarchical refinement result in denser and stiffer cores within the unit cell. Having demonstrated that infinite hierarchical triangular lattices are characterized by higher frequency band gaps and that these gaps are generated by Bragg scattering and not local resonance, as for the case of the triangular lattice, we now investigate how this affects the transmission of finite-size structures. First, we numerically investigate the dynamic response of models comprising 1 × 20 unit cells and apply periodic boundary conditions on their horizontal edges (to mimic the response of structures that are infinitely long in the vertical direction; see Fig. S2). As shown in Fig. 3(c) for the triangular lattice, a significant asymmetric drop in the transmittance is observed between Ω = 1 and Ω = 1.24, with a pronounced minimum at Ω = 1. The fact that the lowest transmittance is observed in the vicinity of the lower edge of the band gap predicted by the dispersion relation further confirms that the band gap is generated by local resonance. 3,9 On the other hand, for the triangular lattice with one order of hierarchy, we still see drops in the transmittance in correspondence with the band gaps predicted by the Bloch wave analysis at Ω = 2.98-3.00, Ω = 3.13-3.35, and Ω = 3.43-3.87, but these are more symmetric (see Fig. 3(d)). This is a characteristic of Bragg-type band gaps. 8,9 Finally, in Figs. 3(g) and 3(h), we report the experimentally measured transmittance for the same structures (the samples are shown in Figs. 1(e) and 1(f)). For both tested structures, we find a strong attenuation in transmission in the vicinity of the numerically predicted gaps. In particular, for the triangular lattice, we observe a drop of ~20 dB near Ω = 1.0, which corresponds to a physical frequency of f = 606 Hz for this sample (see Fig. 3(g)). For the triangular lattice with one level of hierarchy, we instead see two regions of strong attenuation in transmission in the vicinity of Ω = 3.1 and Ω = 3.5, which correspond to f = 1879 Hz and f = 2121 Hz, respectively (see Fig. 3(h)). The Bragg-type nature of these two gaps is further confirmed by the fact that their wavelength is about twice the unit cell size. In fact, from the homogenized properties defined in Eq.
(1), the shear wave speed of the homogenized medium can be estimated to be c ≈ 530 m/s, resulting in a band gap wavelength λ = c/f ≈ 25.2 cm (since the band gap frequency is ≈2100 Hz), which is about twice the unit cell size (see Fig. 1(f)). In summary, we have studied both numerically and experimentally the propagation of small-amplitude elastic waves in fractal-inspired beam lattices. First, our results indicate that the locally resonant band gap at Ω = 1 that characterizes the dynamic response of the triangular lattice is retained when introducing hierarchy into the structure. Interestingly, the position of this gap is fully predictable (it always occurs at Ω = 1), facilitating the design of systems that suit the engineering constraints. Second, we have seen that by adding hierarchy, more band gaps are formed. Most of these are generated by Bragg scattering, since the hierarchical refinement introduces a contrast in the effective properties within the unit cell. While systems with multiple band gaps in both lower and higher frequency intervals have been reported before, [40][41][42][43][44][45] the results presented here indicate that hybrid band gap properties can also be achieved in an elastic material using a simple building block such as a straight elastic beam, without embedding additional resonators. In fact, in the proposed lattice-based metamaterials, the beams play two roles simultaneously: (i) they form a periodic elastic lattice with stiffer regions introduced by the hierarchical refinement, so that Bragg-type band gaps are generated; (ii) they act themselves as mechanical resonators, resulting in the formation of locally resonant band gaps. Therefore, our results indicate a robust strategy to design acoustic metamaterials with hybrid band gap properties. This work has been supported by Harvard MRSEC through Grant No. DMR-1420570 and by NSF through Grant Nos. CMMI-1120724 and CMMI-1149456 (CAREER). The authors would like to thank Bas Overvelde for help with illustrations.
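As a closing numerical check on the two frequency scales quoted in this letter, the sketch below evaluates (i) the first natural frequency of a clamped-clamped Euler-Bernoulli beam, which sets the physical frequency of Ω = 1, and (ii) the homogenized shear speed and the resulting Bragg wavelength. The beam length and width are assumptions chosen to be consistent with the slenderness λ_0 = 50 and the measured 606 Hz, and the homogenization prefactors are the textbook stretching-dominated values, which may differ slightly from Eq. (1) of the paper.

import numpy as np

E_s, rho_s, t = 2.8e9, 1190.0, 0.005         # acrylic data quoted above; 0.5 cm sheet
L = 0.052                                    # beam length (assumed)
b = L / 50.0                                 # in-plane width from slenderness lambda_0 = 50

# (i) first clamped-clamped beam mode -> physical frequency of Omega = 1
I, A = t * b**3 / 12.0, t * b
omega1 = 4.7300**2 * np.sqrt(E_s * I / (rho_s * A * L**4))
print(f"f(Omega=1)   ~ {omega1/2/np.pi:.0f} Hz")          # ~606 Hz measured
print(f"f(Omega=3.5) ~ {3.5*omega1/2/np.pi:.0f} Hz")      # ~2121 Hz measured

# (ii) homogenized shear speed (rho_rel = 2*sqrt(3)*b/L, G ~ rho_rel*E_s/8, assumed)
rho_rel = 2.0 * np.sqrt(3.0) * b / L
c = np.sqrt((E_s * rho_rel / 8.0) / (rho_s * rho_rel))    # = sqrt(E_s/(8*rho_s))
print(f"c ~ {c:.0f} m/s, lambda = c/f ~ {c/2100*100:.1f} cm at 2100 Hz")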
Biocatalytic Regioselective O‐acylation of Sesquiterpene Lactones from Chicory: A Pathway to Novel Ester Derivatives We report the first biocatalytic modification of sesquiterpene lactones (STLs) found in chicory plants, specifically lactucin (Lc), 11β,13‐dihydrolactucin (DHLc), lactucopicrin (Lp), and 11β,13‐dihydrolactucopicrin (DHLp). The selective O‐acylation of their primary alcohol group was carried out by the lipase B from Candida antarctica (CAL‐B) using various aliphatic vinyl esters as acyl donors. Perillyl alcohol, a simpler monoterpenoid, served as a model to set up the desired O‐acetylation reaction by comparing the use of acetic acid and vinyl acetate as acyl donors. Similar conditions were then applied to DHLc, where five novel ester chains were selectively introduced onto the primary alcohol group, with conversions ranging from >99 % (acetate and propionate) to 69 % (octanoate). The synthesis of the corresponding O‐acetyl esters of Lc, Lp, and DHLp was also successfully achieved with near‐quantitative conversion. Molecular docking simulations were then performed to elucidate the preferred enzyme‐substrate binding modes in the acylation reactions with STLs, as well as to understand their interactions with crucial amino acid residues at the active site. Our methodology enables the selective O‐acylation of the primary alcohol group in four different STLs, offering possibilities for synthesizing novel derivatives with significant potential applications in pharmaceuticals or as biocontrol agents. Introduction Terpenes are the most abundant and diverse family of natural compounds, with over 64,000 structures identified to date [1,2]. Their significance to humanity is undeniable, with plants rich in terpenes having been employed for medicinal purposes across the globe. Sesquiterpene lactones (STLs) are a group of highly diversified C15 terpenoids found in plants, where they serve as defense tools to cope with environmental stresses [3]. Many of them also possess pharmacological properties. For instance, over 1500 scientific publications between 1990 and 2010 focus on their antitumor and anti-inflammatory activities [6][7][8]. Among plants rich in STLs, chicory (Cichorium intybus) is well known [9]. This plant of the Asteraceae family has been historically used by Ancient Greeks, Egyptians, and Chinese as a herbal remedy to treat a variety of respiratory, liver, and digestive disorders [10,11]. Around fifteen STLs belonging to the guaianolide sub-family have been reported, with lactucin (Lc) and its ester analog lactucopicrin (Lp) being the most well-known, as they were previously identified in wild lettuce (Lactuca virosa) [12,13]. Other natural analogs of Lc and Lp, such as 11β,13-dihydrolactucin and 11β,13-dihydrolactucopicrin, were also identified [16]. Chicory-derived STLs have demonstrated promising antimicrobial properties. In a recent study, both 11β,13-dihydrolactucin (DHLc) and lactucopicrin (Lp) inhibited the growth of Pseudomonas aeruginosa; DHLp was also effective against Staphylococcus aureus, while DHLc showed promising results against different strains of Candida [17]. Antiparasitic properties were reported for STLs from chicory such as lactucin and lactucopicrin, notably against Plasmodium falciparum strain Honduras-1 [18,19]. Moreover, lactucin has also been linked to both in vivo and in vitro anti-adipogenesis effects and anticancer activities [20,21].
While these alkylating moieties have been identified as the main pharmacophores, there are other factors that play a role in modulating the biological activities of STLs, such as lipophilicity, the number of alkylating sites and the presence of certain ester side chains [22,23]. While the synthesis of new STL esters has not yet been explored thoroughly, a few examples in the literature have shown interesting results. Kitai et al. recently carried out the synthesis of several ester derivatives (propyl, butyl, pentyl and 2-methoxy ethynyl) of the STL sonchifolinic acid, isolated from Yacon (Smallanthus sonchifolius), and studied their cytotoxicity by evaluating the influence of these side chains [24]. Another study showed a similar effect for the STL helenalin, where its acetate and isobutyrate ester derivatives displayed a higher toxicity towards tumor cells. The difference in cytotoxicity was shown to be directly related to both the size and the lipophilicity of the side chain [25]. Moreover, a recent work by Zhang et al. describes the synthesis of several semi-synthetic aryl ester derivatives of the STL scabertopin (isolated from Elephantopus scaber). Their evaluation as potential anti-cancer agents for non-small cell lung cancer also showed promising results [26]. The primary allyl alcohol moiety found in chicory STLs and in many other terpenoids represents a promising starting point for the addition of side chains. Thus, we sought to develop a biocatalytic methodology that could be conveniently applied to a large variety of STLs, paving the way for the synthesis of numerous novel semi-synthetic derivatives. In this context, biological catalysts can offer distinct advantages over conventional synthesis methods, including superior selectivity and environmental friendliness [27,28]. Lipases (EC 3.1.1.3) have already been employed as biocatalysts in the synthesis of terpenoid esters via esterification and transesterification reactions in organic media [29,30]. Among commercial lipases, the lipase B from Candida antarctica (CAL-B) has demonstrated remarkable versatility and robustness, particularly when it comes to the synthesis of lipophilic esters [31]. However, despite its widespread commercial availability, its application in synthesizing STL esters remains largely unexplored. Furthermore, the specific STLs selected for this article have not been described in any existing literature as having undergone enzymatic modification. Given the significant cost and limited commercial availability of chicory STLs, we began our study with a simpler model compound. The monoterpenoid (S)-perillyl alcohol (POH), featuring an allyl-type primary alcohol moiety similar to the one present in our STL targets, was selected as the model to set up our reaction conditions. After selecting the best reaction conditions, we focused on introducing alkyl ester chains onto the primary alcohol group present in chicory STLs. This approach aimed to modulate their lipophilicity and, consequently, their biological properties, such as interaction with plasma membranes and permeability through biological barriers. Our strategy allowed the synthesis of eight novel acyl STL derivatives. Beyond alkyl chains, we also investigated the synthesis of aryl esters using aryl vinyl esters as acyl donors. Additionally, docking simulations were conducted with the purpose of understanding the differences in reactivity (selectivity and yield) observed with the different acyl donors.
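Since modulating lipophilicity is one of the stated goals, a rough way to anticipate the effect of each ester chain is a calculated logP. The sketch below uses RDKit's Crippen estimator on the model compound perillyl alcohol and two of its esters; the SMILES strings are written out here only for illustration (the STL structures are not given in machine-readable form in the text), so the absolute values should be taken as indicative.

from rdkit import Chem
from rdkit.Chem import Crippen

# Model compound and esters (illustrative structures, not taken from the paper's SI)
smiles = {
    "perillyl alcohol":   "OCC1=CCC(CC1)C(=C)C",
    "perillyl acetate":   "CC(=O)OCC1=CCC(CC1)C(=C)C",
    "perillyl octanoate": "CCCCCCCC(=O)OCC1=CCC(CC1)C(=C)C",
}
for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    print(f"{name:20s} clogP = {Crippen.MolLogP(mol):.2f}")

As expected, the calculated logP rises steadily with the acyl chain length, which is the same lever discussed above for tuning membrane interaction and permeability.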
Generic parameters In this study, we selected the widely available lipase B from Candida antarctica (CALÀ B) for the O-acylation of our target compounds.Considering the limited availability of chicory STLs and the lack of prior reports on their biocatalytic modification, we chose Novozym 435 (N435), an immobilized lipase, known for its excellent catalytic performance.The use of an immobilized lipase simplified the reaction work-up and analysis.Instead of inactivating the enzyme post-reaction -a process with potential risk of degrading the STLs -a simple filtration step was preferred.Moreover, the interfacial immobilization onto a microporous resin is known for enhancing the catalytic efficiency and robustness of N435 against denaturants such as acetaldehyde.However, for future applications, we believe that other less expensive formulations of CALÀ B could be used, if a more cost-effective enzyme was required.Though it may require adjusting the amount of enzyme to compensate for any potential decrease in catalytic activity. Our primary criterion for selecting the solvent system was its capacity to solubilize the main STLs from chicory root and the various acyl donors, while still enabling effective enzymecatalyzed acylation reactions.For this reason, common solvents used to extract STLs, such as alcoholic solvents (methanol, ethanol) and ethyl acetate, were not considered in this case.Acetonitrile (ACN) effectively solubilized the four Figure 1.Guaianolide skeleton and structure, clog (P) and surface polarity of the main non-conjugated STLs found in chicory root. [14]TLs and the acyl donors, however, based on previous experiments, we noted that methyl tert-butyl ether (MTBE) typically increased the overall conversion with Novozym 435.Taking this into account, we opted for a mixture of 3/1:MTBE/ACN, which proved to be the best compromise tested.In the future, MTBE could potentially be replaced by a greener alternative such as cyclopentyl methyl ether (CPME) which can be obtained from biomass. [32]egarding acyl donors, we began our study with the simplest donor and then gradually increased the complexity of the acyl chain by modifying the characteristics of its substituents (length and functional groups).Consequently, we began by synthesizing the simplest ester derivative, a methyl ester.Given that the nature of the acyl donor can significantly influence the yield of the desired ester, we chose to compare two commonly used acetyl donors: vinyl acetate and acetic acid.Based on the existing literature, we used the acyl donor in excess relative to the alcohol substrate (typically 3 equivalents), this contributed to shifting the equilibrium towards the formation of the desired ester and maximizing the consumption of the valuable STLs. Esterification of (S)-perillyl alcohol -model reaction- As discussed in the introduction, perillyl alcohol (POH) was chosen as our model compound as it fitted within our criteria of accessibility/cost and, more importantly, in terms of structural similarity, sharing the allyl alcohol moiety of chicory STLs.We initiated the process by selecting the most suitable acyl donor -either acetic acid or vinyl acetate -to form the corresponding ester using immobilized CALÀ B (Figure 2). 
The fact that acyl donors generally lead to more favorable outcomes than the corresponding carboxylic acids in lipase-catalyzed acylation reactions is well known in the literature [35]: the vinyl alcohol released during transesterification tautomerizes to acetaldehyde, which renders the reaction practically irreversible. Furthermore, acetaldehyde is unable to act as a nucleophile in the reverse transesterification reaction. Moreover, another contributing factor to the lower reactivity of carboxylic acids is their acidic character, which can lower the pH of the residual water surrounding the enzyme and negatively affect its performance [36]. The acylation reactions and their negative controls were carried out in a small glass vial (2 mL) containing 1 mL of solvent (3/1:MTBE/ACN) in the presence of molecular sieves (5 Å). The formation of perillyl acetate was monitored by GC-FID and its structure was confirmed by NMR analysis (1H, 13C and HSQC). The transesterification with 10 mg of Novozym 435, 100 mM of POH and 300 mM of vinyl acetate allowed for a remarkable >99 % conversion after 1 h of reaction at 37 °C. On the other hand, as expected, the esterification with 300 mM of acetic acid proceeded much more slowly, only achieving 12 % conversion in the same timeframe, and 24 % after 2 h. After 5 days, 1H NMR analysis showed a conversion of 90 % ± 5 %, which was estimated via the integral ratio of the hydrogens from perillyl alcohol and perillyl acetate present in the mixture. Based on these results, we chose to proceed with vinyl esters as acyl donors instead of their corresponding acids, aiming to maximize the conversion to the desired ester. Furthermore, the complete conversion achieved with vinyl acetate can serve as a benchmark for the enzyme's ability to utilize different acyl donors. While it is known that acetaldehyde can act as an enzyme denaturant [37], the concentration used in this study did not significantly impact the performance of Novozym 435. Additionally, the lack of water generation when using vinyl esters as acyl donors theoretically removes the need for molecular sieves. However, these sieves may still aid in removing vinyl alcohol or acetaldehyde from the medium. Conveniently, the use of this methodology eliminates the need for a complicated work-up procedure. The immobilized enzyme can be easily recovered for future recycling, requiring only a simple filtration followed by concentration (8 mbar, 35 °C, 1 h) to remove any unreacted vinyl acetate and acetaldehyde from the medium. Thus, this work allowed us to set up promising reaction conditions for the acylation of terpenoids containing a primary allyl alcohol moiety, such as perillyl alcohol. It should also be noted that, while the use of lipases (particularly CAL-B) for the synthesis of various monoterpenoid acetate esters using vinyl acetate is well known [29,38], to the best of our knowledge this study reports the first lipase-catalyzed synthesis of perillyl acetate. The methodology shown here could represent a viable and straightforward option for obtaining perillyl acetate in quantitative yield for future industrial applications or as an interesting potential building block. Acylation of 11β,13-dihydrolactucin Based on the promising results obtained with perillyl alcohol, we then applied the same general conditions to the O-acylation of the four chicory STLs of interest (Figure 3). Given that we managed to extract a relatively larger amount of DHLc from chicory root, we selected this STL as the main substrate for this section of the study [16].
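Several of the conversions reported in this work (the 90 % ± 5 % figure above and the 95 % ± 5 % values reported below) are estimated from 1H NMR integrals of equivalent protons in the alcohol and the ester. A trivial helper makes the calculation explicit; the integral values in the example are placeholders, not experimental data.

def nmr_conversion(integral_ester, integral_alcohol, n_h_ester=2, n_h_alcohol=2):
    """Conversion (%) from per-proton integrals of equivalent CH2-O signals."""
    ester = integral_ester / n_h_ester
    alcohol = integral_alcohol / n_h_alcohol
    return 100.0 * ester / (ester + alcohol)

print(f"{nmr_conversion(integral_ester=9.0, integral_alcohol=1.0):.0f} % conversion")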
The lipase-catalyzed acylation with vinyl acetate as the acyl donor proceeded under similar reaction conditions as before. The reaction was conducted in 1 mL of 3/1:MTBE/ACN, with 20 mg of the immobilized lipase (Novozym 435) and 10 mM of DHLc at 37 °C, along with the respective negative controls. In the context of this first study, the cost of the acyl donor and the enzyme were insignificant compared to the cost of the STL. Thus, 10 equivalents of acyl donor were used in order to ensure an optimal conversion. Likewise, the amount of STL (10 mM, ~3 mg) was the minimum necessary for effective product characterization. The reaction was stopped after 48 h and the resulting mixture was analyzed by 1H NMR after filtration and concentration (8 mbar, 35 °C, 2 h). We observed that the signals of the two hydrogens adjacent to the primary hydroxy group of DHLc (15a and 15b), two doublets of doublets with a strong roof effect at 4.23 and 4.65 ppm (Figure 4, A), shifted to higher ppm values (4.83 and 5.22 ppm) (Figure 4, C). This shift was attributed to the deshielding caused by the formation of an ester group, which has a stronger electron-withdrawing effect than the original alcohol. This was confirmed by the negative control in the absence of the enzyme, where no shift appeared (Figure 4, B). Therefore, we followed the shifting of these two protons to monitor and validate the acylation of the primary alcohol group of DHLc for the rest of the study. Remarkably, no shift was observed for the hydrogen adjacent to the secondary alcohol group (hydrogen #8), or for those related to the lactone moiety (Figure 4, A and C). This showed two important findings: 1) the reaction catalyzed by CAL-B is very selective for the primary alcohol; 2) the enzyme is not able to hydrolyze the lactone, allowing us to obtain only the acetyl derivative of DHLc (conversion >99 % measured by LC-MS). Due to the cost of the starting substrate, we first maximized our chances of obtaining a detectable yield with DHLc by using a larger amount of enzyme in this first trial than in the preliminary study with perillyl alcohol (20 mg here versus 10 mg used previously). However, given the very high conversion, we repeated the operation under identical conditions but using only 2 mg of enzyme. In this case, a conversion of 95 % ± 5 % was again obtained in only 24 h, demonstrating the very good acceptance of DHLc by the lipase. As DHLc proved to be an excellent substrate for the enzyme, we were curious to know whether acetic acid could also be used as an effective acyl donor, particularly with a view to its future use on a larger scale, as vinyl esters 1) are typically more toxic than the corresponding carboxylic acids, 2) lead to the generation of acetaldehyde as a side product, and 3) ultimately decrease the carbon efficiency.
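For readers wishing to reproduce the scale of these experiments, the snippet below converts the stated conditions (10 mM DHLc and 10 equivalents of vinyl acetate in 1 mL) into masses and volumes. The molar masses and the vinyl acetate density are assumed literature values, not quantities given in the text.

MW_DHLC, MW_VINYL_ACETATE = 278.3, 86.09      # g/mol (assumed: C15H18O5 and vinyl acetate)
D_VINYL_ACETATE = 0.934                       # g/mL (assumed)

volume_ml, conc_mM, equivalents = 1.0, 10.0, 10.0
n_stl = conc_mM * 1e-3 * volume_ml * 1e-3     # mol of DHLc
n_donor = equivalents * n_stl                 # mol of vinyl acetate
print(f"DHLc: {n_stl*1e6:.0f} umol = {n_stl*MW_DHLC*1000:.1f} mg")     # ~2.8 mg ("~3 mg")
print(f"vinyl acetate: {n_donor*MW_VINYL_ACETATE*1000:.1f} mg "
      f"= {n_donor*MW_VINYL_ACETATE*1000/D_VINYL_ACETATE:.1f} uL")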
[39]The reaction was thus carried out under the same conditions as before, with 20 mg of lipase.A conversion of 74 % was achieved after 24 h and a maximum of 76 % after 48 h, demonstrating the presence of a thermodynamic equilibrium under these conditions, despite the addition of the molecular sieves.We deemed important to let the reaction run for 6 days to confirm this observation.After 6 days, the yield of DHLc acetate at the primary alcohol position reached 72 % with no other by- products, as confirmed by LC-MS and NMR analysis.This also demonstrated the high selectivity of this enzymatic reaction, irrespective of the nature of the acyl donor.While this approach was less efficient when using acetic acid as opposed to vinyl acetate, we believe that in future studies it could still be optimized for the synthesis of other DHLc esters.This might be achieved by increasing the quantity of molecular sieves or employing alternative water removal systems, such as a Dean-Stark apparatus, which would be compatible with the solvent mixture used and its boiling point. Following the successful synthesis of the O-acetyl ester, we explored the possibility of extending the scope of the reaction to more lipophilic ester derivatives.Consequently, our methodology was extended to facilitate the esterification of DHLc with vinyl propionate, hexanoate and octanoate.Additionally, vinyl chloroacetate was also tested for two reasons: firstly, to assess the impact of a larger atom in the acyl chain; and secondly, to investigate the introduction of halogens, given their importance in the synthesis of bioactive molecules. The same conditions were applied to the different acyl donors (20 mg Novozym 435, 10 mM DHLc and 100 mM acyl donor).As for vinyl acetate, vinyl propionate also led to a conversion of > 99 % within 48 h (Table 1).Notably, the conversion slightly decreased with longer chain lengths, reaching 74 % and 69 % with vinyl hexanoate and vinyl octanoate, respectively.In addition, a slightly lower conversion was obtained with vinyl chloroacetate compared to vinyl propionate, suggesting that the presence of the larger chlorine atom appears to limit the ability of the enzyme to carry out the reaction.It remains plausible that near-complete conversion could be achieved with a longer reaction time, however, this hypothesis was not investigated in the present study.Lastly, it is worth noting that no acylation occurred on the secondary alcohol in these experiments. To continue to broaden the substrate scope of the reaction, we also tested a panel of vinyl esters containing aromatic substituents (Figure 5).All of these compounds were synthesized in-house as, to the best of our knowledge, only compound (8) was commercially available.These compounds were tested in excess (10 equivalents) relative to DHLc, following the same general conditions as before. Despite the use of identical conditions, none of these new aromatic acyl donors proved effective for the acylation of DHLc, as no esters were detected.However, some of them underwent hydrolysis into their corresponding carboxylic acids, which was likely attributable to residual water in the reaction medium.This was particularly prominent for vinyl-2-(4-methoxyphenyl) acetate ( 9) which experienced complete hydrolysis, and to a lesser extent for vinyl-4-formylbenzoate (11) and vinyl-4-nitrobenzoate (12), exhibiting over 50 % hydrolysis. 
On a related note, the lipase A from Candida antarctica and a lipase from Pseudomonas cepacia were also tested with compounds 9 and 12, but both failed to catalyze the desired acylation reaction. These results are not entirely surprising, as several lipases have already been tested on other occasions with aromatic acyl donors and have typically demonstrated poor activity. Such is the case for CAL-B, for which another article mentions its inability to catalyze transesterification reactions with aryl vinyl esters as acyl donors. [40] Consequently, future research may necessitate the identification of a specific enzyme capable of accepting aryl vinyl esters as acyl donors.

Acylation of other STLs from chicory root

Having demonstrated that our methodology enabled the regioselective enzymatic acylation of the primary allyl alcohol group of DHLc with very high conversion rates, we moved to lactucin (Lc). Lc shares the same overall structural characteristics as DHLc, except for the presence of an α-methylene-γ-lactone moiety (α-MGL), which could have been problematic due to its electrophilicity. For this experiment, vinyl acetate was chosen as the preferred acyl donor and the same reaction conditions were used.

The 1H NMR analysis of the reaction mixture after 48 h showed a very promising 95 % ± 5 % (> 99 % from LC-MS) conversion with perfect selectivity towards the primary alcohol. Remarkably, the α-MGL moiety also remained unchanged, as shown in the 1H NMR spectra (Figure 6).

Building upon these promising results and aiming to extend this concept to encompass the majority of STLs found in chicory root, we replicated the previous experiment with lactucopicrin (Lp) and 11β,13-dihydrolactucopicrin (DHLp). These two molecules correspond to the respective ester analogues of Lc and DHLc, formed at their secondary alcohol sites with a 4-hydroxyphenylacetic acyl moiety. The high selectivity exhibited by CAL-B was of significant interest to us in this instance, as it raised the prospect of selective esterification of the primary alcohol without the risk of degrading the pre-existing ester of these two substrates. Additionally, the feasibility of the reaction with these two STLs was of interest, considering that the presence of the secondary ester makes them bulkier than the previous substrates. Remarkably, we achieved the same excellent results as before, with 95 % ± 5 % conversion and complete selectivity towards the primary alcohol group.

Study of enzyme-substrate binding modes in STL acylation reactions

Our experiments showed that CAL-B exclusively targeted the primary alcohol group of STLs (e.g., DHLc) in acylation reactions using various alkyl vinyl esters as donors. To better understand this specificity, we conducted molecular docking simulations to explore the preferred binding modes between different acyl-enzyme complexes and the four STLs discussed in this study.
In the transesterification with vinyl acetate, flexible docking calculations indicated that steric effects alone were inadequate to account for the selectivity towards the primary alcohol. Indeed, two main orientations of DHLc were observed: with either the primary or the secondary alcohol group pointing towards the catalytic residues. A detailed analysis of the distances showed 30 % of the poses with either the primary or the secondary alcohol group close to the acyl-enzyme carbonyl function. Among these poses, both hydroxy groups came within the distance of 4 Å that is necessary for the nucleophilic attack and the subsequent establishment of the ester bond (Figure 7A). A similar result was obtained with the vinyl propionate chain. Thus, in the case of short acyl chains, these results suggest that the observed selectivity is due to the intrinsic reactivity of both hydroxy groups rather than to steric hindrance or a poor orientation of the substrates. On the other hand, when dealing with longer alkyl chains, such as those present in vinyl hexanoate and octanoate, steric hindrance became more significant. This led to less buried poses and increased distances between DHLc and the catalytic residues. Only 3 % and 1 % of the poses came close to the catalytic residues in the presence of the hexanoyl chain and the octanoyl chain, respectively. Moreover, only the primary alcohol of DHLc could reach the acyl-enzyme carbonyl function. These results suggest that as the acyl donor chain length increases, the binding modes of the enzyme-substrate complex become less favorable and the steric effect plays a more significant role in determining selectivity. This also correlates with a reduced reactivity compared to vinyl esters with a shorter side chain, resulting in a lower conversion.

Hence, it appears that, for acyl donors possessing a small alkyl side chain, steric factors cannot explain the complete selectivity towards the primary alcohol group. This implies, as discussed, that such an effect may be partially mediated by other factors, for instance the superior nucleophilicity of primary hydroxy groups, which is related to more favorable electronic effects. In addition, our findings suggest that the flexibility of the acyl chain is crucial in forming favorable enzyme-substrate complexes. This helps explain why aryl vinyl esters, with their rigid aromatic ring, failed to react with DHLc. This was the case even for substrates 11 and 12, which possess an electron-deficient aromatic cycle that theoretically increases the electrophilic character of the carbonyl carbon. Docking simulations between DHLc and methoxybenzoyl- or methoxyphenoyl-CAL-B targets confirmed this, as no pose came within a 5 Å radius of the acyl-enzyme carbonyl function.
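To make the pose-filtering criterion above concrete, the short Python sketch below shows one way a distance cutoff could be evaluated from docking output. The 4 Å threshold comes from the text; the array layout, function name and coordinates are purely illustrative and do not reproduce the Discovery Studio workflow described later in the Methods.

```python
# Hypothetical post-processing of docking poses: fraction of poses in which a given
# hydroxy oxygen of the substrate lies within a cutoff of the acyl-enzyme carbonyl carbon.
import numpy as np

CUTOFF_ANGSTROM = 4.0  # distance taken in the text as required for nucleophilic attack

def fraction_within_cutoff(hydroxy_xyz: np.ndarray, carbonyl_xyz: np.ndarray,
                           cutoff: float = CUTOFF_ANGSTROM) -> float:
    """hydroxy_xyz, carbonyl_xyz: (n_poses, 3) arrays of coordinates, one row per pose."""
    distances = np.linalg.norm(hydroxy_xyz - carbonyl_xyz, axis=1)
    return float(np.mean(distances < cutoff))

# Example with made-up coordinates for three poses
primary_oh = np.array([[1.2, 0.4, 3.1], [5.6, 2.0, 1.1], [2.2, 1.0, 2.5]])
carbonyl_c = np.array([[2.0, 1.1, 3.0], [2.1, 1.0, 3.2], [2.0, 1.2, 3.1]])
print(fraction_within_cutoff(primary_oh, carbonyl_c))  # fraction of "reactive" poses
```

The same function, applied separately to the primary and secondary hydroxy oxygens, would yield the kind of pose percentages quoted above.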
For all the acyl acceptors (DHLc, DHLp, Lc, Lp), the main interactions within the enzyme/substrate complexes were hydrophobic in nature. More specifically, the residues Ile189, Ile285 and Val154, located on both sides of the cavity entrance, interact with cycle B and the methyl groups of the STLs (Figure 1 and Figure 7B). Also, the residues constituting the hydrophobic wall of the cavity interact with the acyl donor chain (Figure 7B). Aliphatic alkyl chains, particularly those with high flexibility, are preferred in the acylation of DHLc and similar STLs due to their ability to adhere to the hydrophobic wall, optimizing the available space in the catalytic cavity. This efficient use of space favors the formation of key transition states in the acylation process. Indeed, a substrate like DHLc already takes up a significant amount of space in the enzyme's cavity, even without an acyl donor. Thus, Figure 7D shows that a flexible alkyl chain, such as octanoyl, makes better use of the cavity space compared to a rigid chain.

In addition to the primary hydrophobic interactions, we also observed various hydrogen bond interactions involving the STLs. The hydroxy groups of the STLs formed hydrogen bonds with both Thr40 from the oxyanion hole and the less buried residue Gln157. Notably, when one hydroxy group interacts with Thr40, the other tends to interact with Gln157, and vice versa. Furthermore, hydrogen bonds were observed between Ala282 and the α,β-unsaturated ketone in cycle A (cyclopentenone), as illustrated in Figure 7C.

Conclusions

In conclusion, our work showed the remarkable selectivity and efficiency of the immobilized lipase B from Candida antarctica (Novozym 435) in the O-acylation of the primary alcohol group of STLs, performed with various alkyl chains (acetate, propionate, hexanoate, octanoate and chloroacetate). The initial development of this method was facilitated by using the O-acylation of perillyl alcohol with vinyl acetate and acetic acid as a model reaction, allowing for the first reported lipase-catalyzed synthesis of perillyl acetate, achieving > 99 % conversion in just 1 h at 37 °C. The corresponding ester derivatives of DHLc were then obtained with excellent conversions, going from > 99 % (acetate and propionate) to 69 % (octanoate). As for Lc, Lp and DHLp, their corresponding acetate derivatives were obtained with > 99 % conversion. Thus, we report a versatile and very selective method for the biocatalytic synthesis of semi-synthetic ester derivatives of STLs found in chicory root.

In addition, the study of the enzyme-substrate binding modes in the biocatalytic acylation of STLs brought us a more comprehensive understanding of their reactivity and of the nature of the interactions with important amino acid residues in the active site of CAL-B. Our findings indicate that lipophilic acyl chains with sufficient flexibility are more effectively incorporated into STL targets, especially compared to aryl chains, which are unreactive.
Existing literature suggests that enhancing the lipophilicity of STLs by introducing alkyl side chains may increase their reactivity. These side chains could play a role in modulating the biological activities associated with their pharmacophores, namely the α-MGL and unsaturated cyclopentenone moieties. Consequently, we hypothesize that the semi-synthetic ester derivatives discussed in this article might be able to cross biological barriers more readily, potentially leading to enhanced antimicrobial properties. Future research should focus on conducting biological tests against specific microbial targets, as this could provide valuable insights and potentially lead to the development of new antimicrobial agents.

Lipase-catalyzed acylation general procedure

A mixture of 3/1 MTBE/ACN was prepared and dried over 3 Å molecular sieves (previously activated at 350 °C for 48 h) for 24 h prior to the reactions. The acylation reactions were conducted with 1 mL of the solvent mixture in 2 mL HPLC vials with screw-on caps (8 mm) and unpierced septa, in the presence of 3 spheres of 5 Å molecular sieves (zeolite with 3-5 mm diameter, previously activated). The vials were placed in an orbital carousel rotating shaker (Thermo Scientific Tube Revolver) at 35 rpm in an oven at 37 °C.

Acetylation of (S)-perillyl alcohol

100 mM of perillyl alcohol and 300 mM of acetic acid or vinyl acetate were dissolved in the solvent mixture in the presence of 3 Å molecular sieves. 1 mL of the resulting solution was introduced into a 2 mL vial containing 10 mg of Novozym 435 and 3 spheres of 5 Å molecular sieves. Controls were performed under the same conditions in the absence of the enzyme. 25 μL samples were taken at different time intervals and dissolved in 225 μL of acetonitrile (LC-MS grade). They were then filtered on a 0.2 μm PTFE filter and introduced into GC vials for analysis.

Acetylation of STLs from chicory root

10 mM of Lc (2.76 mg), DHLc (2.78 mg), Lp (4.10 mg) or DHLp (4.12 mg) and 100 mM of vinyl acetate (8.60 mg, 9.22 μL) were dissolved in the solvent mixture in the presence of 3 Å molecular sieves. 1 mL of the resulting solution was introduced into a 2 mL vial containing 20 mg of Novozym 435 and 3 particles of 5 Å molecular sieves. The vials were placed in an orbital carousel rotating shaker (Thermo Scientific Tube Revolver) at 35 rpm in an oven at 37 °C for 48 h. The reaction mixtures were then filtered on a 0.2 μm PTFE filter into 2 mL Eppendorf tubes and concentrated under 8 mbar at 35 °C in a Thermo Scientific SpeedVac system for 2 h until dry.

Synthesis of ester derivatives of DHLc

10 mM of DHLc (2.78 mg) and 100 mM of the corresponding vinyl esters were dissolved in the solvent mixture in the presence of 3 Å molecular sieves. 1 mL of the resulting solution was introduced into a 2 mL vial containing 20 mg of Novozym 435 and 3 particles of 5 Å molecular sieves. The vials were placed in an orbital carousel rotating shaker (Thermo Scientific Tube Revolver) at 35 rpm in an oven at 37 °C for 48 h. The reaction mixtures were then filtered on a 0.2 μm PTFE filter into 2 mL Eppendorf tubes and concentrated under 8 mbar at 35 °C in a Thermo Scientific SpeedVac system for 2 h. DHLc-propionate was obtained in > 99 % yield.

General Procedure for the Synthesis of Vinyl Esters 8 and 9

The synthesis proceeded in accordance with the literature. [41] Potassium hydroxide (0.5 eq.), palladium(II) acetate (0.4 eq.) and 1 eq.
of carboxylic acid were weighed in a 25 mL round-bottom flask and dissolved in vinyl acetate (0.1 M).The reaction mixture was stirred overnight at 40 °C.The resulting mixture was cooled to room temperature, filtered over a celite pad and washed with dichloromethane.The solvents were then removed under vacuum and the crude product was purified by flash column chromatography. General Procedure for the Synthesis of Vinyl Esters 7, 10, 11 and 12 The synthesis proceeded in accordance to the literature. [42]In a 25 mL round-bottom flask copper(II) triflate (1 eq.) and 1,3diethylurea (1 eq.) were added.Then, anhydrous THF (0.1 M) was added and the mixture was stirred to give a clear solution.Eventually, triethylamine (1 eq.) was added to give a dark solution, followed by carboxylic acid (1 eq.) addition.At last, trivinylboroxine pyridine complex (0.66 eq.) was added and the reaction was stirred overnight at 50 °C under a balloon filled with air.The solvent was evaporated under vacuum and the desired product was obtained after flash column chromatography. NMR spectroscopy All reaction mixtures were filtered on 0.2 μm PFTE filter into 2 mL Eppendorf tubes and concentrated under 8 mbar at 35 °C in a Thermo Scientific SpeedVac system for 1-2 h until dry.The concentrate was then dissolved in 650 μL of DMSO-D 6 (99.9 % from Dutscher) and placed into the NMR glass tube.Analysis was carried out in a 300 MHz Bruker NMR spectrometer.The spectra were analyzed on MestreNova software (version 14.2.3). Gas Chromatography Samples were analyzed on a GC-FID from Shimadzu equipped with a Phenomenex ZB-5MS (30 m×0.25 mm×0.25 μm) column.The following temperature programming was used: starting at 50 °C for 2 min, then 20 °C/min until 310 °C and hold for 5 min.The injector and the FID were both set at a temperature of 320 °C.A split ratio of 10 with a splitless sampling time of 1 min was used.Total flow was 21.6 mL/min with a linear velocity of 47.2 cm/s and a purge flow of 3 mL/min. Liquid Chromatography Chromatograms and mass spectra from the acylation reactions of lactucin, lactucopicrin and dihydrolactucopicrin with vinyl acetate, as well as the acylation of DHLc with vinyl chloroacetate, propionate, hexanoate and octanoate were obtained on the following system : LC-MS Waters ACQUITY UPLC I-Class system equipped with a UPLC I BIN SOL MGR solvent manager, a UPLC I SMP MGR-FTN sample manager, an ACQUITY UPLC I-Class eK PDA Detector photodiode array detector (210-400 nm) and an ACQ-UITY QDa (Performance) as mass detector (full scan ESI + /-in the range 30-1250).Acquity BEH C18 column (1.7 μm particle size, dimensions 50 mm × 2.1 mm) was used for UPLC analysis.The injection volume was 0.5 μL.For a 5 min analysis, the elution was done at pH 3.8 from 100 % H 2 O/0.1 % ammonium formate to 2 % H 2 O/98 % CH 3 CN/0.1 % ammonium formate over 3.5 min.A flow rate at 600 μL/min was used.For a 30 min analysis, the elution was performed at pH 3.8 from 100 % H2O/0.1 % ammonium formate to 100 % CH 3 CN/0.1 % ammonium formate over 25 min.A flow rate of 600 μL/min was used. 
The acetylation of DHLc was followed by UPLC-QTOF using a Phenomenex Luna Omega Polar C18 column (50×2.1 mm×1.6 μm) with water and CH3CN containing 0.1 % trifluoroacetic acid and an injection volume of 0.5 μL, via the following gradient: starting at 40 % CH3CN for 2 min, gradually increasing to 100 % CH3CN from 2 to 5 min, then maintaining 100 % CH3CN for 3 more minutes. The percentage of CH3CN was then decreased back to 40 % over 2 more minutes, for a total run time of 10 min.

Molecular docking simulations

The targets for docking simulations were prepared as previously described by Dettori et al. (2018). [43] Briefly, the CAL-B crystal structure (PDB entry: 1LBS), which contains an ethylhexylphosphonate (HEE) inhibitor covalently bound to the catalytic serine, was chosen as the starting structure. The inhibitor was removed and acyl-enzyme systems were built by binding the different acyl donors to the catalytic serine. The building strategy consisted of following the placement of the inhibitor hexyl chain, which was assumed to indicate the localization of the acyl moiety of the acyl-enzyme. Then, a structure relaxation procedure was performed with constraints and restraints that were progressively removed in order to preserve the organization of the protein atoms. Docking simulations were run using the Flexible Docking module of the software Discovery Studio 4.5. The flexible zone was defined by the residues 40, 105, 106, 134, 140, 141, 144, 149, 154, 157, 187, 189, 224, 278, 281, 282, 285 and 286.

Figure 3. Representation of the lipase-catalyzed transesterification between the different sesquiterpene lactones (STLs) and vinyl esters; conducted with 10 mM of STL, 100 mM of vinyl ester, 20 mg of Novozym 435 and 3 spheres of 5 Å molecular sieves in 1 mL of solvent mixture (MTBE/ACN) at 37 °C and 35 rpm on an orbital carousel rotating shaker. R3 = alkyl or alkyl chloride.

Figure 4. 1H NMR (DMSO-D6) comparison of the reaction mixture, the negative control and the spectrum of pure DHLc. (A) Pure DHLc used for the reaction; (B) reaction mixture of the negative control without lipase after 48 h; (C) reaction mixture after 48 h with 10 mM DHLc, 100 mM vinyl acetate and 20 mg Novozym 435.

Figure 5. Vinyl esters with aromatic side chains tested for the CAL-B catalyzed transesterification with DHLc.

Figure 6. 1H NMR (DMSO-D6) comparison of the reaction mixture for lactucin and the negative control. (A) Reaction mixture after 48 h with 10 mM of Lc, 100 mM of vinyl acetate and 20 mg of Novozym 435; (B) reaction mixture of the negative control without lipase after 48 h.

Figure 7. Main binding modes and interactions between DHLc and the CAL-B acyl enzyme. A) Proximity of the DHLc primary hydroxy group to the catalytic residues in the acetylation reaction. H bonds stabilizing the acyl enzyme within the oxyanion hole are shown as dotted lines. B) Hydrophobic interactions between DHLc and the residues Ile189, Ile285, Val154 (coloured in purple). Hydrophobic and hydrophilic regions are coloured in red and blue, respectively. C) H bond interactions between DHLc and the residues Gln157, Ala282. Regions with H bond donor residues and H bond acceptor residues are coloured in pink and green, respectively. D) Binding mode between DHLc and CAL-B in the presence of a long acyl chain. The octanoyl chain is shown in CPK representation and coloured in dark purple; the Connolly accessible surface of CAL-B is coloured in grey except for the region made by the hydrophobic residues Ile189 and Ile285, which is coloured in light purple.
8,338.4
2024-01-18T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Carbon nanomaterial-derived lung burden analysis using UV-Vis spectrophotometry and proteinase K digestion The quantification of nanomaterials accumulated in various organs is crucial in studying their toxicity and toxicokinetics. However, some types of nanomaterials, including carbon nanomaterials (CNMs), are difficult to quantify in a biological matrix. Therefore, developing improved methodologies for quantification of CNMs in vital organs is instrumental in their continued modification and application. In this study, carbon black, nanodiamond, multi-walled carbon nanotube, carbon nanofiber, and graphene nanoplatelet were assembled and used as a panel of CNMs. All CNMs showed significant absorbance at 750 nm, while their bio-components showed minimal absorbance at this wavelength. Quantification of CNMs using their absorbance at 750 nm was shown to have more than 94% accuracy in all of the studied materials. Incubating proteinase K (PK) for 2 days with a mixture of lung tissue homogenates and CNMs showed an average recovery rate over 90%. The utility of this method was confirmed in a murine pharyngeal aspiration model using CNMs at 30 μg/mouse. We developed an improved lung burden assay for CNMs with an accuracy > 94% and a recovery rate > 90% using PK digestion and UV-Vis spectrophotometry. This method can be applied to any nanomaterial with sufficient absorbance in the near-infrared band and can differentiate nanomaterials from elements in the body, as well as the soluble fraction of the nanomaterial. Furthermore, a combination of PK digestion and other instrumental analysis specific to the nanomaterial can be applied to organ burden analysis. Background Inhalation is the most common and hazardous route of exposure to nanomaterials in an occupational setting. Inhalation of nanomaterials produces a higher deposition rate of the micron-sized particles within the alveoli as a result of their size-dependent aerodynamic properties [1][2][3]. Furthermore, deposited particles exhibit limited clearance rates from the alveoli due to the absence of mucociliary clearance. The clearance of these nanomaterials from the alveoli is influenced by the physicochemical properties of the material including size, shape, functionalization, and dissolution [4][5][6]. Because of the long retention period for nanomaterials in the lungs, the Organization for Economic Cooperation and Development (OECD) testing guidelines call for repeated inhalation studies (i.e., TG 412 and 413) and were revised in 2018 to include lung burden measurements showing lung clearance kinetics for the material of interest [7,8]. There are various methods which can be used to measure the lung burden of non-labelled nanomaterials. Generally, lung burden analysis can be divided into two steps: (1) collection of nanomaterials from the lung and (2) quantification of nanomaterials using instrumental analysis. In the first step, chemical or enzymatic digestion methods are commonly used to collect nanomaterials from the lung tissue. Chemical digestion methods using acids, alkalis, and oxidants are all common but chemical digestion reagents can damage the structure of the nanomaterials resulting in defects, dissolution and oxidation [9]. Enzymatic digestion uses proteinase or collagenase with a chemical lysis buffer and has been proposed as an alternative to chemical lysis, as this degradation approach seems to limit structural damage of the nanomaterials [9,10]. 
In the second step, nanomaterials can be measured by various instrumental analyses including inductively coupled plasma mass spectrometry (ICP-MS), fluorometry, and optical absorbance spectrometry. For carbon nanomaterials (CNMs), determining the concentration is challenging because of the difficulty of measuring carbon in an organic matrix. Several approaches have been used to measure CNMs in biological matrices, including gel electrophoresis [11], programmed thermal analysis (PTA) [9], Raman spectroscopy [12], and near-infrared (NIR) spectroscopy [13]. However, there are calls for the development of more efficient and reliable measurement methods or protocols for CNMs in an organ. Carbon nanomaterials (CNMs) such as carbon nanotubes, graphene and carbon black are considered hazardous materials when inhaled because of their biopersistence, high bio-durability, and unique physicochemical properties including their size and shape [14][15][16][17]. Therefore, the precise evaluation of the kinetics of CNMs is required for proper hazard and risk assessment of CNMs. In this study, we developed an efficient and reliable protocol for measuring the lung burden of various CNMs including carbon black (CB), nanodiamond (ND), multi-walled carbon nanotube (MWCNT), carbon nanofiber (CNF), and graphene nanoplatelet (GNP) using proteinase K (PK) tissue digestion and quantification of the recovered CNMs using a UV-Vis spectrophotometer.

Results

Working scheme

Figure 1 is a schematic of the workflow used to evaluate CNM lung burden in this study. Five types of CNM, including CB, ND, MWCNT, CNF, and GNP, were selected as test materials, which allowed us to cover most of the CNMs currently employed in research and industry. The first step in our assay development was to identify the optimal wavelength for measuring CNM concentration. This wavelength is needed to reduce any interference from the bio-components of the lung homogenates while still giving accurate CNM quantitation (Fig. 1a). The second step was evaluating the quantification of CNMs after they were added to the lung tissue homogenates (Fig. 1b). To do this, CNMs were collected from lung tissue homogenates following PK digestion. Finally, we needed to evaluate this assay in an in vivo model; here we measured CNM lung burden at 24 h post pharyngeal aspiration in mice (Fig. 1c).

Fig. 1 Schematic workflow for lung burden analysis. a, Selection of the optimal wavelength to quantify CNM concentration without interference from the bio-components of the tissue homogenate. b, Quantification of CNM concentration after mixing with lung tissue homogenates; in this step, proteinase K (PK) digestion was used to collect CNMs from lung tissue homogenates. c, In vivo evaluation of this lung burden assay using a murine pharyngeal aspiration model.

Transmission electron microscopy (TEM) analysis of CNMs

Representative TEM images of the CNMs are presented in Fig. 2. CB and ND were spherical with average sizes of 14 ± 0.2 nm and 4.87 ± 0.4 nm, respectively. MWCNT and CNF were tubular. The size, specific surface area, and ID/IG ratio of the CNMs are presented in Table 1. The diameter and length of MWCNT were 16.7 ± 0.2 nm and 3.55 μm, respectively. The diameter and length of CNF were 24.79 ± 0.4 nm and < 10 μm, respectively. GNP was plate-shaped with a mean diameter of 512 ± 9.7 nm. The BET specific surface area of the CNMs ranged from 184 to 500 m2/g. The ID/IG ratio varied with the type of CNM.
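As a side note on the size statistics quoted above (e.g., 14 ± 0.2 nm for CB), a minimal sketch of how a mean ± SEM diameter could be computed from the roughly 300 particles counted per material is given below; the diameters here are randomly generated for illustration only and do not correspond to the measured data.

```python
import numpy as np

def size_summary(diameters_nm):
    """Return the mean and the standard error of the mean for measured particle diameters."""
    d = np.asarray(diameters_nm, dtype=float)
    return d.mean(), d.std(ddof=1) / np.sqrt(d.size)

# ~300 diameters as would be measured on TEM images (simulated here for illustration)
rng = np.random.default_rng(0)
diameters = rng.normal(loc=14.0, scale=3.5, size=300)  # nm
mean, sem = size_summary(diameters)
print(f"mean diameter = {mean:.1f} +/- {sem:.1f} nm (mean +/- SEM)")
```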
Optimal wavelength selection for measuring CNM concentration with minimal interference

CNMs dispersed in distilled water (DW) with 3% foetal bovine serum (FBS) showed increasing absorbance in the 200-300 nm range, which then reached a plateau and remained stable up to 900 nm (Fig. 3a). The absorbance of the empty vehicle (DW with 3% FBS) used in this study also increased between 200 and 300 nm, but was reduced to nearly zero above 500 nm. In addition, CNM/lung homogenate mixtures exhibited slightly higher absorbance values from 200 to 900 nm when compared to CNMs in DW with 3% FBS (Fig. 3b). However, the absorbance of the lung tissue lysis solution was reduced above 750 nm (approximately 0.068), which meant that this wavelength could be used to successfully measure CNMs in mixed biological solutions without interference. CNMs at 25 μg/mL showed an absorbance of 0.219 at 750 nm (Fig. 3b), confirming that 750 nm was the optimal wavelength for evaluating CNM concentration in lung tissue homogenates. Because this technique uses optical absorbance in the near-infrared region, any nanomaterial having a strong absorbance in this range could be quantified using a similar approach.

Quantification of CNMs dispersed in DW with 3% FBS

To evaluate the detection limit for CNMs, a range of CNM concentrations (0 to 1000 μg/mL) were resuspended in DW with 3% FBS and then evaluated at 750 nm. The lower and upper detection limits for a linear dose-response were 0.39-50 μg/mL for CB, MWCNT, and CNF, and 1.56-200 μg/mL for ND and GNP (Fig. S1, see Supporting Information). To evaluate the accuracy and reproducibility of this detection method, four concentrations of each of the CNMs (i.e., 10, 20, 30, and 300 μg/mL) were tested. The R2 values of the standard curve fits for all CNMs were more than 0.98 (Fig. 4). The detection accuracy (%) for all CNMs was more than 94% compared to the target concentration (Table 2).

Quantification of CNMs from the lung tissue homogenates

The second step in developing our lung burden assay was to evaluate its efficacy in a tissue setting. To do this, CNMs were mixed with lung tissue homogenates, treated with PK and then evaluated using the UV-Vis spectrophotometer technique described above. First, 0.02 g (dry weight) of lung tissue homogenates were treated with 1 mL of Tris buffer (pH 8.0) containing 200 μg PK at 56 °C and showed complete lysis within 2 days (Fig. 5). The presence of erythrocytes did not influence the efficacy of the PK digestion, as lung tissues were completely digested regardless of perfusion (Fig. 5). Thus, the main experiment was performed with lung tissues without perfusion. All CNMs were properly detected using this technique; the recovery percentage for the CNMs between 3.1 and 100 μg/mL was over 86% and the mean recovery percentage over the tested concentrations was over 90% for all types of CNMs (Fig. 6 and Tables S1 and S2, see Supporting Information). It is worth noting that the UV-Vis spectrophotometer can detect both higher and lower CNM concentrations than the ones used here, supporting its widespread utility in this type of application. The loss rate of CNMs due to mechanical processes such as washing and centrifugation ranged from about 3 to 7% (Table 3).

Fig. 2 The shape and morphology of various CNMs evaluated using transmission electron microscopy. a, carbon black; b, nanodiamond; c, carbon nanotube; d, carbon nanofiber; e, graphene nanoplatelet.
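The quantification step described above amounts to a linear standard curve of absorbance at 750 nm versus concentration, followed by inversion for unknown samples. A minimal sketch is shown below; the absorbance values, fitted slope and check-sample numbers are invented for illustration and do not reproduce the study's calibration data.

```python
import numpy as np

# Standard curve: absorbance at 750 nm vs CNM concentration (illustrative values only)
conc = np.array([3.1, 4.7, 6.2, 9.4, 12.5, 18.7, 25.0, 37.5])   # micrograms per mL
a750 = np.array([0.027, 0.041, 0.054, 0.082, 0.110, 0.164, 0.219, 0.328])

slope, intercept = np.polyfit(conc, a750, 1)          # least-squares calibration line
pred = slope * conc + intercept
r2 = 1 - np.sum((a750 - pred) ** 2) / np.sum((a750 - a750.mean()) ** 2)

def to_concentration(absorbance):
    """Invert the calibration line to estimate concentration from an absorbance reading."""
    return (absorbance - intercept) / slope

target = 20.0                        # nominal concentration of a check sample (ug/mL)
measured = to_concentration(0.176)   # absorbance read for that sample
accuracy = 100.0 * (1.0 - abs(measured - target) / target)
print(f"R^2 = {r2:.3f}, measured = {measured:.1f} ug/mL, accuracy = {accuracy:.1f} %")
```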
Because the loss rate depended on the concentration of the samples, it increased slightly with increasing concentration.

Lung burden analysis after pharyngeal aspiration of CNMs in mice

As a pilot study, we evaluated the deposition rate at time zero and the retention rate at 24 h after a single pharyngeal aspiration of CNMs. The deposition rates of CB, ND, MWCNT, CNF, and GNP at time zero compared to the nominal treatment dose (30 μg/mouse) were 84.9, 80.4, 79.1, 80.2, and 75.7%, respectively (Fig. 7a). Meanwhile, the retention rates of CB, ND, MWCNT, CNF, and GNP at 24 h after aspiration compared to time zero were 99.5, 99.5, 98.4, 97.9, and 97.5%, respectively (Fig. 7b).

Discussion

Measuring the lung burden of nanomaterials is now mandatory under the revised inhalation toxicity testing guidelines (i.e., TG412 and TG413) published by the OECD [7,8], and there is an ongoing project to adapt the toxicokinetics testing guidelines to nanomaterials [18]. In addition, new or improved methodologies to evaluate the concentration of nanomaterials in biological samples are essential for continued research, including the evaluation of nanomaterials in biomedical applications. Thus, this study was designed to create a novel methodology to quantify CNMs deposited in the lung using PK enzymatic digestion and UV-Vis spectrophotometry. The recovery of CNMs from organ tissue homogenates is critical for the success of organ burden assays, and various chemical cocktails or enzymes have been suggested for the digestion of these organ tissues [9].

(Figure caption fragment) ... and CNMs collected from a mixture of 25 μg/mL CNMs and 0.02 g (dry weight) lung tissue homogenates following treatment with 200 μg of proteinase K (PK). The inset figure shows that the absorbance of the bio-components was lower at 750 nm after treatment with PK, making it the optimal wavelength to quantify CNMs without interference from the tissue homogenate.

Chemical cocktails to digest organ tissues use oxidants, acids, and alkalis. These cocktails are hazardous to human health and some chemicals, like nitric acid, can induce defects or degradation of nanomaterials even in their most stable formats [19]. Because CNMs are commonly quantified using thermal or optical methods, structural defects or degradation of CNMs can result in inaccurate quantification. In addition, most organ burden analyses of metal-based nanomaterials use acid digestion to lyse the organ tissues and nanomaterials [20,21]. However, this method cannot discriminate nanomaterial-derived metal ions from tissue-derived metals, or bio-persistent nanomaterials from dissolved ions [22,23]. For example, the iron concentration obtained by acid digestion of organs treated with iron oxide nanomaterials can derive from iron in the organ, iron from iron oxide dissolved in body fluids such as lysosomal fluid, or iron from bio-persistent iron oxide. Thus, the extraction of nanomaterials from the organ without defects or degradation is critical for the accurate quantification of nanomaterials in organ tissues. The enzymatic digestion of these tissues could therefore provide a solution to these problems. Here, we showed that enzymatic digestion could facilitate the recovery of nanomaterials from tissue homogenate without damaging the CNMs and allowed for the evaluation of morphological changes like defects or biotransformation [24]. Mineralization of CNMs is required for acid digestion methods such as HCl, nitric acid, and hydrofluoric acid.
However, some chemical digestion agents, such as Solvable (PerkinElmer, Waltham, MA, USA) and Clean 99-K200 (Clean Chemical, Osaka, Japan), and enzymatic digestion methods do not require the mineralization process [9,25]. Because the mineralization process can reduce the recovery of CNMs, a wet process such as proteinase K digestion has advantages for collecting CNMs from biological matrices. To our knowledge, this is the first report demonstrating the recovery of CNMs from a mixture of tissue homogenates and nanomaterials using the PK digestion method. Here, we were able to recover about 90% of the CNMs mixed with lung tissue homogenates. The loss of CNMs during lung burden analysis using the PK digestion method can be attributed to mechanical loss (e.g., 3-7%) from washing and centrifugation and to the inaccuracy (e.g., < 6%) of the UV-Vis spectrophotometer technique, as shown in this study. Furthermore, PK enzymatic digestion was shown to work perfectly with or without perfusion, which is advantageous for toxicity studies.

(Table 2 note) The target concentrations were selected as 10, 20, and 30 μg/mL for non-diluted samples and 300 μg/mL for samples that needed dilution. The data are presented as mean ± SEM from four independent measurements.

After collecting nanomaterials from the organ tissue, it is important to select an appropriate method for instrumental analysis; this will allow proper quantification of the relevant material. CNMs are generally measured using programmed thermal analysis (PTA) like NIOSH 5040 [5,26]. However, PTA analysis requires expensive instrumentation and inaccuracies can occur when CNMs are damaged during the extraction process. This method shows only about 50% recovery in lung burden analyses from a 28-day inhalation study of MWCNT [5,9]. Here, we have been able to show that UV-Vis spectrophotometry can accurately measure five types of CNM over a range of concentrations (0.39-50 μg/mL for CB, MWCNT, and CNF; 1.56-200 μg/mL for ND and GNP) with more than 94% accuracy. Although this study showed a detection limit for MWCNT of 0.39 μg/mL, a previous study suggests that the limit of detection for MWCNT using a UV-Vis spectrophotometer is 0.025 μg/mL [10], which is more sensitive than metal-labelling methods like Ni-labelling, which showed a 0.1 μg/mL limit of detection [27], and PTA, which showed a 0.2 μg limit of detection [9]. In comparison with this study, the lower detection limit for MWCNT obtained by Zhang et al. [10] could be due to differences in physicochemical properties such as length, diameter, and dispersibility [28][29][30]. Furthermore, the UV-Vis spectrophotometric approach can also be applied to other types of nanomaterials such as metal oxides (Table S3, see Supporting Information).

Fig. 5 The visual changes in the lung tissue homogenates after incubation with proteinase K (PK). Carbon black (CB) was selected as a representative CNM, and a mixture of CB at 25 μg and lung tissue homogenates at 0.02 g (dry weight) was incubated in 1 mL of Tris buffer (pH 8.0) containing 200 μg PK. After 24 h, samples were washed by centrifugation and incubated for another 24 h in 1 mL of Tris buffer (pH 8.0) containing 200 μg PK. Note that CB could be completely recovered from the lung tissue homogenates following PK digestion and centrifugation. The perfusion step did not influence nanomaterial recovery.

Conclusions

In this study, we developed an optimized lung burden assay with over 90% accuracy and 83% recovery rates designed to evaluate CNMs.
This methodology relies on PK digestion and UV-Vis spectrophotometry. This method can also be applied to other nanomaterials with significant absorbance in the near-infrared band. In addition, this technique can differentiate nanomaterials from elements naturally present in the body or from the soluble fraction of the material. Furthermore, the combination of PK digestion with other instrumental analyses, such as PTA, ICP-MS, fluorometry, or particle counting, could help to overcome the limitations in quantifying other nanomaterials in biological samples.

Selection of CNMs and TEM analysis

A panel of CNMs was assembled to include various types of nanomaterials including CB, ND, MWCNT, CNF, and GNP. Based on their morphology, two materials can be classified as particles (i.e., CB and ND), two as fibers (i.e., MWCNT and CNF), and one as a platelet (i.e., GNP). All CNMs were obtained from commercial sources: CB (# Printex 90; Evonik Degussa GmbH, Frankfurt, Germany), ND (# ND1; S.W. Chemicals Co., Ltd., Gunsan, Korea), MWCNT (# CM-100; Hanwha Nanotech Co., Seoul, Korea), CNF (# T-CNF; Carbon Nano-material Technology Co., Pohang, Korea), and graphene (# 06-0230; Stream Chemical Inc., Newburyport, MA, USA). The size and shape of the CNMs were evaluated by TEM (JEM-1200EXII, JEOL, Tokyo, Japan) as described in our previous study [31]. The size of the CNMs was calculated by counting at least 300 CNMs using a built-in analysis program (JEOL). Raman spectroscopy was used to evaluate defects in the CNMs using a WITec alpha300 system (WITec GmbH, Ulm, Germany) with incident laser light at a wavelength of 532 nm. The surface area of the CNMs was measured with the Brunauer-Emmett-Teller method using a BELSORP-mini II (BEL Japan Inc.).

Dispersion of CNMs

The degree of dispersion of the preparations is critical because NIR signals are known to be dispersion-dependent [29,30]. Because of the hydrophobic nature of the CNMs, serum was used as the dispersion medium to provide a protein corona, which ensured proper dispersion of the CNMs within the medium [32]. Furthermore, a water bath sonicator was applied to break up agglomerates, an approach broadly used in the process of nanomaterial dispersion [30,33,34]. Briefly, CNM powders were dispersed in DW containing 30% v/v heat-inactivated FBS and sonicated using a bath sonicator (Saehan Sonic, Seoul, Korea) to break up agglomerates. The bath sonicator was operated at a frequency of 40 kHz with an output power of 400 W. Then, DW was added to make up the final working solution of CNMs, and the concentration of FBS was kept below 3% v/v. The target concentrations of the stock solution and working solution used to evaluate the dispersion of the CNMs were 1 mg/mL and 25 μg/mL, respectively. The dispersibility of the stock solution was measured by optical absorbance at 750 nm after dilution with DW to 25 μg/mL. After the selected optimal sonication duration of the stock solution, the working solution was prepared at 25 μg/mL in DW and its dispersibility was evaluated after further sonication for 10-30 min.

Fig. 6 The recovery rate of CNMs mixed with lung tissue homogenates following proteinase K (PK) digestion. All CNMs in the tested dose ranges showed a more than 86% recovery rate from lung tissue homogenates following PK digestion. a, carbon black; b, nanodiamond; c, carbon nanotube; d, carbon nanofiber; e, graphene nanoplatelet. Data are expressed as mean ± SEM and n = 4. The detailed numeric data are presented in Tables S1 and S2 (see Supporting Information).
Because each CNM needs a nanomaterial-specific duration of sonication for the best dispersion efficacy, the stock solution and working solution were sonicated for different durations (Table 4 and Fig. S2). All CNMs were stable for up to 4 h, with some minor variations between types of CNMs (Table 4 and Fig. S3). All standards and samples were measured within 10 min after the dispersion process.

Measurement of CNMs dispersed in DW using a UV-Vis spectrophotometer

To evaluate the accuracy of CNM concentration measurements on the UV-Vis spectrophotometer, the absorbance spectra of well-dispersed CNMs in DW with 3% FBS were measured at 200-900 nm in quartz cuvettes using a UV-Vis spectrophotometer (Lambda 365, Perkin-Elmer, Waltham, Massachusetts, USA). Based on these results and those from several previous studies [35][36][37], we selected an absorption wavelength of 750 nm for all of the CNM experiments. To estimate the linear dosage range for the standard curve, various concentrations of CNMs from 0 to 1000 μg/mL were evaluated using the spectrophotometer, and a standard curve of 8 concentrations (3.1, 4.7, 6.2, 9.4, 12.5, 18.7, 25, and 37.5 μg/mL for CB, MWCNT, and CNF; 9.4, 12.5, 18.7, 25, 37.5, 50, 75, and 100 μg/mL for ND and GNP) was selected for further experiments. To evaluate the accuracy and reproducibility, we selected target concentrations of 10, 20, and 30 μg/mL for non-diluted samples and 300 μg/mL for samples that needed dilution. These target concentrations were not included in the data points used to calculate the calibration regression. Four independent measurements were performed for each concentration to evaluate the accuracy and reproducibility of this system of measurement.

Recovery of CNMs from lung tissue homogenates

Six-week-old specific-pathogen-free female ICR mice were purchased from Samtako (Gyeonggi-do, Korea). The mice were maintained and handled in accordance with the procedures approved by the Institutional Animal Care and Use Committee of Dong-A University. Animals were acclimatized for one week prior to experimentation. Samples were then centrifuged at 15000×g for 20 min and the supernatant was removed. The pellets were resuspended in 1 mL of PK digestion buffer, sonicated for 5 min in a bath sonicator (Saehan Sonic), and incubated at 56 °C for a further 24 h. These suspensions were centrifuged at 15000×g for 20 min and the pellets were resuspended in 1 mL of DW and sonicated for 5 min in a bath sonicator (Saehan Sonic). The recovered CNMs were quantified using the UV-Vis spectrophotometer as described above.

Evaluation of the loss rate of CNMs during the lung burden analysis

We evaluated the mechanical loss rate of CNMs during various processes in the lung burden analysis, such as washing and centrifugation. Before starting the experiment, the concentrations of dispersed CNMs at nominal concentrations of 10, 20, and 30 μg/mL were measured by UV-Vis spectrophotometer. Then, the suspensions of CNMs were processed with the identical procedures described in "Recovery of CNMs from lung tissue homogenates" but without the addition of lung tissue homogenates. Lung tissue homogenates were excluded to avoid possible interference from the bio-components in the UV-Vis spectrophotometer technique and to focus on the mechanical loss rate of CNMs from washing and centrifugation. The recovered concentrations of CNMs were then expressed as a loss percentage relative to the initial concentration.
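The loss and recovery figures used throughout this section reduce to simple ratios against the initial (or spiked) concentration. A short sketch, with illustrative numbers only, is given below.

```python
def loss_percentage(initial_ug_per_ml, recovered_ug_per_ml):
    """Mechanical loss during washing/centrifugation, relative to the initial concentration."""
    return 100.0 * (initial_ug_per_ml - recovered_ug_per_ml) / initial_ug_per_ml

def recovery_percentage(spiked_ug_per_ml, recovered_ug_per_ml):
    """Recovery of CNMs spiked into lung tissue homogenate after PK digestion."""
    return 100.0 * recovered_ug_per_ml / spiked_ug_per_ml

# Illustrative values, not measured data
print(loss_percentage(30.0, 28.5))      # e.g. ~5 % mechanical loss
print(recovery_percentage(25.0, 23.0))  # e.g. ~92 % recovery
```

The in vivo deposition and retention rates described in the next section follow the same pattern, with the nominal dose or the time-zero burden as the denominator.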
Lung burden analysis after a single pharyngeal aspiration in mice A pilot study of lung burden analysis was performed after a single pharyngeal aspiration of CNMs in mice. A schematic of the workflow for this study is presented in Fig. 8. Six-week-old female ICR mice (Samtako) were acclimatized for one week prior to experimentation. To perform the pharyngeal aspiration, mice were anaesthetized with isoflurane (Piramal Critical Care) and placed on a board in a near-vertical position. Then, a suspensions of well-dispersed CNMs in PBS with 3% (v/v) heat-inactivated mouse serum was loaded into the mouth and aspirated by holding the tongue at full extension and covering the nose. The aspiration volume was 50 μL/mouse and 3% mouse serum in PBS served as the vehicle control. The treatment dose was 30 μg/mouse. To evaluate the deposition rate in comparison with the nominal treatment dose, mice were sacrificed immediately after a single pharyngeal aspiration and lung burden analysis was performed. The deposition rate was calculated by dividing the lung burden at time zero with the nominal treatment dose. To evaluate the retention rate of CNMs at 24 h, mice were sacrificed at 24 h after a pharyngeal aspiration. The retention rate was calculated by dividing the lung burden at 24 h with that of time zero. At each time-point, mice were sacrificed by removing blood from the inferior vena cava under deep isoflurane anaesthesia. Lung tissue was cut into pieces and dried in an oven at 60°C for 48 h. Dried lung tissues were weighed and crushed using a tissue homogenizer (Thomas Scientific). Then, 1 mL of PK digestion buffer containing 200 μg PK was added to 0.02 g (dry weight) lung homogenate and incubated for 24 h at 56°C. Samples were centrifuged at 15000×g for 20 min to collect pellets containing CNMs and undigested lung tissue, and then resuspended in 1 mL of PK digestion buffer and incubated further 24 h at 56°C. Finally, these samples were then sonicated for 5 min. The CNMs were collected via centrifugation at 15000×g for 20 min and resuspended in DW with 5 min sonication. The recovered CNMs were quantified using the UV-Vis spectrophotometer as described above. Statistical analysis The data are presented as mean ± SEM and linear regression was applied to the standard curve fit. GraphPad Prism software (ver. 6.0; La Jolla, CA) was used to draw the graphs and perform all the statistical analysis. Additional file 1: Figure S1. Measurement of the concentration of CNMs using a UV-Vis spectrophotometer. CNMs were dispersed in distilled water with 3% FBS and tested up to 1000 μg/mL. The absorbance was measured at 750 nm wavelength. Note that the lower and upper detection limits for a linear dose-response were 0.39-50 μg/mL for CB, MWCNT, and CNF, and 1.56-200 μg/mL for ND and GNP. (A), carbon black; (B), nanodiamond; (C), multi-walled carbon nanotube; (D), carbon nanofiber; (E), graphene nanoplatelet. Figure S2. Evaluation of the dispersibility of CNMs. The time-course dispersibility of stock solution (A) and working solution (B) of CNMs. (A), To evaluate the dispersibility of the stock solution, 1 mg/mL stock solution was sonicated for 10 min -100 min. Then, at each time-point, the stock solution was diluted in DW at 25 μg/mL with vigorous vortexing for 30 s and measured the optical density at 750 nm. 
(B), To evaluate the duration of sonication for working solution, the working solution (25 μg/mL) of each NM after an optimal sonication duration of stock solution (see Table 4) was sonicated further up to 30 min and optical density was measured at 750 nm. n = 4. Figure S3. Duration of the dispersion stability of CNMs. The working solution of CNMs at 25 μg/mL was sonicated for 10 min after an optimal sonication duration of stock solution (see Table 4). Then, the duration of the dispersion stability was measured at each time-point up to 24 h. n = 4. Table S1. The recovery rates of CB, MWCNT, and CNF from lung tissue homogenates following proteinase K digestion with quantification using the UV-Vis spectrophotometer technique. Table S2. The recovery rates of ND and GNP from lung tissue homogenates following proteinase K digestion with quantification using the UV-Vis spectrophotometer technique. Table S3. The screening result of NIR absorbance at 750 nm of various types of nanomaterials.
6,342
2020-09-11T00:00:00.000
[ "Materials Science", "Medicine", "Environmental Science", "Chemistry" ]
WAVE DYNAMICS MYSTERY DISCOVERED BY LIDAR, RADAR AND IMAGER

Since the start of the McMurdo Fe lidar campaign, large-amplitude (~±30 K), long-period (4 to 9 h) waves with upward energy propagating signatures have frequently been observed in the MLT temperatures. Despite their frequent appearance, such waves were neither widely observed nor well understood in the past. At McMurdo (77.8°S, 166.7°E), simultaneous observations of such waves using lidar, radar and airglow imager can provide the 3-D intrinsic wave-propagation properties that are greatly needed for understanding their sources and potential impacts. This study presents the first coincident observation of these 4-9 h waves by lidar, radar and airglow imager in the Antarctic mesopause region.

INTRODUCTION

Although observations in the Antarctic middle and upper atmosphere started very early, there were no range-resolved temperature measurements at McMurdo until the start of the Chu group's lidar campaign [1]. Since then, many new discoveries have been made by lidar observations in Antarctica, e.g., the thermospheric neutral iron (Fe) layers [1], solar effects on the Fe layer bottomside, two simultaneous Inertia-Gravity Waves (IGWs) [2], large eastward planetary waves, and super-exponential growth of the thermal tide amplitude above 100 km. These discoveries were only made possible by the robustness and power of the lidar system. To achieve this, Chu group members have upgraded the lidar several times and have constantly maintained the system well. Therefore, this lidar is capable of running continuously for several days with high performance, which enables statistical studies of wave occurrence and properties.

Benefiting from the capabilities of the Fe Boltzmann lidar, a completely new wave phenomenon has been discovered at McMurdo, i.e., strong and persistent 4 to 9 h period waves with short vertical wavelengths (λ z) extending from the stratosphere all the way to the lower thermosphere that occur all year round, e.g., [1,2]. However, after its discovery, the true identity of these waves still remains a mystery to the community: are these waves ordinary IGWs, or short-period Atmospheric Normal Modes (ANMs), or something unknown before? Initial case studies have suggested that some of these waves are ordinary IGWs [2]. However, the persistent appearance of these waves over McMurdo contradicts the current understanding of Gravity Waves (GWs) being intermittent. Furthermore, what could be the persistent sources for these waves around Antarctica? There are no commonly known sources, especially in summertime when the polar vortex has disappeared. On the other hand, ANMs, natural resonant modes of atmospheric free oscillations [3], or 8-h/6-h tides could be possible causes. However, these explanations cannot solve the mystery because the observed vertical wavelength λ z (~20 km) is much shorter than those predicted for ANMs or tides according to both theory and simulations. The lack of a convincing theory for these 4-9 h waves in Antarctica has put forth a big mystery. Solving this mystery is crucial to improving current weather and climate models and to understanding climate change, especially in polar regions. Currently, models simulate large-scale waves such as planetary waves and tides well, but most of them cannot directly resolve small-scale waves such as GWs and ANMs. These small-scale waves, although smaller in scale, transport significant amounts of energy and momentum up to the MLT region where they strongly influence the mean wind and alter the
temperature structure. Therefore, this incapability has become the biggest uncertainty in weather and climate forecasting models and has greatly undermined the models' predictive capability.

To resolve this mystery, we need the 3-dimensional (3-D) intrinsic wave properties, i.e., the observed and intrinsic periods, the horizontal propagation direction, and the wavelengths λ h and λ z. Fortunately, two other instruments collocated with our Fe Boltzmann lidar enable such a study. One is the Scott Base MF radar, which has continuous temporal coverage and measures MLT winds between 70 and 100 km. Combining our lidar with this radar, intrinsic wave properties can be derived, e.g., [2]. The other is the Utah State University (USU) all-sky infrared imager, which provides measurements of the intensity of the infrared OH emission layer (~87 km). The OH imager data can provide horizontal propagation information directly.

Simultaneous and common volume observations of the 4-9 h waves by lidar, radar and imager

On 11 June 2013, strong wave perturbations with period τ ~ 5 h were observed in the Fe lidar temperatures, as shown in Figure 1a. This wave has downward phase progression and λ z of ~20 km. The raw MLT temperature data have resolutions of 0.1 h and 0.1 km. The data we used were smoothed temporally and vertically with Hamming windows of 0.5 h and 1 km FWHM. Therefore, only waves with τ ≥ 1 h and λ z ≥ 2 km were resolved. The MF wind data also show a 5-h wave, as in Figure 1b; after removing the semidiurnal and diurnal tides, large wind perturbations are seen in both the zonal and meridional directions. We first assume this wave to be an IGW and then test it against linear GW theory. At the same time, the 5-h wave was clearly observed by the OH imager. Figure 2 shows two keograms, one for the South-North (S-N) direction and another for the West-East (W-E) direction, which are composed of the S-N and W-E slices at the center of each raw image proceeding with time. The horizontal ranges over the S-N and W-E directions are 280 and 350 km, while the temporal and spatial resolutions are 0.5 min and ~1 km, respectively.

Data Analysis

In order to minimize contamination by tidal and planetary wave oscillations, the temperatures, winds and OH intensities are all band-pass filtered by a 6th-order Butterworth filter with a pass band between 2.5 and 7 h. We then fit the two OH intensity keograms to a non-linear monochromatic wave model to extract the horizontal propagation properties of the waves. The model I(t, x, y), which takes one form for each keogram, is a monochromatic plane-wave form in which ω is the observed frequency, k is the zonal wavenumber, and l is the meridional wavenumber. The fitting process minimizes the chi-square fitting error. We utilize a Monte-Carlo sampling method to do the fitting [4]. Basically, at each step of the fitting, a set of random parameters around the current model parameters is generated. Then, a new χ2 is calculated. If the new χ2 is smaller than the previous one, the new set of model parameters is accepted. We continue this search until the fitting error is below our tolerance and the parameters have converged. This method avoids entrapment in local likelihood maxima and is therefore a useful way to solve non-linear fitting problems. λ h and the propagation direction are then derived from the fitted wavenumbers. The fitted results are λ h ~1760 km and θ ~180°. As shown in [2], by combining lidar temperatures and radar winds we can calculate the intrinsic wave properties for an IGW. The radar wind vector at each altitude can indicate the wave propagation direction, but with a 180° ambiguity.
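The keogram fitting described above can be illustrated with a greatly simplified sketch. It assumes a basic monochromatic form I = A cos(ωt − kx + φ) + I0 for a single keogram slice and a bare-bones random-walk chi-square search; the paper's actual model, parameterization and convergence criteria are not reproduced here, and all numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(params, t, x):
    """Assumed monochromatic form for one keogram: I = A*cos(omega*t - k*x + phi) + I0."""
    A, omega, k, phi, I0 = params
    return A * np.cos(omega * t - k * x + phi) + I0

def chi_square(params, t, x, data):
    return np.sum((data - model(params, t, x)) ** 2)

def monte_carlo_fit(t, x, data, p0, step, n_iter=5000):
    """Random-walk search: accept a trial step only if it lowers the chi-square."""
    best, best_chi2 = np.array(p0, float), chi_square(p0, t, x, data)
    for _ in range(n_iter):
        trial = best + step * rng.standard_normal(best.size)
        chi2 = chi_square(trial, t, x, data)
        if chi2 < best_chi2:
            best, best_chi2 = trial, chi2
    return best, best_chi2

# Synthetic keogram (time in hours, distance in km), just to exercise the fit
t = np.linspace(0, 12, 97)[:, None]
x = np.linspace(-175, 175, 57)[None, :]
true = (20.0, 2*np.pi/5.0, 2*np.pi/1760.0, 0.3, 100.0)   # tau = 5 h, lambda_x = 1760 km
data = model(true, t, x) + rng.normal(0, 2.0, size=(97, 57))

p0 = (10.0, 2*np.pi/6.0, 2*np.pi/1500.0, 0.0, 90.0)
step = np.array([1.0, 0.05, 1e-4, 0.1, 1.0])
fit, chi2 = monte_carlo_fit(t, x, data, p0, step)
print(f"fitted period ~ {2*np.pi/fit[1]:.1f} h, zonal wavelength ~ {2*np.pi/abs(fit[2]):.0f} km")
# With k and l from both keograms: lambda_h = 2*pi/sqrt(k**2 + l**2), theta = atan2(k, l).
```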
Then, by comparing the phase difference between the wind and temperature perturbations, we can resolve this ambiguity. We take a new approach to obtain this phase difference, which is to fit the temperatures and winds together using a wave model of the same form for both quantities (see [5]), where t 0 is the phase of the temperature perturbation and t 1 is the phase of the horizontal parallel wind perturbation. We find the phase difference is ~297°, which is very close to that predicted by linear theory for an ordinary IGW (~270°). This means that our assumption of this wave being an ordinary IGW is valid. After the 180° ambiguity was resolved, we could then use the method in [2] to solve for the intrinsic properties. The results are shown in Figure 3. As noticed, the directly fitted OH keograms yield a much longer λ h than that calculated from the lidar and radar. The discrepancy could be due to the fact that OH density variations do not have the same response as the temperature variations, owing to the chemistry involved.

STATISTICS

Although we have reported several case studies of these 4-9 h waves identified as IGWs [2,6], we believe a statistical study of the wave properties will be more helpful to obtain a comprehensive interpretation of the waves. In this section, we utilize the winter (May, June, and July) temperature data from 2011 to 2014 to derive several statistical wave properties of these 4-9 h waves. We select observation episodes longer than 12 h for this study because this allows us to accurately determine the wave periods. A total of 135, 267, and 172 hours of data were used in this study for May, June and July, respectively. Figure 4a shows an example of the persistence of these 4-9 h waves in our observations. During nearly 50 hours of observations on 18 June 2014, clear downward phase progressions are always present. Figure 4b shows the corresponding frequency spectra at 85 km, 90 km and 95 km, in which the waves peak at ~5.5 h, ~6.5 h, and 8 h. The persistent presence and discrete distribution of the dominant periods of the 4-9 h waves, as shown in Figure 4, are very common in our observations.

Wave Frequency Spectra

Observations have shown that ordinary GWs tend to have a so-called "canonical" wave spectrum, i.e., in the spectral region where waves are regarded as being saturated, the power spectral density (PSD) follows a form of m^-p or ω^-q (m and ω are the vertical wavenumber and frequency, respectively; -p and -q are the corresponding spectral slopes) [7]. According to diffusive filtering theory [7], p = 2q - 1. The studies by Lu et al. [8] showed that -p is around -2.26 at McMurdo; therefore, -q should be around -1.6. We plot the mean frequency spectra for each month in Figure 5. Two things differ from the predicted universal power law. First, the slopes in all three months are much steeper than -1.6. Second, the spectral energies in the 4-9 h period range are higher than the power-law fitting, suggesting that these waves do not follow the power law and are therefore different from ordinary GWs.
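As a rough illustration of the spectral-slope comparison above, the sketch below estimates a power-law slope from a single-altitude temperature series using a plain periodogram and a log-log straight-line fit. The time series is synthetic, and a careful analysis would include the windowing, averaging over episodes and altitudes, and the 1-11 h fitting range described in the text.

```python
import numpy as np

def psd_slope(series, dt_hours, fit_band=(1.0, 11.0)):
    """Estimate the power-law slope of a temperature series' frequency spectrum by a
    straight-line fit of log10(PSD) vs log10(frequency) within fit_band (periods in hours)."""
    series = np.asarray(series, float) - np.mean(series)
    freqs = np.fft.rfftfreq(series.size, d=dt_hours)     # cycles per hour
    psd = np.abs(np.fft.rfft(series)) ** 2                # simple periodogram, no windowing
    lo, hi = 1.0 / fit_band[1], 1.0 / fit_band[0]          # convert period band to frequencies
    sel = (freqs >= lo) & (freqs <= hi)
    slope, _ = np.polyfit(np.log10(freqs[sel]), np.log10(psd[sel]), 1)
    return slope  # a canonical saturated spectrum would give a slope near -q ~ -1.6

# Synthetic 50-hour series sampled every 0.5 h with a dominant ~6.5-h oscillation plus noise
t = np.arange(0, 50, 0.5)
temp = 8.0 * np.cos(2 * np.pi * t / 6.5) + np.random.default_rng(2).normal(0, 2.0, t.size)
print(psd_slope(temp, dt_hours=0.5))
```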
Occurrence frequency Ordinary GWs are believed to be intermittent, and therefore the occurrence frequency of these waves is critical when identifying whether they are ordinary GWs. Because multiple waves were often present at the same time, wavelet techniques were used to analyze the temperature time series at each altitude, as described in [9]. Multiple peaks were identified in the wavelet power as a function of period and time, each of which is considered one wave event. The time span of each wave event is then determined from the full-width-at-half-maximum (FWHM) of the corresponding wavelet power peak. Usually, 2-6 wave events can be identified at each altitude during each observation episode using this method. Figure 6 shows the percentage of the time span of a wave event over the entire observation time as a function of wave period in May, June and July, averaged over all altitudes through the 4 years of the lidar campaign. Clearly shown in Figure 6 is the prevalence of the 4-9 h waves, which are detected by the algorithm during ~92%, ~87% and ~84% of the entire observation time in May, June, and July, respectively. This high occurrence frequency contradicts the current understanding of GWs as intermittent. Another distinct feature is the discrete distribution of the dominant periods. For example, in July, the occurrence frequency of waves with τ ~ 8 h is two-thirds higher than that of waves in the neighboring period bins. If these are ordinary GWs, this means that their sources have a preference for certain periodicities. Dominant vertical wavelengths To determine the λ_z of these waves, we first filter the wave perturbations with a 6th-order Butterworth band-pass filter (pass band between 3 and 12 h) and then calculate vertical spatial spectra for each episode, followed by averaging over each month. Figure 7 shows the mean vertical spatial spectra. The dominant λ_z is ~22 km for all three months. Amplitude We then plot the wave amplitude (standard deviation) versus altitude in Figure 8. The waves have been categorized into three groups: 3-5 h, 5-9 h, and 9-11 h. At least in May and June, the 5-9 h wave amplitude clearly has a faster growth rate than the other two groups of waves, suggesting that these waves encounter less damping or are less saturated. CONCLUSIONS AND DISCUSSION Large-amplitude (~±30 K), long-period (4 to 9 h) upward propagating waves are frequently observed in the MLT temperatures at McMurdo. Simultaneous observations of such waves using lidar, radar and airglow imager have revealed that one of the wave events, on 11 June 2013, has the following properties: τ ~ 5 h, λ_z ~20 km, λ_h ~1100 km, propagating from north to south at azimuth θ ~180°. The statistical study has shown that, unlike ordinary GWs, these waves have higher energy than the predicted universal power-law spectra and very high occurrence frequencies. The discrete distribution of the dominant periods is also a distinct feature of these waves. Short-period ANM, 6-h or terdiurnal tides cannot be the cause of these waves because of the short vertical wavelength (λ_z ~22 km). Figure 1. (a) Raw temperatures from 1.5 to 15 UT on 11 June 2013 by lidar. (b) Zonal wind (top) and meridional wind (bottom) data by MF radar at the same time, with the 12-h and 24-h tides removed. Figure 3. Intrinsic properties calculated from OH imager keograms and lidar/radar data. Figure 5.
Mean frequency spectra in the MLT for May, June and July. Black dashed lines are fits of the power-law shape in the range of ~1-11 h. The data used to calculate the spectra have 0.5 h and 1 km intervals in time and space. Figure 6. Percentage of the time span of a wave event over the entire observation time as a function of wave period in May, June and July through the 4 years of the lidar campaign. Results are averaged between 81 and 105 km. Figure 7. Mean vertical spatial spectra in the MLT region in May, June and July. Dotted lines are the spectra for each episode. Figure 8. Standard deviations of the wave perturbations versus altitude for wave periods of 3-5 h, 5-9 h and 9-11 h in May, June and July. Planetary-scale inertia-gravity waves simulated by Mayr et al. [10] at high latitudes, which have τ ~ 10 h and λ_z ~20 km, are one possible candidate. These waves are related to the Class I gravity mode discussed in the classical literature. However, Mayr et al. only showed waves with τ ~ 10 h; whether their simulations also contain periodicities between 4 and 9 h is unknown. The planetary-scale IGWs of Mayr et al. have zonal wavenumbers 0-4 and a long horizontal wavelength, λ_h ~4000 km. A multiple-station observation network is required to examine the zonal structure of the 4-9 h waves.
Image quality and acquisition time assessments for phase oversampling in compressed sensing sensitivity encoding: Comparison with conventional SENSE Abstract This study compared sensitivity encoding (SENSE) and compressed sensing sensitivity encoding (CS-SENSE) with respect to phase oversampling distance and assessed its impact on image quality and image acquisition time. The experiment was performed with a large-diameter phantom using 16-channel anterior body coils. All imaging data were divided into three groups according to the parallel imaging technique and oversampling distance: group A (SENSE with a phase oversampling distance of 150 mm), group B (CS-SENSE with a phase oversampling distance of 100 mm), and group C (CS-SENSE with a phase oversampling distance of 75 mm). No statistically significant differences were observed among groups A, B, and C for either the T2 or the T1 turbo spin-echo (TSE) sequence using an acceleration factor (AF) of 2 (p = 0.301 and 0.289, respectively). Compared with AF 2 in group A, the scan time at AF 2 in groups B and C was reduced by 11.2% and 23.5% (T2 TSE) and by 15.8% and 22.7% (T1 TSE), respectively, while providing comparable image quality. Significant image noise and aliasing artifacts were more evident at AF ≥ 2 in group A than in groups B and C. CS-SENSE with a smaller phase oversampling distance can reduce image acquisition time without image quality degradation compared with SENSE, despite the increase in aliasing artifacts as the AF increases in both CS-SENSE and SENSE. INTRODUCTION Several different parallel imaging techniques have been introduced to reduce data acquisition time in magnetic resonance imaging (MRI). [1][2][3] In recent years, compressed sensing (CS) and the hybrid technique (CS-SENSE), i.e., the combination of CS and sensitivity encoding (SENSE), have been widely used in clinical practice. The image-based SENSE technique theoretically does not require an extra field of view (FOV) given appropriate coverage. However, scans with a prescribed FOV smaller than the target anatomy do require extra FOV, called the oversampling distance, in the phase-encoding direction with increasing acceleration factor (AF), which is defined as the ratio between fully sampled and under-sampled data. 1,4 As the spacing between k-space lines is inversely proportional to the FOV, an increase in AF results in a reduced-FOV image from each of the coil elements. Thus, the Nyquist sampling criterion is not met and aliasing artifacts appear in the reduced FOV. 5 To overcome this problem, CS-SENSE uses a variable-density incoherent undersampling scheme that optimizes the balance between random and SENSE sampling using iterative reconstruction. The oversampling distance in the phase-encoding direction is related to the data acquisition time because of the increased number of phase-encoding steps, which lengthens the scan time. Hence, it is important to properly adjust the phase oversampling distance and shorten the image acquisition time while avoiding aliasing artifacts when an AF is used.
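To make the scan-time argument concrete, the following is a minimal illustrative sketch (not taken from the study); the FOV, pixel size, and oversampling values are placeholders used only to show how the number of phase-encoding steps, and hence the relative scan time, scales with the phase oversampling distance and the AF:

def phase_encoding_steps(fov_phase_mm, oversampling_mm, pixel_mm=1.0, af=2.0):
    # prescribed phase FOV plus oversampling sets the sampled extent; the AF undersamples it
    return (fov_phase_mm + oversampling_mm) / pixel_mm / af

base = phase_encoding_steps(400, 150, af=2)                       # 150 mm oversampling (group A-like)
for label, ovs in (("100 mm (group B-like)", 100), ("75 mm (group C-like)", 75)):
    steps = phase_encoding_steps(400, ovs, af=2)
    print(f"{label}: {100 * (base - steps) / base:.1f}% fewer phase-encoding steps than 150 mm")

The exact reductions reported in the study also depend on sequence timing, so these placeholder numbers only illustrate the direction of the effect.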
However, most SENSE or CS-SENSE studies appear not to have reported the phase oversampling distance used to acquire images without aliasing artifacts. [6][7][8][9] Moreover, no study focusing on the comparison of phase oversampling distance between SENSE and CS-SENSE exists. Therefore, this study aims to compare SENSE and CS-SENSE with respect to phase oversampling distance and to assess its impact on image quality and acquisition time. Phantom and study design This study used a large-diameter phantom (Philips Healthcare, Eindhoven, The Netherlands) with a diameter of 40 cm and 45 small circular holes, each separated by 5.0 cm, for the experiments. The phantom was filled with copper sulfate, which enables the holes in the phantom to appear as hyperintense objects in the MR image, thereby allowing them to be evaluated for image distortion. The phantom was carefully positioned and aligned by fixing the support device. All imaging data were divided into three groups according to the parallel imaging technique and oversampling distance: group A (SENSE with a phase oversampling distance of 150 mm), group B (CS-SENSE with a phase oversampling distance of 100 mm), and group C (CS-SENSE with a phase oversampling distance of 75 mm). Image analysis The structural similarity index (SSIM) was used as an image quality assessment in MATLAB (R2016b; MathWorks, Natick, MA, USA). The SSIM was calculated as SSIM(x,y) = [(2 μ_x μ_y + C_1)(2 σ_xy + C_2)] / [(μ_x^2 + μ_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2)], where μ_x, μ_y, σ_x, σ_y, and σ_xy are the local means, standard deviations, and cross-covariance for images x and y, respectively, and C_1 and C_2 are small stabilizing constants. This value ranges from 0 to 1 and is ~1 when the two images are nearly identical. 10,11 Moreover, the signal-to-noise ratio (SNR) was calculated using the National Electrical Manufacturers Association subtraction method according to the equation SNR = √2 · S_mean / SD_sub, 12,13 where S_mean is the mean signal value of the two images used for subtraction and SD_sub is the standard deviation of the subtracted image, obtained from two images acquired with identical parameters (the subtraction is performed to produce a noise-only image). S_mean and SD_sub were acquired from the corresponding 85% region of interest in the two images and in the subtracted image, respectively (Figure 1). The √2 factor is required because, by propagation of error, the noise is obtained from the difference image. 14 The SNR analysis was performed using ImageJ (Bethesda, MD, USA; http://rsbweb.nih.gov/ij/). Statistical analysis The Kolmogorov-Smirnov test was used to confirm that the SNR and SSIM values followed a normal distribution. All values among the three groups were compared using analysis of variance, based on the results of the Kolmogorov-Smirnov test. Moreover, post hoc tests were performed using the Tukey-Kramer method when statistically significant differences were indicated. Statistical analyses were performed using IBM SPSS Statistics for Windows/Macintosh, v. 21.0 (IBM Corp., Armonk, NY, USA). For all statistical analyses, a two-sided level of p < 0.05 was considered statistically significant. RESULTS The measured SNR values are presented in Table 2. The SNR values had a general tendency to decrease as the AF increased in all groups. The highest and lowest SNR values were observed at AF 1.5 in group B and AF 4 in group A, respectively, in both the T2 and T1 TSE sequences. In T2 and T1 TSE using AF 1.5, no statistically significant differences were found between groups A and C (p = 0.928 and 0.252, respectively).
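Before continuing with the results, a brief computational sketch of the two metrics defined in the Image analysis subsection may be helpful; it is not the study's MATLAB/ImageJ pipeline, and the arrays, ROI mask, and SSIM constants below are placeholders:

import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    # single-window SSIM over the whole image; c1 and c2 are illustrative stabilizing constants
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

def nema_snr(x, y, roi):
    # NEMA subtraction method: sqrt(2) * mean signal / SD of the noise-only difference image
    mean_signal = 0.5 * (x[roi].mean() + y[roi].mean())
    return np.sqrt(2) * mean_signal / (x - y)[roi].std()

rng = np.random.default_rng(0)
img1 = 100 + rng.normal(0, 2, (256, 256))      # placeholder repeated acquisitions
img2 = 100 + rng.normal(0, 2, (256, 256))
roi = np.ones(img1.shape, dtype=bool)          # stand-in for the central 85% region of interest
print(global_ssim(img1, img2), nema_snr(img1, img2, roi))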
Returning to the results, no statistically significant differences were likewise noted between groups A, B, and C in either the T2 or the T1 TSE sequence using AF 2 (p = 0.301 and 0.289, respectively; Figure 2). In comparison with AF 2 in group A, the scan time at AF 2 in groups B and C was reduced by 11.2% and 23.5% (T2 TSE) and by 15.8% and 22.7% (T1 TSE), respectively, while providing comparable image quality. Notes: *p-values between groups A and B when statistically significant differences were indicated by post hoc tests using the Tukey-Kramer test; †p-values between groups A and C, and ‡ between groups B and C, when statistically significant differences were indicated by post hoc tests using the Tukey-Kramer test. Group A, SENSE with phase oversampling distance of 150 mm; Group B, CS-SENSE with phase oversampling distance of 100 mm; Group C, CS-SENSE with phase oversampling distance of 75 mm. AF, acceleration factor; TSE, turbo spin-echo. FIGURE 3 Images showing the effect of parallel imaging technique with phase oversampling distance and acceleration factors among the three groups using both sequences. Group A, SENSE with phase oversampling distance of 150 mm; Group B, CS-SENSE with phase oversampling distance of 100 mm; Group C, CS-SENSE with phase oversampling distance of 75 mm. When using AFs 1, 3, and 4, the SNR values were significantly higher in group B than in group A in both sequences (p < 0.05), despite there being no statistical difference at AF 2 (p > 0.05; Figure 3). Figure 3 shows the images used to calculate the SNR values as a function of group, sequence, and AF. Significant image noise and aliasing artifacts were more evident at AF ≥ 2 in group A compared with groups B and C in T2 TSE. A reduced aliasing artifact was seen at AF ≥ 3 in group B compared with group A for both sequences, despite the increase in aliasing artifacts as the AF increased in both CS-SENSE and SENSE. Table 3 presents the SSIM values obtained among the three groups using both T2 and T1 TSE sequences, which were nearly identical. All SSIM values were > 0.9928 regardless of the parallel imaging technique, phase oversampling distance, and AF. Overall, no significant differences in image quality degradation among the three groups were observed (p > 0.05; Figure 4). DISCUSSION CS-SENSE with a smaller phase oversampling distance in the current study showed a reduced image acquisition time without image quality degradation compared with SENSE. CS-SENSE with a 100 mm phase oversampling distance demonstrated a significantly higher SNR value without image distortion, with an image acquisition time reduction of up to 17.3% for both the T2 and T1 sequences, compared with SENSE with a phase oversampling distance of 150 mm. The effort to reduce image acquisition time without deteriorating image quality is a crucial issue in clinical practice, and various studies related to these efforts have been conducted. Among them, CS-SENSE has a unique k-space undersampling method based on a balanced incoherent acquisition of variable density with iterative reconstruction. 6,[15][16][17] Recent studies demonstrated that CS-SENSE offers image quality similar to that of SENSE, with a reduction in image acquisition time. 15,[17][18][19] These results are consistent with the current study concerning image quality and image acquisition time reduction. However, those studies gave no account of phase oversampling distance and did not focus on it in their results.
Thus, the result of the current study is worth providing as baseline phase oversampling information for further evaluation, because the current study demonstrates the effect of phase oversampling distance between SENSE and CS-SENSE on image quality and image acquisition time. In addition, the results of the current study showed that CS-SENSE, which used a smaller phase oversampling distance than SENSE, can reduce image acquisition time without image quality degradation. This may be explained by the difference in the undersampling method, which allows for denser sampling in the central than in the peripheral k-space. 16,19 Moreover, the iterative reconstruction used to remove aliasing artifacts in CS-SENSE may contribute to image quality comparable to SENSE while reducing image acquisition time. Regarding SNR values and image acquisition time, CS-SENSE was superior to SENSE with a phase oversampling distance 50% shorter than that of SENSE, except at AF 2, as well as offering up to a 26.4% reduction in image acquisition time. Therefore, CS-SENSE can not only reduce image acquisition time but also yield comparable image quality even with a shorter phase oversampling distance than SENSE. In contrast, both SENSE and CS-SENSE using AF ≥ 3 showed significantly increased aliasing artifacts. These results are consistent with those of other studies that reported increased noise and aliasing artifacts when higher AFs were used. 13,20 Thus, more consideration should be given to the phase oversampling distance as higher AFs are used. Given the findings of the current study, the phase oversampling distance with parallel imaging should be optimized and discussed, as it is important to understand its influence on image quality and acquisition time. The current study had some limitations. First, the phase oversampling distance in SENSE could not be set equal to that of CS-SENSE. This is because, owing to mechanical constraints of the SENSE acquisition technique, SENSE can only operate over a distance of at least 60 cm including the phase oversampling distance and the FOV. Second, this study only used a large phantom, which does not represent various organs, soft tissues, or specific target tissues. Further study including various phantom sizes, patients, and body parts is required to demonstrate the effects of combining phase oversampling distance with the parallel imaging technique. Finally, both T2 and T1 TSE sequences were used instead of a three-dimensional sequence in the current experiment, even though three-dimensional sequences have been widely used in clinical practice. Additional efforts to optimize the phase oversampling distance with either SENSE or CS-SENSE in three-dimensional sequences are warranted. FIGURE 4 Images showing the hyperintense points as a function of phase oversampling distance and acceleration factors among the three groups using both sequences. Group A, SENSE with phase oversampling distance of 150 mm; Group B, CS-SENSE with phase oversampling distance of 100 mm; Group C, CS-SENSE with phase oversampling distance of 75 mm. Nevertheless, the current study is the first to focus on phase oversampling distance and its effects on image quality as a function of parallel imaging technique and AF. CONCLUSIONS Compared with SENSE, CS-SENSE with a smaller phase oversampling distance can reduce image acquisition time without image quality degradation, despite the increase in aliasing artifacts as the AF increases in both CS-SENSE and SENSE.
Distributed Event-Triggered Consensus-Based Control of DC Microgrids in Presence of DoS Cyber Attacks In this paper, the problem of distributed event-based control of large-scale power systems in the presence of denial-of-service (DoS) cyber attacks is addressed. Towards this end, a direct current (DC) microgrid composed of multiple interconnected distributed generation units (DGUs) is considered. Voltage stability is guaranteed by utilizing decentralized local controllers for each DGU. A distributed discrete-time event-triggered (ET) consensus-based control strategy is then designed for current sharing among the DGUs. Through this mechanism, transmissions occur only when a specified event is triggered, preventing unessential utilization of communication resources. The asymptotic stability of the ET-based controller is shown formally using Lyapunov stability via linear matrix inequality (LMI) conditions. The behavior of the DGUs subject to DoS cyber attacks is also investigated, and sufficient conditions for secure current sharing are obtained. Towards this end, a switching framework is considered between the communication and attack intervals in order to derive sufficient conditions on the frequency and duration of DoS cyber attacks under which secure current sharing is reached. The validity and capabilities of the presented approach are confirmed through a simulation case study. I. INTRODUCTION A microgrid is a low-voltage electrical system consisting of multiple distributed generation units (DGUs), loads, and storage devices interconnected through power lines [1]. The AC microgrid is the standard microgrid model used for residential, commercial, and industrial consumers and has attracted a lot of attention in the field of AC microgrid control [2], [3]. However, DC microgrids have several advantages over AC microgrids, such as improved overall efficiency, appropriate interfacing of batteries and DC power sources, and the increasing number of DC loads, which have made the DC microgrid an attractive research topic [4]-[6]. The opportunity to utilize renewable energy sources in DC microgrids and their widespread usage in modern vehicles such as trains, aircraft, and watercraft are representative examples of DC microgrid applications. Extensive use in industry makes DC microgrids an emerging subject that has recently received much research attention [7]. Current sharing and voltage regulation are the two main control challenges of DC microgrids. The optimal voltage regulation strategy results in the desired output voltage of each microgrid, while the current-sharing control strategy divides, shares, and dedicates balanced current to each DC microgrid [7]-[11]. Hierarchical control schemes have been developed in the literature to achieve both objectives [12]. Although centralized controllers satisfy the voltage stabilization and precise current sharing goals [12], the computational and communication burden of these architectures increases with the size of the microgrid. Moreover, a single point of failure in the central control unit may lead to malfunction of the entire system [13]. This is the main reason why decentralized and distributed regulators, such as droop controllers [12], are preferred.
Being a communication-less approach, droop controllers may lead to voltage deviations from reference values. Consequently, secondary control layer with consensus algorithms have been deployed and combined with the droop controller to deal with the deviation problem [13]- [15]. Scalability criteria have become one of the most important characteristics of control-scheme designs in distributed systems. Physical wide range of distributed microgrid systems has attracted researches' interest toward scalable control strategies, particularly aiming at current (power) sharing [7], [8], [14], [16]- [18]. In a distributed control scheme, each subsystem can receive information from its neighbors, resulting in their overall performance improvement. Therefore, this approach has been developed as a viable scheme for large-scale systems as in [19], [20]. Moreover, information exchanges among subsystems are transmitted over networks, which may generate a heavy communication burden. Event-triggered control (ETC) techniques receive much attention in recent years to avoid the unnecessary utilization of communication resources (refer to for instance [21]- [27]). In the distributed ETC of large-scale systems, each subsystem transmits its information through the network based on certain event-triggering conditions. Data transmission only takes place when event-triggering conditions are violated, and hence the communication cost is considerably reduced [28]. In [29], the DC microgrid was controlled with an ET communication-based voltage droop control strategy to ensure power sharing. The proposed DC microgrid was composed of distributed energy resources (DERs) in which the DER layer was composed of a distributed source connected to a DC/DC converter with a specific duty cycle. A distributed nonlinear ETC approach was developed in [30] for current sharing and voltage regulation in an electrical network model of a DC microgrid. This DC microgrid includes converters and local and public loads. In [31], a distributed discrete-time algorithm is developed to achieve proportional load current sharing and average bus voltage regulation in discrete-time DC microgrids. A periodic event-triggered discrete-time algorithm is proposed to reduce the communication requirement and avoid the Zeno phenomenon. The ET-based control approaches [29], [30] do not guarantee the Zeno behavior (infinite events over a finite time interval) exclusion which is an important issue in evaluation of the controller performance. Indeed, the Zeno phenomenon describes the behavior of the ET-based controller when the system is subjected to an unbounded number of events in a finite and bounded duration of a given time interval. This can occur when the controller unsuccessfully attempts to satisfy the event-triggered condition more rapidly that would lead to sending infinite number of data in a finite interval. In other words, feasibility and practicality of the ET-based controller should be considered by showing the Zeno behavior exclusion. However, this important fact is not guaranteed in the above approaches [29], [30]. Recently, cyber security of power systems against malicious cyber attacks has attracted significant attention. Adversaries may disrupt power systems by launching malicious attacks on the physical system layer and/or the communication network layer. Several security results on cyber attacks against the power grids have been addressed in [32]- [36]. 
One of the most common malicious attacks is the denial of service (DoS), which can congest the communication channels by sending large quantities of unauthentic packets. This cyber attack is regularly the main cause of a heavy transmission burden; it consumes unusual amounts of network bandwidth, resulting in interruptions in the network [37]. Hence, it blocks the transmission medium and interrupts regular communication for a period of time. Analysis of DoS cyber attacks on load frequency control (LFC) of power systems under different communication schemes has recently been addressed in [32], [37]-[39]. The analysis of DoS cyber attacks under event-triggered load frequency control of a single-area power system was carried out in [37]. The average dwell-time design approach is utilized to establish exponential stability criteria based on the choice of an appropriate rate of allowable DoS attack duration for the entire running time of the system and on time-delay margins. A similar approach was used for a multi-area LFC system in [38], where the study investigated the maximum degree of tolerance of the LFC system against DoS attacks, and the total admissible length of DoS attack time for assuring stability of the LFC system was obtained [37]. An event-triggered approach for interconnected power systems that tolerates the lack of data caused by DoS attacks was presented in [32]. It concentrated on developing resilient control without a priori knowledge of the probability distributions of additional DoS attacks. The influence of DoS attacks in the form of uncertainty in the event-triggering condition of networked control systems was discussed in [39]. Moreover, event-triggered H∞ control for networked control systems under denial-of-service attacks is addressed in [40], which reduces excessive utilization of communication resources; in that paper, sufficient conditions for the stability of the system are achieved by using LMI conditions. DC microgrid systems rely on real-time operation and, in the presence of DoS cyber attacks, may become unstable and damaged [41]. In [42], a distributed monitoring scheme for attack detection in large-scale linear systems applied to DC microgrids is presented. The recommended architecture utilizes a Luenberger observer as well as a bank of unknown-input observers at each subsystem to provide attack detection capabilities. In [43], an attack-resilient event-triggered control synthesis approach for a networked nonlinear DC microgrid system under DoS attacks was addressed. An event-triggered switched-system model of the nonlinear DC microgrid was established, and an average dwell-time method and a piecewise Lyapunov functional method were employed to show the asymptotic stability of the system. However, in this work only stability of the microgrid was evaluated, and current sharing, which is one of the main challenges in these systems, was not considered. Therefore, the secure current sharing problem of DGUs in a DC microgrid subject to cyber attacks is an important problem that needs to be formally investigated. In [44], the reactive power sharing problem of an AC microgrid under DoS attacks is addressed. A periodic ET update method is proposed which can avoid the Zeno phenomenon. The tolerance range of DoS frequency and duration for the DG, related to the smallest event-interval time of the ET update method, is found. However, the microgrid type and modeling, the ET mechanism, and the stability analysis approach in our paper are totally different from [44].
In this paper, a DC microgrid system including different types of DGUs is considered, where voltage stabilization is guaranteed by using a decentralized local controller for each DGU. A distributed discrete-time ET consensus-based controller is then designed for current sharing in DGUs. A state-dependent threshold is then designed for proper ET condition using the secondary controller. Indeed, stability of the overall microgrid is then guaranteed by using the Lyapunov stability results, and design parameters are found via solving a linear matrix inequity (LMI). The advantages of our proposed approach are in reducing the cost of the network communication and improving its security since the data transmission will be based on the ETC system conditions. Finally, the overall microgrid subject to the DoS cyber attack is considered and sufficient conditions for the secure current sharing are determined by applying a switching framework between the communication and the cyber attack intervals. The main contributions of this paper are summarized as follows: • A discrete-time ET consensus-based control methodology for the DGU is investigated and developed in order to achieve proportional current sharing in a DC microgrid. This event-based secondary controller is designed based on a linear discrete-time consensus protocol in which each DGU transmits its information through the network channels when the event-triggering conditions are violated, and hence the communication cost is considerably reduced. The DC microgrid modeling in our paper is different from those in [29] and [30]. The microgrid system type in [44] is AC microgrid which is totally different from our proposed system. Specifically there is no need to consider the Zeno phenomena in our proposed event-triggered secondary controller since it is implemented in a discrete-time framework whereas the works in [29] and [30] proposed continuous-time event-triggered controllers without investigating the exclusion of the Zeno phenomena. The ET mechanism and the technique of avoiding Zeno phenomena in our proposed event-triggered secondary controller is different from [44]. • The vulnerabilities of our proposed discrete-time event-triggering mechanism to DoS cyber attacks in DC microgrid systems are investigated. Towards this end, a switching framework is developed and sufficient conditions on frequency and duration of DoS cyber attacks are derived in order to simultaneously guarantee secure current sharing and voltage regulation. In other words, a switching framework is considered between the communication and attack intervals in order to derive sufficient conditions on frequency and duration of DoS cyber attacks to reach the secure current sharing and the stability of the overall microgrid. In [31], a periodic event-triggered discrete-time algorithm is proposed to achieve proportional load current sharing and average bus voltage regulation in discrete-time DC microgrids. In comparison to [31], a continuous-time DC microgrid is considered in our paper and the ET condition is different. Furthermore, the overall microgrid is exposed to DoS cyber attacks. In [43], an attackresilient event-triggering mechanism was proposed for a nonlinear DC microgrid system subject to intermittent DoS attacks where the DoS frequency and DoS duration were characterized in the stability criterion. 
As compared to [43], the system modeling in our proposed approach is different and, more importantly, both current sharing and voltage regulation in the DC microgrid are considered. The stability analysis approach in our paper is different from [44]. Moreover, in our proposed approach the impact of the DoS attack on voltage stability is addressed, which was not considered in [44]. In other words, in our proposed approach, by using a Lyapunov stability approach, the linear matrix inequality conditions that ensure voltage stability and current sharing are obtained. The remainder of the paper is organized as follows. In Section II, the description of the microgrid system is presented, and the problem formulation is provided in Section III. The stability analysis of the overall microgrid and the main results without and with the presence of DoS cyber attacks are given in Section IV. In Section V, simulation results are provided to confirm the efficacy of the proposed method and to illustrate the efficiency of the proposed ET consensus-based method in achieving voltage regulation and current sharing of the DC microgrid in the presence of DoS cyber attacks. Finally, conclusions are presented in Section VI. II. MICROGRID SYSTEM DESCRIPTION In this section, we describe the model of the microgrid and the control systems. A DC microgrid consists of N DGUs that are connected to each other through power lines. An undirected graph G_e = (ν, ε_e, w_e) is used to describe the microgrid, where the nodes ν ∈ {1, . . . , N} represent the DGUs and the edges ε_e ⊂ ν × ν represent the power lines. Moreover, the diagonal matrix w_e with w_e,ii = w_e,i is used as the weight matrix, where w_e,i is the edge weight associated with the edge e_i ∈ ε_e. Note that the direction of the edges specifies a reference direction for positive currents, and the edge weights are the corresponding line conductances, 1/R_ij. The Laplacian matrix of the physical system is given by L_e = q_e w_e q_e^T, where q_e denotes the incidence matrix of G_e. The set of neighbors of the i-th node is denoted by N_i. The microgrid takes advantage of a communication network such that each local controller can obtain information from its neighbors. Moreover, this paper assumes that the information network topology is the same as the physical topology. Here, we consider a hierarchical control architecture with two objectives: keeping the subsystems locally stable and achieving consensus of the second state variable among the large-scale system's subsystems. A DGU equipped with the proposed ET hierarchical control is shown in Fig. 1. A DC voltage source is used to model the renewable resource in each DGU and supplies a local load through a DC-DC converter. The local DC load and the PCC are connected through an RL filter. The dynamics model of the i-th DGU is given as follows [45]: C_ti dV_i(t)/dt = I_i(t) − I_Li(t) + Σ_{j∈N_i} (V_j(t) − V_i(t))/R_ij and L_ti dI_i(t)/dt = −V_i(t) − R_ti I_i(t) + V_ti(t), where V_i(t), I_i(t) and I_Li(t) denote the load voltage, generated current, and local current demand, respectively, and L_ti, C_ti, R_ti, and R_ij denote the filter inductance, shunt capacitance, filter resistance, and line resistance, respectively. V_i(t) and I_i(t) denote the states, V_ti(t) and I_Li(t) denote the inputs, V_j(t) is the point of common coupling (PCC) voltage of DGU i's neighbors, and 1/R_ij denotes the conductance of the power line connecting DGUs i and j. The primary decentralized controller is designed to regulate each PCC's voltage and guarantee the overall microgrid's stability.
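As a rough illustration (not taken from the paper), the per-DGU dynamics above can be integrated numerically; all parameter values below are placeholders rather than the paper's case-study values, and a single neighbor PCC voltage is held constant:

import numpy as np

# Forward-Euler integration of one DGU with one neighbor at fixed PCC voltage.
C_t, L_t, R_t, R_ij = 2.2e-3, 1.8e-3, 0.2, 0.05   # F, H, Ohm, Ohm (placeholders)
V_neigh, I_load, V_t = 48.0, 5.0, 48.5            # neighbor PCC voltage, load current, converter command
V, I = 48.0, 0.0                                  # states: PCC voltage and generated current
dt = 1e-5
for _ in range(200_000):                          # simulate 2 s
    dV = (I - I_load + (V_neigh - V) / R_ij) / C_t
    dI = (-V - R_t * I + V_t) / L_t
    V, I = V + dt * dV, I + dt * dI
print(f"steady state: V ~ {V:.2f} V, I ~ {I:.2f} A")

Such an open-loop sketch only shows the RLC filter and line-coupling behavior; in the paper the converter command V_ti(t) is generated by the primary controller described next.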
Measurements of V i (t) and I i (t) are exploited as well as the local regulator of each DGU to create the command V ti (t) of the i-th DC-DC converter and guarantees a reference signal V ref,i (t) is tracked. The control loop of the converter is the local controller which is assumed in the model. In general, not all the DGUs can provide the demanded local current loads and require power from other DGUs. Hence, the currents between DGUs should be shared proportional to their generation capacity and this is achieved by designing the secondary current sharing controller. Moreover, in order to minimize the voltages deviation at PCCs, the secondary controller's objective is to also guarantee the same average voltage value among all the PCCs. In particular, for generation efficiency improvement, it is usually required to share the total current demand among different DGUs in proportion to their corresponding energy sources (proportional current sharing). Conventionally, each DGU broadcasts its current at every time instant which may lead to inefficient utilization of communication resources. Instead of this conventional approach, an ET-based mechanism is introduced in this paper, in which the transmission occurs only when a certain event is triggered. The architecture of the proposed distributed ET consensus-based secondary control for the microgrid is shown in Fig. 2. III. PROBLEM FORMULATION A. DC MICROGRID MODEL The state space representation of the i-th DGU can be written as follows: where . . , N denotes the local state, u i (t) = V ti (t) denotes the primary control input, and d i (t) = I Li (t) denotes the exogenous input. It is assumed that the current demands of the DGUs, I Li (t), are piece-wise constant current loads. The matrix A ii is the local state transition matrix, A ij describes the interconnection between DGUs i and j, and B i is the control input matrix. These matrices are defined as follows [11]: B. HIERARCHICAL CONTROL MODEL This section considers the hierarchical control strategy, which ensures subsystems local stability and guarantees current sharing among DGUs. This two-layered control strategy is explained in the following. 1) DECENTRALIZED PRIMARY CONTROLLER In the first step, an augmented state variable ζ i (t) is introduced to presents the required integrator action in the primary local controller. The dynamics of ζ i (t) is given byζ , and α i (t) ∈ R denotes the secondary control input. Hence, the resulting augmented system model with an integrator is now given as follows: is the exogenous input. The matrices in (3) are now given as follows: Note that the pair (Â ii ,B i ) is controllable, and hence the system (3) is stabilizable. In the second step, in order to guarantee the stability of the overall microgrid and to regulate the voltage at each PCC, a decentralized state feedback controller is designed as follows [11]: such that (A ii + B i K i ) is Hurwitz where the gain matrix K i can be obtained based on the dynamics of the i-th DGU and the power line parameters of the neighboring DGUs via LMI conditions [46]. 2) DISTRIBUTED ET CONSENSUS-BASED SECONDARY CONTROLLER An event-based secondary controller is now designed based on a linear discrete-time consensus protocol to achieve current sharing in a DC microgrid. 
Denoting τ i k h ⊂ Z + as the k-th time instant that events are triggered in the subsystem i, with h denoting as the sampling period, the latest transmitted i-th DGU current signal,Î i (τ h), τ ∈ Z + , is defined as follows: when an event occurs where I i (τ i k−1 h) is the i-th DGU current at the last event-triggered instant. For notation of simplicity, we omit the sampling time h when referring to discrete-time instants, i.e.Î i (τ ) =Î i (τ h). The following control objective is defined for the event-based proportional current sharing of the microgrid. Control Objective for Proportional Current Sharing: Current sharing is obtained at steady state, if the overall load current is proportionally shared among DGUs, i.e., where I s i > 0 denotes the i-th DGU current generation capacity. The proposed secondary ET consensus-based controller for the i-th DGU is given as follows: (7) where w i = 1 I s i and k I ,i is the local gain of the i-th DGU. Note that at the triggering instants τ j k , the j-th DGU will communicate with its neighbors and share the value of I j (τ ). The secondary control input is then generated by using the zero-order hold as follows: Although the i-th DGU has access to its own current I i (t), the ET consensus-based controller (7) uses the last broadcast currentÎ i (τ ). This is to ensure that the average of DGUs' initial currents is preserved throughout the evolution of the system. The subsequent event instants are determined by the event-triggering mechanism, which is given as follows: where σ i > 0 is a scalar to be designed as a trade off between the network utilization and the control performance. In fact, in order to guarantee the ET-based current sharing in DGUs, the currents information should be transmitted only when condition (9) is met. It should be noted that in the ET condition (9), the continuous states are not needed and as it was discussed earlier, the conditions are only checked in the sampling periods due to the consideration ofÎ i (τ ) =Î i (τ h). In other words, the eventtriggered-based secondary layer controller in (7) is designed in a discrete framework but the results are inserted into the main continuous system (3) in a continuous format by using the ZOH in (8). The error between the latest broadcasted current signal and the i-th DGU current is defined as Note that at time τ i k+1 , a new event is triggered so that the error signal e i (τ ) is reset to e i (τ i k+1 ) = 0. Consequently, the following inequality can be written which holds for all τ : and it follows that: where e(τ ) = [e 1 (τ ), e 2 (τ ), . . . , e N (τ )] , α(τ ) = [α 1 (τ ), α 2 (τ ), . . . , α N (τ )] , and = diag(σ 2 1 , σ 2 2 , . . . , σ 2 N ). Remark 1: It is assumed that the transmitted data in the event-based communication network can be available for the neighbors without delay. In other words, when the data is updated based on the event-triggered condition, the updated data will be available for the neighboring DGU at the moment. Delay in the network channel is another important problem in large-scale networks which will be taken into consideration in our future works. C. DENIAL-OF-SERVICE (DoS) ATTACK A DoS attack is defined as a period of time at which the currents cannot be transmitted successfully through the network communication channels. Cyber attacks with unlimited energy make the overall system unstable. However, in reality the attackers need inactive sleep intervals for energy recovery. 
Therefore, it is assumed that the length and frequency of cyber attacks are limited. According to the above fact, the entire time is divided into communication intervals and cyber attack intervals, where in the communication intervals the event-based data transmission is performed successfully but in the cyber attack intervals the data transmission is terminated. Defining {h z } z∈Z + , h 0 ≥ 0, as the sequence of the DoS attack, the time interval of the zth DoS attack could be expressed as H z = [h z , h z + z ), where z ≥ 0 is the length of the zth DoS attack time interval in which data transmissions are disrupted. The sets of cyber attack and successful communication time instants in a given interval [λ, τ ) are defined as follows, respectively: where τ, λ ∈ Z + and τ ≥ λ. The general format of the DoS attack is shown in Fig. 3. The sequence of time instants that the current is transferred successfully is denoted by τ i m . In practical cases, the system update rules are performed on a digital platform. Hence, it is assumed that there exists a time delay between the end of the DoS cyber attack (τ = h z + z ) and the successful transmission of the data (τ = τ i m+1 , m = 1, . . . ) as shown in Fig. 3. Therefore, the z-th time interval that the triggering condition (9) does not hold is as follows: and consequently, any time interval [λ, τ ) can be represented as follows: − h z denote the minimum possible sampling rate (lower bound on the inter-sampling rate) and the time elapsing between any two successive DoS triggering, respectively. In case of the discrete framework, the lower bound on the inter-sampling rate will be 1. It is worth noting that if z < 1 then overall microgrid stability can be lost in spite of the ET secondary control update strategy. Hence, in order to assure the stability, the frequency at which the DoS can occur must be sufficiently small as compared to the minimum sampling rate. The following assumptions are considered on the cyber attack frequency and duration [47], [48]. DoS Frequency : For all T 2 > T 1 > T 0 , there exist η D > 0 and τ D > 1 such that where N a (T 1 , T 2 ) is the total number of the DoS off/on transitions over [T 1 , T 2 ) and τ D is the parameter whose inverse provides an upper bound on the average frequency of the DoS off/on transitions, i.e., average number of the DoS off/on transitions per unit time. DoS Duration : For all T 2 > T 1 > T 0 , T 0 > 0 and λ a > 0, the cyber attack duration over [T 1 , T 2 ) is defined as follows: where λ a is the parameter whose inverse provides an upper bound on the average duration of the DoS per unit time. IV. STABILITY ANALYSIS AND CURRENT SHARING In this section, it is shown that stability of the overall microgrid controlled by utilizing (7) is achieved and the event-based current sharing objective is satisfied with and without the presence of DoS cyber attacks. Using the primary controller, the following relationship holds [45]: Therefore, the following expression for the microgrid can be obtained: (19) and (20) and knowing that the current of DGUs is I (τ ) = I L (τ ) − q e I l (τ ) and the line current is I l (τ ) = −w e q e V (τ ), one can obtain the following relationship: (21) where I L (τ ) = [I L1 (τ ), I L2 (τ ), . . . , I LN (τ )] denotes the vector of local load currents, I l (τ ) = [I l1 (τ ), I l2 (τ ), . . . , I lN (τ )] denotes the vector of line currents, Q = LWM , and M = q e w e q e . 
Consequently, due to the fact that the load currents I Li (τ ) and the reference voltages V ref,i are bounded, the following system is considered for stability analysis of the linear system (21), namely: where A = (I −hQ) and B = hLW . A. WITHOUT DoS ATTACK In the proposed distributed discrete-time ET consensus-based control methodology for the microgrid, each DGU transmits its information through the network channels based on the ET protocol (7) which guarantees the current sharing. Data transmission only takes place when the event-triggering conditions are violated, and hence the communication cost is considerably reduced. Theorem 1: Consider the system (3) subject to the ET protocol (7). It follows that under Assumption 1 all DGUs can achieve current sharing under the triggering condition (9) and the overall microgrid (22) is stable if there exist a symmetric positive-definite matrix P ∈ R N ×N , and a positive definite diagonal matrix ∈ R N ×N , such that the following LMI condition holds: Proof 1: First the stability analysis of the overall microgrid is shown. System (22) is stable if there exists a discrete-time quadratic Lyapunov function S a (τ ) = α (τ )Pα(τ ) with P > 0 such that the following inequality holds: Considering the event-triggering condition (11), the sufficient condition for satisfying (24) is obtained by the following LMI: +α (τ ) α(τ ) < 0. (25) Substituting (22) into (25), and after some algebraic manipulations, the LMI (23) is achieved. Next, the current sharing objective is shown. The distributed controller (7) leads to current sharing at the steady state which can be expressed as follows: which is equivalent to: which can compactly be expressed for all DGUs as follows: whereĪ = [Ī 1 ,Ī 2 , . . . ,Ī N ] is the steady state solution of I (τ ). Equation (27) can be expressed as follows: According to properties of the Laplacian matrix, it is concluded from (28) that W (Ī + e(τ )) ∈ R(1), where R(1) denotes the range of 1, i.e., all elements of W (Ī + e(τ )) are identical. Therefore, it is shown that (6) is satisfied and the event-based proportional current sharing is achieved. This completes the proof of the theorem. Remark 2: It should be emphasized that the proposed event-triggered secondary controller is implemented in a discrete-time framework, and hence there is no need to consider the Zeno phenomena while the previous works in [29], [30] proposed continuous-time event-triggered controllers without investigating the existence of the Zeno phenomena. The main challenge for the continuous-time framework is that in current sharing controller the even-triggered mechanism depends only on the current of the DGU, i.e. I i while generally it should depend on both states of the DGU, i.e. I i and V i . B. WITH DoS CYBER ATTACK In presence of the DoS cyber attack, the event-based data transmission is disrupted which can affect the stability of the overall microgrid. Towards this end, the behavior of DGUs subject to the DoS cyber attack needs to be investigated and sufficient conditions for secure current sharing should be determined. Towards this goal, a switching framework similar to [47], [48] and [49] is considered between the communication and the cyber attack intervals in order to derive sufficient conditions for secure current sharing. To evaluate the system behavior in presence of the DoS cyber attacks the dynamics of the overall microgrid should be obtained. 
In the DoS intervals H_z, the error definition is as follows: where Î(h_z) represents the last successfully broadcast current up to h_z. Considering the DGU currents I(τ) = I_L(τ) − q_e I_l(τ), the line currents I_l(τ) = −w_e q_e^T V(τ), and the DGU PCC voltages V(τ) = V_ref + α(τ), the following equality holds: Substituting (30) into (29), the following equation can be written: Substituting (31) into (22) yields: Consequently, the following system is considered for stability analysis of the overall microgrid during the DoS intervals: Note that Î(h_z) remains constant during the DoS intervals. The following theorem is now provided to show that secure current sharing is achieved over the two different types of time intervals, provided that the cyber attack frequency and attack duration satisfy certain conditions. Theorem 2: Consider the system (3) in the presence of the DoS cyber attack subject to the ET protocol (7). It follows that, under Assumption 1, all the DGUs can achieve current sharing for all time intervals (communication and attack intervals) under the triggering condition (9), and the overall microgrid (22) is stable, if there exist symmetric positive-definite matrices R ∈ R^{N×N} and P ∈ R^{N×N}, a positive-definite diagonal matrix in R^{N×N}, and constants 0 < η_1 < 1 and 0 < η_2 < 1 such that the LMIs (33) and (34) hold and the cyber attack duration and frequency of the DoS satisfy the bound below, where the switching constant is defined as (1 + η_2)/(1 − η_1) and γ = max(λ_max(P)/λ_min(R), λ_max(R)/λ_min(P), 1). Proof 2: Based on the switching-mode approach, two types of Lyapunov functions are considered, namely S(τ) = S_κ(τ), where κ ∈ {a, b}. In order to address the switching framework between the communication and cyber attack intervals, it is assumed that in communication intervals there exists a discrete quadratic Lyapunov function S_a(τ) = α^T(τ) P α(τ) with P > 0 and 0 < η_1 < 1 such that the following inequality is satisfied: Considering the event-triggering condition (11), equation (36) is satisfied if there exists 0 < η_1 < 1 such that the following inequality holds: Substituting (22) into (37) and after some algebraic manipulations, the LMI condition (33) is obtained. In the presence of the DoS cyber attack, a quadratic Lyapunov function S_b(τ) = α^T(τ) R α(τ) with R > 0 is considered such that the following inequality holds: (38) where 0 < η_2 < 1. In this interval, the communication is interrupted by the attackers, and by substituting (32) into (38), the following inequality is obtained: (39) which is equal to the LMI condition (34). V. SIMULATION RESULTS In this section, simulation results are provided to show the efficiency and capabilities of our proposed distributed discrete-time ET consensus-based control for current sharing and voltage stabilization of DC microgrids. A microgrid composed of 5 DGUs, shown in Fig. 4, is considered. The sampling period is taken as h = 0.01, together with the secondary discrete-time ET-based controller gains. Figure 6 shows the performance of the proposed event-triggered current-sharing control for voltage regulation and current sharing; as shown in this figure, the overall microgrid is stable via the primary controllers and current sharing is achieved by the discrete-time ET consensus-based controller. It is also seen from Figure 6 that voltage balancing is achieved and the average PCC voltages are identical at steady state. The DGUs' broadcast currents are shown in Fig. 7.
The ability of the event-triggering scheme to adjust the broadcast periods is demonstrated in this figure. It follows from this figure that the transmitted currents are not updated continuously and the data exchanges are reduced. The inter-event intervals of DGU 1, where each stem shows the length of the time period between an event and the previous one, are shown in Fig. 8. For example, if the value of a stem in Fig. 8 is 150, it implies that during the past 150 time steps no DGU 1 current data were sent to the network. Moreover, it can be concluded from the simulation results that the exchange of current data over the network is considerably reduced. In the presence of the DoS cyber attack, η_D = 0.1 is assumed, together with a second DoS parameter of 0.01. In a time interval of 400 samples (4 seconds), it is assumed that τ_D = 20, where the sampling rate is h = 0.01. Consequently, based on the attack frequency definition (16), the total number of DoS off/on transitions over [0, 400) satisfies N_a(0, 400) ≤ 0.1 + (400 − 0)/20 = 20.1. In order to gain the maximum stability margin, we assume that η_1 = 0.99 and η_2 = 0.01, for which the LMIs (33) and (34) are satisfied. In this case, in accordance with Theorem 2, the upper bound on the average duration of the DoS cyber attacks is obtained as 1/λ_a = 0.6. Consequently, based on (17), the attack duration over [0, 400) is bounded by 400 × 0.6 = 240, which implies that in each 400 samples the maximum tolerable duration of cyber attacks is 240 samples. In this simulation, in each 400 samples, the cyber attack frequency and duration are presumed to be 6 attacks and 170 samples, respectively, which are smaller than the theoretical bounds. The procedure for selecting cyber attacks in the remaining intervals is the same. According to the DoS characteristics, the grey areas in Fig. 9 depict the sequence of DoS cyber attacks injected in DGUs 1 and 4, as an example. It should be noted that the DoS attacks in different channels are not required to be synchronized and can be independent. Voltage regulation, current sharing, and the average PCC voltage of the DGUs in the presence of the DoS cyber attacks are depicted in Fig. 9. This figure shows that the overall microgrid is stable and that current sharing is achieved by the proposed discrete-time ET consensus-based controller. The average PCC voltages are identical at steady state and voltage balancing is also achieved. The DGUs' broadcast currents in the presence of the DoS cyber attacks are shown in Fig. 10. It is concluded from this figure that the event-triggering scheme works well in adjusting the broadcast periods, and the current data transmission rates are reduced by 72.19%, 89.34%, 97.58%, 94.5%, and 90% for DGUs 1 to 5, respectively. The maximum tolerable DoS cyber attack in the microgrid is also tested in our case-study simulations. The maximum tolerable duration of cyber attacks was found to be 240 samples in each 400 samples. This duration is then increased beyond the allowable bound for DGU 4 as an example. To show the effect of the duration of the cyber attacks on current sharing, 3 attacks with a total duration of 270 samples are applied to DGU 4 in every 400 samples. It can be seen that the overall microgrid is disrupted when the duration of the cyber attacks does not meet the requirement obtained in equation (17). Voltage regulation, current sharing, and the average PCC voltage of the DGUs in the presence of permissible DoS cyber attacks in DGU 1 and impermissible DoS cyber attacks in DGU 4 are depicted in Fig. 11.
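As a small illustrative sketch (not the paper's code), the admissibility arithmetic above can be checked for a candidate attack schedule, using a simplified form of the frequency and duration bounds that mirrors the case-study numbers; the interval lists are placeholders:

def dos_schedule_admissible(attack_intervals, T1, T2, eta_D=0.1, tau_D=20, inv_lambda_a=0.6):
    # attack_intervals: list of (start, length) pairs in samples within [T1, T2)
    n_attacks = len(attack_intervals)
    total_duration = sum(length for _, length in attack_intervals)
    freq_ok = n_attacks <= eta_D + (T2 - T1) / tau_D          # frequency condition, as in (16)
    dur_ok = total_duration <= (T2 - T1) * inv_lambda_a       # duration condition, as used from (17)
    return freq_ok and dur_ok

# 6 attacks totalling 170 samples in a 400-sample window: admissible (bounds 20.1 and 240)
print(dos_schedule_admissible([(10, 30), (70, 30), (140, 30), (210, 30), (280, 25), (340, 25)], 0, 400))
# 3 attacks totalling 270 samples: duration bound violated, as in the DGU 4 example
print(dos_schedule_admissible([(20, 90), (150, 90), (280, 90)], 0, 400))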
Figure 11 shows that the overall microgrid is disturbed and that the stability of the system can be impaired. Moreover, the average PCC voltages are not the same at steady state and the voltage balancing requirement is not achieved. The DGUs' broadcast currents in the presence of the DoS cyber attacks are shown in Fig. 12. Note that the permissible DoS cyber attacks in DGU 1 are shown in Fig. 9, and the impermissible intervals of the DoS cyber attacks in DGU 4 are shown as the grey areas in Figures 11 and 12. Note that the voltage deviation in practical DC power distribution networks should be within 5% of the nominal value. With our proposed ET consensus-based controller for the DC microgrid, in the absence of DoS cyber attacks, the voltage at the PCC of each DGU remains within this admissible range. For example, the DGU 3 PCC voltage, shown as the green line in Fig. 6, remains within the range of 48.07 ± 1.44 V at steady state, implying that the voltage deviations are less than 2.99% of the nominal value V = 48.07 V. In the presence of permissible DoS cyber attacks, the DGU 3 PCC voltage shown in Fig. 9 remains within the range of 48.07 ± 1.94 V. This implies that the voltage deviations are about 4% of the nominal value V = 48.07 V at steady state. However, in the presence of impermissible DoS cyber attacks, the DGU 3 PCC voltage shown in Fig. 11 varies in the range of 48.07 ± 7.07 V. This implies that the voltage deviations are more than 14.7% of the nominal value V = 48.07 V at steady state and that the stability of the microgrid has been compromised. It should be noted that in [43] a nonlinear DC microgrid in the presence of intermittent DoS attacks was considered and the DoS frequency and duration were determined to guarantee stability of the system. In our proposed discrete-time event-triggering strategy, the tolerable cyber attack duration and frequency were specified to ensure not only stability but also current sharing of the DC microgrid, as depicted in Fig. 9. VI. CONCLUSION In this paper, a distributed discrete-time ET consensus-based controller for a DC microgrid composed of multiple DGUs has been developed. The proposed ET-based controller achieves current sharing and reduces the communication rate of the network, objectives that enhance the resiliency and security of the overall microgrid and reduce the communication cost. The proposed event-triggered secondary controller is implemented in a discrete-time framework, and hence there is no need to consider the Zeno phenomenon. Stability of the overall microgrid using this hierarchical control framework is shown quantitatively through Lyapunov stability theory. In the presence of DoS cyber attacks, the overall microgrid is analyzed and sufficient conditions on the frequency and duration of the DoS cyber attacks are determined in order to achieve secure current sharing. In future work, the problem of secure current sharing in the presence of other types of cyber attacks will be investigated.
9,665.6
2021-01-01T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Formal Modeling of IoT-Based Distribution Management System for Smart Grids: The smart grid is characterized as a power system that integrates real-time measurements, bi-directional communication, a two-way flow of electricity, and evolutionary computation. The power distribution system is a fundamental part of the electric power system, responsible for delivering safe, efficient, reliable, and resilient power to consumers. A distribution management system (DMS) begins with the extension of the Supervisory Control and Data Acquisition (SCADA) system through a transmission network beyond the distribution network. These transmission networks oversee the distribution of energy generated at power plants to consumers via a complex system of transformers, substations, transmission lines, and distribution lines. The major challenges facing existing distribution management systems, namely maintaining constant power loads and user profiles, centralized communication, the malfunctioning of system equipment, and monitoring the huge amounts of data generated by millions of micro-transactions, need to be addressed. Substation feeder protection abruptly shuts down power on the whole feeder in the event of a distribution network malfunction, causing service disruption to numerous end-user clients, including industrial, hospital, commercial, and residential users. Although many traditional systems already integrate smart things, few studies of those systems report on the runtime errors encountered during their implementation and real-time use. This paper presents a systematic model of a distribution management system comprised of substations, distribution lines, and smart meters, built with the integration of the Internet-of-Things (IoT), Nondeterministic Finite Automata (NFA), the Unified Modeling Language (UML), and formal modeling approaches. Non-deterministic finite automata are used for automating the system procedures. UML is used to represent the actors involved in the distribution management system. Formal methods, in the form of the Vienna Development Method-Specification Language (VDM-SL), are used for modeling the system. The model is analyzed using the facilities available in the VDM-SL toolbox.

Introduction The traditional power grid has been in place for more than a century, with little modification of its fundamental architecture, despite the fact that energy demand has risen dramatically in recent decades, necessitating large-scale control of electricity supply and consumption. The smart grid is a modern way of power transmission in which user safety should be at the top of the list when checking and updating the grid. With the increasing evolution of smart sensors, information and communication technologies, and the IoT, the conventional power system has advanced into the smart grid. Formal modeling allows the model to be validated in an efficient way before its implementation, so that there is less chance of errors during implementation thanks to validation at the formalization phase. Fault identification detects all possible fault points; fault isolation identifies the best switching order sequence to isolate the fault; service restoration identifies the best switching order sequence to return power to the feeders' safe parts. Formal methods are abstract techniques that are used to model complex and sophisticated systems.
Developers can not only validate the system's properties in more detail than they might through empirical testing but can also use mathematical evidence as a complement to system testing, ensuring correct behavior by constructing a mathematically robust model of a complex system. Formal approaches have a number of benefits, including the ability to remove or overcome uncertainties in the system requirements and to state the underlying assumptions. They also reveal errors and defects in system specifications, and their rigor allows a more detailed understanding of the problem. The model addresses the dynamic automation and efficient fault identification of the distribution management system. The supervisory control and data acquisition system is the standard method of communication between the components of the distribution network; it is used here for collecting the system faults through its communication facility. Nondeterministic finite automata are used for automating the failure detection and recovery procedures. Unified Modeling Language (UML) diagrams are used to represent the actors and their actions in the system and to show how the components interact with each other. Formal methods in terms of the Vienna Development Method-Specification Language (VDM-SL) are used for the description and formal modeling of the system in a systematic way. Formal methods are mathematically based techniques, supported by many tools, that offer careful and effective ways to model, design, and analyze real-world systems. Ambiguities and contradictions are often discovered while formalizing the informal requirements.

Related Work The identification of the faulty line section is made possible with Petri net theory. The data concentrator units transfer and receive the fault signal, including the pre- and post-fault currents of the lines. The measurements of the feeder loading and the statuses of the fault indicators play an important role in the Petri-net-based fault identification method. The measured feeder loadings and the pre- and post-fault current values are compared to find the loading mismatch of the feeder. A wireless LoRa module is used for all the communication between the equipment, the feeder devices, and the web server [5]. An IoT-assisted power monitoring system using ThingSpeak technology is proposed in [6]. It provides an easy method for consumers and service provider companies to monitor and analyze the electrical parameters of the load data at the remote end. The arrangement is prepared with the integration of WiFi-based nodes, an Arduino UNO, and an LCD for local display. The WiFi nodes fetch the consumer's load voltage and the values of the current sensors, and the Arduino interacts with the sensors to collect load information. The WiFi module works as an intermediary gateway between the monitoring panel and the web server; it transfers the real-time data to ThingSpeak for storage and manipulation [6]. A model-checking framework is proposed with the intent of strong and resilient smart grid practices in accordance with distributed intelligence. In that study, the authors automatically developed the formal model of two distributed grid intelligence systems in a symbolic model-checking language, and the proposed model was verified with the NuSMV model checker. Tests were created for model-checking the computational tree logic properties, and the initial results obtained were satisfactory [7].
An innovative prototype scheme was developed for the distribution SCADA system. The proposed scheme uses smart meters for the automation of the distribution network. The smart meter is installed at the substation for tracking the demand/supply parameters, detecting and locating faults, and providing bidirectional communication using GSM technology. With the integration of microcontrollers and GSM, the scheme identifies the fault type and its location and gives an indication on the consumer's mobile [8]. To the best knowledge of the authors, Ref. [9] is the first article on modeling a smart grid framework based on formal specifications using the Z language. Although it does not address the deep-level infrastructure of the smart grid components, the study presents brief specifications of domains such as smart appliances, wind turbine systems, solar systems, and storage with limited conditions. An overall networking-based scheme for smart grids is presented in [10], from the integration of a wireless sensor network and its routing protocol to possible attacks and their countermeasures, but the treatment remains concise and does not go deeply into the security protocols of smart grid communication. The faults of distribution systems are diagnosed using a programmable logic controller and a supervisory control and data acquisition system in [11]. The programmable logic controller is used for analyzing different parameters of the transformers, such as oil levels, voltages, the load current, and its temperature. The suggested monitoring procedure works with the integration of solid-state PLC devices and a package of sensors. The proposed scheme provides the facility to detect the internal and external faults of transformers, and whenever a fault occurs in a three-phase line, the detection circuit indicates the abnormal condition. Visual representations are shown in the system, which helps the crew to clear the faults and reduces the patrolling time. An automated communication system for the distribution network is designed with the functionality of data collection and processing and remote-controlled fault indicators, and it presents the configuration of the distribution network in case of failure [12]. A robust model-based fault detection and isolation scheme for smart power systems is presented with an unknown input observer mechanism. Load fluctuations of multi-area power systems and variations in the power of renewable energy resources are treated as unknown inputs. The sensor fault detection and isolation scheme notifies the operator of the power system that a specific faulty sensor needs to be replaced [13]. Three different algorithms, namely system A, system B, and system C, were developed for identifying the fault location in the medium-voltage power grid. The value of the electric current in each line of the power grid is monitored by system A, the value of the current in the transformers is measured by system B, and system C compares the value of the current at the start and end of the power line in order to check the variation between them [14]. Fault detection is a key factor for the reliability of the smart grid; therefore, it is essential to detect and locate the faults of the smart grid. Smart features are implemented in smart grids to increase reliability, efficiency, and sustainability.
Technological advancements are taking place in smart grids because of the rising demands and complexities of power grids. Different aspects of smart grids and their features in distribution systems have been reviewed in [15]; the authors presented the technological potential of smart grids to strengthen electric power distribution networks. Information and communication technologies are gaining popularity with their rapid advancement, as is the IoT with its embedding capability. In [16], the authors presented IoT deployments in several parts of the smart grid. The major focus was on the three levels (generation, transmission, distribution) of the smart grid with IoT applications. In [17], a brief overview of communication network architectures for smart grids was given, covering home area networks, neighborhood area networks, and wide-area networks; however, no profound methodology was presented for the specified requirements. A scheme based on the resilient information architecture platform for smart grids has been designed for fault management in smart grids. A systematic approach was followed for designing the fault management architecture, in which the probable failure modes of the framework were recognized by reviewing the associated links across the layers of the resilient information architecture platform for smart grids. The communication protocols of the distinct services were analyzed with regard to their functional role in enhancing the system and improving its resilience properties [18]. Functional analysis of the smart grid has been examined through the supervisory control and data acquisition method with the integration of two systematic approaches, the structured analysis and design technique and real-time structured analysis. The purpose of this comparative study was to design a general methodological framework for the analysis and supervision of smart grids, namely control-command applications. The drawback of the system was that the real-time structured analysis mechanism did not permit a direct pathway to the software, which was coded in an executable language [19]. A comparative study of fault location and outage area location methods was presented in [20]. The classification of the algorithms was done using criteria such as impedance-based, sparse measurement, and traveling-wave methods, among others, as a guide to help power system engineers and researchers select methods according to their requirements. A fast distributed fault detection, isolation, and restoration algorithm was designed based on the IEC generic object-oriented substation event (GOOSE) messaging system to reduce the service outage time [21]. In [22], a comprehensive survey is presented of different classification frameworks for faults at the transmission, distribution, and consumption levels based on machine learning algorithms. In [23], a novel approach was developed for distribution networks that uses feeder terminal unit signals in combination with grid states to detect and locate faulty areas in a timely and accurate manner. The pickup and tripping signals of the feeder terminal unit and the loss of voltage were used. An optimization model for service restoration is presented with the objective of reducing the control actions in active distribution systems. The effectiveness of the approach was measured on distribution systems with the number of buses varied from 135 to 540, and satisfactory results were obtained [24].
The author in [25] developed an IoT-based platform for performing the simulations. An Opal real-time simulator was used for modeling the physical elements of the smart distribution system. A transport message queuing protocol was used in the system, and an algorithm for fault detection was developed in Matlab. A comprehensive review of the customer activities based on different scenarios was undertaken, and the duration was measured in the test plant. A novel algorithm was presented for detecting earth faults in a cross-linked network with integrated distributed energy resources [26]. An integrated framework combining IoT and phasor measurement units was presented. The communication and monitoring of the system were designed with security measures, and they provide support for managing and forecasting the load [5]. A comprehensive study on smart grid technologies, along with their implications, was presented in [27]. Focusing on consumer empowerment, the smart grid architecture was analyzed with respect to issues including advanced metering infrastructure, demand response, and demand-side management components. The authors also stated that the smart grid faces several issues, such as consumer awareness and interest. Several other contributing components of smart grids were reviewed, such as microgrids, pico grids, nano grids, inter grids, virtual power plants, and distributed generation [27]. A generic review of the communication requirements of advanced metering infrastructure, distribution automation, and wide-area measurement systems was given for particular transmission and distribution smart grids. These requirements were analyzed with respect to quality-of-service parameters, in particular latency and bandwidth [28]. Internet protocol multicast technology is expected to be the only viable solution for communication, given the demands of complex power system applications in the future. A heuristic algorithm has been presented that adds a minimum set of links to the network topology, with a threshold value of the delay set for the multicast configuration. It has been shown that, by adding a few links, the delay can be reduced [29]. A Petri-net-based method is proposed for fault location using the information from fault indicators. The statuses of the indicators and circuit breakers and the measurement of the pre- and post-fault currents help to identify the faults. The indicators have the capability to communicate with the central system and alert it to send a team to the faulty area to restore service quickly. The proposed system was simulated on a distribution feeder in Taiwan [30]. A comprehensive review of recent progress in protecting alternating current microgrids from faults was performed. Different fault detection methods were classified into digital signal processing methods and artificial intelligence methods, alongside their advantages and disadvantages [31]. In [32], a new advanced smart sensor was designed with a self-adjusting setting capability, coordinated with the rest of the network. The designed sensor was tested in several different short-circuit scenarios. The sensor proved up to 80% efficient in comparison with other analog and intelligent electronic devices. The technique comprises four layers, which work in a hierarchical form.
Multifaceted faults are detected through the islanding search algorithm, and the effectiveness of the designed technique is measured using simulation tools [33]. A systematic study of smart grid communication infrastructure is presented, including its architectures, several network frameworks, and related technologies, compiled with intelligent functions from the perspective of the consumer and the distribution units of electricity [34]. In [23], the authors presented an innovative FLISR solution for distribution networks that uses feeder terminal unit signals along with distribution grid states to rapidly identify and reliably locate the faulted sectors of the network. In particular, the feeder terminal units, the tripping signals of the relays, and the loss of voltages were used in combination for detecting and locating faults. Post-fault restoration was executed based on 13 different factors, including the total operation cost, power flow violations, and the number of switching steps. To assess the electrical connection of the distribution network, a network topology processor was used so that, any time the network topology changed, it would automatically be redefined. We may infer from the literature analysis that no formal modeling of a distribution management system has been done before, and, to support this assertion, we have summarized past works in Table 1.

Table 1. Comparison between previous studies and the current study.
[9] (2018): A state-based formal specification framework for smart grid generation components is presented. Limitation: the formal modeling has been designed only for state identification of the smart grid components. Formal modeling: Yes.
[35] (2017): A formal modeling approach is used generically for smart transformers. Limitation: the given mechanism for theft detection and user communication is poor. Formal modeling: Yes.
[30] (2020): A distribution dispatching control system is discussed. Limitation: standard Petri nets are used, which have the distinct disadvantage of producing very large and unstructured specifications for the systems being modeled. Formal modeling: No.
[36] (2019): A decentralized service restoration strategy is applied. Limitation: unable to restore services where DGs are not available; the presented switching order is complex and time-consuming. Formal modeling: No.
[37] (2018): The harmonic footprint method is used for determining the voltage dips. Limitation: the provided technique is expensive with regard to installing external components; moreover, it cannot prevent PV inverter shutdown during the fault event.

Problem Statement and System Model Compared to the traditional grid, there are higher expectations that smart grids will provide better services. Utility firms intend to convert the present unidirectional grid into a bi-directional power grid, with the goal of storing energy in the electrical system and using it wherever it is needed. Among other safety-critical applications, smart grids need precise modeling and analysis of systems such as demand response, distribution automation, energy storage, and fault detection. These applications are critical because even a minor design flaw can have serious consequences, even at the expense of life and property. The present electricity grid is expected to face difficulties in the generation, transmission, and distribution of the power required by massive, demanding loads.
Considering the sophistication and complexity of today's distribution networks, there is always the risk of underlying errors in models that are not detectable by evaluating a small number of scenarios. Network failures and high operating expenses might occur if these models fail. As a result, a mathematical model of the distribution automation capabilities in smart grids that can be verified appears to be required. The aim of this paper is to design a systematic model for a distribution management system comprised of substations, feeders, and smart meters. Modeling an IoT-based automated distribution management system will portray equipment utilization in substations, fault detection, and a recovery mechanism in transmission lines for the integrity and efficiency of the system.

Advanced Metering Infrastructure (AMI) The primary objectives of the smart grid are self-coordination, self-awareness, self-healing, and self-reconfiguration: to add intelligence to the grid so that it can perform, to increase the deployment of renewable energy sources, to improve the efficiency of power generation, transmission, and usage, and to shift and configure consumers' energy demands by using demand response (DR) techniques to manage customers' peak loads. Sophisticated distribution automation and price optimization models based on automated meter reading (AMR) and advanced metering infrastructure are required to achieve this. Smart meters are similar to traditional electric meters but with enhanced ICT-enabled features, because they not only measure the amount of energy consumed but also track a vast amount of data over time, such as patterns of electricity usage. Combined with cutting-edge customer-centric technology, AMI uses smart control and communication technologies to automate metering services that were previously done by hand, time-consuming activities such as energy meter readings, service connection and disconnection, intervention and theft detection, the monitoring of voltage, and fault and outage identification. The Drive-by/Walk-by meter reading in AMI is depicted in Figure 2.
Cables and Transmission Lines Power cables, including transmission- and distribution-level lines, establish crucial links between the generation and the load. These lines often transport low-voltage power that is stepped down from the transmission grid or generated by distributed generating systems. Transmission lines carry voltage from transmission to distribution points, whereas distribution lines carry voltage from distribution points to domestic use, such as homes, offices, and buildings. The faults in the transmission line network are classified into four categories [38], which are illustrated in Figure 3 and briefly explained as follows.
• Single-line-to-ground fault: The most frequent transmission line fault is the single-line-to-ground (SLG) fault, which might be caused by a vehicle accident, by tree branches, or by flashovers over dusty insulators during rain showers, causing one of the phase conductors to collapse and come in touch with the ground.
• Line-to-line fault: When two phases of a three-phase line are unexpectedly coupled, a line-to-line fault occurs. In this case, the fault current flows through both phases.
• Double-line-to-ground fault: Two lines, as well as the ground, come in touch with each other in a double-line-to-ground fault. Such faults have a nearly 10% chance of occurring.
• Triple-line-to-ground fault: A triple-line-to-ground fault occurs when three lines come in touch with the neutral wire or lie on the ground.
In our proposed monitoring network of transmission lines, several wireless sensors are mounted on selected towers. The working principle of these sensors is to collect information about the operating conditions of the transmission lines as well as their surroundings. After collecting data, these sensors send the data to the nearest IED through a communication gateway. Figure 4 shows the communication infrastructure between the sensors and the IEDs. These IEDs send the collected data to the control center. An important point to consider is that sensors do not need to be installed on all of the towers. A simple sketch of how these fault categories can be distinguished from the measured phase voltages and zero sequence current is given below.
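The following minimal Python sketch is illustrative only: the threshold values, flag representation, and function names are assumptions rather than part of the paper's model. It distinguishes the four fault categories from per-phase over-voltage flags and a zero sequence current check, anticipating the phase voltage and zero sequence current criteria used later in the fault detection operation.

```python
def classify_fault(phase_over_voltage, zero_seq_current, zero_seq_threshold=1.0):
    """Rough fault classification from per-phase flags and the zero sequence current.

    phase_over_voltage: dict like {"a": bool, "b": bool, "c": bool}, True when that
    phase exceeds its voltage threshold; a large zero sequence current flags an
    earth path (ground involvement).
    """
    faulted = [p for p, over in phase_over_voltage.items() if over]
    grounded = zero_seq_current > zero_seq_threshold

    if len(faulted) == 1 and grounded:
        return "single-line-to-ground"
    if len(faulted) == 2 and not grounded:
        return "line-to-line"
    if len(faulted) == 2 and grounded:
        return "double-line-to-ground"
    if len(faulted) == 3 and grounded:
        return "triple-line-to-ground"
    return "no fault detected"

# Example: phases a and b exceed the threshold with no earth involvement.
print(classify_fault({"a": True, "b": True, "c": False}, zero_seq_current=0.2))
# -> "line-to-line"
```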
Intelligent Electronic Device (IED) Intelligent electronic devices play a crucial role in the distribution network of smart grids. Whenever a fault is encountered, because of a transformer failure or because the line current surpasses the threshold value, the substation's overcurrent relay trips the circuit breaker, and the IED associated with that circuit breaker forwards an alarm message to the other load-switch IEDs that are associated with, and operated by, the IEDs of the substation's circuit breaker. The relay determines whether the fault is temporary or permanent: if, after one or two consecutive trippings of the circuit breaker, the system comes back to its previous state, that is, if the power supply is restored, the fault is considered temporary. In the second scenario, if the system is unable to recover after some consecutive trippings, the IEDs of the circuit breaker interact with the load switches of the IED to identify the actual fault location. After the fault identification, the fault localization is finished when the load switch of the feeder terminal unit raises the fault flag; the next task of the IED is then to isolate that area by tripping off the specific load switch. The load switch cuts off the power supply to the rest of the network within a short time span and transfers a message to each IED of the system components, including relays, circuit breakers, and tie switches, for the purpose of power supply restoration in the substation's faulty area. If the non-active switch is unable to restore the substation's power supply through the primary source, it will choose another fault-free energy side. Figure 5 shows the possible states of the IED in the substation.
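The reclosing logic described above can be summarized in a small Python sketch. This is a simplified illustration, not the paper's VDM-SL model; the number of reclose attempts and the function names are assumptions.

```python
def handle_fault(supply_restored_after_attempt, max_attempts=2):
    """Classify a fault as temporary or permanent from successive reclose attempts.

    supply_restored_after_attempt: callable(attempt_index) -> bool, True if the
    power supply is back after that reclosing attempt.
    """
    for attempt in range(1, max_attempts + 1):
        if supply_restored_after_attempt(attempt):
            return {"fault": "temporary", "action": "resume normal operation"}
    # Permanent fault: locate it via the feeder terminal unit flag, isolate the
    # faulted area, and ask the remaining IEDs to restore supply elsewhere.
    return {"fault": "permanent",
            "action": "isolate faulted section and request restoration"}

# Example: the supply never comes back, so the fault is treated as permanent.
print(handle_fault(lambda attempt: False))
```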
Supervisory Control and Data Acquisition (SCADA) System A smart grid is made up of several micro subsystems that work together and share connectivity and security components. SCADA is the core component of the substation's control center; it is not only a monitoring system but also provides the communication links. It is used for automating the distribution network of a medium-voltage substation for intelligent remote control. This controlling and monitoring infrastructure benefits the power utilities by enhancing the maintainability of the electric supply and lowering the cost of operation [10]. The essential features of SCADA are data gathering, presentation and monitoring, supervisory control, and alarm notification, as shown in Figure 6. It includes both hardware and software, with primary components such as human-machine interfaces (HMIs), programmable logic controllers (PLCs), data collection servers, and remote terminal units (RTUs).
Sequence Diagram The Unified Modeling Language (UML) is a software engineering modeling language that aims to standardize how a system's architecture is depicted [39]. UML is used to create a variety of diagrams, including interface, structural, and behavior diagrams. The most frequent type of interaction diagram is the sequence diagram. A sequence diagram simply displays the order in which objects interact, or the order in which these interactions occur. Sequence diagrams show how, and in what sequence, the components of a system work together. Figure 7 shows the sequence diagram of fault detection: whenever a fault occurs in the substation, the detector eventually detects the fault and sends a report message to the control center, and the nearest recloser automatically opens itself according to the predefined instructions embedded in it. After the first tripping, the connected circuit breaker will try to make contact; if the recloser recloses itself, the connection is reestablished, the fault is recorded as temporary, and no further actions need to be performed.

Formal Model Formal methods are mathematical entities that are used to model complicated systems. By developing a mathematically rigorous model of a system, it is feasible to validate its characteristics in a more formalized manner than through empirical testing. Formal specifications are descriptions of a model that can be stated in a thorough and consistent manner for an application domain, a requirement or a group of requirements, a software architecture, or a program organization [40]. Formal methods in terms of the Vienna Development Method-Specification Language (VDM-SL) are used for the description and formal modeling of the system in a systematic way. Various constructs, such as composite objects, invariants, sets, and pre/post-conditions, are used for developing the specifications.
Static Model Formal specifications are descriptions of a model that can be stated in a thorough and consistent manner for an application domain, a requirement or a group of requirements, a software architecture, or a program organization [41]. The proposed model in this article signifies yet another contribution in this field. In programming languages, composite types are equivalent to record types. The static components include invariants for integrity checking of conditions that must always hold true, and the fields of a composite object may have several data types. In the formal specifications of the distribution system, one portion includes the data types of the variables, alongside which their quote types are declared. The model comprises various variables such as sequence, token, and string types. To record everything in the system, the date and time are taken as composite objects. The composite object Substation consists of the substation ID; the capacity of the substation, meaning the total amount of electricity it can send and receive; the details about the substation; and the set of transformers. An invariant on the composite object Substation ensures that the capacity of the substation is always greater than 0. The next composite object is the Transformer, which has five fields: the transformer ID; its mode, with the three possible conditions idle|working|damage; the location of the transformer; its voltage-carrying capacity; and the date of entry. The third composite object, the transmission line, is composed of the line ID; a detector, which is embedded on each transmission line; the phase voltage of type real; and the zero sequence current of type real.
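To make the structure of these composite objects concrete, the following minimal Python sketch (an illustrative translation, not the paper's VDM-SL specification; the class and field names are assumptions) mirrors the Substation invariant (capacity > 0) and the Transformer mode quote types as runtime checks.

```python
from dataclasses import dataclass, field

VALID_MODES = {"idle", "working", "damage"}  # mirrors the quote types idle|working|damage

@dataclass
class Transformer:
    tid: str
    mode: str
    location: str
    capacity: float
    date_added: str

    def __post_init__(self):
        # Invariant-style check on the mode field.
        if self.mode not in VALID_MODES:
            raise ValueError(f"invalid transformer mode: {self.mode}")

@dataclass
class Substation:
    sbid: str
    capacity: float
    details: str
    transformers: dict = field(default_factory=dict)  # map TID -> Transformer

    def __post_init__(self):
        # Mirrors the VDM-SL invariant: the substation capacity must be > 0.
        if self.capacity <= 0:
            raise ValueError("substation capacity must be greater than 0")

# Example: building a substation record with one idle transformer.
sb = Substation("SB1", capacity=500.0, details="main substation",
                transformers={"T1": Transformer("T1", "idle", "north bay",
                                                100.0, "2022-01-01")})
print(sb.capacity > 0)  # True; the invariant held at construction time
```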
The zero sequence current is the unbalanced flow of current during an earth fault. Three lines are taken as wires, and an invariant on these lines ensures that they are all distinct from each other:
(-, -, -, -, l1, l2, l3) == (l1 <> l2 and l1 <> l3 and l2 <> l3);
The detector is embedded in each transmission line, in the near-field devices such as transformers and meters, and in the substation area. Each detector is distinguished by a unique ID. The composite object Voltsensor is created for the sensor that is used for measuring the voltage of the power. Deviations from the nominal values of voltage and current cause most electrical faults, so it is most important to continuously measure the value of the voltage in the lines. The actual load and requested load are the values of the voltage that are actually flowing and being demanded, respectively. The sensor sends an alert message to the control center and provides the fault information. The keyword values is used to specify the constants of the specification in VDM-SL. This declaration of values comes right before the state definition. Here, we declare the phase voltage and zero sequence current values. These values are used in the fault detection operation and are specified here for the efficient working of the system. If the threshold values are exceeded, the system goes into the imbalance condition, which is the faulty state.

Dynamic Model The dynamic components include the state definition, the possible operations, and the reusable functions. Various constructs, such as composite objects, invariants, sets, and pre/post-conditions, are used for developing the specifications. IoT is deployed in the smart grid for detecting faults and consists of sensors and detectors. A distinctive part of a VDM-SL specification is the state, in which the variables are declared in a similar manner as in other programming languages. The attributes specified in the state are permanently stored by the system. The illustration of the state specification for our DMS is as follows. Many variables have been declared with their data types in the state. Invariants are defined on these variables, which must hold true from system start-up until system termination. The state definition always finishes with the keyword end. Functions are another vital tool when specifying complicated systems; they can be used later in operations. The following function is defined to check the voltage: nvolt is the normal voltage, and volt is the abrupt change in the voltage if it becomes high. We will use this function later in the fault detection operation. The dynamic behavior of the system is described by means of operations. A non-mandatory pre-condition and a mandatory post-condition are required to represent the operation. By specifying the pre-conditions, each operation is correlated to the preceding one. Post-conditions are used to specify the accuracy of an operation. Our proposed formal model performs various operations on the distribution management system, such as checking the capacity of a substation, getting its details, and adding transformers to the substation.

Operations The operation check capacity is designed to measure the total capacity of any specific substation in the smart grid. The capacity refers to the capability of power storage and power transfer to the distribution transformers.
It takes one input, the ID of the specific substation, and returns the value of the capacity as a real data type. Before proceeding to the post-condition, the system ensures the pre-condition. In the external clause, the rd keyword tells the system that the access type is read-only.
checkCapacity(idIn: SBID) cap: real
ext rd substations: map SBID to Substation
pre idIn in set dom substations
post cap = (substations(idIn)).capacity;
Pre-conditions (1) The system first checks that the specific substation is registered, i.e., that the given ID is in the substations mapping of the system record.
Post-conditions (1) The substations mapping is applied to the entered substation ID, which yields an object; the dot operator is used with the capacity attribute to take the value of the specified entry and return it as a real number.
The getDetails operation is defined for getting the details of a specific substation. This operation is useful for obtaining the overall details of the substation at any time; the system is updated periodically.
getDetails(idIn: SBID) detailsOut: Details
ext rd substations: map SBID to Substation
pre idIn in set dom substations
post detailsOut = (substations(idIn)).details;
Pre-conditions (1) The system first checks that the specific substation is registered, i.e., that the given ID is in the substations mapping of the system record.
Post-conditions (1) The substations mapping is applied to the entered substation ID, which yields an object; with the dot operator, the value of the specified entry is taken and the appropriate details fields for that particular substation are returned.
When the demand for electricity usage increases, more power needs to be received from the generation resources so that the consumers' demand is met. The extra received electricity needs to be stored in the main substation, which means that new substation transformers of heavy capacity are needed. The add transformer SB operation refers to a transformer newly added to the substation. The required fields for the new record are the substation ID in which it is being deployed and recorded, the transformer ID, the capacity of the new transformer, its location, and the date on which the transformer was added to the system. The external clause is used to give write access to the substations mapping.
Pre-conditions (1) The system first checks the registration of the specific substation, i.e., that the given ID is not already in the mapping of the substation to which the transformer is added.
Post-conditions (1) The let-in clause is used here to reduce the complexity of the operation; more than one let-in clause can be used in a single operation, such as in the post-condition. (2) Two local names, 'trans' and 'newTrans', are used for the sub-expressions (substations~(idIn)).transformers and mk_Transformer(tidIn, <idle>, locationIn, tcapacityIn, dateIn), respectively. (3) Both local names are joined in the last sub-expression with the union operation to add a new record to the substations mapping.
The following operations are defined for the transformer entity. The addTransformer operation is for a new transformer that is being deployed into the distribution network for the electric supply to the users. The required fields are the ID, location, and date of entry.
Pre-conditions (1) The pre-condition first checks the transformers mapping to ensure that there is no already existing transformer with the same ID.
Post-conditions (1) The post-condition accepts the ID of a new transformer and records the fact that this transformer has been added, with the specified fields, to the collection of transformers.
The operation removeTransformer is similar in nature to the addTransformer operation except that its pre-condition is different; it accepts the ID of a transformer and records the removal of this transformer from the system.
Pre-conditions (1) The pre-condition first checks the transformers mapping to ensure that the accessed mapping is not empty; in brief, in order to remove a record from the system, there should be a record present in it. (2) The specified ID to be removed must be in the collection of transformers. (3) The working mode of the transformer is checked: its status should not be working, which means its status can be anything other than working.
Post-conditions (1) The post-condition accepts the ID of a transformer and removes the required transformer from the collection of transformers.
To record a transformer as damaged, the operation to repair accepts the ID of a transformer and records its mode as damaged. To change the records of the composite object, write access is given to the transformers mapping with the external clause.
Pre-conditions (1) The pre-condition first checks the transformers mapping to ensure that the transformer is in the collection of transformers.
Post-conditions (1) The post-condition accepts the ID of the transformer and, by using the override operator, changes the required fields of that transformer; the collection of transformers is then updated.
To update the records of the transformers collection, the operation fixedTransformer accepts the name of a damaged transformer and records that its mode is set to idle; the rest of the process is the same as above. The operation numberToFix returns the total number of damaged transformers as a natural number value.
Pre-conditions (1) The pre-condition is simply true, meaning that no check or constraint needs to be applied; without any pre-condition the post-condition works on its own, since this is just a read-type operation.
Post-conditions (1) The cardinality operator is used here to obtain the number of elements of a set type. (2) The condition in braces checks that there exists a transformer in the range of the transformers mapping whose mode is damaged; the total number of such transformers is returned.
To determine the total number of transformers under a substation, we created the operation get_total_Transformer, which returns a natural number value.
get_total_Transformer() out: nat
ext rd substations: map SBID to Substation
pre true
post out = card dom substations;
An important aspect of the distribution management system is to detect the theft of equipment in the substation. This can be ensured by checking the supply or working mode of the transformer; the operation takes the ID of the transformer as input and the query returns true or false, depending on the post-condition check.
detectTransformerTheft(tidIn: TID) query: bool
ext wr transformers: map TID to Transformer
pre true
post query <=> tidIn in set dom transformers and transformers(tidIn).mode <> <working>;
Post-conditions (1) The query returns true when the specified transformer is in the record of the substation but the system shows that its mode is not working. (2) We have already created an operation for damaged transformers, so a transformer that is not working cannot simply be assumed to be in the damaged transformer collection. (3) The damaged transformer records are updated with the time, so a non-working transformer that is not recorded as damaged is considered lost.
To detect a fault in the transmission line, a voltage measuring sensor is taken as input in the following operation; it returns a Boolean value (true/false).
Pre-conditions (1) The pre-condition checks that the voltage measuring sensor is in the collection of Voltsensor.
Post-conditions (1) The fault alert returns true only if the volt value is greater than the normal voltage nvolt; the fault information is transmitted to the control center.
The major task of the distribution management system is to categorize the type of fault. Our fault-type determination operation is quite lengthy yet easy to understand. The operation takes three transmission lines and the detector as input; the specified fields for fault detection in transmission lines are the three transmission lines and one detector mounted on the tower. Read and write access is given to the collection of transmission lines, the detectors, and the set of interrupted lines.
Pre-conditions (1) The pre-condition checks that the transmission lines are in the set of transmission lines. (2) The status of the detector is working. (3) Initially, there is no pending fault, so the set of interrupted transmission lines is empty.
Post-conditions (1) Two types of faults are detected in the post-condition by comparing combinations of transmission lines. (2) Two constraints, the phase voltage and the zero sequence current, are used for checking the proper working condition of the lines. (3) For any combination of two adjacent wires, if these constraints exceed the threshold values, a fault occurs and is detected by the detector, which determines the fault type as permanent and sends a signal to the nearest recloser to open itself. (4) When two adjacent wires exceed the phase voltage value, they touch each other and the occurring fault is a line-to-line fault. (5) If one of the three wires exceeds the phase voltage value and the zero sequence current also exceeds its threshold, the occurring fault is considered a line-to-ground fault.
The following operation takes the transformer ID as input and sends a signal to the control center about the restoration of the supply. In the post-condition, the mode of that specific transformer is checked; if the condition holds, a restored signal is sent to the system; otherwise, a do-nothing signal is sent.
serviceRestore(tidIn: TID) signal: Signal
ext rd transformers: map TID to Transformer
pre tidIn in set dom transformers
post if transformers(tidIn).mode = <working> then signal = <Restored> else signal = <DO_Nothing>;
Pre-conditions (1) The pre-condition first checks whether the given ID is in the collection of transformers.
Post-conditions (1) The mode of the specified transformer is checked as to whether it is working or not.
If the mode is equal to working, a signal is transferred to the control center indicating that the service has been restored.
The following operations are specified for the transmission lines. The first operation checks a specific line against an ID, looking the line up in the interrupted collection of wires. After the fault on a specific line is resolved, the line is added back to the collection of transmission lines; transmissionlines~ is the old set of transmission lines and transmissionlines is the updated collection. To remove a faulty line from the interrupted transmission line collection, write access is used to change the record. Sometimes the continuous flow of current in the wires changes due to internal or external conditions of the system and the system starts tripping; this tripping mostly happens at the nearest circuit breaker.
The following are the smart meter operations. Initially, a consumer requests that a smart meter be installed at their residence; the required attributes are taken as input and stored in the system for future use. The smart meter installation takes the required user data as input together with the unique ID of the meter being installed. The last two operations concern the usage of the units and meter removal.
requestMeter(cidIn: CID, cnameIn: CName, dateIn: Date, detailsIn: Details)
ext wr requestedusers: map CID to Consumer
rd processesdusers: map CID to Consumer
pre cidIn not in set dom requestedusers
post requestedusers = requestedusers~ munion {cidIn |-> mk_Consumer(cidIn, cnameIn, dateIn, detailsIn)};
Pre-conditions (1) The pre-condition first checks that a request has not already been registered against the ID of the consumer.
Post-conditions (1) The post-condition updates the mapping of requested users.
The installMeter operation records the fact that the request of the specified consumer has been completed, with the complete details of the consumer, and the date of the record is also updated.
Formal specifications of the model help to recognize areas of ambiguity and incompleteness in the requirements of the informal framework and give a degree of assurance that the key properties, in particular those of safety or security, will be appropriate for legitimate implementation. The specifications of the system are analyzed through the VDM-SL toolbox. The specification is evaluated via the syntax checker, the type checker, the C++ code generator, and the pretty printer. In the formal specification, the reported errors are eliminated early by strengthening the invariants and the pre/post-conditions. The developed formal specification passes all checkers successfully, and the proof of correctness is shown in Figure 8. An integrity analyzer is used to determine the specification's integrity properties. The dynamic level of the requirements is examined by the integrity checker, and VDM-SL predicates are defined by a set of formulated integrity properties that specify the conditions under which no runtime error should occur. There is no runtime error if the integrity property evaluates to true. All integrity properties are found to hold for the specification. For the validation of model properties, invariants and pre/post-conditions are specified.
Two checks are necessary to ensure the validity of the system's formal validation. The VDM-SL toolbox provides syntax and type checkers that evaluate the developed static and dynamic models. Initially, there were numerous syntax and type errors in the model specification, which were exposed by the tool's syntax and semantics evaluation, as shown in Figure 9. The detailed description of the proposed model is given in tabular form in Tables 2 and 3.
12,706.4
2022-04-10T00:00:00.000
[ "Engineering" ]
Characterization of Limestone as Raw Material to Hydrated Lime In Malaysia, limestone is essentially important for economic growth as a raw material in the industrial sector. Nevertheless, little attention has been paid to the physical, chemical, mineralogical, and morphological properties of this limestone; in this work they were investigated using X-ray fluorescence (XRF), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy / energy dispersive X-ray spectroscopy (SEM-EDS), respectively. The raw materials (limestone rocks) were collected from the Bukit Keteri area, Chuping, Kangar, Perlis, Malaysia. A laboratory crusher and laboratory sieves were utilized to prepare ground limestone in five different sizes (75 μm, 150 μm, 225 μm, 300 μm, and 425 μm). It was found that the main chemical constituent of the bulk limestone was calcium oxide (CaO) at 97.58 wt.%, with trace amounts of MnO, Al2O3, and Fe2O3 at 0.02%, 0.35%, and 0.396%, respectively. XRD diffractograms showed characteristic peaks of calcite and quartz. Furthermore, the main FTIR absorption bands at 1,419, 874.08 and 712.20 cm-1 indicated the presence of calcite. The micrographs clearly showed the difference in particle size between the samples. Furthermore, EDS peaks of Ca, O, and C confirmed the presence of CaCO3 in the samples. Introduction In Malaysia, limestone is characterized by wide karstification with the development of a complex and large network of caves [1]. Limestone hills with naturally sharp sides, rising up to many hundreds of meters above the flat alluvial plains, are a well-known feature of the Malaysian landscape. Most of these alluvial plains are underlain at moderately shallow depths by the same limestone that forms the prominent hills. The discovery of rich tin deposits in the alluvium led to the development of several urban centers adjacent to, and later expanding onto, the former mining regions [1]. Limestone is mainly composed of the calcium carbonate mineral CaCO3. This compound is one of the most common materials among the chemically precipitated sedimentary rocks [2]. Biological and biochemical processes are the main routes of carbonate sediment formation; nevertheless, inorganic precipitation of calcium carbonate from seawater can also occur [3]. After CaCO3 formation, the physical and chemical processes of diagenesis can significantly change its characteristics. Limestone is widely used in architectural applications for walls, decorative trim and veneer. It is less frequently used as a sculptural material because of its porosity and softness; however, it is a common base material and may be found in both bearing (structural) and veneer applications. In Malaysia, limestone has recently been used for the removal of heavy metals (Pb, Cd, Cu, Ni, Zn, and Cr(III)) from water [4]; it was found that limestone was able to remove more than 90% of heavy metals from polluted water [4]. Another important use of limestone (after conversion to hydrated lime) is the mitigation of poisonous and dangerous gas emissions. Lime treatment reduces odors, particularly hydrogen sulfide, which is not only a nuisance odor but can also be very dangerous if localized high concentrations build up [5]. In addition to providing a high pH, lime supplies free calcium ions, which react and form complexes with odorous sulfur species such as hydrogen sulfide and organic mercaptans [5,6].
In the current work, the characterization of limestone as a raw material for hydrated lime was conducted using a wide range of analyses to investigate its physical, chemical, mineralogical, and morphological properties. Sample location The main material used in this work is limestone, which was obtained from the Bukit Keteri area, Chuping, 02450 Kangar, Perlis, Malaysia (GPS coordinates 6.5035795, 100.26139). Sample preparation In order to prepare the samples, the limestone was crushed using a laboratory crusher. The ground limestone samples were sieved into five different sizes of 75 μm, 150 μm, 225 μm, 300 μm, and 425 μm, as shown in Figure 1. Sample characterization After preparation of the limestone samples, their mineralogical, chemical, physical and morphological properties were characterized and evaluated using X-ray fluorescence (XRF) spectrometry, X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM) with attached energy dispersive X-ray analysis (EDS), respectively. X-ray fluorescence (XRF) XRF analysis was conducted using an XRF spectrometer (PANalytical MiniPAL 4, model PW4030). XRF analysis was carried out to determine the elemental composition of the materials; when the sample is irradiated by X-rays, the spectrometer measures the characteristic wavelengths of the fluorescent emission produced by the individual atoms in the sample. X-ray diffraction (XRD) First, the specimen was pressed into a stainless steel holder, and quantitative mineralogical evaluation was then conducted by X-ray diffraction using CuKα radiation, a 0.02° step size, a 3 s counting time, a 10° ≤ 2θ ≤ 80° range and the Rietveld refinement method (X'Pert MPD, PANalytical B.V.). This method is designed to obtain high-quality diffraction data, combined with ease of use and the flexibility to quickly switch to different applications. Fourier transform infrared spectroscopy (FTIR) Transmission FTIR spectra were recorded from thin KBr discs of the samples with a Perkin Elmer 2000 FTIR spectrophotometer at room temperature. The samples were scanned from 4000 to 400 cm-1 with a resolution of 0.4 cm-1. By using FTIR, the functional groups in a molecule can be identified. Scanning electron microscopy / energy dispersive X-ray spectroscopy (SEM-EDS) The morphology and elemental chemical composition of the limestone samples were determined by scanning electron microscopy (SEM) with a secondary electron detector and by energy dispersive X-ray analysis (EDS), respectively (LEO Stereoscan 440). The specimens were coated with an extremely thin layer of gold (1.5-3 nm) using a sputter coater. The purpose of coating the specimens is to avoid poor image resolution and to prevent electrostatic charging during the test. The test was conducted at 1000× magnification.
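Since the diffractograms are interpreted in terms of d-spacings while the scan itself is recorded over a 2θ range, the two are related by Bragg's law, nλ = 2d sin θ. The short Python sketch below (an illustration added here, not part of the original analysis; the CuKα wavelength of 1.5406 Å is an assumed value) converts a d-spacing to its expected 2θ position for the scan conditions described above.

import math

CU_KALPHA_WAVELENGTH = 1.5406  # Angstrom, assumed CuKa1 wavelength

def two_theta_from_d(d_spacing, wavelength=CU_KALPHA_WAVELENGTH):
    """Return the expected diffraction angle 2-theta (degrees) for a d-spacing in Angstrom."""
    # Bragg's law for a first-order reflection: lambda = 2 * d * sin(theta)
    theta = math.asin(wavelength / (2.0 * d_spacing))
    return 2.0 * math.degrees(theta)

# Example: the strong calcite reflection reported at d = 3.03 Angstrom
# should appear near 2-theta = 29.5 degrees in a CuKa scan.
print(round(two_theta_from_d(3.03), 1))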
Results and discussions The chemical composition of the bulk limestone is listed in Table 1. The high content of calcium oxide (CaO) (97.58 wt.%) indicated that the limestone was of high purity [7], while silica constituted the most common impurity (0.90 wt.%). The high percentage of the main element (CaO) in the specimen also corresponds to a high enrichment factor, pointing to cement production as the main contributing source of airborne particulate matter in such factories and their environs. This has a clear environmental implication, which should be of interest to the environmental protection agency and also to the government. The occurrence of faint magnesium oxide in the sample bears witness to the presence of trace amounts of smectite [7,8]. Trace amounts of Al2O3, Fe2O3, and SrO were also found in the sample. Fig. 2 shows the XRD diffractograms of the representative limestone samples, indicating the presence of characteristic peaks of calcite as identified by the distinctive reflections at 3.85-3.86 Å (102), 3.03 Å (100), 2.84 Å (006), 2.49 Å (110), 2.28 Å (113), 2.09 Å (202), 1.97 Å (108), 1.87 Å (116) and 1.60 Å (212). Furthermore, the sample showed additional peaks at 3.33-3.34 Å (101), 1.54 Å (211), 1.37 Å (203) and 1.28 Å (104), which indicate the presence of quartz [7]. The detailed clay mineralogy was investigated and identified by the characteristic reflections according to Moore and Reynolds [9]. Fig. 3 shows the FTIR spectrum of the limestone sample. It shows the characteristic bands of calcite at 1,419, 874.08 and 712.20 cm-1. The spectral peaks appearing at 1,799 and 2,513.04 cm-1 are also an indication of the presence of calcite [10][11][12][13]. These data indicate that the studied limestone is mainly composed of calcium in the form of calcite, as identified by its main absorption bands [13]. The reference bands observed at 1,419, 874.08 and 712.20 cm-1 can be assigned to the asymmetric stretching, out-of-plane bending and in-plane bending modes of CO3^2-, respectively [14]. Gunasekaran and Anbalagan mentioned in their research that the observed out-of-plane bending mode occurs at 877 cm-1 for 12C; this band shifts to lower wavenumbers for other isotopes of carbon (13C and 14C). The limestone sample, however, has a splitting band at 874.71 cm-1 [12], which clearly indicates that there is no isotopic shift. According to the same work, the bands observed at 1,799 and 2,513 cm-1 are attributed to the ν1 + ν4 combination mode. Moreover, the stretching vibrations of the surface hydroxyl groups (Si-Si-OH or Al-Al-OH) were found at 3,544.52 and 3,619.73 cm-1 [15]. Infrared techniques have been frequently used for the identification of clay minerals [10,11] as well as of natural calcite minerals [12]. The morphology of the limestone, observed with the scanning electron microscope for three different sizes (fine, medium, and coarse), is reported in Fig.
4a, b, and c, respectively. It is clearly observed that the samples have different sizes owing to the different grinding and sieving applied to them. The samples show a compact, less porous morphology, and the carbonates represent the main phase, consistent with the calcite peaks [16]. Furthermore, the samples exhibit angular quartz grains covered with fine particles, and the calcite grains appear clearly. This SEM analysis of the morphology is complemented by EDS analysis of the elements present along a chosen line or area giving the maximum amount of information, as shown in Fig. 4 and Table 2, respectively. The presence of Ca, O, and C as major elements confirms the importance of the carbonate phase, CaCO3, in the samples (Table 2).
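As a simple cross-check on the EDS result, the expected mass fractions of Ca, C and O in stoichiometric CaCO3 can be computed directly from standard molar masses; the short Python sketch below (an illustrative calculation, not taken from the paper) gives roughly 40% Ca, 12% C and 48% O by weight, the proportions a pure calcite sample should approach.

# Expected elemental mass fractions of pure CaCO3, from standard molar masses (g/mol).
MOLAR_MASS = {"Ca": 40.078, "C": 12.011, "O": 15.999}

def caco3_mass_fractions():
    """Return the weight percent of Ca, C and O in stoichiometric CaCO3."""
    total = MOLAR_MASS["Ca"] + MOLAR_MASS["C"] + 3 * MOLAR_MASS["O"]
    return {
        "Ca": 100 * MOLAR_MASS["Ca"] / total,    # about 40.0 wt.%
        "C": 100 * MOLAR_MASS["C"] / total,      # about 12.0 wt.%
        "O": 100 * 3 * MOLAR_MASS["O"] / total,  # about 48.0 wt.%
    }

print({element: round(value, 1) for element, value in caco3_mass_fractions().items()})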
2,116.8
2018-03-01T00:00:00.000
[ "Materials Science", "Environmental Science" ]
The influence of green finance on economic growth: A COVID-19 pandemic effects on Vietnam Economy Abstract Environmental protection and high economic growth are the global requirement and have attracted the special attention of researchers and policymakers. Thus, the current study is also going to examine the impact of green finance that includes green investment and green loan on the economic growth of Vietnam. The data have been obtained from the central bank of Vietnam and World Bank Indicators (WDI) from 1986 to 2019. This study also executed the Autoregressive Distributed Lag (ARDL) approach to examine the links among the variables. The results exposed that green finance along with all control variables have a positive association with economic growth. These outcomes have guided regulators to increase their focus on green finance that could increase the economic growth in the country. Introduction For many decades, the reforms in numerous countries endorsed all possible aspects to develop the economy with center stages of economic growth. However, economic growth includes various ABOUT THE AUTHOR Ngo Quang Thanh is a Research Fellow of School of Government at University of Economics Ho Chi Minh City, Vietnam. His research interests include Innovation, Corporate Social Responsibility, Finance, Governance, and Political Economy. He has published more than 20 research articles in International Journals (SSCI, SCIE, ABDC, and SCOPUS). He has vast experience of publications in reputed publishers (ELSEVIER, SPRINGER, TAYLOR, and FRANCIS PUBLIC INTEREST STATEMENT Environmental protection and high economic growth are the global requirement and have attracted the special attention of researchers and policymakers. Currently, Covid-19 has a rigorous impact on the economic growth all over the world due to long-term economic lockdown. Thus, the current study is also going to examine the impact of green finance that includes green investment and green loan on the economic growth of Vietnam. The results exposed that economic growth is significantly affected by the Covid-19 lockdown and green finance along with all control variables have a positive association with economic growth. These outcomes are guided to the regulators that they should increase their focus on green finance in this lockdown situation to improve the economic condition, especially in Vietnam. elements that establish various opportunities for society and the economies of various countries. The economy of Vietnam has suffered many variances of numerous variables, which posed a significant impact on economic growth (Broadstock et al., 2020). After the Vietnam War, many circumstances prevailed in the economy, which, after the strong support of developed economies, was able to recover. A complete period is usually required to support the crashed economies, but the dominance of numerous variables and factors placed externally poses a significant impact (Teoh et al., 2020). After launching the numerous reforms related to political and financial structure, certain past impacts positively the Vietnam economy. From 2002 to 2018, numerous people lived in a poverty line, which was significantly pulled out with developed countries' dominating influence. GDP per capita of Vietnam economy enhanced up to 2.7 times by reaching US $2,700 during 2019 where 45 million people significantly sorted out from the poverty line. 
There was a sharp decline in the poverty rate, from around 70% toward 6% by 2019, reflecting the reduction in poverty measured at the US$3.20-per-day line. Ethnic minority groups in Vietnam, however, still faced a poverty rate of 86%, while GDP growth was presumed similar during 2018-2019, estimated at about 7%, among the fastest growth rates in the region. The ongoing Covid-19 pandemic has had remarkable effects on the Vietnamese economy, where green finance has enabled significant measures to sustain the economic environment. Various proactive approaches have been used in Vietnam to establish alliances that support the economy, but macroeconomic factors also exert a significant influence. Covid-19 affected economic growth in every possible way, while the dominance of green finance helped Vietnam maintain a sustainable environment for economic growth (Caldecott, 2020). The serious tension between economic growth and the environment is also widely discussed, although the resilience of green finance has established positive measures. Plenty of challenges prevail in Vietnam's economy, but the effectiveness of green finance has provided measures to safeguard economic growth. The total output of goods and services relative to the population contributes to economic growth, but the prevalence of Covid-19 disrupted every means of distribution. The Covid-19 pandemic affected economic growth from various perspectives, but green finance endorsed various measures to absorb the Covid-19 shocks spreading around the world (D'Adamo et al., 2020). A wide disparity prevails between urban and rural income earners, which shapes potential economic growth. Although income generation is an important element, the Covid-19 pandemic had a dominant impact that reduced income in both areas (Yuan et al., 2020). Covid-19 is marked as a general slogan for the disruption of economies, and the shutdown of numerous countries is blamed on the spread of Covid-19 both locally and internationally. People used to spend within their countries to enable economic growth, but the prevalence of Covid-19 affected needs and the movement of money between buyers and sellers. Expenditure underlines the importance of green finance in relation to Covid-19, and per capita expenditure is included among the proxies contributing to economic growth. Several factors have restrained per capita expenditure, including Covid-19, which continues to affect global economies (Nurrika et al., 2020). Vietnam is one of Southeast Asia's fastest-growing economies, with a GDP growth rate of 7.5% (2001-2005), then down to 6.3% (2006-2010) and 5.9% (2011-2015), representing an expansive paradigm of non-sustainable production. Moreover, Vietnam's development was focused on a carbon-intensive road that placed great strain on the environment. The Government of Vietnam has developed significant plans and policies, including a Social Economic Policy (SEDS 2012-2020 and five-year SEDs), the Vietnam Climate Change Strategy (VCCS 2011), Vietnam's Green Growth Strategy (VGGS 2012), the Sustainable Development Strategy (SDS 2013) and a Master Plan on Economic Reform (MPER 2013-2020), in an attempt to encourage sustainable development and green growth. In these policies, Vietnam's average GDP growth rate goal is 6.5% to 7% for 2016-2020.
Vietnam needs at least USD 30.7 billion to fund the Green Development Plan by 2020, that is, 15 percent of Vietnam's 2015 GDP, and USD 21.2 billion between 2021 and 2030 for Vietnam's INDC. Vietnam's economy and financial sector are in a very difficult condition owing to very large budgetary deficits of 5.5 percent of GDP (2010) and 6.1 percent of GDP (2015). According to a sustainable urban development model, a top-down strategy and green credit programs would help channel funds into the green economy to restructure the Vietnamese economy. In light of the current large government deficits and public debt, mobilizing financial resources relies primarily on the private and banking sectors, which hold assets equal to 250 percent of GDP and a significantly expanding stock market with a market capitalization of 34.5 percent of GDP (2015). A Green Financial Policy System and a Green Financial Credit Scheme would assist Vietnam in adopting the Green Growth Plan successfully, responding to climate change and, in particular, fulfilling Vietnam's INDC obligations under the Paris Agreement. If Vietnam achieves its green financial requirement of roughly US$30.7 billion, 85.12 million metric tons of CO2 will be abated by 2020. With US$21.2 billion to fund all green industries between 2021 and 2030 under Vietnam's INDC commitment of 197.9 Mt CO2, Vietnam's overall CO2 emissions will be cut by 25 percent. With the passage of time, the concept of green financing is receiving more attention around the world, and countries are focusing more on green finance development. Vietnam's green finance demand for the year 2020 is given in Table 1. Green financing is a core focus worldwide because it benefits multiple other sectors of the economy. These benefits of green finance are also urging the world to pay more attention to the development of this form of financing. The demand for green financing all around the world is increasing over time. Similarly, green financing development is also getting more attention in Vietnam. The green finance analysis from 2011 to 2016 in Vietnam is given in Figure 1. Investment in green finance follows an increasing trend from 2011 to 2016: in 2011, investment in green finance was about US$200 million, but by 2016 it had reached about US$700 million, with the most rapid growth seen from 2015 to 2016. The objective of the ongoing research is to measure the impact of green finance and Covid-19 on the economic growth of Vietnam. This study contributes to the knowledge of green finance with reference to Covid-19. The investigation of the impact of green finance on economic growth in the Covid-19 pandemic situation is one of the first such attempts and could contribute comprehensively to the existing literature on green finance. The examination of the Vietnamese economy during Covid-19 together with green finance is also one of the first attempts of its kind and a significant contribution of the study.
Numerous authors argued that establishing economic growth measures, the proper sustainability of financial contributions founded eminent means of economic growth. The study widely stated the financial indicators that contribute a certain extent to the economy, but the development of green finance was found as an effective element with Covid-19. Although financial development tends to be an important factor among economic growth, social and other factors are also required to react with economic growth sustainability Mallick & Rahman, 2020). The imperfection of infrastructure also induces a dominant impact on the economy, but the inclusiveness of various other elements with Covid-19 contributes positive domains to enable a safe environment. Indicators of the economy pose the direction of economy striving toward the development context or declining context; therefore, green finance positively impacts the economy . During the orientation of Vietnam's economy internationally, various strengthening measures depicted the robust effects of different green finance instruments. Resilience and strength dominate as important elements for any country, helping economies reach the development bridges. The growth of economies prevails among the financing elements that retain a prominent aspect of initiating financial contributions, whether locally or internationally. The significance of green finance positively enumerates economic growth due to the relativeness of economic growth elements. Literature specified the importance of Vietnam's economy, which got disrupted during the initiation of war and Covid-19 from the other countries to refrain the minerals or economy. A broad disparity prevails among the well-cultured and well-developed communities, while the financial means still dominate among them with considerable factors (Kim & Kang, 2020) (Kim & Kang, 2020). Numerous implications often prevail among the partnership by stressing the ratio of urban and rural wages, which tend to be affected by the growth of Covid-19. The countries utilized different channels to allow healthy steps, but the aftereffects of Covid-19 widely disturbed the economies with numerous other green finance effects. It depends on the rural and urban citizens to control their commitment and the quality of living against the economies; thus, the large difference in income presents potential consequences. Many factors contribute to establishing positive economic growth; therefore, green finance is an important element in the literature posing eminent influence (Chien et al., 2021d). Green finance factors include many elements that directly and indirectly impact economic growth, but the GDP relevance and various economic indicators positively help the economies. The robust impact of Covid-19 and green finance supported the export and demand of manufacturing and industrialization (Chien et al., 2021e), which contributes toward growth. Although the resilience of many factors prevails in the literature as influential, the extent of major factors positively impacts economic growth. The study also compared various economies with the relevance of Vietnam's economy, which has depicted the significant growth in recent years, stating current Covid-19. The booming effects of the manufacturing sector also illustrated the strong impact on economic growth with industrialization. 
However, foreign direct investments are also denoted as an important macroeconomy element and pose as an eminent element of green finance that contributes to economic growth. It is dependent on the development of various financial institutions, which help different businesses through the innovation of green finance, an eminent contributor to the economy . Usually, some projects help to provide profitable means of developing and sustaining the economy; therefore, different expanding green loan tools develop the economic conditions (Chien et al., 2021f;D'Adamo et al., 2020). Literature specified the importance of green loans for economic growth, but the legitimate requirements also prevail in the economic study with the relevance of policies and regulations. Some interest and Covid-19 effects also countered dominating influence over the economic growth (Li et al., 2021b), but green loans' efficiency usually recovers the prospects of negativity (S.-Z. W. Li, Chien, Hsu et al., 2021). Essentially, it is important to analyze the readiness and utilization of positive green loans, which help develop economic growth and the development of the economy. Green loans' importance also countered as expensive to some economies, especially suffering from huge loans of developed countries and financial institutions (Taghizadeh-Hesary & Yoshino, 2020). The higher interest rates exert an eminent impact on the economic growth; therefore, some emergent means are usually required to recover the loss assumptions during Covid-19. Implementation and development of economic growth require the modeling of various investments that relate to green loans. Although investments have pose positive impacts, the legitimation of a green loan environment influences the growth of economies. Either some domestic means also disrupt the growth of economies, but the influence of Covid-19 and many other factors contributes significant measures to economic development. Green investment tends to have a positive relationship with establishing strong economic growth where investors tend to spend their amount for significant profits. Covid-19 endorses an eminent impact on economic growth, while green investment countered as a positive development toward the strong economy . The enumeration of green investment proclaimed the robust influence of international investors, but the sudden influence of Covid-19 affected the economies globally. Many elements that prevail domestically influence economic growth, requiring a quantitative approach to develop the economy and enumerate the economic growth factors. Literature viably discussed the eminence of quantitative changes that arrived during Covid-19, which affected the economies of developed countries and the economy of Vietnam. Socially responsible countries manage to have the dominant source of green investments to establish the socioenvironment and economic growth (Gajjar, 2020;Nguyen et al., 2021). Different investing activities also dominate in the literature, inserting a significant impact on the developing economies, which strives for positive economic growth (Zhuang et al., 2021). Although some alternate projects designed locally also induce positive reflections on the establishment of the economy, international influence poses different aspects (Othman et al., 2020;Sadiq et al., 2020). Globally, the changes affect economic conditions, but the discovery of some green investment elements helps different countries to enable secure means of economic growth. 
Production and manufacturing also got affected during the tenure of Covid-19, but the influence of green investment covered some extent of losses. The study initiated the pandemic losses and found significant spans to recover the prospect amounts, but Covid-19 shut down all aspects of covering losses. A green investment where applied have positive means of creating safe measures to recover the losses, but Covid-19 influence continued with various aftershocks. Therefore, some demographic areas also denote the effective placement of green investments, but green investment's relativeness includes a higher rate of interest (Han et al., 2020). Literature used various green investment elements, which helped economies enable safe means of rising economic growth. Some effective measures are usually required to save green investment, which could help counter the losses initiated during the Covid-19 pandemic. Usually, the green investment includes numerous elements through which the treatment of green investment could be initiated. Therefore, mutual funds, traded funds, various securities, and bonds were dominated as important elements that could contribute a significant portion to the economy's affected area. At a higher rate of interest, the economy could subsequently manage to strive for the situation but could face various restrictions after attaining green investment from developed countries (Taghizadeh-Hesary & Yoshino, 2020). The themes of investments look forward with a certain objective, but the robust influence of Covid-19 inserted an immense impact on the economies globally. The quality of financial contribution discusses green finance factors, where the element includes green finance, which dominates with certain influences on economic development and economic growth (Chien et al., 2021g;Hsu et al., 2021). Literature mentioned the nomination of various elements that prevail among the green finance for a certain term, which in a short span discussed all possible aspects of economic growth (Abbas, 2020; Kennedy et al., 2020;Vermeulen et al., 2020). The study used various variables prevailing in financial development; therefore, green finance inserted a significant impact on the economy with positive sources and elements with Covid-19. It depends on the premise's management, which elaborates the green investment as a study of an increasingly advanced variable in economic growth. Green finance dominated in the literature as an important element in the economy, which poses various impacts on the economy and societies (Lawrence, 2020;Li et al., 2021c). The output of any country is usually enumerated by the effective investment in green finance, which significantly accounts for many people (Bassino & Van Der Eng, 2020;Masitenyane et al., 2020). Prominently, the gross domestic product is divided by its population, which endorses the eminent influence over economic growth. Although the population dominates as an important element, the prevalence of green finance and Covid-19, which insert a role in the economy. The study used green finance for positive perspectives of the economy, but some negative aspects were also highlighted in the literature that could disrupt economies. Societies and industries are significantly indulged with economic conditions; therefore, green finance provides easing measures for sustainable efforts and the rise of economic growth (Ncube & Koloba, 2020;Žmuk et al., 2020). 
Totaling the amounts of services and goods output referred by the industries contributes a significant portion to the economic growth and includes the eminent portion of green finance with effects of Covid-19. The population endorses positive contribution to the economic growth, but the empowerment of population through employment and various opportunities establishes numerous positive measures. Significantly, the borders of various countries also induce neighbors' economic growth with trade dominance and various other factors (Matthews & Mokoena, 2020;Soto, 2020). Literature mentioned the green finance as widely stated green finance inclusiveness of better standard for investing measures to the economic growth. Covid-19 affected the standard of living and also relates to green finance, which contributes to effective economic growth. A large difference prevails among the well cultured and well-developed societies, while the financial means also dominate among them with vast factors (Kim & Kang, 2020). Numerous consequences also prevail among the relationship by emphasizing the ratio of urban and rural income, which tend to be affected by the rise of Covid-19. The countries utilized different channels to enable safe measures, but the aftereffects of Covid-19 widely disrupted the economies with various other green finance effects. It depends on the rural and urban residents to manage their contribution and the standard of living toward the economies; therefore, the wide disparity of income poses possible effects (Tsunga et al., 2020;Y Huang & Zhang, 2020). Certain gaps also prevail with the significance of rural and urban income where discrimination of people and needs widely states economic growth. Some restrictions also state the improving economy measures, which have a dominant impact on economic development, while economic growth links with green finance. An artificial means of green finance also imports various elements to the economy but emphasizes the rural and urban income related to economic growth. The study used a wide indication of green finance by stating variation of rural and urban income gaps due to inequalities (Chen et al., 2020). Therefore, inflation and foreign capital directly link with urban and rural income except for trade openness. The study used various financial indicators to analyze the dominant impact on economic growth; therefore, green finance was prominently found indulged in financial indicators. The influence of green finance is widely discussed in the literature using immense variables that were found to be proactive in the quality of economic growth Sadiq et al., 2021a;Weeks et al., 2020). The innovative approach used by various authors mentioned green finance's eminence with per capita expenditure and Covid-19, which strongly communicates with the economy. Some interpretations of green finance elements positively denote the significance of financial needs required by the economies of various countries Sadiq et al., 2021b). Prospects of financial development include significant measures with the relevance of governing environmental green finance. Literature specified the importance of Covid-19 and green finance in economic growth, which asserted the improved measures with a great role in the progress and development of the economy (Chai et al., 2020;Nawaz, Seshadri et al., 2021;Xueying et al., 2021). Different analyses with the respective driving economic growth interpreted green finance's eminence with relativeness of per capita expenditures. 
The mainstreams of economic growth usually dominate the growth of industries, which requires green finance eminence through a variety of essentials inserting a positive role. The determination of per capita expenditures denotes green finance's significance, which helps various countries, whether fully or partially, with various expenses (Chien et al., 2021i;Liu et al., 2021;Marjit et al., 2020). The study used expenditures as a dominant means of introducing the elements of green finance, which impacts economic growth through various other elements that prevail in the literature. Most countries calculate the modes of expenditures, which posed the dependence of population and contribution (Bayarbat & Li, 2020;Chien et al., 2021h). Usually, economic growth not only relates to certain elements of green finance, but a variety of other factors also impacts economic growth, and Covid-19 tends to be some of them. Different green finance tools include the expenditures, which totals the per capita expenditures where people assessed based on lending toward them. Various divisions among the calculation of per capita expenditures state that the influential means of smaller and larger perspectives belong to the green financing measurements H. Sun et al., 2020). Literature established the significance of well-being, which accounts for the percentage of human capabilities as dominant means of income distribution and lending distribution is established. The study used the elements of per capita expenditures by focusing on population significance, which describes the rise and sustainability Setyawan et al., 2020). Per capita expenditure denotes various means of economic performance, which emphasized the comparison of various living standards and various environments of the economy. Research methods This study is going to examine the impact of green investment and green loans on the economic growth of Vietnam. The control variables that have been used in the study include the per capita GDP, urban-rural income ratio, and per capita expenditures. The data have been obtained from the central bank of Vietnam and WDI from 1986 to 2019. The estimation equation of the current study is as follows: This study has adopted economic growth as a predictive variable that is measured as the GDP growth annual percentage, while the green loan is measured as the ratio of green loans to loans and used as a predictor. In addition, green investment has also been used as a predictor and measured by the ratio of investment in environmental protection projects to GDP. However, the urban-rural income ratio is also used as a predictor along with per capita expenditures that are measured as the government per capita expenditures (annual %). These measurements are highlighted in Table 2. The selection of the appropriate model among the pooled OLS regression, error vector model, and ARDL has been made by using the unit root test. If the probability values of all the variables are less than 0.05 at the level that means all variables are stationary at the level and pooled OLS regression is an appropriate model for the study. However, if some variables are stationary at the level and some variables are stationary at the first difference, then ARDL is an appropriate model for the study. If all the variables are stationary at first difference, then the error vector model is an appropriate model for the study. 
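Because the study names its variables explicitly, the long-run relationship it estimates can be written in a standard form. The following is an assumed textbook specification consistent with economic growth (EG) as the dependent variable and green loans (GL), green investment (GINV), the urban-rural income ratio (URIR) and per capita expenditures (PCE) as regressors; it is a sketch, not necessarily the authors' exact equations:

\[ EG_t = \beta_0 + \beta_1\, GL_t + \beta_2\, GINV_t + \beta_3\, URIR_t + \beta_4\, PCE_t + \varepsilon_t \]

and the corresponding ARDL error-correction form used for the short-run dynamics can be sketched as

\[ \Delta EG_t = \alpha_0 + \sum_{i=1}^{p}\phi_i\,\Delta EG_{t-i} + \sum_{j=0}^{q}\left(\theta_{1j}\,\Delta GL_{t-j} + \theta_{2j}\,\Delta GINV_{t-j} + \theta_{3j}\,\Delta URIR_{t-j} + \theta_{4j}\,\Delta PCE_{t-j}\right) + \lambda\, ECT_{t-1} + u_t \]

where \(ECT_{t-1}\) is the lagged error-correction term derived from the long-run relationship and a significantly negative \(\lambda\) indicates convergence toward the long-run equilibrium.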
The stationarity of each individual variable has been examined separately with the unit root test. This study also executed the ARDL approach to examine the links among the variables; for the execution of the ARDL model, the present study first checked co-integration with the help of the ARDL bound test. For the short-run nexus among the variables, the current study estimated the error correction model. Results and discussion This study presents the descriptive statistics first in the findings section; the statistics show that the average EG index is 0.719, while the average green loan ratio is 0.748. In addition, the average green investment is 32.457. Finally, the average urban-rural income ratio is 1.534, while the per capita expenditure on average is 0.857. These values are given in Table 3. The statistics related to the constructs under study are also presented in a graph showing the mean and standard deviation along with the minimum and maximum values of the constructs. These values are highlighted in Figure 2. This study has also shown the variables in the form of scatterplots. The scatterplots of each construct, EG, GL, GINV, URIR and PCE, are given in Figure 3. The nexus among the variables is also shown in the correlation matrix, which indicates a positive association among green loans, green investment, and economic growth, and a negative one between URIR, per capita expenditures, and economic growth. These links are shown in Table 4. The results of the ADF test show that GL is stationary at level, while EG, GINV, URIR, and PCE are stationary at first difference, which is an indication that ARDL is the appropriate model for the study. These values are highlighted in Table 5. The ARDL bound test shows that the calculated F-statistic is 25.73, which is larger than the critical values at the five percent level of significance. This shows that co-integration exists in the model. These values are shown in Table 6. The results show that green finance, along with all control variables such as the urban-rural income ratio and per capita expenditures, has a positive association with economic growth in the short run. These links are shown in Table 7. The results also show that green finance, namely green loans and green investment, has a positive association with economic growth in the long run. However, the urban-rural income ratio and per capita expenditures have an insignificant positive association with economic growth in the long run. This nexus is also shown in Table 8. Robustness analysis The results of the robustness analysis show that GINV has a positive and significant nexus with the EG of Vietnam in the short run, while GL has a positive but insignificant nexus with EG in the short run and URIR has a negative and significant nexus with EG in the short run. These values are mentioned in Table 9. The results of the robustness analysis also show that GINV has a positive and significant association with economic growth, while GL has a positive but insignificant nexus with the EG of Vietnam in the long run. However, URIR has a negative and insignificant nexus with EG in the long run. These values are mentioned in Table 10.
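The same workflow, unit-root testing followed by ARDL estimation, can be reproduced with standard econometric software. The following minimal Python sketch (illustrative only; the file name, column names and lag orders are assumptions, and the study's actual data are not reproduced here) uses the adfuller and ARDL routines from statsmodels.

import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import ARDL

# Hypothetical data layout: annual observations 1986-2019 with the study's variables as columns.
df = pd.read_csv("vietnam_green_finance.csv", index_col="year")  # assumed file and column names

# Step 1: ADF unit-root test on each series in levels; repeat on first differences as needed.
for col in ["EG", "GL", "GINV", "URIR", "PCE"]:
    stat, pvalue, *_ = adfuller(df[col].dropna())
    print(f"{col}: ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")

# Step 2: ARDL estimation with EG as the dependent variable and the green-finance
# and control variables as regressors; the lag orders here are illustrative, not the study's.
model = ARDL(df["EG"], lags=1, exog=df[["GL", "GINV", "URIR", "PCE"]], order=1)
result = model.fit()
print(result.summary())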
Discussions The results have revealed that the policy to grant green loans even during the prevalence of pandemic Covid-19, which is a great threat to the natural environment, proves to be in a positive relationship with the economic growth rate. These results are approved by the research investigation of Z. Li et al. (2018), which gives deep insight into the inclusion of green practices in loan policies and their consequences. It implies that the consideration of the government regulators' environmental requirements by financial institutions while granting loans improves the country's economic growth. Our study results are approved by the research of Cui et al. (2018), who focus on the fact that the economic growth rate is accelerated by the green development in the individual financial organizations. The grant of loans for the investment in eco-friendly projects on favorable conditions helps organizations carry healthy programs, leading to environmental, operational, and financial performance, which contribute to the country's economic growth. The study results reveal that green investment has a positive association with the country's economic growth. The study examines that the prevalence of Covid-19 investment in the projects launched for the achievement of the country's environmental goals accelerates the rate of economic growth. These results are approved by the studies of Liao (2018), which show that the encouragement of investment in environmental projects, especially at the time when there is a severe threat to the country's natural resources and residents, proves to be beneficial for the economy to grow both within the country and in the world economy. These results are also in line with the previous studies of H. Sun et al. (2019), which imply that encouraging the implementation of green practices in the investment policies plays a significant role in economic growth. Different means of tradition in Vietnam have an effective approach, but the financial means of introducing eminent elements over the economic growth attained much importance. To drive toward positive economic growth, various elements of significant opportunities could not be overlooked. Although plenty of resources play a positive role in establishing economic growth, the eminence of quality economic development seeks the significance of green finance . A new term of green finance provided all means to ease the economic growth where certain improvements are also required to achieve the task. The population of Vietnam reached 96.5 million during 2019 from 60 million since 1986 and is also expected to expand to 120 million in 2050. The capital index of Vietnam also increased from 0.66 to 0.69 from 2010 to 2020. Vietnam's credit grants induced 24.94 billion US dollars through 210 operations with some commitments of active projects of US $7.41 billion. Vietnam's economy indulged in the projects of various plans, which are dominantly inserted in the surrounding sites. Therefore, the certified green loans of Vietnam value $186 million to be used by various energy projects. Different financing companies invested a significant portion of green loans amounting to $27.9 million, including various commercial banks. Some syndicated loans also contributed $148.8 million, where the infrastructure loan induces $9.3 million for the prevailing projects. Many countries invest in lower economies with the significance of the better value of exchange rates to earn profits and oblige the economies. 
Therefore, the significant impact of Covid-19 in Vietnam has emerged the lower rates of inducing investments. During Covid-19, various investments from developed countries were also restrained due to low margins and a high probability of losses (An & Pivo, 2020;Flores & Chang, 2020). It has become clear by the study findings that the ratio of urban-rural income is considerably linked with the country's economic growth when it is observed during the prevalence of Covid-19 pandemic. These results match with the studies of Su et al. (2015). These studies indicate that both the urban and rural are significant to the country as both contribute to the land's economic growth. The metropolitan area gives better quality products, so the higher urban-rural income ratio means better economic growth. These results also agree with the previous studies of Wang et al. (2016), which show that the increasing ratio of urban-rural income indicates the more and better quality of finished goods production, which determine the better GDP rate, which is the most crucial indicator of country's economic growth. The study findings have indicated that the per capita expenditure in a country has a considerable positive association with the economic growth rate. For the stimulation of economic activities within a country, the country's population's utilization of final production is necessary. Thus, the more the per capita expenditures, the more is the stimulation in the economic activities and better is the economic performance. These results are in line with the previous studies of Quy (2017), which indicate that the expenditures incurred to the individuals on the acquisition of basics and facilities bring stability in the economic activities and overall economic growth as they set a higher production rate and greater marking, which are considered the essentials of economic growth. These results are also approved by the research studies of Zhao et al. (2017), which indicate that in the countries where the per capita expenditure is more in quantity, the economy grows at a rapid pace. Economic development considers the strong relationship between economic growth and green finance. In particular, green finance exerts positive impacts on economic growth where the possibility of green loans also dominates with an effective role in a green economy (Aymerich & Herce, 2020; S. . The green environment domains include the gap in people's wealth, economic growth, and per capita with the eminence of green finance. Ultimately, green loan factors contribute to effective policies and positive measures to enable economic growth, but Covid-19 influence both of them. The study believed that the importance of various variables related to green finance dominates with the significance of financial projects that support economies' growth. Some sort of negative influences is seen with the prevalence of green elements and Covid-19, which do not relate directly but indirectly pose an impact on economic growth. The literature described the principles of green loans denoting the issuing authorities, specifically in some countries. Various countries founded indulged in providing green loans to developing countries with various loan instruments by indulging specific properties and contracts as collateral securities. The transition of green loans and Covid-19 influences economic growth. Finance includes various types that help sustain the economies of developing countries usually affected by various shocks, especially Covid-19. 
Green loans have established significant measures to provide a friendly environment for economic growth, which usually involves various appeals and projects (Sgammini & Muzindutsi, 2020; Xu & Li, 2020). Conclusion and policy implications The study analyzes the impact of Covid-19, a contagious disease, on economic development. The prevalence of this threatening pandemic adversely affects the well-being of all segments of the population. It removes equal opportunities for growth from all population segments, as economic activities are jammed and the living standards of different groups of people are disturbed. The study tests the role of the element of environmental concern, incorporated in other areas, in maintaining the status of economic development in Vietnam, and it demonstrates its significance with statistics from Vietnam. Favorable green loan policies protect the natural environment and reduce the negative impacts of the Covid-19 pandemic on the economic growth rate. The well-being of all population segments is not much affected by a contagious pandemic like Covid-19 when investment is strongly encouraged in projects whose purpose is to remove pollutants from the natural environment and to ensure its protection. A high rate of per capita GDP helps the economy remain stable, since increased GDP means a strong financial position of economic institutions, allowing them to afford the costs of eco-friendly technology, techniques and resources and of training and educating employees to create environmental awareness. The study has also investigated the urban-rural income ratio and per capita expenditures and checked their role in removing the adverse impacts of the severe Covid-19 pandemic on economic growth. The present study has much theoretical significance and is a distinctive addition to the past literature on economic development. Reviews such as Easterly and Levine (2016) usually discuss economic development as a whole, but this study addresses economic growth specifically. This study analyzes the growth of economic activities, the nature of economic conditions, and their stability. The study examines the economic growth of the Vietnamese economy while it suffers from a contagious disease like Covid-19, a severe threat to societies and economies across the world. The study states that green finance and different financial factors affect the status of economic growth. The paper considers green finance, namely green loans and green investment, and economic factors such as per capita GDP, the urban-rural income ratio, and per capita expenditures while examining the movement in economic development. As far as the study's empirical implications are concerned, it has vital significance for economists in examining and accelerating the rate of economic growth. The study elaborates on how economic growth can be enhanced with the proper implementation of green practices in the credit policies of financial institutions and the issuance of financial securities. With higher per capita GDP and higher per capita expenditures, the growth rate of the country increases. Besides, with the achievement of a higher urban-rural income ratio, the economic growth rate can be increased. This study guides regulators to increase their focus on green finance, which could increase economic growth in the country.
Limitations and future directions There are several limitations in our research that should be addressed by upcoming studies for better results. The most important among them is that the analysis covers a limited set of financial areas where green practices are now being implemented. The scope of the study is therefore limited, and it should be expanded by future authors with the addition of further financial areas having green implications. Moreover, the study applies a particular research sampling technique to analyze the material in support of the study. Although this quantitative research technique has been used with all necessary steps undertaken, future scholars are recommended to turn their attention to other sampling techniques for better results. The source used in this research for the collection of the required quantitative material supporting the concepts of this work is a single one; for better support, data should be acquired from more sources. Our research on the effects of introducing green practices in finance and different economic areas on the financial position has been carried out in a situation where the Covid-19 pandemic is at its peak and the nature and health of people are threatened. Thus, for reconfirmation of the same results, these variables and their mutual association should be analyzed under usual social and economic conditions so that the accuracy of the results can be maintained for any time period. Only Vietnam's economy is addressed by this research study, and Vietnam has a culture, geography, and social behavior different from other countries, which has a profound impact on economic activities and the growth rate. Thus, the study, conducted considering the economic factors of Vietnam's economy, is not equally suitable for other countries that have their own culture, geography, and social conduct. When replicating the research results, future scholars should take a standard economy for generalizability. Highlights (1) Currently, Covid-19 has a rigorous impact on economic growth all over the world due to the long-term economic lockdown. (2) Green finance has significantly improved the environmental conditions that ultimately improve the economic conditions. (3) Policymakers should increase their focus on green finance in this lockdown situation to improve the economic condition, especially in Vietnam.
9,269.4
2021-01-01T00:00:00.000
[ "Environmental Science", "Economics" ]
The method of smelting metals from charge with low metal content in a furnace with bottom electrodes and the first laboratory studies A liquid-phase smelting reduction process for the utilization of ferrous waste in an electric furnace with stationary bottom electrodes by carbothermal reduction is proposed and tested under laboratory conditions. The principal possibility of melting and reducing iron with the heat generated in the slag bath between the bottom electrodes is shown. Introduction Under conditions of high and unstable prices for the scrap and crude iron used in steelmaking, there is a real need to use metallized charge obtained not only from ore materials (DRI) but also from industrial ferrous waste (scale, slag, dust and sludge) as a partial replacement for traditional charge materials. The technogenic waste accumulated earlier and continuously generated at metallurgical enterprises makes the development of new energy-efficient, ecologically safe recycling technologies a pressing task. Attempts to utilize technogenic ferrous waste in the form of non-metallized briquettes, with the addition of a carbon-containing reducing agent and a binder, as a partial replacement (up to 10-15%) for traditional scrap in the electric arc furnace charge have had no notable success [1]. The briquettes were destroyed because of their low mechanical strength, and the iron oxides passed almost completely into the slag. Therefore, metallization of the oxide materials appears to be a necessary preliminary step in their recycling. In world practice, DRI is produced by solid-phase ("Midrex", "Energiron-HYL", "ITmk3", etc.) and liquid-phase processes ("Corex", "OxyCup", etc.) [2], and the latter are more productive and less critical to the quality of the initial charge. High capital intensity is a characteristic feature of all the mentioned technologies, which provide an economically acceptable payback period only at an annual output of no less than 200 thousand tons of product. In the context of a typical mini-mill, what is needed is a technology for recycling ferrous waste in smaller amounts corresponding to the volume of its formation (20-80 thousand tons per year), with a payback period of no more than 1-2 years. None of the mentioned processes meets these requirements, but "ITmk3" and "OxyCup" seem the most acceptable for the given purpose. The "ITmk3" process [3] is the production of iron in the form of melted pellets without using coke and sinter. In a rotary hearth furnace, pellets made from metal waste, low-grade coal and a binder are charged as a thin layer. The pellets are heated to 1350-1450 °C in the furnace; the iron is quickly reduced, carburized and partially melted. The whole process takes about 10 min. The output is granular iron with an iron content of 95-97%. The energy consumption is 13.5 GJ/ton of product. The disadvantages of the process are its dependence on natural gas, which is used for heating the furnace, and a high throughput threshold, dictated by an acceptable payback period, of not less than 200 thousand tons per year.
The "OxyCup" process [4] consists of the reduction and melting, in a shaft furnace, of briquettes containing technogenic ferrous waste, a reducing agent in the form of coke breeze, lime as a flux, and a binder. The product is liquid cast iron containing up to 4% carbon. The fuel and coke consumption are 1100-1200 Nm³ and 200-300 kg per 1 ton of product, respectively. The process duration is about 1.5 hours. The disadvantages of the technology are the costly production of briquettes (mixer, press, drying) and the lack of flexibility regarding output: no less than 200 thousand tons per year, which is dictated by an acceptable payback period.

We conducted laboratory studies of a two-stage process [5] for recycling iron-bearing waste in an electric arc furnace, in which the first stage, pre-reduction of compacted solid waste by the off-gases, is carried out in a chamber installed under the roof of the electric arc furnace, and the second stage is liquid-phase reduction in the molten bath to produce crude iron or a steel semi-product. However, problems associated with the variability of the flue-gas reduction potential and with combining stages of different physical nature in one unit significantly impede achieving high energy efficiency.

The method description and first laboratory results The new technology of ferrous waste utilization [6,7] is based on a liquid-phase melting-reduction process in a universal electric furnace with stationary bottom electrodes, which supplies energy directly into the molten slag bath. The process uses carbothermic reduction of oxide materials, including technogenic waste, in the furnace (Fig. 1), where the main part of the energy necessary for the process is released in the slag layer as Joule heat. The technological scheme allows additional heat sources (exothermic reactions in the bath due to oxygen blowing) and partial utilization of the sensible heat of the off-gas for preheating and pre-reducing the initial materials in the feeding shaft. When direct current is used, the reduction is further promoted by electrolysis.

Tests 1-3 were performed on alternating current with the following electrical parameters: voltage 68-75 V, current 1.75-1.90 kA; test 4 was performed on direct current under the same electrical parameters to assess the role of electrolysis in the reduction process. The following technogenic ferrous wastes were tested as initial charge: rolling-mill scale in tests 1 and 4, blast-furnace sludge in test 2, and electric-arc-furnace melting dust in test 3. Low-grade coal in an amount of 25% of the waste was used as the reducing agent in tests 1-3, and lime in an amount of 10% of the waste as the slag former. In test 4, no carbonaceous reductant or oxygen blowing was used. The final product of the process was liquid crude iron.

According to the data, the overall specific energy consumption of the laboratory unit was 12-13 MJ/kg of product, which corresponds to the figure for the ITmk3 process.
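As a quick, hedged consistency check on the energy figures quoted in this section, the short Python sketch below converts the ITmk3 consumption of 13.5 GJ per ton into MJ per kg and compares it with the 12-13 MJ/kg measured on the laboratory unit; the numbers are taken from the text and the variable names are illustrative only.

```python
# Rough unit check on the specific energy consumption figures quoted above.
# Values come from the surrounding paragraphs; names are illustrative only.

itmk3_gj_per_ton = 13.5            # ITmk3 process, GJ per ton of product
lab_unit_mj_per_kg = (12.0, 13.0)  # laboratory furnace with bottom electrodes

# 1 GJ/ton = 1000 MJ / 1000 kg = 1 MJ/kg, so the numerical value carries over.
itmk3_mj_per_kg = itmk3_gj_per_ton * 1000.0 / 1000.0

print(f"ITmk3 process:   {itmk3_mj_per_kg:.1f} MJ/kg")
print(f"Laboratory unit: {lab_unit_mj_per_kg[0]:.0f}-{lab_unit_mj_per_kg[1]:.0f} MJ/kg")
```

The check confirms that the two figures are directly comparable once expressed in the same units.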
The experiments showed increased refractory wear near the bottom electrodes. Similar problems exist in DC EAFs with a bottom electrode [8]. The increased wear is connected with vortex flows of liquid metal caused by the spatial non-homogeneity of the electromagnetic field and of the temperature. Therefore, the most important objective is to estimate the factors influencing refractory wear near the bottom electrodes, in particular electro-vortex effects. The liquid-phase smelting-reduction process in an electric furnace with bottom electrodes was tested in a laboratory unit holding 50 kg of liquid metal. Operating parameters and results are given in Table 1.

Conclusions A liquid-phase carbothermal melting and reduction process in an electric furnace with bottom electrodes has been developed and tested in the laboratory. The process is not critical to the choice of charged ferrous materials: slag, dust and sludge of blast-furnace and steelmaking origin. Using the bottom electrode can significantly reduce investment costs for equipment and improve the energy efficiency and environmental safety of the process. The possibility of producing molten iron from the main types of technogenic iron-bearing metallurgical wastes is shown. The yield is 71-94%, the power consumption 2.12-2.29 kWh/t of product, and the total energy consumption 12-13 MJ/kg of product. The possibility of recovering copper from copper-smelting slag is also shown.

The reported study was funded by RFBR, according to the research project No. 16-38-60172 mol_а_dk.

Table 1. Operating data and results of the experimental heats.
1,603
2018-10-30T00:00:00.000
[ "Engineering", "Materials Science" ]
Synergistic Information Transfer in the Global System of Financial Markets Uncovering dynamic information flow between stock market indices has been the topic of several studies which exploited the notion of transfer entropy or Granger causality, its linear version. The output of the transfer entropy approach is a directed weighted graph measuring the information about the future state of each target provided by the knowledge of the state of each driving stock market index. In order to go beyond the pairwise description of the information flow, thus looking at higher order informational circuits, here we apply the partial information decomposition to triplets consisting of a pair of driving markets (belonging to America or Europe) and a target market in Asia. Our analysis, on daily data recorded during the years 2000 to 2019, allows the identification of the synergistic information that a pair of drivers carries about the target. By studying the influence of the closing returns of drivers on the subsequent overnight changes of target indexes, we find that (i) Korea, Tokyo, Hong Kong, and Singapore are, in order, the most influenced Asian markets; (ii) the US indices SP500 and Russell are the strongest drivers with respect to the bivariate Granger causality; and (iii) concerning higher order effects, pairs of European and American stock market indices play a major role as the most synergetic three-variable circuits. Our results show that the Synergy, a proxy of higher order predictive information flow rooted in information theory, provides details that are complementary to those obtained from bivariate and global Granger causality, and can thus be used to get a better characterization of the global financial system. Introduction Many countries have equity markets. The overall performance of these markets is typically summarized by stock market indices. Economic globalization has interconnected financial markets of different countries. Market movements, and economic and financial news generated or associated with a specific market, are almost immediately transmitted to the other markets by professional information providers, media, and social media, making the global financial system highly interconnected. The influence of foreign investment on emerging countries has been investigated thoroughly in [1], and it has been shown that emerging and mature markets are much more integrated today than in the past. The influence among Pacific Rim countries has been explored in [2]. Moreover, it is well established that information flows among the market indices of financial markets located in Europe, America, and Asia. We focus on the information flow originating in the European and American markets and impacting on Asian financial markets. Data We consider seventeen stock market indices that belong to three groups: 4 indices of American stock markets (labeled as AM), 7 indices of European stock markets (labeled as EU), and 6 indices of Asian stock markets (labeled as AS). In particular, the indices of American stock markets are the SP500, Russell 2000, IBOVESPA, and TSX, while the Asian targets are the KOSPI 200, Nikkei 225, Hang Seng, Straits Times, SSE Composite, and BSE Sensex. Daily data covering the period from the beginning of 2000 up to 31 December 2019 have been collected from Quandl [38] and Yahoo Finance [39]. During the investigated years, several financial crises occurred.
It is worth mentioning (i) the crash of the dotcom bubble, whose burst lasted from March 2000 to October 2002, with effects until the beginning of 2003; (ii) the Global Financial Crisis of 2007-2009, which had such a global impact as it spread over most countries like an unstoppable domino; (iii) the European sovereign debt crisis, which started in correspondence with the August 2011 stock market fall, when the European stock markets suffered heavy losses due to fears about the world economic outlook; and (iv) the Chinese stock market turbulence of 2015-2016. As so many events occurred, each with its own peculiarities, we decided to adopt a window approach, selecting non-overlapping windows. Varying the width of the time windows, we realized that the synergistic information flow appears to be localized in time rather than being a continuous exchange of information. However, the application of PID requires a suitable number of samples; therefore, a proper localization of the events (when synergistic dependencies occur) is unfeasible; indeed, in order to have statistical reliability of the results, the window cannot be too small. In this paper, we show the results for windows corresponding to one calendar year, a conventional and easily interpretable duration, and leave to further research the development of methods to deal locally in time with the issue of synergistic information flow. Denoting by p_i^C(t) the closing price of the i-th stock market index on day t, daily logarithmic returns are calculated for every market index as x_i(t) = ln p_i^C(t) − ln p_i^C(t − 1). An analogous quantity is built from the opening price p_i^O(t) of the i-th stock market index to obtain the overnight change, defined as the difference between the logarithm of the opening price on day t and the logarithm of the closing price on the previous day, y_i(t) = ln p_i^O(t) − ln p_i^C(t − 1). We verify that both the x and y variables can be treated as stationary variables by performing an Augmented Dickey-Fuller test. The property of stationarity is a necessary condition for the information theoretical analyses that we apply in this work. In this type of study, it is very important to properly take into account the time zone effect [26]. The selected stock markets operate in different time zones, and the opening and closing times of the markets differ accordingly. In order to avoid the bias due to the time zone effect, in this paper we analyze only the information flowing in circuits made of three markets, where the target belongs to the AS group and the two drivers belong to the AM and/or EU groups. Moreover, we concentrate on the prediction of the overnight change of Asian markets based on the knowledge of the European and American markets' closing prices on the day before. This choice ensures that the target variable cannot receive information from the driving variables on the same day. Consequently, we label stock market indices of the AS group as the y(t) time series, while markets in the AM and EU groups are associated with the x(t) time series. In other words, we study the predictive information flow in pairwise directed interactions x_α → y_γ and triplet circuits {x_α, x_β} → y_γ, where α and β are in the AM or EU groups, and γ is in the AS group. It is worth mentioning that, due to the timing of market openings, the same analysis would not be possible for circuits with drivers in Asia and Europe and the target in America; indeed, the European markets close when the American markets are already open, so the informational character of such triplets would not be comparable with that of the America/Europe → Asia circuits.
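To illustrate how the two return series defined above can be computed in practice, the following Python sketch builds the close-to-close log return x(t) and the overnight change y(t) from daily opening and closing prices; the prices below are made-up placeholders, not the data used in this study.

```python
import numpy as np

def return_series(close: np.ndarray, open_: np.ndarray):
    """Close-to-close log returns x(t) and overnight changes y(t).

    close[t], open_[t]: closing and opening prices of one index on day t.
    x(t) = ln close(t) - ln close(t-1)
    y(t) = ln open(t)  - ln close(t-1)   (previous close to next open)
    """
    x = np.log(close[1:]) - np.log(close[:-1])
    y = np.log(open_[1:]) - np.log(close[:-1])
    return x, y

# Toy example with made-up prices
close = np.array([100.0, 101.5, 100.8, 102.3])
open_ = np.array([ 99.5, 100.9, 101.2, 101.0])
x, y = return_series(close, open_)
print(x)  # daily log returns, length 3
print(y)  # overnight changes, length 3
```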
Particular care has been taken to cope with the problem of missing records arising, e.g., when stock markets are closed in some countries due to national holidays. To cope with this, for each triplet of stock market indices, the samples for the estimation of causalities have been constructed taking just the days where data for all three indexes were available, as well as the records of the following day. Methods In the next sections, we provide details of the adopted prediction measures and statistical methodology. Bivariate Granger Causality Let us consider the overnight change series of the i-th stock market index, y_i, as the target variable (i = 1, ..., m), and the daily return series of the j-th stock market index, x_j, as the driver variable (j = 1, ..., n), measured in a given time window; in this work, m = 6 is the number of AS markets and n = n_1 + n_2, where n_1 = 4 AM markets and n_2 = 7 EU markets are considered. Then, calling ε(y_i|Y_i) the mean squared error of the prediction of y_i(t) on the basis of its past states Y_i(t) = {y_i(t − 1), y_i(t − 2), ..., y_i(t − l)}, and ε(y_i|Y_i, X_j) the mean squared error of the prediction on the basis of both Y_i(t) and X_j(t) = {x_j(t − 1), x_j(t − 2), ..., x_j(t − l)}, the bivariate Granger causality (GC) is defined as the statistic [40] G_{j→i} = ln[ε(y_i|Y_i)/ε(y_i|Y_i, X_j)]. Repeating this evaluation for each i ∈ {1, ..., m} and j ∈ {1, ..., n}, we obtain the pattern of bivariate causality from any AM/EU stock market index to any AS index in the given window. Global Granger Causality In the present study, we consider an overall measure of predictive information transfer between two groups of variables, see [22], computing the global Granger causality (GGC) from the European and American markets to the Asian markets as G = (1/m) Σ_{i=1}^{m} G_i, with G_i = ln[ε(y_i|Y_i)/ε(y_i|Y_i, X)], where ε(y_i|Y_i, X) is the mean squared error of the prediction of y_i(t) on the basis of both its past states Y_i(t) and the past states of all the variables related to AM/EU stock market indices, collected in the vector X(t) = [X_1(t), ..., X_n(t)]. For each AS stock market index y_i, G_i measures the information provided by all the AM/EU stock market indices {x_1, ..., x_n} about the future value of y_i; the result is then averaged over the m AS indexes to get the global measure. As far as the order of the model is concerned, we fix l = 1, as we are interested here in the immediate influence, namely, in how the present record influences the state of the next record. Because of the high efficiency of information spreading in financial systems, and due to the stylized fact that the autocorrelation of index returns vanishes in a very short period of time, the choice l = 1 is robust against spurious causality due to longer memory effects. A similar choice has been adopted in several studies dealing with transfer entropy, Granger causality and global transfer entropy [12,14,16,18,19,21,27]. In order to evaluate the squared prediction errors leading to the Granger causality measures, we use linear models. Moreover, to assess the statistical validity of the GGC, we estimate its value expected under the null hypothesis of independence by using surrogate random time series of the target stock market indices, obtained with the method described in [41].
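As a concrete reading of the definitions above, the minimal Python sketch below estimates the bivariate Granger causality at lag l = 1 with ordinary least squares, following the linear-model assumption stated in the text; it is an illustration only, not the authors' implementation, and the series are synthetic.

```python
import numpy as np

def bivariate_gc(y: np.ndarray, x: np.ndarray) -> float:
    """Bivariate Granger causality x -> y with lag l = 1, via linear OLS.

    G = ln( mse(y | past y) / mse(y | past y, past x) )
    """
    y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]

    def mse(design: np.ndarray) -> float:
        coef, *_ = np.linalg.lstsq(design, y_t, rcond=None)
        resid = y_t - design @ coef
        return float(np.mean(resid ** 2))

    ones = np.ones_like(y_t)
    mse_restricted = mse(np.column_stack([ones, y_lag]))
    mse_full = mse(np.column_stack([ones, y_lag, x_lag]))
    return float(np.log(mse_restricted / mse_full))

# Synthetic example: y is partly driven by the previous value of x
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros_like(x)
y[1:] = 0.5 * x[:-1] + 0.3 * rng.standard_normal(499)
print(bivariate_gc(y, x))  # clearly positive
print(bivariate_gc(x, y))  # close to zero
```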
To assess the significance of the estimated causalities, we generate surrogate data of the target time series by using the Iterative Amplitude Adjusted Fourier Transform (IAAFT) algorithm of Schreiber and Schmitz [42], and we consider the empirical GGC value compatible with zero when we cannot reject the hypothesis that such a value is generated by a randomized version of the empirical data. The threshold for statistical significance used in our tests is 0.05. This validation procedure is the most common in the Granger causality literature and requires stationarity of the processes. However, other choices are possible, e.g., bootstrap. Partial Information Decomposition The partial information decomposition (PID) is obtained starting from the GC from a pair of drivers, comparing these values with the GC from single drivers, as detailed in [33]. Hereafter, we briefly recall the approach. The GC from the pair of stock market indices x_j and x_k to the target stock market index y_i (j, k ∈ {1, ..., n}, j ≠ k; i ∈ {1, ..., m}) is defined as G_{jk→i} = ln[ε(y_i|Y_i)/ε(y_i|Y_i, X_j, X_k)]. The information decomposition is then defined by the relations G_{j→i} = U_{j,i} + R_{jk→i}, G_{k→i} = U_{k,i} + R_{jk→i}, and G_{jk→i} = U_{j,i} + U_{k,i} + R_{jk→i} + S_{jk→i}, where the pairwise GC G_{j→i} is given by (3) and a similar expression holds for G_{k→i}. In the above definitions, the terms U_{j,i} and U_{k,i} quantify the components of the information about the target y_i which are unique to the sources x_j and x_k, respectively, thus reflecting contributions to the predictability of the target that can be obtained from one of the sources when it is treated as the only driver, and not from the other source. Each of these unique contributions sums with the redundant information R_{jk→i} to yield the information transfer between one source and the target according to classic Shannon information theory. The term S_{jk→i} is called Synergy and refers to the ability of the two sources to provide additional information about the target when they are considered jointly as information sources. In other words, it is the information that is obtained only by using the two sources x_j and x_k together, and not by considering them alone. As, in the above definitions, four quantities are unknown and just three equations are at hand, the decomposition into unique, redundant, and synergistic parts cannot be fixed within classical information theory alone. To obtain the Synergy measure S_{jk→i}, we adopt the prescription of [43] and take as the Redundancy R_{jk→i} the minimum between the two pairwise Granger causality indices G_{j→i} and G_{k→i}. Furthermore, in the PID analysis, in order to assess the statistical significance of the empirical values of the Synergy, we generate surrogates of the target time series by the IAAFT algorithm [42], and we consider compatible with a zero value those values of the Synergy for which the null hypothesis of uncoupled processes is not rejected at the 0.05 statistical threshold. Pairwise and Global Granger Causality We start by considering the pairwise GC of the data set, with the aim of finding those stock market indices with the strongest influence on the group of Asian stock market indices, as well as the most influenced Asian stock market index. The results are shown in Figure 1. We also compute the Global Granger causality from the 11 American and European stock market indices to each of the Asian stock market indices. In Table 1, we summarize the values of GGC for each target Asian stock market index as a function of the calendar year. The GGC results are similar to the results obtained for the pairwise GC.
In fact, GGC from the American and European stock market indices is detected for all calendar years for the KOSPI 200, Hang Seng, Nikkei 225, and Straits Times indices, whereas for the SSE Composite and BSE Sensex the estimated GGC values are lower than for the other indices, and for some years they are so low that they turn out to be compatible with the values observed for a randomized version of the target (in this case they are not reported in the Table). In summary, this measure also shows that the Shanghai Stock Exchange and the Bombay Stock Exchange are less affected than the other considered Asian stock market indices by the performances of the selected American and European stock market indices.

Table 1. Global Granger causality (GGC, Equation (4)) for each calendar year. For each Asian stock market index target, the GGC is computed by using the 11 American and European stock market indices investigated in this paper. The values in parentheses represent the 5th and the 95th percentiles of the GGC computed for the IAAFT surrogates. Values labeled with an asterisk are compatible with the values obtained for surrogate data. When this occurs, we say that the estimation of the variable is not statistically validated in the considered time window.

Synergy In this section, we present our results about the Synergy associated with pairs of stock market indices located in America and Europe when they are used to predict the overnight return of the Asian stock market indices. In Figure 2, for each Asian stock market index considered as a target, we show the value of the Synergy for all possible n(n − 1)/2 = 55 triplets of stock market indices involving the Asian target and the n = 11 European and American stock market indices. Moreover, for this metric we evaluate with a statistical test whether the measured Synergy is statistically distinct from zero (in these tests we again use 0.05 as the statistical threshold). When the test rejects the null hypothesis that the estimated Synergy is compatible with the one obtained by using a randomized target, we call the time window used to compute the Synergy a validated window (i.e., a time window where the estimated Synergy is statistically distinct from a value obtained with a randomized target). In Figure 3, we show a scatter plot of the average Synergy associated with each triplet of stock market indices, averaged over all 20 time windows, as a function of the number of validated windows. The panel shows that the average Synergy has an approximately quadratic relation with the number of validated windows, suggesting that the triplets whose synergistic influence occurs for more years are also those characterized by the highest values of the Synergy. In the scatter plot, the color of the dots is chosen according to the target stock market index. All the results shown so far refer to the overnight return (the difference between the logarithm of the target index at the opening and the logarithm of the target index at the closing of the previous day). We have also computed the Synergy for the daily return of the target index (i.e., the difference between the logarithm of the target index at the closing and the logarithm of the target index at the closing of the previous day); this was obtained by using, for the Asian stock market indexes, the variables x_i and X_i in place of y_i and Y_i in Equations (3) and (5) during the PID analysis. The results obtained for all 330 triplets are shown in Figure 4.
The figure shows the average Synergy of each triplet both when the target is the overnight return of the Asian stock market index (blue bars in the figure) and when the target is the daily return (close-to-close, red bars in the figure). The average Synergy for the overnight returns is larger than the average Synergy for the daily returns for the large majority of triplets. In fact, only a few exceptions are observed, and they occur for low values of the average Synergy. This observation suggests that the information associated with the closing prices of the European and American stock markets is incorporated into the price dynamics of the Asian stock market indices immediately after the opening of the Asian markets. Discussion and Conclusions The use of causality analysis of stock market index returns in the description of the information flow occurring in the global financial system has received growing attention in recent years. In the present work, we provide the first study of the information flow detected among groups of three stock market indices over a period of twenty years. Our analysis is performed by investigating the so-called Synergy, an information theoretical measure that has been recently introduced to account for multivariate interaction effects in causality analysis. The global financial system operates worldwide in all continents. For this reason, the activity of different markets is scheduled at different time intervals due to the presence of different time zones. To investigate information flows compatible with the sequence of market activities occurring worldwide in a trading day, we consider information flows that have targets in Asian markets and driving signals in the preceding European and American markets. Moreover, in the regression models we choose to focus on a specific form of information flow. We consider the driving signals as originated by the closing returns (close-to-close daily returns) of the European and American stock market indices, and we consider as the target signal the subsequent overnight change of the Asian stock market return (previous close to open). To our knowledge, this is the first time this choice has been adopted in a causality analysis of stock market indices. Our results show that predicting the overnight return leads to higher causality metrics with respect to those that one would obtain predicting the close-to-close returns, see Figure 4. We interpret this result as evidence that markets digest quite quickly the information flow originated in stock markets of other countries. In addition to the Synergy investigation, we also estimated the bivariate GC and the GTE between driving and target indices. Concerning the bivariate GC analysis, we find that the most important sources of information are the US indices SP500 and Russell 2000, whereas the most influenced Asian stock market indices are the KOSPI 200, NIKKEI 225, HSI, and STI (especially from American stock market indices). For these indices, the information flow is detected for all years. The information flow from European stock market indices is less pronounced and more localized in time, especially during the years of the financial crisis that originated in 2007-2008 and turned into the sovereign debt crisis in 2011-2012. These years of crisis are also the years when an information flow is observed for the SSE Composite Index and the BSE Sensex Index. A similar temporal pattern is observed for the GTE, with the highest values of this metric observed during the years 2007-2012.
Coming back to the Synergy results, it is worth noting that the highest values of Synergy are observed when the two stock market index drivers involve a European and an American stock market index (see Figure 2). Moreover, the Synergy seems more relevant when a mid-sized American market is involved. In fact, the highest values of the Synergy are observed when the driving indices include IBOVESPA or TSX, although their influence is rather low with respect to SP500 and Russell 2000 in the bivariate GC analysis. It is well known that both China and Japan hold huge investments in Brazil, and our analysis suggests that information about the main Brazilian stock market index is informative for HSI and NIKKEI 225, jointly with information from other European stock market indices. Our results thus show that the Synergy, i.e., a proxy of higher order information flow rooted in information theory, provides details that are complementary to those obtained from the bivariate and global GC analysis, and can thus be used to get a better characterization of the global financial system. In order to better characterize higher order dependencies of the global financial market, further research will be devoted to developing methodologies capable of estimating the synergistic information flow locally in time; indeed, the synergistic information flow appears to have a localized nature rather than resembling a nearly continuous exchange of information.
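For completeness, the partial information decomposition used throughout this analysis can be reproduced from the three Granger causality estimates in a few lines. The sketch below is a minimal illustration of the minimum-redundancy prescription described in the Methods, not the authors' code, and the GC values passed to it are made up.

```python
def pid_from_gc(g_j: float, g_k: float, g_jk: float) -> dict:
    """Partial information decomposition from Granger causalities.

    g_j  : GC from driver j alone to the target
    g_k  : GC from driver k alone to the target
    g_jk : GC from the pair (j, k) to the target

    Redundancy R is taken as min(g_j, g_k); the unique terms and the
    Synergy then follow from the three defining equations.
    """
    redundancy = min(g_j, g_k)
    unique_j = g_j - redundancy
    unique_k = g_k - redundancy
    synergy = g_jk - unique_j - unique_k - redundancy
    return {"U_j": unique_j, "U_k": unique_k, "R": redundancy, "S": synergy}

# Example with illustrative (made-up) GC values
print(pid_from_gc(g_j=0.08, g_k=0.05, g_jk=0.20))
# -> U_j = 0.03, U_k = 0.00, R = 0.05, S = 0.12
```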
5,143.6
2020-09-01T00:00:00.000
[ "Economics", "Business" ]
Therapeutic Potential of Pretreatment with Exosomes Derived from Stem Cells from the Apical Papilla against Cisplatin-Induced Acute Kidney Injury Acute kidney injury (AKI) is the most serious side effect of treatment with cisplatin in clinical practice. The aim of this study was to investigate the therapeutic effect of exosomes derived from stem cells from the apical papilla (SCAPs) on AKI. The medium from a SCAP culture was collected after 2 d of culture. From this, SCAP-derived exosomes (SCAP-ex), which were round (diameter: 30–150 nm) and expressed the characteristic proteins CD63 and CD81, were collected via differential ultracentrifugation. Rat renal epithelial cells (NRK-52E) were pretreated with SCAP-ex for 30 min and subsequently treated with cisplatin to induce acute injury. The extent of oxidative stress, inflammation, and apoptosis were used to evaluate the therapeutic effect of SCAP-ex against cisplatin-induced nephrotoxicity. The viability assay showed that the survival of damaged cells increased from 65% to 89%. The levels of reactive oxygen species decreased from 176% to 123%. The glutathione content increased by 78%, whereas the levels of malondialdehyde and tumor necrosis factor alpha (TNF-α) decreased by 35% and 9%, respectively. These results showed that SCAP-ex can retard oxidative stimulation in damaged kidney cells. Quantitative reverse transcription–polymerase chain-reaction gene analysis showed that they can also reduce the expression of nuclear factor-κβ (NF-κβ), interleukin-1β (IL-1β), and p53 in AKI. Further, they increased the gene expression of antiapoptotic factor B-cell lymphoma-2 (Bcl-2), whereas they reduced that of proapoptotic factors Bcl-2-associated X (Bax) and caspase-8 (CASP8), CASP9, and CASP3, thereby reducing the risk of cell apoptosis. Introduction Acute kidney injury (AKI) is a clinical condition characterized by a rapid decline in renal excretion function within hours or days, as well as the accumulation of nitrogen-containing metabolites (e.g., creatinine and urea) and other clinically unmeasurable metabolic waste. Other common clinical manifestations include reduced urine output, accumulation of metabolic acids, and increased potassium and phosphate concentrations [1,2]. There are many causes of AKI, including kidney surgery, medication, and sepsis. Nephrotoxic drugs are an important cause of AKI. Severe AKI is associated with uremia, which results in the deterioration of kidney function and other organs throughout the body. These effects may lead to chronic kidney disease, a permanent requirement for hemodialysis, or death. There are many commonly prescribed drugs (e.g., aminoglycosides, angiotensin-converting enzyme inhibitors, and calcineurin enzyme inhibitors, nonsteroidal anti-inflammatory drugs) for this situation [3]. Losartan, a selective competitive angiotensin type II receptor antagonist, reduces the risk of progression to chronic kidney disease and death [4]. In this study, we used losartan as a positive control for comparison with the therapeutic effect of SCAP-ex. Cisplatin is a platinum-containing antineoplastic agent that is effective against a variety of human tumors, including bladder, head and neck, lung, ovarian, and testicular cancer [5]. Cisplatin forms coordination bonds with DNA bases, thereby deforming the DNA structure. This leads to inhibition of DNA replication and transcription, and induces apoptosis. 
However, treatment with cisplatin has been linked to the development of drug resistance and numerous undesirable side effects (e.g., severe kidney problems, myelosuppression, allergic reactions, reduced immunity to infections, peripheral neuropathy, gastrointestinal distress, hemorrhage, and tinnitus or hearing impairment) [6]. These side effects, particularly nephrotoxicity, are important factors limiting the efficacy of cisplatin in cancer therapy. Elucidation of the mechanism underlying cisplatin-induced nephrotoxicity may assist in protecting the kidneys and reducing the toxicity of cisplatin [7]. Following its entry into renal tubular cells, cisplatin activates signaling pathways such as sirtuin 1 (SITR1), mitogen-activated protein kinase (MAPK), p53, and reactive oxygen species (ROS), leading to cell death [8]. Moreover, it induces the production of tumor necrosis factor alpha (TNF-α), which aggravates the inflammatory response and accelerates the necrosis of renal tubular cells. In addition, cisplatin can also cause ischemic necrosis of the renal vascular structure, which results in further deterioration of renal function. These effects lead to the clinical development of AKI. Oxidative stress is the one of most important mechanisms involved in cisplatin toxicity. Besides DNA damage, it triggers cell death by apoptosis. In addition, the apoptotic pathways (extrinsic and intrinsic) involved in this process differ with the type of cancer [9]. Stem cells are undifferentiated primitive cells in the human body that can independently divide and proliferate, as well as differentiate into a variety of cells with specific functions. Stem cells from the apical papilla (SCAPs) are a type of mesenchymal stem cells (MSCs) [10]. SCAPs in the root apex of adolescent permanent teeth are characterized by a high potential for proliferation, self-renewal capacity, and low immunogenicity [11]. At present, only stem cells from human exfoliated deciduous teeth are used to treat kidney damage [12]. However, recent studies have found that stem cells can repair damaged tissues through paracrine and anti-inflammatory mechanisms [13]. Following damage to local tissue, MSCs are activated and secrete a variety of cytokines, forming a local microenvironment that is conducive to tissue repair. The secretory function of stem cells gives rise to the concept of cell-free therapy. Compared with stem-cell transplantation, cell secretions can avoid problems such as tumor formation and immune rejection. Exosomes are lipid bilayer vesicles, 30-150 nm in size, that are secreted by the cell when the multivesicular bodies in the cell fuse with its plasma membrane. Exosomes are equivalent to the cytoplasm enclosed in a lipid bilayer and are rich in nucleic acids (such as microRNA, long noncoding RNA, circular RNA, messenger RNA, and transfer RNA), protein, cholesterol, and other typical cytoplasm contents [14,15]. The characteristic surface proteins of exosomes generally include CD63, CD81, CD9, tumor susceptibility 101 (TSG101), and heat shock 27 kDa protein 1 (HSP27). In terms of their biological function, exosomes can deliver various receptors, proteins, genetic material such as DNA and microRNA, and lipids to target cells. The target cells incorporate these rich inputs from exosomes in three main ways: receptor-ligand interaction, direct fusion with the plasma membrane, and endocytosis, which allows the exosome contents to protect and heal the cell [16]. 
An increasing number of studies have shown that the exosomes of MSCs exert a protective effect against AKI [17]. In a model of ischemia-/reperfusioninduced AKI, adipose MSC-derived exosomes offered protection from the AKI [18]. In addition to AKI, the therapeutic efficacy and protective mechanism of exosomes in various kidney diseases/disorders have also been explored, including lupus nephritis (LN), other glomerular diseases, diabetic nephropathy (DN), polycystic kidney disease (PKD), renal fibrosis and chronic kidney disease (CKD) [16,19,20]. Therefore, the aim of this study was to investigate the therapeutic effect of pretreatment with SCAP-derived exosomes (SCAP-ex) against AKI. SCAP-ex were obtained by ultracentrifugation and identified via Western blotting. NRK-52E cells were pretreated with SCAP-ex, followed by induction of nephrotoxicity by treatment with cisplatin. The results confirmed the protective effect of SCAP-ex against inflammation, oxidative stress, and apoptosis. Characterization of SCAPs and SCAP-ex SCAPs were isolated via enzyme digestion. The subcultured cells obtained from a single colony after isolation were gradually cultivated as an adherent monolayer and had a fibroblast-like morphology ( Figure 1A). These SCAPs exhibited positive expression of the important MSC markers CD90 (99.78%) and CD73 (99.55%), as well as the endothelial progenitor marker CD105 (79.63%). However, they expressed minimal levels of the hematopoietic markers CD34 (0.09%) and CD45 (0.2%) ( Figure 1B). Western blotting was used to analyze the expression levels of the characteristic proteins in SCAP-derived exosomes. As shown in Figure 2A, SCAP-ex expressed the proteins characteristic of exosomes, such as CD63 and CD81. Figure 2B shows the morphology and size of exosomes, as assessed using TEM. The structure of the exosomes was round (diameter: 30-150 nm), which is consistent with the typical structure of exosomes. The particle size distribution of SCAP-ex was analyzed via NTA. As shown in Figure 2C, the median particle size was 147 nm, the average was 168.2 nm, and the mode was 116.3 nm. These results show that SCAP-ex, with respect to marker proteins, morphology, and particle size, show characteristics typical of all exosomes. SCAP-ex Protected Cisplatin-Treated NRK-52E Cells The results of the MTT analysis showed that the survival of NRK-52E cells treated with SCAP-ex at different concentrations was close to 100% ( Figure 3A), suggesting that SCAP-ex are nontoxic to NRK-52E cells. Additionally, treatment with cisplatin under different concentrations reduced the survival of NRK-52E cells in a dose-dependent manner ( Figure 3B). After treatment with 15 µM cisplatin, approximately 60-80% of NRK-52E cells survived; hence, this concentration was used for subsequent experiments. NRK-52E cells were pretreated with 40 or 80 µg/mL SCAP-ex or 10 µM losartan, and subsequently treated with 15 µM cisplatin. The cell survival rate increased significantly ( Figure 3C), and 80 µg/mL SCAP-ex exerted the strongest preventive and protective effects against cisplatin-induced injury, as was the case for losartan. Thus, a concentration of 80 µg/mL SCAP-ex was used in subsequent experiments. SCAP-ex Promoted Cellular Vitality in Cisplatin-Treated NRK-52E Cells In the cell-vitality assay we investigated the levels of free thiols by determining apoptosis and cell survival rates. 
Normal cells and early/late apoptotic cells exhibit different levels of fluorescence in response to the VitaBright-48 reagent, as shown in Figure 4A. Normal and early apoptotic cells are located in the lower right and left corners of the image, respectively. In the late stages of apoptosis, the cell membrane is severely damaged; thus, the PI reagent can enter the nucleus and emit fluorescence there. Therefore, late apoptotic cells can be distinguished using this reagent, and they can be seen in the upper area of the image. According to the quantitative results shown in Figure 4B, after pretreatment with SCAP-ex, the proportion of cisplatin-treated NRK-52E cells that were normal increased from 82.0% to 92.9%, while that of the early and late apoptotic cells decreased from 18.0% to 7.1%. Following pretreatment with losartan, the proportion of normal cells increased from 82.0% to 91.4%, while that of the early and late apoptotic cells decreased from 18.0% to 8.6%. These results revealed that SCAP-ex reduce the cell apoptosis caused by cisplatin, and their protective effect is similar to that of the clinical drug losartan. SCAP-ex Improved ROS/GSH/MDA/TNF-α Expression in Cisplatin-Treated NRK-52E Cells Oxidative stress is an important mechanism in AKI. During acute cellular injury, mitochondria release excess ROS, increasing cell damage. After pretreatment of NRK-52E cells with SCAP-ex, the ROS levels in cisplatin-treated NRK-52E cells decreased from 176.4% to 123.2%. Following pretreatment with losartan, the ROS levels decreased from 176.4% to 114.1% (Figure 5A). These results showed that SCAP-ex can reduce the ROS content of cisplatin-treated NRK-52E cells and thus retard oxidative stress in the cells. Using the enzyme-cycling method with GSH reductase, we quantified GSH with the calibration curve y = 0.0789x + 0.3491 (R² = 0.984). After pretreatment with SCAP-ex, the levels of GSH in cisplatin-treated NRK-52E cells increased from 9.3 to 16.27 µmol/g protein (Figure 5B). Pretreatment with losartan yielded similar results and increased the levels of GSH. MDA and thiobarbituric acid (TBA) react at high temperature (90-100 °C) and under acidic conditions to form the MDA-TBA complex, which can be measured using the colorimetric method at 530-540 nm wavelength, using the calibration curve y = 0.0005x + 0.0593 (R² = 0.9879). After pretreatment with SCAP-ex, the levels of MDA in cisplatin-treated NRK-52E cells decreased significantly, from 3.36 to 1.94 µmol/g protein (Figure 5C). Pretreatment with losartan yielded similar results. We performed ELISA at 450 nm to quantitatively measure the TNF-α content of the cells. Following pretreatment of NRK-52E cells with SCAP-ex, the levels of TNF-α in cisplatin-treated NRK-52E cells decreased from 22.13 to 20.31 pg/mL (Figure 5D). Pretreatment with losartan exerted a similar effect. Quantitative RT-PCR Assay of Cisplatin-Treated NRK-52E Cells Pretreated with SCAP-ex Genes whose expression changed in response to the damage caused by cisplatin-induced inflammation and apoptosis in NRK-52E cells included nuclear factor-κβ (NF-κβ), interleukin-1β (IL-1β), B-cell lymphoma-2 (Bcl-2), Bcl-2-associated X (Bax), p53, caspase-8 (CASP8), CASP9, and CASP3. Following pretreatment of NRK-52E cells with SCAP-ex and induction by cisplatin, the cisplatin-induced fold changes in the expression levels of NF-κβ and IL-1β decreased from 4.61 to 2.07 and from 3.68 to 1.28, respectively.
Pretreatment with losartan resulted in only slight decreases in expression ( Figure 6A,B). The fold change in the gene-expression level of Bcl-2 increased from 1.47 to 3.81, whereas that of Bax decreased from 1.84 to 1.23; of p53 decreased from 2.47 to 0.8; and of CASP8, CASP9, and CASP3 decreased from 2.54 to 0.61, from 2.16 to 1.09, and from 2.38 to 1.36, respectively. Pretreatment with losartan resulted in weaker effects than pretreatment with SCAP-ex ( Figure 7A-F). Discussion The therapeutic concept of cell-free therapy is based on the secretory function of stem cells. Compared with stem-cell transplantation, treatment with cell secretions can greatly reduce the risk of the tumor development, immune rejection, and ethical concerns [21,22]. SCAPs are a newly discovered type of MSCs that reside in the apical papilla of immature permanent teeth. When separated from the tip of the root and minced, they are found to contain the MSCs-associated positive markers CD73, CD90, CD105, and negative markers CD34 and CD45, indicating that they are not of hematopoietic origin. Therefore, SCAPs comprise a unique undifferentiated stem-cell lineage and are characterized by a high proliferative potential, self-renewal capacity, and low immunogenicity [13]. Based on our Western blotting analysis, SCAP-ex expressed the characteristic exosomal proteins CD63 and CD81. We observed the morphology and structure of SCAP-ex under TEM and found them to be round and cup-shaped, with a fingerprint-like membrane structure. NTA analysis and TEM observation showed that the particle size of these SCAP-ex was consistent with typical exosomes (i.e., 30-150 nm) [23]. These results show that isolated SCAP-ex may be a novel therapeutic agent for endodontics and other regenerative-medicine applications [24,25]. Previous studies have demonstrated that stem cells from different sources or derived exosomes exert therapeutic effects against cisplatin-induced AKI [17,26]. However, most are derived from bone marrow or umbilical-cord blood and are difficult to obtain; in contrast, it is relatively easy to obtain dental-cusp stem cells. At present, SCAP-ex are only used in dentine-pulp complex regeneration [25] and craniofacial soft-tissue regeneration [27]. Their therapeutic effect on AKI induced by cisplatin is unknown. The results of the MTT assay showed that the viability of NRK-52E cells pretreated with SCAP-ex at different concentrations was close to 100%. These findings demonstrated that these exosomes were not toxic to the cells. Treatment with different concentrations of cisplatin reduced cell viability in a dose-dependent manner. After pretreatment with SCAPex or losartan, however, cell viability following cisplatin treatment was significantly greater, demonstrating that SCAP-ex exerts protective effects on the proliferation of cisplatin-treated NRK-52E cells. The results of this study demonstrate that pretreatment with SCAP-ex could reduce cisplatin-induced ROS levels by 53.2%. Moreover, the levels of GSH increased by 78%, those of MDA decreased by 35%, and those of TNF-α decreased by 9%. These effects reduced the cisplatin-induced oxidative stress. Oxidative stress promotes cisplatin-induced nephrotoxicity via the accumulation of intracellular ROS [28]. This enhancement of the cells' antioxidative capacity may be the underlying mechanism through which SCAP-ex inhibited cisplatin-induced renal cell apoptosis. 
The rates of early and late cell apoptosis were reduced by 0.4% and 10.6%, respectively, whereas overall cell viability was increased by 10.9%. The improvement in cell apoptosis was better than that observed after pretreatment with losartan. Cisplatin also increases the concentration of TNF-α (a pleiotropic cytokine with endocrine, paracrine, and autocrine proinflammatory effects) and affects IL-1β and NF-κβ [29]. After pretreatment with SCAP-ex, the expression levels of the inflammation-related factors NF-κβ and IL-1β in cisplatin-induced NRK-52E cells decreased by 52.9% and 76.3%, respectively. The antiapoptotic gene Bcl-2 was upregulated, whereas the proapoptotic genes Bax, p53, CASP8, CASP9, and CASP3 were downregulated. Because of the imbalance of Bcl-2 and Bax in the mitochondrial membrane, cytochrome c is released. This stimulates the downstream production of CASP9 and CASP3, and eventually induces apoptosis [30][31][32]. The pathway of inflammation and apoptosis caused by cisplatin revealed by our results is shown in Figure 8. Following entry into cells, cisplatin causes DNA damage and generates ROS, NF-κβ, IL-1β, and other factors, causing inflammation, inducing the production of p53, and stimulating Bax/Bcl-2 to act on mitochondria. These effects result in the production of CASP9 and CASP3, leading to cell apoptosis. TNF-α and CASP8 can also lead to apoptosis. Stem cells are used to treat damaged cells through paracrine therapy, and exosomal cell-free therapy may ameliorate cisplatin-induced nephrotoxicity through the inhibition of oxidative stress, the inflammatory response, and apoptosis. Isolation and Identification of SCAPs Apical papilla tissue was separated from the tip of the root following a standard operating procedure approved by the institutional review board of the Dental Clinic of Kaohsiung Medical University of Taiwan (KMUHIRB-SV(I)-20210047). SCAP cells were separated from apical papilla tissue using enzymatic digestion in a solution of 3 mg/mL collagenase type I (Worthington Biochemical, Lakewood, NJ, USA) and 4 mg/mL dispase (Sigma-Aldrich, St. Louis, MO, USA) for 1 h at 37 °C. Single-cell suspensions were obtained by passing the digested samples through a strainer (70 µm) (Falcon; Thermo Fisher Scientific, Waltham, MA, USA). Cell suspensions were centrifuged at 1000 rpm for 10 min, and single cells were resuspended in culture medium composed of α-Minimal Essential Medium (Gibco; Thermo Fisher Scientific) with 10% fetal bovine serum and 1% antibiotic-antimycotic solution (Gibco; Thermo Fisher Scientific). Subsequently, the cells were incubated at 37 °C with 5% CO2. Subculturing was performed for ordinary cultures, and the medium was changed once every 2 d. The identification of isolated SCAPs was based on the presence of cell-surface molecules such as CD34, CD45, CD73, CD90, and CD105 and was performed using a FACSCalibur flow cytometer (BD Bioscience, Massachusetts, MA, USA). Isolation and Identification of SCAP-ex When the SCAP cells reached 80% confluence, they were cultured in serum-free medium containing 1% bovine serum albumin for 2 d. The medium supernatant was collected and centrifuged at 4 °C at 2000× g for 10 min and then 12,000× g for 30 min to eliminate dead cells and large cell debris. The supernatant was filtered with a 0.22 µm filter and then ultracentrifuged at 100,000× g for 70 min (L-90K, Beckman, Indianapolis, IN, USA).
The pellet was washed with 1 mL of phosphate-buffered saline (PBS) and centrifuged again at 100,000× g for 70 min (MAX-E, Beckman, USA). The precipitate was suspended in PBS and quantified using the bicinchoninic acid (BCA) protein assay. The presence of proteins characteristic of SCAP-ex was examined using Western blotting. The morphology of the exosomes was observed via transmission electron microscopy (TEM, FEI Tecnai G2 F20 S-TWIN, Bellaterra, Spain). Finally, the size of the exosomes was measured via nanoparticle tracking analysis (NTA, NanoSight LM 10-HS, Malvern Panalytical, Malvern, UK). Cellular Viability and Protection following SCAP-ex/Cisplatin Treatment NRK-52E cells were purchased from the Bioresource Collection and Research Center (BCRC, Hsinchu, Taiwan). Cells were cultured as monolayers in Dulbecco's modified Eagle's medium containing 5% bovine calf serum and 1% penicillin/streptomycin in a humidified incubator with 5% CO2 at 37 °C. The culture medium was replaced every 2-3 d. We assessed the cytotoxic effect of cisplatin or SCAP-ex on NRK-52E cells using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. This is a colorimetric assay based on MTT (Sigma-Aldrich, Missouri, MO, USA) that measures cell metabolic activity as an indicator of cell viability, proliferation, and cytotoxicity. For this purpose, 1 × 10^5 cells/well were seeded in 96-well plates containing culture medium with different concentrations of cisplatin or SCAP-ex. Cells in each well were treated with 5 mg/mL MTT at 37 °C for 4 h. The medium was removed and the formazan was solubilized in dimethyl sulfoxide. The metabolized MTT was measured based on the optical density at 570 nm using a spectrophotometer (Multiskan FC, Thermo Fisher Scientific, Massachusetts, MA, USA). To evaluate the protective effect of SCAP-ex, NRK-52E cells were pretreated with 40 or 80 µg/mL SCAP-ex or 10 µM losartan for 30 min. Next, 0, 5, 10, or 15 µM cisplatin was added, and the cells were incubated for another 24 h, after which an MTT assay was performed. Cellular Vitality Assay NRK-52E cells (1 × 10^5 cells/well) were pretreated with 80 µg/mL SCAP-ex or 10 µM losartan in 24-well plates for 30 min. Next, 15 µM cisplatin was added and the cells were cultured for another 24 h. The cultured cells were then centrifuged at 1500 rpm for 5 min. The supernatant was removed and the precipitate was suspended in 200 µL of PBS. The suspension was mixed continuously while 800 µL of cold ethanol was added, and the cells were stored at −20 °C overnight. The thiol levels were measured using VitaBright-48 (VB48, ChemoMetec A/S, Lillerod, Denmark) and the number of dead cells was determined using propidium iodide (PI, ChemoMetec A/S, Denmark), according to the instructions provided by the manufacturer of the cell-vitality kit (ChemoMetec A/S, Denmark). The results were obtained using a NucleoCounter NC-250 and the NucleoView software (ChemoMetec A/S, Denmark). ROS Assay NRK-52E cells (1 × 10^5 cells/well) were pretreated with SCAP-ex or losartan and subsequently treated with cisplatin, as described in Section 2.4. The medium was removed, and the cultured cells were rinsed with PBS. Dichlorofluorescin diacetate (DCFDA, 100 µL) was added to each well, and the cells were incubated at 37 °C for 45 min in the dark. Subsequently, the DCFDA solution was removed, and the cells were rinsed twice with 100 µL of PBS per well.
The samples were placed in a fluorescence enzyme-linked immunosorbent assay (ELISA) reader (Multiskan FC, Thermo Fisher Scientific, Massachusetts, MA, USA) to evaluate excitation/emission at 485/535 nm. Glutathione (GSH) and Malondialdehyde (MDA) Assay NRK-52E cells (1.5 × 10^5 cells/well) were pretreated with SCAP-ex or losartan and subsequently treated with cisplatin, as described in Section 2.4. The medium was removed, and the cultured cells were rinsed twice with PBS. The cells were lysed, deproteinized, and centrifuged at 10,000× g for 15 min at 4 °C. The supernatants were collected for the assay. To determine the activity of GSH S-transferase, 50 µL of sample and standard solution were added to a 96-well plate. After the addition of 150 µL of assay cocktail and shaking for 25 min in the dark, the absorbance at 405 nm was measured using an ELISA reader (Multiskan FC, Thermo Fisher Scientific, USA). For the MDA assay, cell lysates were centrifuged at 10,000× g for 15 min at 4 °C. Whole homogenates were collected for the assay. Sample and standard solution (100 µL) were added to an Eppendorf tube along with 100 µL of trichloroacetic acid (10%) assay reagent, and the solution was mixed well. Next, 800 µL of color reagent was added, and the Eppendorf tube was placed in boiling water for 1 h. After 10 min of incubation on ice, the Eppendorf tubes were centrifuged at 1600× g at 4 °C. Each sample and standard was loaded into 96-well assay plates. The absorbance of each well was measured at 540 nm using an ELISA reader (Multiskan FC, Thermo Fisher Scientific, USA). MDA concentrations were calculated according to the instructions provided by the manufacturer (Cayman Chemical, Ann Arbor, MI, USA). TNF-α Assay NRK-52E cells (1 × 10^4 cells/well) were pretreated with SCAP-ex or losartan and subsequently treated with cisplatin, as described in Section 2.4. The culture medium was collected to determine the concentration of TNF-α secreted from NRK-52E cells using an ELISA MAX Deluxe set kit (BioLegend, San Diego, CA, USA). The absorbance at 450 nm was measured using a microplate reader and the concentrations were determined using standard curves. Reverse Transcription-Polymerase Chain Reaction (RT-PCR) NRK-52E cells (5 × 10^5 cells/well) were pretreated with SCAP-ex or losartan and subsequently treated with cisplatin, as described in Section 2.4. The total RNA from the NRK-52E cells was extracted using Trizol reagent (Ambion®, Life Technologies™, Carlsbad, CA, USA) for 10 min, and the RNA quantity was determined using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, USA). Complementary DNA (cDNA) was synthesized from 1000 ng of RNA using the iScript cDNA synthesis kit (Bio-Rad, Hercules, CA, USA) and a thermocycler (5 min at 25 °C, 20 min at 46 °C, 1 min at 95 °C). RT-PCR was performed using the iQuant SYBR Green Supermix (Bio-Rad, Hercules, CA, USA) according to the instructions provided by the manufacturer. Initial denaturation was performed at 95 °C for 15 min, followed by 60 cycles of 95 °C for 15 s and 60 °C for 60 s. The relative gene-expression fold change was determined using the 2^−ΔΔCT method; the levels were normalized to those of the housekeeping gene glyceraldehyde-3-phosphate dehydrogenase (GAPDH). The PCR primers used were as follows: Bax (Forward: 5′-TGG AGC TGC AGA GGA TGA TTG
Statistical Analysis All experiments were performed in triplicate on different samples.
All data are presented as means ± standard deviation (SD). Statistical analysis was performed by ANOVA using IBM SPSS Statistics Base 30U software. Statistical comparisons were performed, and p values smaller than 0.05 were considered significant. Conclusions In summary, the results of this study indicate that SCAP-ex may protect NRK-52E cells from cisplatin-induced AKI by inhibiting oxidative stress, inflammation, and cell apoptosis. SCAPs are neural-crest-derived MSCs, and SCAP-ex contain numerous bioactive compounds that are key factors in stem-cell paracrine action. SCAP-ex, rather than MSCs themselves, may thus be useful as a cell-free therapeutic strategy against AKI induced by chemotherapeutic agents.
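As an aside on the quantification described in the RT-PCR section, the relative expression fold change obtained with the 2^−ΔΔCT method can be reproduced in a few lines; the Python sketch below is only an illustration of the calculation, and the Ct values are invented, not measurements from this study.

```python
def fold_change(ct_gene_treated: float, ct_ref_treated: float,
                ct_gene_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(gene) - Ct(reference gene, e.g. GAPDH), per condition
    ddCt = dCt(treated) - dCt(control)
    fold change = 2 ** (-ddCt)
    """
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_control = ct_gene_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Illustrative Ct values only (not data from this study)
print(round(fold_change(24.1, 18.0, 26.3, 18.1), 2))  # > 1: up-regulated
print(round(fold_change(27.9, 18.0, 26.3, 18.1), 2))  # < 1: down-regulated
```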
5,926.2
2022-05-01T00:00:00.000
[ "Biology" ]
Distinguishing mutants that resist drugs via different mechanisms by examining fitness tradeoffs There is growing interest in designing multidrug therapies that leverage tradeoffs to combat resistance. Tradeoffs are common in evolution and occur when, for example, resistance to one drug results in sensitivity to another. Major questions remain about the extent to which tradeoffs are reliable, specifically, whether the mutants that provide resistance to a given drug all suffer similar tradeoffs. This question is difficult because the drug-resistant mutants observed in the clinic, and even those evolved in controlled laboratory settings, are often biased towards those that provide large fitness benefits. Thus, the mutations (and mechanisms) that provide drug resistance may be more diverse than current data suggest. Here, we perform evolution experiments utilizing lineage tracking to capture a fuller spectrum of mutations that give yeast cells a fitness advantage in fluconazole, a common antifungal drug. We then quantify fitness tradeoffs for each of 774 evolved mutants across 12 environments, finding that these mutants group into classes with characteristically different tradeoffs. Their unique tradeoffs may imply that each group of mutants affects fitness through different underlying mechanisms. Some of the groupings we find are surprising. For example, we find that some mutants that resist single drugs do not resist their combination, while others do. And some mutants in the same gene have different tradeoffs than others. These findings, on one hand, demonstrate the difficulty in relying on consistent or intuitive tradeoffs when designing multidrug treatments. On the other hand, by demonstrating that hundreds of adaptive mutations can be reduced to a few groups with characteristic tradeoffs, our findings may yet empower multidrug strategies that leverage tradeoffs to combat resistance. More generally speaking, by grouping mutants that likely affect fitness through similar underlying mechanisms, our work guides efforts to map the phenotypic effects of mutation. Introduction How many different molecular mechanisms can a microbe exploit to adapt to a challenging environment? Answering this question is particularly urgent in the field of drug resistance because infectious populations are adapting to available drugs faster than new drugs are developed (Centers for Disease Control and Prevention, 2019; Ventola, 2015). Understanding the mechanistic basis of drug resistance can inform strategies for how to combine existing drugs in a way that prevents the evolution of resistance (Andersson and Hughes, 2010; Melnikov et al., 2020; Pinheiro et al., 2021). For example, one strategy exposes an infectious population to one drug (Drug A) knowing that the mechanism of resistance to Drug A makes cells susceptible to Drug B (Baym et al., 2016; Hall et al., 2009; Pál et al., 2015; Roemhild et al., 2020). Problematically, these multi-drug strategies perform best when all mutants that resist Drug A have the same tradeoff in Drug B (Figure 1A). If there are multiple different mechanisms to resist Drug A, some of which lack this tradeoff, treatment strategies could fail (Figure 1B), and they sometimes do (Abel zur Wiesch et al., 2014; Grier et al., 2003; Scarborough et al., 2020; Wang et al., 2019).
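One way to make the idea of grouping mutants by their tradeoffs concrete, as described in the abstract above, is to cluster their fitness profiles across environments. The Python sketch below is a generic, hypothetical illustration of such a clustering on a random mutants-by-environments fitness matrix; it is not the analysis pipeline used in the study, and all names and numbers are placeholders apart from the 774 mutants and 12 environments mentioned in the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical fitness matrix: rows = mutant lineages, columns = environments.
# Real values would come from barcode-based fitness estimates; here they are random.
rng = np.random.default_rng(1)
n_mutants, n_envs = 774, 12
fitness = rng.standard_normal((n_mutants, n_envs))

# Cluster mutants by the similarity of their fitness profiles across environments.
# A correlation distance groups lineages whose tradeoff "shapes" are similar,
# even if their overall fitness magnitudes differ.
Z = linkage(fitness, method="average", metric="correlation")
labels = fcluster(Z, t=6, criterion="maxclust")  # e.g. ask for six groups

for g in np.unique(labels):
    members = fitness[labels == g]
    print(f"group {g}: {len(members)} mutants, "
          f"mean fitness per environment = {members.mean(axis=0).round(2)}")
```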
Laboratory experiments that have power to search for universal tradeoffs - where all the mutants that perform well in one environment perform poorly in another - often find there are mutants that violate trends or the absence of trends altogether (Ardell and Kryazhimskiy, 2021; Gjini and Wood, 2021; Herren and Baym, 2022; Hill et al., 2015; Kinsler et al., 2020; Nichol et al., 2019). Another way to phrase this observation is to say that adaptive mutations often have collateral effects in environments other than the one in which they originally evolved (Pál et al., 2015). But these effects, referred to in some studies as pleiotropic effects, can be unpredictable and context dependent (Bakerlee et al., 2021; Chen et al., 2023; Geiler-Samerotte et al., 2020; Hinz et al., 2023; Jerison et al., 2020). In simpler terms, some mutants that resist Drug A will suffer a tradeoff in Drug B, but others may suffer a tradeoff in Drug C. To sum, observations from many fields suggest that the mutations that provide a benefit in one environment do not always suffer similar tradeoffs. This begs questions about the extent of diversity among adaptive mutations: does each one suffer a unique set of tradeoffs or can many adaptive mutants be grouped by their common tradeoffs? If there are only a few types of tradeoff present in a collection of adaptive mutations, multidrug treatments that target tradeoffs may be more feasible.

eLife digest

Mutations in an organism's DNA make the individual more likely to survive and reproduce in its environment, passing on its mutations to the next generation. Mutations can alter the proteins that a gene codes for in many ways. This leads to a situation where seemingly similar mutations - such as two mutations in the same gene - can have different effects.

For example, two different mutations could affect the primary function of the encoded protein in the same way but have different side effects. One mutation might also cause the protein to interact with a new molecule or protein. Organisms possessing one or the other mutation will thus have similar odds of surviving and reproducing in some environments, but differences in environments where the new interaction is important.

In microorganisms, mutations can lead to drug resistance. If drug-resistant mutations have different side effects, it can be challenging to treat microbial infections, as drug-resistant pathogens are often treated with sequential drug strategies. These strategies rely on mutations that cause resistance to the first drug all having susceptibility to the second drug. But if similar seeming mutations can have diverse side effects, predictions about how they will respond to a second drug are more complicated.

To address this issue, Schmidlin, Apodaca et al. collected a diverse group of nearly a thousand mutant yeast strains that were resistant to a drug called fluconazole. Next, they asked to what extent the fitness - the ability to survive and reproduce - of these mutants responded similarly to environmental change. They used this information to cluster mutations into groups that likely have similar effects at the molecular level, finding at least six such groups with unique trade-offs across environments. For example, some groups resisted only low drug concentrations, and others were unique in that they resisted treatment with two single drugs but not their combination.

These diverse types of fluconazole-resistant yeast lineages highlight the challenges of designing a simple sequential drug treatment that targets all drug-resistant mutants. However, the results also suggest some predictability in how drug-resistant infections can evolve and be treated.

The goal of the present study is to count how many different types of adaptive mutation, each type being defined by its unique tradeoffs, exist in a population of drug-resistant yeast. This simple counting exercise is surprisingly difficult. One reason why is that the mutations that provide the strongest fitness advantage often dominate evolution. Thus, in the clinic, and in laboratory experiments, the same drug-resistant mutations repeatedly emerge (Berkow and Lockhart, 2017; Ksiezopolska et al., 2021; Lupetti et al., 2002; Melnikov et al., 2020), potentially leading to the false impression that the mechanistic basis of resistance to a particular drug, and the associated tradeoffs, are less varied than may be true. This problem is amplified by the limitations of bulk DNA sequencing methods, which often miss mutations that are present in less than 10% of a population's cells (Good et al., 2017). A similar problem results from strategies to disentangle adaptive from passenger mutations that rely on observing the adaptive ones multiple times in multiple independent replicates (Martínez and Lang, 2023). In order to design better multidrug treatment strategies that thwart resistance, or to see if such strategies are feasible, we need methods to survey a more complete set of mutations and mechanisms that can contribute to resistance.

Fortunately, single-cell and single-lineage DNA sequencing technologies are allowing us to more deeply sample genetic diversity in evolving populations of microbes beyond the mutations that dominate evolution (Schmidt and Efferth, 2016). Here, we leverage a cutting-edge lineage-tracing method to perform massively replicate evolution experiments in yeast (Saccharomyces cerevisiae). This method has been shown to reveal a fuller spectrum of mutations underlying adaptation to a particular environment (Levy et al., 2015). One key to its success is that it uses DNA barcodes to track all competing adaptive lineages, not just the ones that ultimately rise to appreciable frequency. Another key feature is that it captures adaptive lineages before they accumulate multiple mutations such that it is easy to pinpoint which mutation is adaptive. We apply this method to investigate mechanisms underlying resistance to a specific antifungal drug: fluconazole (FLU; Logan et al., 2022; Wang et al., 2022). Although serious fungal infections are most common in immunocompromised individuals, their impact on global health is still striking, resulting in over 1.5 million deaths annually (Iyer et al., 2022; Xie et al., 2014). By focusing on mechanisms of FLU resistance, we contribute to a growing literature about the tradeoffs that may be leveraged to design multidrug antifungal treatment strategies (Cowen and Lindquist, 2005; Hill et al., 2015; Iyer et al., 2022; Ksiezopolska et al., 2021). However, our primary goal is more generic: we seek to explore the utility of a high-throughput evolutionary approach to enumerate classes of drug resistant mutant and their associated tradeoffs. To enhance the diversity of drug resistant mutants in our experiment, we performed multiple laboratory evolutions in a range of FLU concentrations and sometimes in combination with a second drug. We did so because previous work has shown that different drug concentrations and combinations select for different azole resistance mechanisms (Cowen and Lindquist, 2005; Hill et al., 2015). Ultimately, we obtained a large
collection of 774 adaptive yeast strains.But how do we know whether we succeeded in isolating diverse types of FLU-resistant mutants?Typical phenotyping methods, for example quantifying expression levels of drug export pumps (Miyazaki et al., 1998) or of the drug targets themselves (Palmer and Kishony, 2014), are low throughput and require some a priori knowledge of the phenotypes that may be involved in drug resistance.Alternatively, many studies focus on identifying the genetic basis of adaptation in order to glean insights about the underlying mechanisms of resistance (Cowen et al., 2014;Tenaillon et al., 2012;Venkataram et al., 2016).However, genotyping lineages from barcoded pools is technically challenging (Venkataram et al., 2016), and further, genotype does not necessarily predict phenotype (Brettner et al., 2022;Eguchi et al., 2019).For example, previous work using the same barcoded evolution platform used here discovered that the mutations that provide an advantage under glucose-limitation are in genes comprising a canonical glucose-sensing pathway (Venkataram et al., 2016).Yet despite this similarity at the genetic level, follow-up work showed that these mutants did not experience the same tradeoffs when exposed to new environments (Kinsler et al., 2020;Li et al., 2018b). Instead of trying to identify the phenotypic or even the genetic basis of adaptation, here we strive to enumerate different classes of FLU-resistant mutants.We sort evolved FLU-resistant yeast strains into classes based on whether they share similar tradeoffs across 12 different environments.The intuition here is as follows.If two groups of drug resistant mutants have different fitness tradeoffs, it could mean that they provide resistance through different underlying mechanisms.Alternatively, both could provide drug resistance via the same mechanism, but some mutations might also affect fitness via additional mechanisms (i.e. they might have unique 'side-effects' at the molecular level) resulting in unique fitness tradeoffs in some environments.Previous work is consistent with the idea that mutants with different fitness tradeoffs affect fitness through different underlying mechanisms (Li et al., 2019;Pinheiro et al., 2021;Rodrigues et al., 2016).Our work can be seen as part of a growing push to flip the problem of mechanism on its head (Kinsler et al., 2020;Li et al., 2019;Petti et al., 2023).Instead of using a mechanistic understanding to predict a microbe's fitness, here we use how fitness varies across environments to distinguish mutants that likely affect fitness via different mechanisms.This inverted approach to investigating the mechanisms by which mutations affect fitness has broad applications; it could be used to characterize dominant negative mutations (Flynn et al., 2024;Padhy et al., 2023), mutations with collateral fitness effects (Mehlhoff et al., 2020;Mehlhoff and Ostermeier, 2023), and in other high-throughput mutational scanning studies (Flynn et al., 2020;Fowler and Fields, 2014;Hinz et al., 2023;Starr et al., 2017). 
The key requirement to being able to implement this approach is having a large collection of barcoded mutants and the ability to re-measure their fitness, relative to a reference strain, in multiple environments, such as the 12 different combinations and concentrations of drugs surveyed here.Across our collection of 774 adaptive yeast lineages, we discovered at least 6 distinct groups with characteristic fitness tradeoffs across these 12 environments.For example, we find some drug resistant mutants are generally advantageous, while others are advantageous only in specific environments.And we find some mutants that resist single drugs also resist combinations of those drugs, while others do not.By grouping mutants with similar tradeoffs, we reduce the number of unique drug-resistant mutants from more than can be easily phenotyped (774) to a manageable panel of six types for investigating the molecular mechanisms by which mutations impact fitness. With regard to multidrug regimens that exploit tradeoffs (Figure 1), our finding of multiple mutant classes with different tradeoffs suggests this may not be straightforward.The outlook is further complicated by our finding that some classes of FLU-resistant mutant primarily emerge from evolution experiments that did not contain FLU.This, as well as limits on our power to observe mutants with strong tradeoffs, suggest there may be additional types of FLU-resistant mutant beyond those we sampled.These observations suggest multidrug strategies that assume resistant mutants suffer consistent or common tradeoffs will often fail. On the other hand, nuanced strategies to forestall resistance that allow for multiple mutant types are emerging (Gjini and Wood, 2021;Maltas and Wood, 2019;Wang et al., 2023).For example, one idea is to apply a drug regimen that enriches for mutants that suffer a particular tradeoff before exploiting that tradeoff (Iram et al., 2021).Another idea is to perform single-cell sequencing on infectious populations to discover which classes of mutants are present (Forsyth et al., 2021;Nagasawa et al., 2021) and design treatments specific to those (Aissa et al., 2021;Hsieh et al., 2022;Maltas and Wood, 2019).Our findings support that such ideas may be feasible by demonstrating that there are not as many unique fitness tradeoffs as there are mutations. More generally, our work -showing that 774 mutants fall into a much smaller number of groupscontributes to growing literature suggesting that the phenotypic basis of adaptation is not as diverse as the genetic basis (Iwasawa et al., 2022;Kinsler et al., 2020;Petti et al., 2023).This winnowing of diversity is important: it may mean that evolutionary processes, for example, whether an infectious population will adapt to resist a drug, are sometimes predictable (King et al., 2022;Kinsler et al., 2020;Lässig et al., 2017;Rodrigues et al., 2016;Wortel et al., 2023;Yoon et al., 2021). Barcoded evolution experiments uncover hundreds of yeast lineages with adaptive mutations In order to create a sizable collection of drug-resistant mutants, we performed high-replicate evolution experiments utilizing barcoded yeast (S. 
cerevisiae; Boyer et al., 2021;Levy et al., 2015;Li et al., 2018b).This barcoding system allows evolving hundreds of thousands of genetically identical yeast lineages together in a single flask.Each lineage is tagged with a unique DNA barcode, which is a 26 base pair sequence of DNA located within an artificial intron.Lineages with unique barcodes can be thought of as independent replicates of an evolution experiment.This high-replicate system has the potential to generate many different yeast lineages that differ by single adaptive mutations (Kinsler et al., 2020;Venkataram et al., 2016). We performed a total of 12 barcoded evolution experiments, each of which started from the same pool of approximately 300,000 barcoded yeast lineages (Figure 2A, B; Figure 2-figure supplement 1).These evolutions survey how yeast cells adapt to different concentrations and combinations of two drugs: fluconazole (FLU) and radicicol (RAD).FLU is a first line of defense against yeast infections, but over the past two decades diverse resistant mutations have been identified (Bongomin et al., 2017;Osset-Trénor et al., 2023;Rybak et al., 2022).Some earlier work suggested that FLU-resistant mutants are sensitive to the second drug we study, radicicol (RAD; Cowen et al., 2009;Cowen and Lindquist, 2005), and more generally that RAD can prevent the emergence of drug resistance in other systems (Whitesell et al., 2014).However, there are some mutants that are cross-resistant to both FLU and RAD (Hill et al., 2015), and the prominent mechanism of resistance can differ with the intensity of selection and drug concentration (Cowen and Lindquist, 2005;Yang et al., 2023).We thus chose to evolve yeast to resist different concentrations and combinations of FLU and RAD to generate a diverse pool of adaptive mutations comprising different mechanisms of drug resistance. We evolved yeast to resist three different concentrations of either FLU or RAD for a total of six single-drug conditions (Table 1).We also studied four conditions containing combinations of both drugs, as well as two control conditions, for a total of 12 evolution experiments (Table 1).We chose to study subclinical drug concentrations with the hope that no drug treatment would be strong enough to reduce the population of yeast cells to only a handful of unique barcodes (Figure 2-figure supplement 2).We needed to maintain barcode diversity in order to evolve a large number of unique lineages that each accumulate different mutations. With the goal of collecting adaptive lineages from each evolution experiment, we took samples from each one after 3-6 growth/transfer cycles (Figure 2-figure supplement 1).This represents roughly 24-48 generations of growth assuming 8 generations per growth/transfer cycle (Levy et al., 2015).We sampled early because previous work using this barcoded evolution system demonstrated that the diversity of adaptive lineages peaks after just a few dozen generations (Levy et al., 2015;Venkataram et al., 2016).This happens because the barcoding process is slightly mutagenic, thus there is less need to wait for DNA replication errors to introduce mutations (Levy et al., 2015;Venkataram et al., 2016).We sampled about 2000 cells from each evolution experiment except those three containing high concentrations of FLU from which we sampled only 1000 cells for a total of ~21,000 isolates (2000 cells x 9 conditions +1000 cells x 3 conditions) (Figure 2C). 
[Figure 2 legend, continued: (C) A small sample of evolved isolates were taken from each evolution experiment and their barcodes were sequenced. These ~21,000 isolates do not represent as many unique, adaptive lineages because many either have the same barcode or do not possess adaptive mutations. (D) These samples of evolved isolates were all pooled together with control strains representing the ancestral genotype. (E) Barcoded fitness competition experiments were then performed on this pool in each of the 12 evolution conditions. Fitness was measured by tracking changes in each barcode's frequency over time relative to control strains. Two replicates per condition were performed. (F) The overall goal is to investigate fitness tradeoffs for hundreds of adaptive lineages. For example, the adaptive lineage depicted in dark blue has higher fitness than the ancestor in some environments (HR, HF) but lower fitness in others (DMSO, ND). We were able to investigate fitness tradeoffs for 774 adaptive lineages. We excluded lineages when we did not observe their associated barcode at least 500 times in all 12 environments. In other words, we only included lineages for which we obtained high-quality fitness estimates in all 12 environments.]

Next, we measured the fitness of each isolate in each of the 12 evolution environments to quantify fitness tradeoffs (e.g. whether mutants that perform well in one environment perform worse in another). This process also indirectly screens isolates for adaptive mutations by comparing the fitness of each evolved isolate to the ancestor of the evolution experiments (Venkataram et al., 2016). To do so, we pooled these 21,000 isolates and used this pool to initiate fitness competition experiments (Figure 2D). We competed the pool against control strains, that is strains of the ancestral genotype that do not possess adaptive mutations (Kinsler et al., 2020; Venkataram et al., 2016). We performed 24 such competitive fitness experiments, 2 per each of the original 12 evolution conditions (Figure 2E). In each experiment, we emulated the growth and transfer conditions of the original evolution experiments as precisely as possible, tracking how barcode frequencies changed over five growth/transfer cycles (~40 generations). We used the log-linear slope of this change, relative to the average slope for the control strains, to quantify relative fitness.
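To make that estimate concrete, a minimal sketch of the calculation is given below: convert barcode read counts to frequencies at each sequenced timepoint, fit a log-linear slope per barcode against generations (using the ~8 generations per growth/transfer cycle stated above), and subtract the mean slope of the ancestral control barcodes. This is an illustration only, not the authors' pipeline; the table layout, the ordinary least-squares fit, and the function names are assumptions.

```python
import numpy as np
import pandas as pd

def lineage_slopes(counts: pd.DataFrame, gens_per_cycle: int = 8) -> pd.Series:
    """Log-linear slope of each barcode's frequency across timepoints.

    `counts` is assumed to be a (barcode x timepoint) table of read counts,
    one column per sequenced growth/transfer cycle.
    """
    freqs = counts.div(counts.sum(axis=0), axis=1)      # reads -> frequencies per timepoint
    gens = np.arange(counts.shape[1]) * gens_per_cycle  # 0, 8, 16, ... generations
    logf = np.log(freqs.where(freqs > 0))                # zero counts become NaN
    slopes = {}
    for barcode, row in logf.iterrows():
        ok = row.notna().to_numpy()
        if ok.sum() >= 2:                                # need two timepoints for a slope
            slopes[barcode] = np.polyfit(gens[ok], row.to_numpy()[ok], 1)[0]
    return pd.Series(slopes)

def relative_fitness(counts: pd.DataFrame, control_barcodes: list) -> pd.Series:
    """Per-generation fitness relative to the mean slope of the control barcodes."""
    slopes = lineage_slopes(counts)
    controls = slopes.index.intersection(control_barcodes)
    return slopes - slopes.loc[controls].mean()
```

In the actual experiments, two replicate competitions were run per condition; averaging across replicates is omitted from this sketch.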
We found that many barcoded lineages have higher fitness than the control strains in some conditions, presumably because they possess adaptive mutations that improve their fitness in some conditions (Figure 2-figure supplement 3).In fact, some of these adaptive lineages outcompeted the other lineages so quickly that it posed a challenge.Barcodes pertaining to outcompeted lineages were often not present at high enough coverage to track their fitness.We applied a conservative filter, preserving only 774 lineages with barcodes that were observed >500 times in at least one replicate experiment per each of the 12 conditions.The reason we required fitness measurements in all 12 conditions is that our goal is to examine each lineage's fitness tradeoffs (Figure 2F) in order to see if different lineages have different tradeoffs.In order to compare apples to apples, we need to measure each lineage's fitness in the same set of environments.The 774 lineages we focus on are biased towards those that are reproducibly adaptive in multiple environments we study.This is because lineages that have low fitness in a particular environment are rarely observed >500 times in that environment (Figure 2-figure supplement 4).By requiring lineages to have high-coverage fitness measurements in all 12 conditions, we exclude adaptive mutants that have severe tradeoffs in one or more environments, consequently blinding ourselves to mutants that act via unique underlying mechanisms.Despite this bias, we will go on to demonstrate that there are different types of mutants with characteristically different fitness tradeoffs present among the 774 lineages that remain. To provide additional evidence that these 774 barcoded yeast lineages indeed possess adaptive mutations, we performed whole genome sequencing on a subset of 53 lineages.Because we sampled these lineages after only a few dozen generations of evolution, each lineage differs from the ancestor by one or just a few mutations, making it easy to pinpoint the genetic basis of adaptation.Doing so revealed mutations that have previously been shown to be adaptive in our evolution conditions (Supplementary file 1).For example, we sequenced many FLU-resistant yeast lineages finding 35 with unique single nucleotide mutations in either PDR1 or PDR3, and a few with mutations in SUR1 or UPC2, genes which have all been shown to contribute to FLU resistance in previous work (Flowers et al., 2012;Tanaka and Tani, 2018;Uemura and Moriguchi, 2022;Vasicek et al., 2014;Vu and Moye-Rowley, 2022).Similarly, lineages that have very high fitness in RAD were found to possess single nucleotide mutations in genes associated with RAD resistance, such as HDA1 (Robbins et al., 2012) and HSC82, which is the target of RAD (Roe et al., 1999).We also observed several lineages with similar mutations to those observed in other studies using this barcoded evolution regime, including mutations to IRA1, IRA2, and GPB2 (Kinsler et al., 2020;Venkataram et al., 2016).Previous barcoded evolutions also observed that increases in ploidy were adaptive, with 43% to 60% of cells becoming diploid during the course of evolution (Venkataram et al., 2016).However, ploidy changes contributed less to adaptation in our experiment, with at most 9.4% of cells becoming diploid by the time point when we sampled, but often less than 2% (Supplementary file 2). 
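For illustration, the read-coverage filter described above (keep a lineage only if its barcode is seen more than 500 times in at least one replicate of every one of the 12 conditions) might be sketched as follows; the long-format table and its column names are assumptions, not the authors' data structure.

```python
import pandas as pd

def well_covered_barcodes(reads: pd.DataFrame, min_reads: int = 500,
                          n_conditions: int = 12) -> pd.Index:
    """Barcodes observed > min_reads times in at least one replicate of every condition.

    `reads` is assumed to have columns: 'barcode', 'condition', 'replicate', 'count'.
    """
    # best replicate coverage per barcode in each condition
    best = reads.groupby(["barcode", "condition"])["count"].max().unstack("condition")
    # keep barcodes that clear the threshold in all conditions
    keep = (best > min_reads).sum(axis=1) == n_conditions
    return best.index[keep]
```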
In sum, we have created a diverse pool of 774 barcoded yeast lineages, most of which have a fitness advantage in at least one of the conditions we study and are likely to possess a unique adaptive mutation. The question we address for the rest of this study is to what extent these hundreds of mutant lineages differ from one another in terms of their fitness tradeoffs and the mechanism/s underlying their fitness advantages.

A unique mechanism of FLU resistance emerges among mutants isolated in RAD evolutions

The majority of the 774 adaptive lineages that we study have higher fitness than the ancestral strains in not one, but often in several drug conditions. This suggests that pleiotropy, and in particular cross-resistance, is prevalent among the lineages we study. But not all lineages show the same patterns of cross-resistance (Figure 3). For example, the 100 most fit lineages in our highest concentration of fluconazole are also beneficial in our highest concentration of radicicol (Figure 3A; leftmost two boxplots). As expected, these 100 lineages also have high fitness in conditions where high concentrations of FLU and RAD are combined (Figure 3A; third boxplot). And these 100 most-fit lineages in FLU lose their fitness advantage in conditions where no drug is present (Figure 3A; rightmost boxplot).

Given their high fitness in conditions containing FLU, it seems likely that these 100 mutants originated from evolution experiments containing FLU. We can trace every lineage back to the evolution experiment/s it originated from because we sequenced the lineages we sampled from each evolution experiment before pooling all 21,000 isolates (Figure 2C). As we expected, these 100 best performing lineages in high FLU largely originate from evolution experiments containing FLU (Figure 3B). Given that these lineages have no fitness advantage in conditions containing no drug, it is also unsurprising that they are underrepresented in evolution experiments lacking RAD and FLU (Figure 3B).

It might be tempting to generalize that most mutations that provide drug resistance are not beneficial in environments without drugs. After all, we show this is true for 100 independent lineages (Figure 3A). Further, many previous studies find a similar pattern, whereby drug resistant mutants often do not have high fitness in the absence of drug (Allen et al., 2019; Andersson and Hughes, 2010; Basra et al., 2018; Melnikov et al., 2020), such that treatment strategies have emerged that cycle patients between drug and no drug states, albeit with mixed success (Algazi et al., 2020; Baker et al., 2018; Raymond, 2019; Wang et al., 2019). However, this type of generalization is not supported by our data. We find that drug resistance can sometimes come with an advantage, rather than a cost, in the absence of a drug (Figure 3C). The top 100 most fit mutants in our highest concentration of RAD provide a fitness advantage in high RAD, high FLU, as well as in environments with no drug (Figure 3C). These observations suggest that there are at least two different mechanisms by which to resist FLU that result in different tradeoffs in other environments (Figure 3A vs. 3C).
Intriguingly, these FLU-resistant lineages that maintain their fitness advantage in the absence of drug (Figure 3C) mainly originate from evolution experiments performed in conditions lacking FLU (Figure 3D). This highlights how the potential mechanisms by which a microbe can resist a drug may be more varied than is often believed. Typically, one does not search for FLU-resistant mutants by evolving yeast to resist RAD. Thus typical studies might miss this unique class of FLU-resistant mutants.

In sum, there appear to be at least two different types of mutants present among our collection of 774 adaptive yeast lineages. One group has almost equally high fitness in RAD and FLU but has no fitness advantage over the ancestral strain in conditions without either drug (Figure 3A). Another group is defined by very high fitness in RAD, moderately high fitness in FLU and moderately high fitness in conditions without either drug (Figure 3C). When comparing fitness in RAD vs. FLU across all 774 lineages, not only the top 100 best performing in each drug, we see some evidence that they largely fall into the two main categories highlighted in Figure 3A and C (Figure 3E). Thus, it might be tempting to conclude that there are two different types of FLU-resistant mutant in our dataset. However, sorting mutants into groups using a pairwise correlation plot (Figure 3E) excludes data from 10 of our 12 environments.

[Figure 3 legend (fragment): (A) This panel describes the 100 mutant lineages with the highest fitness relative to the control strains in the high FLU environment (8 μg/ml FLU). The vertical axis depicts the fitnesses (log-linear slopes relative to control strains) for these 100 strains in four selected environments, including the high FLU environment (boxed). Boxplots summarize the distribution across all 100 lineages for each environment, displaying the median (center line), interquartile range (IQR) (upper and lower hinges), and highest value within 1.5 × IQR (whiskers). (B) The 100 lineages with highest fitness in high FLU were most often sampled from evolution experiments in which FLU was present. In this pie-chart, colors correspond to the evolution conditions listed in Table 1 and the blue outer ring highlights evolution conditions that contain FLU. The size of each slice of pie represents the relative frequency with which these 100 lineages were found in each evolution experiment. (C) Similar to panel A, this panel describes the 100 mutant lineages with the highest fitness relative to the control strains in the high RAD environment (20 µM RAD). (D) The 100 lineages with highest fitness in high RAD were most often sampled from evolution experiments that did not contain FLU. (E) A pairwise correlation plot showing that all 774 mutants, not just the two groups of 100 depicted in panels A and C, to some extent fall into two groups defined by their fitness in high FLU and high RAD. The contours (black curves) were generated using kernel density estimation with bins = 7. These contours describe the density of the underlying data, which is concentrated into two clusters defined by the two smallest black circles. The 100 mutants with highest fitness in high FLU are blue, highest fitness in high RAD are red, and the seven that overlap between the two aforementioned categories are black.]
A strategy to differentiate classes of drug-resistant mutants with different tradeoffs The observation of two distinct types of adaptive mutants (Figure 3) made us wonder whether there were additional unique types of FLU-resistant mutants with their own characteristic tradeoffs.This is difficult to tell by using pairwise correlation like that in Figure 3E because we are not studying pairs of conditions, as is somewhat common when looking for tradeoffs to leverage in multidrug therapies (Ardell and Kryazhimskiy, 2021;Larkins-Ford et al., 2022;Melnikov et al., 2020;Scarborough et al., 2020).Instead, we have collected fitness data from 12 conditions to yield a more comprehensive set of gene-by-environment interactions for each mutant.This type of data, describing how a particular genotype responds to environmental change, is sometimes called a 'reaction norm' and can inform quantitative genetic models of how selection operates in fluctuating environments (Gomulkiewicz and Kirkpatrick, 1992;Ogbunugafor, 2022) and how much pleiotropy exists in nature (Yadav et al., 2015).More recent studies refer to the changing performance of a genotype across environments as a 'fitness profile' or in aggregate, a 'fitness seascape', and suggest these type of dynamic measurements are the key to designing effective multi-drug treatments (King et al., 2022) and to predicting evolution (Cairns et al., 2022;Chen et al., 2023;Iwasawa et al., 2022;Kinsler et al., 2020;Lässig et al., 2017).And when the environments studied represent different drugs, these types of data are often referred to as 'collateral sensitivity profiles' a term chosen to convey how resistance to one drug can have 'collateral' effects on performance in other drugs (Gjini and Wood, 2021;Maltas and Wood, 2019;Pál et al., 2015).Despite the wide interest in this type of fitness data, it is technically challenging to generate, thus many previous studies of fitness profiles focus on a much smaller number of isolates (Imamovic et al., 2018;Maltas and Wood, 2019;Nichol et al., 2019), sometimes with variation restricted to a single gene (King et al., 2022;Mira et al., 2015), or evolved in response to a single selection pressure (Kinsler et al., 2020;Li et al., 2018b).Here, we have generated fitness profiles for a large and diverse group of drug-resistant strains using the power of DNA barcodes.Now we seek to understand whether these mutants fall into distinct classes that each have characteristic fitness profiles (i.e.characteristic tradeoffs, characteristic reaction norms, or characteristic gene-by-environment interactions). 
To address this question, we start by performing dimensional reduction, clustering mutants with fitness profiles that have a similar shape.It is in theory possible for all mutants to have similar profiles, perhaps implying they all affect fitness through similar underlying mechanisms (Figure 4A).However, the disparity reported in Figure 3 suggests otherwise.It is also possible that every mutant will have a different profile.This could happen if each mutant affects different molecular-level phenotypes that underlie its drug resistance (Figure 4B).But previous work suggests that the phenotypic basis of adaptation is less diverse than the genotypic basis (Brettner et al., 2024;Iwasawa et al., 2022;Kinsler et al., 2020).A final possibility, somewhere in between the first two, is that there exist multiple classes of drug-resistant mutants each with characteristic tradeoffs (Figure 4C).This might imply that each class of mutants provides drug resistance via a different molecular mechanism, or a different set of mechanisms.Overall, our endeavor to enumerate how many distinct fitness profiles are present across these 774 mutants (Figure 4A -C) informs general questions about the extent of pleiotropy in the genotype-phenotype-fitness map (Bakerlee et al., 2021;Boyle et al., 2017;Chen et al., 2023;Geiler-Samerotte et al., 2020), the extent to which fitness tradeoffs are universal (Andersson and Hughes, 2010;Herren and Baym, 2022;Li et al., 2019), and relatedly, the extent to which evolution is predictable (Iram et al., 2021;King et al., 2022;Kinsler et al., 2020;Lässig et al., 2017;Petti et al., 2023). To see whether there are distinct fitness profiles present among our drug-resistant yeast lineages, we applied uniform manifold approximation and projection (UMAP)to fitness measurements for 774 yeast strains across all 12 environments.This method places mutants with similar fitness profiles near each other in two-dimensional space.As might be expected, it largely places mutants in each of the two categories described in Figure 3 far apart, with drug-resistant mutants that lose their benefit in the absence of drug in the top half of the graph, and those that maintain their benefit in the bottom half (Figure 4D and Figure 3-figure supplement 1). 
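A bare-bones sketch of this embedding step is given below, assuming the data are arranged as a 774 x 12 matrix of relative fitness values (lineages x environments) and using the umap-learn package; the hyperparameters shown are illustrative defaults, not values taken from the paper.

```python
import numpy as np
import umap  # umap-learn package

def embed_fitness_profiles(fitness: np.ndarray, seed: int = 0) -> np.ndarray:
    """Project (n_lineages x n_environments) fitness profiles into 2D so that
    lineages with similarly shaped profiles land near one another."""
    reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1, random_state=seed)
    return reducer.fit_transform(fitness)  # (n_lineages, 2) embedding
```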
[Figure 5 legend (fragment): Cluster 1 is also unique in that it contains lineages that predominantly originated from the low fluconazole evolution condition; the pie chart depicts the fraction of lineages originating from each of the 12 evolution environments with colors corresponding to Table 1. (B) Evolved lineages comprising cluster 1 do not have consistent fitness advantages in conditions containing RAD, while lineages comprising clusters 2 and 3 are uniformly adaptive in medium and high RAD. Boxplots summarize the distribution across all lineages within each cluster in each environment, displaying the median (center line), interquartile range (IQR) (upper and lower hinges), and highest value within 1.5×IQR (whiskers). (C) Lineages comprising cluster 1 are most fit in low concentrations of FLU, and this advantage dwindles as the FLU concentration increases. Lineages comprising clusters 2 and 3 show the opposite trend. (D) In low FLU (4 μg/ml), cluster 1 lineages (UPC2 and SUR1) grow faster and achieve higher density than lineages from cluster 3 (PDR). This is consistent with bar-seq measurements demonstrating that cluster 1 mutants have the highest fitness in low FLU. (E) Cluster 1 lineages are sensitive to increasing FLU concentrations (SUR1 and UPC2). This is apparent in that the dark blue (8 μg/ml FLU) and grey (32 μg/ml FLU) growth curves rise more slowly and reach lower density than the light blue curves (4 μg/ml FLU). But this is not the case for the PDR mutants. These observations are consistent with the bar-seq fitness data (Figure 4E).]

Beyond the obvious divide between the top and bottom clusters of mutants on the UMAP, we used a Gaussian mixture model (GMM; Fraley and Raftery, 2003) to identify clusters. A common problem in this type of analysis is the risk of dividing the data into clusters based on variation that represents measurement noise rather than reproducible differences between mutants (Mirkin, 2011; Zhao et al., 2008). One way we avoided this was by using a GMM quality control metric (BIC score) to establish how splitting out additional clusters affected model performance (Figure 4-figure supplement 1). Another factor we considered was follow-up genotyping and phenotyping studies that demonstrate biologically meaningful differences between mutants in different clusters (Figure 5, Figure 6, Figure 7, Figure 8). Using this information, we identified seven clusters of distinct mutants, including one pertaining to the control strains, and six others pertaining to presumed different classes of adaptive mutant (Figure 4D). It is possible that there exist additional clusters, beyond those we are able to tease apart in this study.
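The GMM-plus-BIC check described above could look roughly like the sketch below, with scikit-learn's GaussianMixture standing in for the mclust-style model of Fraley and Raftery. Whether the clustering is run on the two UMAP coordinates or on the full 12-dimensional fitness profiles, and the covariance structure used, are assumptions here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bic_by_cluster_count(X: np.ndarray, k_max: int = 12, seed: int = 0) -> dict:
    """Fit GMMs with 1..k_max components and return the BIC of each fit.

    Lower BIC is better; once adding components stops improving BIC,
    the extra clusters are likely fitting measurement noise."""
    return {
        k: GaussianMixture(n_components=k, covariance_type="full",
                           random_state=seed).fit(X).bic(X)
        for k in range(1, k_max + 1)
    }

def cluster_labels(X: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Hard cluster assignments for the chosen number of components."""
    gm = GaussianMixture(n_components=k, covariance_type="full", random_state=seed).fit(X)
    return gm.predict(X)
```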
Preliminarily, we investigated whether the clusters we identified capture reproducible differences between mutants, rather than measurement noise, by reducing the amount of noise in our data and asking if the same clusters are still present. To do so, we reduced our collection of adaptive lineages from 774 to 617 by requiring 5000 rather than 500 reads per lineage per experiment in order to infer fitness. This procedure reduced noise; the Pearson correlation across replicate experiments improved from 0.756 to 0.813. Despite this reduction in variation, these 617 lineages cluster into the same six groups (plus a seventh pertaining to the control strains) as do the original 774 (Figure 4-figure supplement 2). The groupings are also preserved when we perform alternate methods for dimensionality reduction (Figure 4-figure supplement).

Each of the six clusters of adaptive mutants has a characteristic fitness profile (Figure 4E). In any given environment, the fitnesses of the mutants within each cluster are often very similar to one another and often significantly different from other clusters (Figure 4E). Our follow-up investigations (Figures 5-8), including whole genome sequencing, growth rate measurements, and tracing the evolutionary origins of the mutants in each cluster, provide additional evidence that the adaptive mutants in each cluster have characteristically different properties.

A group of mutants with distinct genotypes are primarily resistant to low concentrations of FLU

The upper three clusters of mutants on the UMAP (Figure 4D) are all similar in that they have elevated fitness in at least one FLU-containing environment but ancestor-like fitness in the absence of drug (Figure 4E; upper three profiles). Despite these similarities, there are major differences between these three groups of mutant lineages, both at the level of genotype and fitness profile (Figure 5). For example, in cluster 1 (depicted in purple in Figures 4 and 5), the three sequenced lineages have single nucleotide mutations to either SUR1 or UPC2 (Figure 5A). But in clusters 2 and 3 (depicted in blue and orange in Figures 4 and 5), 35/36 sequenced lineages have unique single nucleotide mutations to one of two genes associated with 'Pleiotropic Drug Resistance' (PDR1 or PDR3).

PDR1 and PDR3 are transcription factors that are well known to contribute to fluconazole resistance through increased transcription of a drug pump (PDR5) that removes FLU from cells (Fardeau et al., 2007; Osset-Trénor et al., 2023). However, SUR1 and UPC2 are less commonly mentioned in literature pertaining to FLU resistance, and have different functions within the cell as compared to PDR1 and PDR3 (Hill et al., 2015; Kapitzky et al., 2010). SUR1 converts inositol phosphorylceramide to mannosylinositol phosphorylceramide, which is a component of the plasma membrane (Uemura and Moriguchi, 2022). Similarly, UPC2 is a transcription factor with a key role in activating the ergosterol biosynthesis genes, which contribute to membrane formation (Tan et al., 2022; Rine, 2001). The presence of adaptive mutations in genes involved in membrane synthesis is consistent with fluconazole's disruptive effect on membranes (Sorgo et al., 2011).
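The noise check described above, raising the per-lineage read requirement and asking whether replicate fitness estimates agree better, amounts to recomputing a Pearson correlation after filtering. A sketch with assumed column names:

```python
import pandas as pd

def replicate_correlation(fitness: pd.DataFrame, min_reads_seen: pd.Series,
                          threshold: int) -> float:
    """Pearson correlation between two replicate fitness estimates,
    restricted to lineages whose lowest read count meets the threshold.

    `fitness` is assumed to have columns 'rep1' and 'rep2' (one condition),
    indexed by barcode; `min_reads_seen` holds the smallest read count
    observed for each barcode across the relevant experiments."""
    mask = min_reads_seen.reindex(fitness.index).fillna(0) >= threshold
    kept = fitness[mask]
    return kept["rep1"].corr(kept["rep2"])  # Pearson by default

# e.g. compare replicate_correlation(fit, reads, 500) with replicate_correlation(fit, reads, 5000)
```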
[Figure 6 legend (fragment): These line plots show that the fitness profiles for clusters 2 and 3 have a very similar shape. Pie charts display the relative frequency with which lineages in clusters 2 and 3 were sampled from each of the 12 evolution conditions; colors match those in the horizontal axis of the line plot and Table 1. (B) This panel shows the differences between the new clusters 2 and 3 created after all fitness profiles were normalized to eliminate magnitude differences. The upper right inset displays a new UMAP (also see Figure 6-figure supplement 1) that summarizes variation in fitness profiles after each profile was normalized by setting its average fitness to 0. The line plot displays the fitness profiles for the new clusters 2 and 3, which look different from those in panel A because 37% of mutants in the original clusters 2 and 3 switched identity from 2 to 3 or vice versa. The new clusters 2 and 3 are depicted in slightly different shades of blue and orange to reflect that these are not the same groupings as those depicted in Figure 4. Pie charts display the relative frequency with which lineages in new clusters 2 and 3 were sampled from each of the 12 evolution conditions; colors match those in the horizontal axis of the line plot and Table 1.]

Interestingly, the lineages with mutations to UPC2 and SUR1, and the unsequenced lineages in the same cluster, do not consistently have cross-resistance in RAD (Figure 5B; cluster 1). Oppositely, lineages with mutations to PDR1 or PDR3, and the unsequenced lineages in the same clusters, are uniformly cross-resistant to RAD (Figure 5B; clusters 2 and 3). Perhaps this cross-resistance is reflective of the fact that the drug efflux pump that PDR1/3 regulates (PDR5) can transport a wide range of drugs and molecules out of yeast cells (Harris et al., 2021; Kolaczkowski et al., 1996). Overall, the targets of adaptation in cluster 1 have disparate functions within the cell as compared to the targets of adaptation in clusters 2 and 3. This may suggest that the mutants in cluster 1 confer FLU resistance via a different mechanism than clusters 2 and 3.

The lineages in cluster 1 have additional important differences from clusters 2 and 3. The lineages in cluster 1 perform best in the lowest concentration of FLU and have decreasing fitness as the concentration of FLU rises (Figure 5C). In fact, about 15% of these mutant lineages perform worse than their ancestor in the highest concentration of FLU, suggesting the very mutations that provide resistance to low FLU are costly in higher concentrations of the same drug. The mutants in clusters 2 and 3 show the opposite trend from those in cluster 1. They perform best in the highest concentration of FLU and have reduced fitness in lower concentrations (Figure 5C). These findings provide additional evidence that a distinct mechanism of FLU resistance distinguishes cluster 1 from clusters 2 and 3.
To confirm that different drug-resistant mutants dominate evolution in just slightly different concentrations of the same drug, we used each cluster's barcodes to trace mutants back to the evolution experiment from which they originated.Mutants in cluster 1 predominantly originated in evolutions containing the lowest concentration of FLU (Figure 5A; pie charts), while mutants in clusters 2 and 3 more often originated from evolution experiments containing higher FLU concentrations (see Figure 6).We also confirmed that the mutants in cluster 1 have high fitness in low FLU only by measuring growth curves for SUR1, UPC2, and PDR mutants in three concentrations of fluconazole.Lineages from cluster 1 (SUR1, UPC2) indeed grow faster and reach higher density in low FLU than those from cluster 3 (PDR; Figure 5D).But lineages from cluster 1 (SUR1, UPC2) grow poorly in higher FLU concentrations, while lineages from cluster 3 (PDR) do not suffer this tradeoff (Figure 5E).The observation that different mutants acting via different resistance mechanisms dominate evolution in only slightly different concentrations of the same drug highlights the complexity of adaptation and the potential benefits of more deeply understanding the diversity of adaptive mechanisms before designing treatment strategies (Berman and Krysan, 2020;Yang et al., 2023).Two groups of mutant lineages possessing similar adaptive mutations differ in sensitivity to RAD While cluster 1 appears fairly different from its neighbors, it is not immediately obvious why the mutant lineages in clusters 2 and 3 are placed into separate groups.For one, the mutants in clusters 2 and 3 have fitness profiles with a very similar shape (Figures 4E and 6A).The sequenced lineages in each of these clusters also possess mutations to the same genes: PDR1 and PDR3 (Figure 5A).And finally, the lineages in each cluster originate from similar evolution experiments, largely those containing FLU (Figure 6A; pie charts).These observations made us wonder whether the difference between cluster 2 and 3 arose entirely because the mutants in cluster 3 have stronger effects than those in cluster 2 (Figure 6A; the solid blue line is above the solid orange line).In other words, we wondered whether the mutant lineages in clusters 2 and 3 affect fitness via the same mechanism, but to different degrees.To investigate this idea, we normalized all fitness profiles to have the same height on the vertical axis; this does not affect their shape (Figure 6A; dotted lines).Then we re-clustered and asked whether mutants pertaining to the original clusters 2 and 3 were now merged into a single cluster.They were not (Figure 6B). 
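The normalization described above, which the Figure 6 legend describes as setting each profile's average fitness to 0 so that only the shape of the profile (not its overall magnitude) distinguishes lineages, is a simple mean-centering; re-clustering then reuses the same embedding and mixture-model steps sketched earlier. A minimal sketch, with the function name assumed:

```python
import numpy as np

def center_profiles(fitness: np.ndarray) -> np.ndarray:
    """Subtract each lineage's mean fitness across the 12 environments,
    so every profile has average fitness 0 and only its shape remains."""
    return fitness - fitness.mean(axis=1, keepdims=True)

# centered = center_profiles(fitness)   # then re-embed and re-cluster as before
```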
[Figure 8 legend (fragment): (A) Same UMAP as Figure 4D with clusters 4, 5, and 6 highlighted and sequenced isolates in these clusters represented as diamonds. Diamond colors correspond to the targets of adaptation in the sequenced isolates. Pie charts display the relative frequency with which lineages in cluster 6 were sampled from each of the 12 evolution conditions; colors match those in Table 1. Grey outline depicts conditions lacking RAD and FLU. (B) Of the three clusters on the bottom half of the UMAP, cluster 6 lineages perform best in conditions without any drug and in the highest concentration of FLU. Yet they perform worst in the lowest concentration of FLU. Boxplots summarize the distribution across all lineages within each cluster in each environment, displaying the median (center line), interquartile range (IQR) (upper and lower hinges), and highest value within 1.5×IQR (whiskers).]

Normalizing in this way did not radically alter the UMAP, which still contains largely the same six clusters of mutants (Figure 6-figure supplement 1). Clusters 2 and 3, containing lineages with mutations to PDR1 or PDR3, experienced the largest changes, with 37% of mutants switching from one of these two groups to the other. The new clusters 2 and 3 now differ in the shape of their fitness profiles, whereby slight differences that existed between the original fitness profiles are exaggerated (Figure 6B). For example, mutants in cluster 3 perform better in high and medium concentrations of RAD (Figure 6B; line plot). This difference in fitness is reflected in the evolution experiments, with more mutant lineages in cluster 3 originating from the evolutions performed in RAD (Figure 6B; pie charts). Though cluster 3 mutants tend to have stronger RAD resistance, they tend to have reduced fitness in conditions containing neither FLU nor RAD as compared to cluster 2 lineages (Figure 6B; line plot). In sum, the differences between lineages in clusters 2 and 3 were not resolved upon normalizing fitness profiles to reduce magnitude differences; instead they were made more apparent (Figure 6). These differences do not appear to be random because they persist across independent experiments. For example, cluster 3 mutants are more fit in both medium and high RAD environments (Figure 6B; line plot) and were more often isolated from evolutions containing RAD (Figure 6B; pie charts). The observation that PDR mutations fall into two separate clusters begs a question: how can different mutations to the same gene affect fitness via different molecular mechanisms?
Asking this question forces us to consider what we mean by 'mechanism'.The mechanism by which mutations to PDR1 and PDR3 affect FLU resistance is well established: they increase transcription of an efflux pump that removes FLU from cells (Buechel and Pinkett, 2020;Moye-Rowley, 2019;Osset-Trénor et al., 2023).But if this is the only molecular-level effect of mutations to these genes, it is difficult to reconcile why PDR mutants fall into two distinct clusters with differently shaped fitness profiles.Others have also recently observed that mutants to PDR1 do not all behave the same way when exposed to novel drugs or changes in pH (Chen et al., 2023).This phenomenon is not reserved to PDR mutants, as adaptive missense mutations to another gene, IRA1, also do not share similarly shaped fitness profiles either (Kinsler et al., 2020).One explanation may be that, while all adaptive mutations within the same gene improve fitness via the same mechanism, not all mutants suffer the same costs.For example, perhaps the adaptive PDR mutations in cluster 2 cause misfolding of the PDR protein, resulting in lower fitness in RAD because this drug inhibits a chaperone that helps proteins to fold.In this case, it might be more correct to say that each of our six clusters affects fitness through a different, but potentially overlapping, suite of mechanisms (Wang et al., 2023).Previous work demonstrating that mutations commonly affect multiple traits supports this broader view of the mechanistic differences between clusters (Boyle et al., 2017;Geiler-Samerotte et al., 2020;Kinsler et al., 2020;Paaby and Rockman, 2013). Alternatively, perhaps not all adaptive mutations to PDR improve fitness via the same mechanism.PDR1 and PDR3 regulate transcription of YOR1 and SNQ2 as well as PDR5, and maybe the different clusters we observe represent mutants that upregulate one of these downstream targets more than the other (Osset-Trénor et al., 2023).Or, the mutants in each cluster might harbor different aneuploidies or small, difficult to sequence chromosomal insertions or deletions that affect fitness.We leave identification of the precise mechanisms that differentiate these clusters for future work.Here, using the example of PDR mutants, we showcase how genotype may not predict fitness tradeoffs, suggesting there is more to learn about the mechanisms underlying FLU resistance. 
One group of RAD resistant mutants does not respond as expected to drug combinations Although the three clusters of mutants on the bottom half of the UMAP are all advantageous in RAD and in conditions without any drug (Figure 4E; lower three plots), they differ in their fitness in conditions containing FLU.For example, the cluster of yeast lineages highlighted in green (cluster 4 in Figures 4 and 7A) is unique in that it has a slight advantage in the HRLF environment (Figure 7B).We found it especially strange that the neighboring cluster 5 does not also have a fitness advantage in this condition.Mutants in cluster 5 have a slight advantage in the LF condition, and a big advantage in the HR condition, thus we expect them to have at least some fitness advantage in the condition where these two drugs are combined (HRLF), but they do not (Figure 7B).The same is true for the combination of LRLF: cluster 5 mutants have an advantage in both single drug conditions which is lost when the drugs are combined (Figure 7-figure supplement 1).However, the mutants in cluster 4 (green) exhibit no such sensitivity to combined treatment.They have a slight advantage in all of the aforementioned single drug conditions, which is preserved in the relevant multidrug conditions (Figure 7B, Figure 7-figure supplement 1).To obtain an independent measure of the fitness of cluster 4 vs. cluster 5 lineages in these multidrug conditions, we asked from where the lineages in each cluster originate.About 10% of cluster 4 lineages originated from the HRLF evolution, while almost none of the lineages in cluster 5 came from this experiment, confirming that cluster 5 lineages are uniquely sensitive to this multidrug environment (Figure 7C). The different fitness profiles of mutants in cluster 4 vs 5 (Figure 7B, Figure 7-figure supplement 1) might imply that they have different growth phenotypes.We performed a follow-up experiment that supports this observation.We asked whether there are differences in the growth phenotypes of cluster 4 vs 5 mutants by measuring a growth curve for the lineage we were able to isolate and sequence from cluster 5, comparing it to a growth curve from a cluster 4 lineage (Figure 7D).Indeed, the selected mutants in cluster 4 and 5 appear to have markedly different growth curves in some conditions (Figure 7-figure supplement 1).The growth differences echo those we see in the fitness data.For example, the cluster 5 mutant has a lower maximum growth rate in the HRLF multidrug condition, corresponding with their lower fitness in this condition (Figure 7B and D).The cluster 5 mutant also reaches a lower maximum cell density in the LRLF multidrug condition and also has lower fitness in this condition (Figure 7-figure supplement 1).However, the growth curves and fitnesses of cluster 4 and 5 mutants are more similar in LF, LR, and HR single drug conditions.The observation of reproducible growth differences between a cluster 4 and cluster 5 mutant provides some supporting evidence that our clustering approach is effective at separating mutants with different properties. 
One group of RAD resistant mutants is exceptionally adaptive in conditions without drug One group of mutants in the lower half of the UMAP (cluster 6 in Figure 8A) appears distinct from the other two in that it has the largest fitness advantage in conditions lacking any drug (Figure 8B).This might imply that cluster 6 lineages rose to high frequency during our evolution experiments in environments without either drug, specifically the 'no drug' and 'DMSO' control conditions.Indeed, this is what we observe: over 50% of the lineages in cluster 6 were sampled from one of these two evolution experiments (Figure 8A).On the contrary, the other clusters in the lower half of the UMAP consist mainly of lineages sampled from one of the RAD evolutions (Figure 7C).Since our fitness experiments were performed independently of the evolution experiments, this provides two independent pieces of evidence suggesting that lineages in cluster 6 are defined by their superior performance in conditions lacking any drug. In line with the success of cluster 6 mutants in no drug conditions, the five sequenced mutants in this cluster include three that have mutations to IRA1, which was the most common target of adaptation in another evolution experiment in the condition we call 'no drug' (Figure 8A; Venkataram et al., 2016).In that experiment, and in our no drug experiment, mutations to IRA1 result in a greater fitness advantage than mutations to its paralog, IRA2, or mutations to other negative regulators of the RAS/ PKA pathway such as GPB2 (Figure 8).Previous work showed that sometimes IRA1 mutants have very strong tradeoffs, for example, they become extremely maladaptive in environments containing salt or benomyl (Kinsler et al., 2020).We do not observe this to be the case for either FLU or RAD.In fact, we observe that cluster 6 mutants, including those in IRA1, maintain a fitness advantage in our highest concentration of both drugs (Figure 4), being more fit in high FLU than mutants in either of the other clusters in the lower half of the UMAP (Figure 8B).However, cluster 6 mutants are unique in that they lose their fitness advantage in the lowest concentration of FLU (Figure 8B).Being singularly sensitive to a low concentration of drug seems unusual, so much so that when this was observed previously for IRA1 mutants the authors added a note about the possibility of a technical error (Kinsler et al., 2020).Our results suggest that there is indeed something uniquely treacherous about the low fluconazole environment, at least for some genotypes. 
Discussion

Here, we present a barcoded collection of fluconazole (FLU) resistant yeast strains that is unique in its size, its diversity, and its tractability. One way we were able to isolate diverse types of FLU-resistance was by evolving yeast to resist diverse drug concentrations and combinations. But the more important tool used to increase both the number and type of mutants in our collection was DNA barcodes. These allowed us to sample beyond the drug resistant mutants that rise to appreciable frequency and to collect mutants that would eventually have been outcompeted by others. Our primary goal in collecting these mutants was to get a rough sense of how many different mechanisms of FLU resistance may exist. This question is relevant to evolutionary medicine (because more mechanisms of resistance make it harder to design strategies to avoid resistance), evolutionary theory (because more mechanisms of adaptation make it harder to predict how evolution will proceed), and genotype-phenotype mapping (because more mechanisms make for a more complex map).

We distinguish mutants that likely act via different mechanisms by identifying those with different fitness tradeoffs across 12 environments, leveraging the mutants' barcodes to track their relative fitness following previous work (Kinsler et al., 2020). The 774 FLU-resistant mutants studied here clustered into a handful of groups (6) with characteristic tradeoffs. We confirmed that each group captures mutants with distinct properties using multiple approaches, including whole genome sequencing, growth curve experiments, tracing the evolutionary origins of the mutants in each cluster, and by using two additional independent clustering methods: hierarchical clustering and PCA (Figures 5-8). Some groupings are unintuitive in that they segregate mutations within the same gene (Figure 6) or are distinguished by unexpectedly low fitness in multidrug conditions (Figure 7). These findings are important because they challenge strategies in evolutionary medicine that rely on consistent tradeoffs or intuitive trends when designing sequential drug treatments. On the other hand, the observation that some mutants have very similar tradeoffs such that they cluster together is promising in that it suggests predicting the impact of some mutations by understanding the impacts of others is somewhat feasible.
Problematically, the clusters we present are incomplete and bound to change as additional data presents itself. For one, we have shown that additional FLU-resistant mutants emerge from evolution experiments in conditions lacking FLU (Figure 3C and D). This raises questions about what other FLU-resistant mutants might emerge in environments we have not studied here. Additionally, previous work has shown that some mutants that group together in our study (e.g. GPB2 and IRA2) have different fitness profiles in conditions that we did not include here (Kinsler et al., 2020). Also of note is that our evolution experiments were conducted for only a few generations and all started from the same genetic background. Additional types of FLU-resistant mutants with unique fitness profiles may emerge from other genetic backgrounds or arise after more mutations are allowed to accumulate (Allen et al., 2021; Bosch et al., 2021; Brandis et al., 2012). Finally, by requiring that all included mutants have sufficient sequencing coverage in all 12 environments, our study is underpowered to detect adaptive lineages that have low fitness in any of the 12 environments. This is bound to exclude large numbers of adaptive mutants. For example, previous work has shown some FLU-resistant mutants have strong tradeoffs in RAD (Cowen and Lindquist, 2005). Perhaps we are unable to detect these mutants because their barcodes are at too low a frequency in RAD environments, and thus they are excluded from our collection of 774. All of the aforementioned observations combined suggest that there are more unique types of FLU-resistant mutations than those represented by these 6 clusters, and that the molecular mechanisms that can contribute to fitness in FLU are more diverse than we know. This could complicate (or even make impossible) endeavors to design antimicrobial treatment strategies that thwart resistance.
On the upside for evolutionary medicine, not every infection harbors all possible types of mutants. This might explain why strategies that exploit one or two common tradeoffs have some, albeit mixed, success in delaying or preventing the emergence of resistance (Amin et al., 2015; Imamovic et al., 2018; Kaiser, 2017; Krishna et al., 2022; Nyhoegen and Uecker, 2023; Waller et al., 2023; Wang et al., 2019). Our results encourage more complex strategies to thwart drug resistance (Iram et al., 2021), such as those that focus on advance screening to determine the resistance mechanisms that are present (Andersson et al., 2019), or on cycling a larger number of drugs to exploit a larger number of tradeoffs (Thomas et al., 2022; Yoshida et al., 2017). Problematically, these strategies often rely on knowledge about the diversity of mutants and tradeoffs that exist (or that can emerge) within an infectious population. This type of information about population heterogeneity, heteroresistance, and substructure is expensive and arduous to obtain (Bottery et al., 2021). Fortunately, new methods, in addition to the one presented in this study, are emerging (Aissa et al., 2021; Brettner et al., 2024; Forsyth et al., 2021; Hsieh et al., 2022; Kuchina et al., 2021; Nagasawa et al., 2021). The richer data provided by these methods dovetails with emerging population genetic models that predict the likelihood of resistance to a given drug regimen (Cannataro et al., 2018; Day et al., 2015; Feder et al., 2021; King et al., 2022; Read and Huijben, 2009; Somarelli et al., 2020; Wilson et al., 2016). In sum, our observation of numerous different types of drug-resistant mutations suggests that designing resistance-deterring therapies is challenging, but perhaps not impossible.

Outside of predicting the evolution of resistance, our findings provide a tool to investigate the phenotypic impacts of mutation. This task has proven daunting in light of work demonstrating that mutations often have many phenotypic impacts (Boyle et al., 2017; Paaby and Rockman, 2013) and that these impacts change with contexts including the environment (Eguchi et al., 2019; Geiler-Samerotte et al., 2020; Geiler-Samerotte et al., 2016; Lee et al., 2019; Paaby et al., 2015). The approach presented in this study provides a way forward by identifying mutations that cluster together such that the effects of some mutants can be predicted from others. This clustering strategy can assist high-throughput efforts to identify the phenotypic impacts of a large panel of mutations (Flynn et al., 2024; Fowler and Fields, 2014; Mehlhoff et al., 2020; Starr et al., 2017). Further, our approach identifies environments that differentiate one cluster of mutants from another. This suggests where to look to understand the phenotypes that differentiate each cluster of mutants. For example, we were able to show that the growth phenotypes of mutants from clusters 4 and 5 are different because we knew to look for these differences in multidrug environments (Figure 7D). And our results suggest radicicol environments may be most helpful in teasing out any phenotypic differences that set apart some PDR mutations from others (Figure 6). Thus, our approach guides efforts to understand the phenotypic effects of mutation, while also guiding efforts to predict the effects of some mutations from others, as well as efforts to predict the outcomes of adaptive evolution.
Base media

All experiments were conducted in 'M3' media defined in the same study as the landing pad strain (Levy et al., 2015), which is a glucose-limited media lacking uracil. In our study, we supplemented this media with fluconazole, radicicol, or DMSO when appropriate.

Selecting drug concentrations

Our goal was to choose concentrations of each drug that would not kill so many yeast cells as to dramatically decrease barcode diversity. We wanted to maintain a high number of unique barcodes so we could track a high number of yeast lineages as they independently evolved drug resistance.

We measured the effect of each drug and drug combination on the growth rate of a single barcoded yeast strain using a plate reader to track changes in optical density (OD) over time. Ultimately, we chose a 'low' concentration of each drug that appeared to have no effect on growth rate, and a 'high' concentration that appeared to reduce growth rate by about 15% (Figure 2-figure supplement 2).

Although the lowest concentration of radicicol that we tested on a plate reader was 10 μM, we chose 5 μM as our low RAD concentration because previous work suggested this concentration had widespread effects on yeast physiology without affecting growth (Geiler-Samerotte et al., 2016; Jarosz and Lindquist, 2010). To perform our plate reader experiment, a single colony was grown to saturation. From this culture, 5 μl was added to every well of a 96-well plate, where every well contained 195 μl of M3 media. Some wells also contained either fluconazole, radicicol, DMSO, or combinations of these drugs. The concentrations that were tested are listed on the horizontal axis of Figure 2-figure supplement 2; each drug condition was replicated six times. The 96-well plate was incubated at 30 °C for 48 hr on a plate reader and OD measurements were taken every 30 min. Raw OD values were exported and maximum exponential growth rates for all tested conditions were calculated from the log-linear changes in OD over time.

Inserting 300,000 unique DNA barcodes into otherwise genetically identical yeast cells

In order to track many yeast lineages as they independently develop drug resistance, we needed to insert unique DNA barcodes into many yeast cells. Plasmids harboring barcodes (pBar3) were the same as those used in a previous barcoded evolution experiment (Levy et al., 2015) and were generously provided to us by Sasha Levy. These barcodes are 25 base pairs in length. They are targeted to an artificial intron within the Ura3 gene, such that they must be retained in media lacking uracil but are not expressed and thus do not themselves affect fitness (Levy et al., 2015). We transformed this barcode library (pBar3) into the landing pad strain (SHA185) as was done previously, activating a Cre-lox recombination system by growing the cells in YP-galactose, which resulted in genomic integration of the barcode. However, our efforts to perform extremely high efficiency transformations from which we could isolate hundreds of thousands of uniquely barcoded yeast were unsuccessful, despite manipulating the levels and timing of the inducer (galactose). Ultimately we performed 24 separate transformations and pooled many of these to obtain a large pool of barcoded yeast where every yeast cell was genetically identical except for its DNA barcode.
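The maximum exponential growth rate calculation described under 'Selecting drug concentrations' above is a log-linear fit to the steepest stretch of an OD curve. A minimal sketch, with invented OD readings, a hypothetical sliding-window width, and the roughly 15% reduction criterion used here, might look like the following (Python is used purely for illustration; the authors' own scripts are not reproduced):

```python
import numpy as np

def max_growth_rate(times_h, od, window=8):
    """Maximum exponential growth rate (1/h) from an OD time series,
    estimated as the steepest slope of ln(OD) over a sliding window."""
    log_od = np.log(np.clip(od, 1e-6, None))
    best = 0.0
    for i in range(len(times_h) - window):
        slope = np.polyfit(times_h[i:i + window], log_od[i:i + window], 1)[0]
        best = max(best, slope)
    return best

# Hypothetical OD curves (48 hr, readings every 30 min) for a no-drug and a drug well.
t = np.arange(0, 48, 0.5)
od_ctrl = 0.05 * np.exp(np.minimum(0.35 * t, 3.0)) + 0.01 * np.random.rand(len(t))
od_drug = 0.05 * np.exp(np.minimum(0.30 * t, 3.0)) + 0.01 * np.random.rand(len(t))

reduction = 1 - max_growth_rate(t, od_drug) / max_growth_rate(t, od_ctrl)
print(f"growth-rate reduction: {reduction:.0%}")   # a 'high' dose targets ~15%
```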
Examining the frequency of each barcode in the starting pool of cells

We sequenced the barcode region of these 24 transformed yeast populations on the HiSeq X platform using a dual index system (Kinsler et al., 2023) to discern barcode coverage, that is, how many total unique barcodes were successfully inserted into yeast cells and how evenly these barcodes were sampled. We needed many uniquely barcoded yeast in order to observe many different adaptive lineages within each evolution experiment. But barcodes with very high frequencies, referred to herein as monster lineages, were present in 10 of the 24 transformations and present a problem. Monster lineages allow too many cells to carry the same barcode, giving that barcode more chances to develop an adaptive mutation. This could allow different cells harboring that same barcode to pick up different adaptive mutations, destroying our ability to draw conclusions about adaptive mutations by using barcodes. Therefore, our final library of barcoded lineages was created by pooling 14 individual transformations together, choosing those 14 that lacked monster lineages, which we defined as lineages representing greater than 1% of all transformants. Our sequencing results suggest that this library contains about 300,000 unique barcodes.

Initiating 12 barcoded evolution experiments

All evolution experiments started from the same pool of roughly 300,000 uniquely barcoded yeast lineages. To start the evolution experiments, a pea-sized amount of the frozen yeast barcode library was grown up in 4 ml YPD for 4 hr at 30 °C in a shaking incubator at 220 rpm. Then, 300 µl of the grown barcode library was added to each of 12 pre-prepared 500 ml flasks representing the 12 evolution experiments listed in Table 1. To prepare these flasks, first, 1.2 l of M3 media was warmed at 30 °C. Then, 100 ml was added to each of 12 flat bottom flasks. Next, 500 µl of the appropriate drug or drug combination was added to each flask. Drugs were pre-diluted, aliquoted and frozen such that 500 µl of the appropriate tube could be added to each flask to achieve the desired concentration as listed in Table 1. All drugs were resuspended in DMSO such that the final concentration of DMSO in all experiments (except the 'no drug' control) was 0.5%.
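Returning to the 'monster lineage' screen described at the start of this section, the check below flags any transformation in which a single barcode exceeds 1% of reads; the read counts and barcode names are invented for illustration only.

```python
import random
from collections import Counter

def has_monster_lineage(barcode_counts, threshold=0.01):
    """True if any barcode exceeds `threshold` of all reads in one transformation."""
    total = sum(barcode_counts.values())
    return any(n / total > threshold for n in barcode_counts.values())

# Hypothetical read counts: ~10,000 barcodes per transformation.
rng = random.Random(0)
even = Counter({f"bc_{i:05d}": rng.randint(50, 150) for i in range(10_000)})
skewed = Counter(even)
skewed["bc_00000"] = 50_000          # one barcode dominates (>1% of reads)

for name, counts in [("T01", even), ("T02", skewed)]:
    print(name, "monster lineage:", has_monster_lineage(counts))
# Only transformations without monster lineages would be pooled into the final library.
```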
Performing barcoded evolution experiments

Evolution experiments were performed following previous work (Levy et al., 2015). After initiation (see above), the yeast in every flask were allowed to grow at 30 °C with shaking at 200 RPM for 48 hr. Then, the flasks were removed from the incubator and 400-1000 µl of each culture was transferred to a new pre-prepared flask with identical conditions to the first. The reason we added more volume (1000 µl) to some flasks than previous work was that the cell counts at the end of the 48 hr were lower for some of our higher drug conditions. We adjusted the transfer volume to maintain a transfer population of 4×10⁷ cells, which was the same as in previous work (Levy et al., 2015). We completed a total of 24 growth/transfer cycles, corresponding to 192 generations of growth assuming 8 generations per 48 hr cycle (Levy et al., 2015). Following each transfer, the remaining culture from each flask was split into two 50 ml conical vials, centrifuged for 3 min at 4000 rpm, and the supernatant was discarded. The final pellet was resuspended in 30% glycerol up to a total volume of 6 ml before being split into three 2 ml cryovials and stored at -80 °C. These frozen samples were later utilized for barcode sequencing and isolating adaptive mutants.

Isolating a large pool of adaptive mutants

To generate a large pool of diverse adaptive mutants, our goal was to collect a sample from each evolution experiment at a time point when there were many different adaptive lineages competing. If we sampled too late, the adaptive lineage with the greatest fitness advantage would have already risen to high frequency, thus reducing diversity. But if we sampled too early, adaptive lineages would not yet have risen in frequency above other lineages. Therefore, we chose to sample cells from a time in each evolution experiment when many barcoded lineages appeared to be rising in frequency (Figure 2-figure supplement 1). We sampled either 1 or 2 thousand cells per evolution experiment by spreading frozen stock from the chosen time point onto agarose plates, scraping 1 or 2 thousand colonies into a 15 ml conical tube containing a final concentration of 30% glycerol, and freezing the pool pertaining to each of the 12 evolutions. We sampled 2000 cells from most evolution experiments, but sampled only 1000 from those containing a high concentration of FLU as those evolutions appeared to have reduced barcode diversity (Figure 2-figure supplement 1), presumably because high FLU represents a strong selective pressure. We sequenced the barcodes from each of these 12 pools so that we could track which adaptive mutants originated from which evolution experiment (see Methods section below entitled 'Inferring where adaptive lineages originally evolved').

Initiating barcoded fitness competition experiments

To assess the fitnesses of the 1 or 2 thousand barcoded lineages that we sampled from each evolution experiment, we pooled all sampled lineages together into a larger pool of roughly 21,000 barcoded lineages. We used this larger pool to initiate 24 fitness competition experiments, 2 replicates for each of the 12 conditions listed in Table 1. In this type of competition, we measure fitness by tracking changes in each barcode's frequency over time. Barcodes that rise in frequency represent strains that have higher fitness than others.
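The transfer-volume adjustment described under 'Performing barcoded evolution experiments' above amounts to simple arithmetic: given the cell density at the end of a 48 hr cycle, transfer the volume that carries 4×10⁷ cells, within the 400-1000 µl range used here. The densities in the sketch below are invented for illustration.

```python
TARGET_CELLS = 4e7          # cells transferred each cycle
CYCLES = 24
GEN_PER_CYCLE = 8           # assumed generations per 48 hr growth cycle

def transfer_volume_ul(cells_per_ml, lo=400, hi=1000):
    """Volume (µl) carrying TARGET_CELLS, clamped to the allowed range."""
    vol = TARGET_CELLS / cells_per_ml * 1000.0   # ml -> µl
    return min(max(vol, lo), hi)

# Hypothetical end-of-cycle densities (cells/ml) for a benign and a harsh condition.
print(transfer_volume_ul(1.0e8))   # 400 µl: growth unaffected by drug
print(transfer_volume_ul(4.5e7))   # ~889 µl: high-drug flask with fewer cells
print("total generations ≈", CYCLES * GEN_PER_CYCLE)   # 192
```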
Our goal was to calculate the fitness effect of adaptive mutations. Therefore, we needed to calculate the fitness of every evolved lineage relative to the unmutated ancestor of the evolution experiments. To do so, we followed previous work by spiking in a large quantity of this unmutated ancestor strain into each fitness competition, with this ancestor making up at least 90% of the final culture (Kinsler et al., 2020; Venkataram et al., 2016). In environments containing a high concentration of FLU, which resulted in the ancestral strain having a more severe growth defect, we spiked in the ancestor such that it represented 95% of the final pool.

To avoid wasting 90% or more of our sequencing reads on the ancestor strain's barcode, we created a barcodeless ancestor strain. This strain was created by transforming SHA185 with a linear piece of DNA such that the genetic background was identical to the strains of the barcoded library, but the homology to the primers used to amplify the barcode was missing. Thus the DNA from these cells does not get amplified or sequenced during subsequent steps.

In addition to this barcodeless ancestor, we also spiked in some barcoded ancestral strains at lower frequency (1%) to use as 'reference' or 'control' strains, following previous work (Kinsler et al., 2023; Kinsler et al., 2020). These strains have been previously shown to possess no fitness differences from the ancestor. We used these strains as a baseline when calculating relative fitness by setting the fitness of these strains to zero during our fitness inference procedure (see Methods section below entitled 'Inferring fitness').

All 24 fitness competitions were performed simultaneously in one big batch (Kinsler et al., 2023) and initiated from the same pool of roughly 21,000 barcoded evolved yeast lineages, barcodeless ancestor, and control strains. To initiate the competitions, 7×10⁷ cells from this pool were added to 24 pre-prepared 500 ml flasks corresponding to the conditions listed in Table 1. These flasks were prepared exactly the same way as was done for the evolution experiments (see above in 'Performing barcoded evolution experiments'). Each flask was allowed to grow for 48 hr at 30 °C with shaking at 200 RPM.

Performing barcoded fitness competition experiments

Fitness competitions were performed following previous work (Kinsler et al., 2020). After the initial flasks were allowed to grow for 48 hr, they were removed from the incubator and 400 μl from each culture, representing 4×10⁷ cells, was transferred to a new flask with identical media. For each of the 24 competitions, we completed a total of 4 growth/transfer cycles, corresponding to 40 generations of growth assuming 8 generations per 48 hr cycle (Levy et al., 2015). Following each transfer, the remaining culture from each flask was split into two 50 ml conical vials, centrifuged for 3 min at 4000 rpm, and the supernatant was discarded. The final pellet was resuspended in 30% glycerol up to a total volume of 6 ml before being split into three 2 ml cryovials and stored at -80 °C. These frozen samples were later utilized for DNA extraction and subsequent barcode sequencing.
Despite the fitness competition experiments being conducted for nearly the same number of generations (40) as were the evolution experiments before isolating adaptive lineages, we do not anticipate that secondary mutations will bias fitness measurements. Previous work has demonstrated that in this evolution platform, most mutations occur during the transformation that introduces the DNA barcodes (Levy et al., 2015). In other words, these mutations are already present and do not accumulate during the 40 generations of evolution. Therefore, the observation that we collect a genetically diverse pool of adaptive mutants after 40 generations of evolution is not evidence that 40 generations is enough time for secondary mutations to bias abundance values. For a detailed treatment of how secondary mutations have a minimal influence on fitness, see Venkataram et al., 2016.

Extracting genomic DNA

DNA was extracted from 500 μl of concentrated frozen stocks pertaining to the evolution experiments and fitness competitions. Frozen cells were thawed and pelleted. Cells were treated with 250 μl of 0.1 M Na2EDTA, 1 M sorbitol and 5 U/μl zymolyase for a minimum of 15 min at 37 °C to remove the cell wall. Lysis was completed by adding 250 μl of 1% SDS, 0.2 N NaOH and inverting to mix. Proteins and cell debris were removed with 5 M KOAc by spinning for 5 min at 15,000 rpm. Supernatant was moved to a new tube and DNA was precipitated with 600 μl isopropanol by spinning for 5 min at 15,000 rpm. The resulting pellet was washed with 1 ml of 70% ethanol before being resuspended in 50 μl water plus 10 μg/ml RNase. Extracted DNA was quantified using the NanoDrop spectrophotometer and all samples were diluted to a concentration of 50 ng/µl for barcode amplification and sequencing library preparation.

Preparing barcodes for high-throughput multiplexed sequencing using PCR

Extracted DNA was prepared for sequencing using a two-step PCR that preserves information about the relative frequency of each barcode in each sample (Kinsler et al., 2023; Kinsler et al., 2020; Venkataram et al., 2016). Briefly, in the first-step PCR, the barcode region is amplified from the genomic DNA, labeled with a sample-specific combination of primers, and tagged with a unique molecular identifier (UMI). This step utilizes a short 3-cycle PCR with New England Biolabs OneTaq polymerase. Purification of the first-step product to remove excess reagents was performed using the Thermo Scientific GeneJET PCR Purification Kit. The second-step PCR attached Illumina indices that were used to distinguish samples from different experiments and timepoints. We utilized a dual indexing scheme to prevent index misassignment that is common when sequencing amplicon libraries using patterned flow cell technology (Kinsler et al., 2023). Amplification in this second step of PCR was done with a longer 23-cycle PCR using Q5 polymerase. Final libraries were bead purified using 0.8X Quantabio sparQ Pure Mag beads. Quantification of the final PCR products was done using the Invitrogen Qubit Fluorometer before all samples were pooled at equimolar ratios for sequencing.
Sequencing and clustering barcodes

Next-generation sequencing was performed at either Psomagen (Rockville, MD) or at the Translational Genomics Research Institute (Phoenix, AZ) on patterned flow cells (either an Illumina HiSeq X or NovaSeq) using 2×150 base pair paired-end reads. Samples were dual indexed to allow multiplexing while minimizing contamination from index misassignments (Kinsler et al., 2023). The 20 base pairs of variable sequence referred to as a DNA barcode were identified and clustered to determine the number of unique barcodes and the frequency of each barcode in each sample. For the evolution experiments, this was done following our previous work (Kinsler et al., 2020; Venkataram et al., 2016). For the fitness competition experiments, this was done using updated software (Zhao et al., 2018) with the following command:

Inferring fitness

In fitness competition experiments, fitness is often inferred from the log-linear change in a strain's frequency over time (Bakerlee et al., 2021; Geiler-Samerotte et al., 2011; Kinsler et al., 2023). Recently, more advanced methods to infer fitness have emerged that take into account nonlinearities in frequency changes over time, for example, nonlinearities that reflect changes in the mean fitness of the population (Kinsler et al., 2020; Li et al., 2018a; Li et al., 2023; Venkataram et al., 2016). We had trouble implementing these newer methods on our fitness data, perhaps because many of our evolved lineages, and our control strains, have low fitness in some drugs. This caused their barcodes to rapidly decline in frequency such that they received low counts at later time points. Their counts could become so low that these lineages would seemingly disappear due to sampling error, and then reappear at a subsequent time point. This dramatic (but false) late increase in frequency was sometimes interpreted as evidence of very high fitness, especially when we inferred fitness using approaches that account for nonlinearities.

To contend with this issue, we applied strict coverage thresholds to every fitness measurement: we required at least 500 counts across all timepoints in order to infer fitness for a given lineage in a given environment. This is stricter than previous work that does not require a minimum number of reads per lineage and instead requires a minimum number of reads per time point (Kinsler et al., 2020). We found that 774 lineages passed our threshold in at least one replicate experiment for all 12 environments. Of these, 729 passed for both replicates, and the final fitness value we report represents the average of both replicates.

Even with our strict coverage threshold, fitness inference methods that account for nonlinearities still interpreted minor stochastic fluctuations in fitness at later time points as evidence of a fitness advantage, even if fitness dramatically declined in earlier time points. Therefore, we calculated fitness via the traditional method, as the slope of the log-linear change in barcode frequency relative to the average slope of the control strains. We found that this method is less sensitive to that type of error. Using this method, we found that our fitness inferences were reproducible between replicates (Figure 2-figure supplement 4A), and between experiments performed in similar conditions (e.g. medium vs.
high concentrations of the same drug; Figure 2-figure supplement 4B). When we increased our coverage threshold to require an order of magnitude more reads per lineage per measurement (from 500 to 5000), we lost 157 lineages (from 774 to 617), saw reproducibility increase across replicates (from an average Pearson correlation of 0.756 to 0.813), and the main conclusions of our study were unchanged in that the same 6 clusters were present on a UMAP (Figure 4-figure supplement 2).

Identifying adaptive mutations using whole-genome sequencing

One downside of barcoded evolution experiments is that all lineages exist together in a pooled culture. Fishing out adaptive lineages in order to perform whole genome sequencing is a major challenge (Venkataram et al., 2016). Here, we randomly selected cells from these mixed pools for whole genome sequencing, sometimes selecting from later time points in the evolution experiments and sometimes selecting from the samples of 1 or 2 thousand cells that were isolated to initiate fitness competitions.

To perform whole genome sequencing, cells from mixed pools were spread onto M3 agarose plates, and single colonies were selected and grown in YPD to saturation. DNA was extracted using the PureLink Genomic DNA Mini Kit (K182002). Sequencing libraries were made using the Illumina DNA Prep kit with reactions diluted by ⅕. Briefly, samples were prepared such that the starting concentration in 6 μl was between 20 and 100 ng of DNA. 2 μl of BLT and TB1 were added to the starting material and incubated on a thermocycler at 55 °C (lid 100 °C) for 15 min. Two μl of TSB was added to each reaction and incubated at 37 °C (lid 100 °C) for 15 min. Beads were washed two times with 20 μl of TWB. Following the final wash, 4 μl of EPM, 4 μl of water and 2 μl of UD indexes were added to each sample. Depending on starting concentration, PCR was performed based on Illumina guidelines as follows: lid 100 °C, 68 °C for 3 min, 98 °C for 3 min, [98 °C for 45 s, 62 °C for 30 s, 68 °C for 2 min] for 6-10 cycles, 68 °C for 1 min, 10 °C hold. PCR products were cleaned with a double-sided size selection as follows: 4 μl of each sample was pooled together (32 μl total for 8 samples) and added to 28 μl of water plus 32 μl of SPB. After a 5 min incubation, 25 μl of supernatant was moved to a new tube containing 3 μl of SPB. Beads were washed with fresh 80% ethanol and libraries were eluted in 12 μl RSB. Samples were multiplexed using Illumina's unique dual (UD) index plates (A-D) and sequencing was performed with 2×150 paired-end sequencing on a HiSeq X at Psomagen (Rockville, MD).

In total, 122 colonies were randomly picked and sequenced. As one might expect, barcodes that rose to high frequency were more likely to be picked multiple times. In an attempt to find lineages with unique attributes, some cultures were grown at 37 °C or plated to high concentrations of drug prior to picking isolated colonies for sequencing. Of the 122 genomes we sequenced, only 53 had unique barcodes that pertained to the 774 lineages for which we obtained high enough barcode coverage to infer fitness. Only two of these 53 had no sequenced mutations, suggesting their fitness increase over the ancestor is due to a mutation we are unable to identify by sequencing, perhaps a change in ploidy. The other 51 all had at least one single nucleotide mutation in a gene reported in Supplementary file 1. Whole genome sequences were deposited in GenBank under SRA reference PRJNA1023288.
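Returning to the fitness-inference procedure described under 'Inferring fitness' above, the sketch below applies the ≥500-read coverage filter and then estimates fitness as the slope of log barcode frequency over time, baselined against the mean slope of the control barcodes. The count table, its column names, and the generation spacing are assumptions made only for illustration and do not reproduce the authors' pipeline.

```python
import numpy as np
import pandas as pd

GENERATIONS = np.array([0, 8, 16, 24, 32])      # assumed sampling times (generations)

def lineage_fitness(counts, totals):
    """Log-linear slope of a barcode's frequency per generation."""
    freq = np.clip(counts / totals, 1e-9, None)
    return np.polyfit(GENERATIONS, np.log(freq), 1)[0]

# Hypothetical barcode count table for one environment (rows = lineages).
df = pd.DataFrame(
    {"t0": [900, 40, 700], "t1": [1100, 35, 600], "t2": [1500, 20, 480],
     "t3": [2100, 10, 400], "t4": [2800, 5, 350]},
    index=["mut_A", "mut_B", "control_1"])
totals = df.sum(axis=0).to_numpy(dtype=float)   # total reads per timepoint

# Coverage filter: require >= 500 reads summed across all timepoints (mut_B fails).
keep = df.sum(axis=1) >= 500
slopes = df.loc[keep].apply(lambda row: lineage_fitness(row.to_numpy(dtype=float), totals), axis=1)

# Fitness relative to the average slope of the control strains.
control_slope = slopes[slopes.index.str.startswith("control")].mean()
print(slopes - control_slope)
```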
Variant calling was done using GATK as described here: https://github.com/gencorefacility/variant-calling-pipeline-gatk4 (Khalfan, 2020). Identified variants were annotated using SnpEff (Cingolani et al., 2012). Variant call files from the 122 sequenced lineages (53 of which had unique barcodes in our collection) were analyzed in R and compared to reference strain GCF_000146045.2 (genome assembly R64: sacCer3). SNPs present in the ancestor (as well as all evolved lineages) were ignored as these could not have caused the fitness differences we observed. We also ignored SNPs that were present in a substantial number of evolved lineages, as these likely represent background mutations that were present in a substantial portion of the cells representing the landing pad strain (SHA185). These are reported in Supplementary file 1 and include: SRD1-Glu97Lys, RSC30-Gly571Asp, OPT1-Val143Ile, and LYS20-Thr29Met.

Measuring growth curves of evolved lineages with unexpected fitness in multidrug conditions

Though fitness differences are not necessarily due to differences in maximum growth rate (Li et al., 2018b), we measured growth curves for a few lineages. In one case, we did so to investigate a case where an evolved lineage had unexpectedly low fitness in multidrug conditions (Figure 7B). Indeed, we found that this mutant grew more slowly in those conditions (Figure 7D; Figure 7-figure supplement 1). To perform this test, a lineage with a mutation to GBP2, a lineage with a mutation to HDA1, and sometimes the ancestor strain were streaked to YPD plates. We used the barcodeless ancestor strain, which is identical to the evolved lineages in every way except for lacking a barcode, and is described above in the Methods section entitled 'Initiating barcoded fitness competition experiments'. A single colony of each strain was isolated from YPD plates and was used to inoculate an overnight YPD culture. After ~24 hr, a Coulter counter (BD) was used to determine the number of cells/ml present in each culture. Next, all cultures were diluted such that the starting number of cells was 250,000 in 6 ml of M3 plus drug (either HR, LF, LR, LRLF, or HRLF, see Table 1). To measure growth curves, these samples were allowed to grow at 30 °C. OD was measured every 10 min as the cultures were grown to saturation using the compact rocking incubator TVS062CA (Advantec MFS). Raw growth curves for these conditions are shown in Figure 7-figure supplement 1B and C. Maximum growth rate was calculated using a sliding window approach to determine the region of each growth curve with the steepest log-linear slope. We used similar methods to measure growth curves for 3 mutants from cluster 1 and 3 from cluster 3 in Figure 5.

Determining ploidy

While our barcoded yeast strain is haploid, previous studies observed that some cells diploidize during the course of evolution in M3 media and by doing so gain a fitness advantage (Levy et al., 2015; Venkataram et al., 2016). To ensure that observed fitness effects in our experiments were not largely due to the effects of diploids, we estimated the percent of diploid cells in each of our populations. We chose to make our estimates from frozen samples taken at the same time points from which we sampled 1 or 2 thousand cells to initiate fitness competitions (Figure 2-figure supplement 1). As such, our estimates also report on the percent of diploids that were present at the start of the fitness competition experiments (Supplementary file 2).
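The variant-filtering rules described at the top of this section (discard SNPs found in the ancestor, and SNPs shared by a large fraction of evolved isolates, since these likely represent pre-existing background mutations) could be sketched as follows. The variant table, any gene or allele names beyond those quoted in the text, and the 50% sharing cutoff are hypothetical, chosen only to make the logic concrete.

```python
import pandas as pd

# Hypothetical long-format variant table: one row per (isolate, variant) call.
calls = pd.DataFrame({
    "isolate": ["anc", "evo01", "evo02", "evo03", "evo01", "evo02", "evo03", "evo02"],
    "variant": ["SRD1-E97K", "SRD1-E97K", "SRD1-E97K", "SRD1-E97K",
                "OPT1-V143I", "OPT1-V143I", "OPT1-V143I", "PDR1-F815S"],
})

ancestor_variants = set(calls.loc[calls.isolate == "anc", "variant"])
evolved = calls[calls.isolate != "anc"]
n_isolates = evolved.isolate.nunique()

# Fraction of evolved isolates carrying each variant.
share = evolved.groupby("variant").isolate.nunique() / n_isolates

candidates = [v for v in share.index
              if v not in ancestor_variants        # not present in the ancestor
              and share[v] < 0.5]                  # not a widespread background SNP
print(candidates)   # ['PDR1-F815S'] under these toy calls
```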
To study ploidy, we used the nucleic acid stain SYTOX Green, which is capable of selectively staining the nucleus of fixed cells and has been shown to be optimal for use in budding yeast (Haase, 2004). For each of the 12 evolution experiment conditions, a small amount of freezer stock from the chosen timepoints (Figure 2-figure supplement 1) was plated to YPD and grown for ~48 hr. Individual colonies were picked and transferred to 96-well plates, one full plate per condition, before being fixed with 95% ethanol for 1 hr. Plates were centrifuged at 4500 rpm and the supernatant was discarded. A total of 50 μl RNase A was added to the samples at a concentration of 2 mg/ml, and the plates were then incubated for 2 hr at 37 °C. Cells were pelleted by centrifugation and the supernatant was removed, which was followed by treatment with 20 μl of the protease pepsin at a concentration of 5 mg/μl. Pepsin-treated samples were incubated at 37 °C for 30 min before centrifugation and removal of supernatant. Finally, cells were resuspended in 50 μl Tris-Cl (50 mM, pH 8) and stained with 100 μl of 1 μM SYTOX Green. Known diploid and haploid strains were used as controls alongside our samples to determine the expected fluorescence of stained diploid vs. haploid cells. Analysis was performed using a ThermoFisher Attune NxT, housed in the Flow Cytometry Core Facility at Arizona State University.

Dimensional reduction

Our fitness inference procedure resulted in a data set consisting of nearly 10,000 fitness measurements (774 lineages × 12 conditions = 9,288 fitness measurements). Dimensional reduction was performed on these data using UMAP (McInnes et al., 2018). Clusters of similar mutants were identified and colored using a Gaussian mixture model (Fraley and Raftery, 2003); Bayesian Information Criteria (Figure 4-figure supplement 1) as well as follow-up genotyping and phenotyping studies (see Figures 5-8) were used to select the number of clusters. These analyses were performed in R; code can be found at https://osf.io/pxyv9/.

In order to prevent conditions with the most variation in fitness (e.g. high FLU) from dominating, we normalized fitness measurements from each of the 12 environments to have the same overall mean and variance (we transformed the data from every environment to have a mean of 0 and a standard deviation of 1) before performing dimensional reduction. This normalization procedure did not have a dramatic effect on the UMAP (Figure 4-figure supplement 2A). We also explored normalizing all data to account for magnitude differences by setting the average fitness of each lineage across all 12 environments to 0. Doing so did not significantly change the groupings present in the UMAP from those displayed in Figure 4 (Figure 6-figure supplement 1) other than in the ways we describe in Figure 6. Reducing our data set to the 617 adaptive lineages with very high sequencing coverage (Figure 4-figure supplement 2B) also did not significantly affect the way that mutants cluster into groups, nor did using a different dimensional reduction algorithm altogether (Figure 4-figure supplement 3 and see next paragraph). In short, the clustering of mutants was robust to the different decisions we made when choosing how to analyze these data.
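A minimal sketch of the dimensional-reduction workflow described above (per-environment z-scoring, UMAP, then a Gaussian mixture model with the number of components guided by BIC) is given below. The random fitness matrix, the UMAP parameters, and the range of component counts are illustrative assumptions rather than the settings of the original R analysis; the sketch uses the Python packages umap-learn and scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
import umap  # umap-learn

rng = np.random.default_rng(0)
fitness = rng.normal(size=(774, 12))          # stand-in for the 774 x 12 fitness matrix

# Normalize each environment (column) to mean 0, standard deviation 1.
z = (fitness - fitness.mean(axis=0)) / fitness.std(axis=0)

# Two-dimensional embedding of the fitness profiles.
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0).fit_transform(z)

# Choose the number of clusters by BIC over a range of component counts.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(embedding).bic(embedding)
        for k in range(2, 14)}
best_k = min(bics, key=bics.get)
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(embedding)
print(best_k, np.bincount(labels))
```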
In order to assess whether clusters identified from the UMAP are robust to alternative clustering methods, we also used hierarchical clustering to identify clusters of mutants with similar fitness profiles. First, we computed the pairwise distance of all lineages across the fitness profiles. Then, we used Ward's method from scikit-learn to iteratively cluster lineages such that the within-cluster variation is minimized (Pedregosa et al., 2012; Ward, 1963). To test the consistency of lineage clustering, we chose a pairwise cluster distance cutoff of 11, which results in the same number of clusters (7) as identified with the UMAP clustering approach used in the main text. We then compared the identity of the lineages within each of these clusters with the UMAP clusters. We found that, for most clusters, over 80% of lineages from the UMAP cluster corresponded with a unique hierarchical cluster, and we labeled these hierarchical clusters according to this correspondence (Figure 4-figure supplement 3). For UMAP cluster 1, lineages were more evenly split between two clusters: 64% of these lineages clustered together in what is labeled as hierarchical cluster 1 and 30% in hierarchical cluster 1/7 (Figure 4-figure supplement 3), which contains all of the control lineages that comprise UMAP cluster 7. Despite these lineages clustering more closely with the control lineages than with the remainder of cluster 1, they do tend to cluster distinctly from the control lineages, suggesting they have behavior that is distinguishable from the control lineages. If we consider these cluster 1 mutants that end up in cluster 1/7 as 'mis-clustered', we find that 85% of lineages from each UMAP cluster are clustered together in the corresponding hierarchical cluster. If we consider these as 'consistently clustered', this metric increases to 90% of lineages correctly clustered. Similarly, clustering lineages using principal component analysis also largely preserved the clusters reported in Figure 4 (Figure 4-figure supplement 4). Altogether, this analysis shows that the results we show are robust to alternative methods of clustering.

Inferring where adaptive lineages originally evolved

All 774 adaptive lineages were isolated from one of the 12 evolution experiments at the timepoint indicated in Figure 2-figure supplement 1 (see Methods section entitled 'Isolating a large pool of adaptive mutants'). The sample we isolated from each evolution experiment was sequenced prior to pooling. This allows us to computationally determine which barcoded lineages originated from which evolution experiment to generate the pie charts in Figures 3 and 5-8.

If adaptive mutations arose independently during the course of each evolution experiment, it would be unlikely for any adaptive lineage we study to be present in more than one of the evolution conditions. This would make it very easy to assign each barcode to the evolution experiment from which it originated. However, this was not the case for many barcoded lineages.
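Before continuing with lineage origins, the hierarchical-clustering consistency check described at the start of this section can be sketched as below. For simplicity the sketch uses SciPy's Ward linkage with a fixed distance cutoff (the text's cutoff of 11) rather than scikit-learn's implementation, and the fitness matrix and placeholder UMAP labels are random stand-ins, so the numerical output is not meaningful.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
z = rng.normal(size=(774, 12))                 # stand-in for the normalized fitness profiles

tree = linkage(z, method="ward")               # Ward's minimum-variance linkage
hier = fcluster(tree, t=11, criterion="distance")   # cut the tree at a fixed distance
print("number of hierarchical clusters:", len(np.unique(hier)))

def agreement(umap_labels, hier_labels):
    """Per UMAP cluster, fraction of lineages falling in its best-matching hierarchical cluster."""
    return {c: np.bincount(hier_labels[umap_labels == c]).max() / np.sum(umap_labels == c)
            for c in np.unique(umap_labels)}

umap_labels = rng.integers(1, 8, size=774)     # placeholder for the 7 UMAP cluster labels
print(agreement(umap_labels, hier))
```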
Previous work explained that the transformation procedure used to insert a barcode into the landing pad of SHA185 is itself mutagenic, such that many mutations arise prior to the start of the evolution experiments (Levy et al., 2015). Since all our evolution experiments were started from the same pool of barcoded lineages, we thus expect that many adaptive lineages will be present in more than one condition. However, it is not expected that these adaptive lineages will be present at the same frequency in every condition; instead, these frequencies change with the fitness of the mutation each lineage possesses. Therefore, when an adaptive lineage appeared in multiple conditions, we weighted its origin to reflect its frequency in each condition. In other words, adaptive lineages that were only present in the sample taken from a single evolution condition were identified and assigned a single origin condition in the pie charts in Figures 3 and 5-8. But for adaptive lineages found in the samples taken from more than one evolution condition, the proportions assigned to each origin condition in the pie charts were scaled to equal the relative frequencies of that lineage in all evolution conditions where it was observed. Associated data and code can be found here: https://osf.io/pxyv9/.

Figure 1. A multidrug treatment strategy that relies on all mutants having the same tradeoffs. (A) All of the mutants that resist Drug A do so via a similar mechanism such that all are sensitive to Drug B. (B) There are multiple different types of mutants that resist Drug A, not all of which are sensitive to Drug B.

Figure 2. An overview of the experimental design. (A) Yeast cells were barcoded to create 300,000 lineages. (B) These lineages were evolved in 12 different conditions (Table 1). (C) A small sample of evolved isolates was taken from each evolution experiment and their barcodes were sequenced. These ~21,000 isolates do not represent as many unique, adaptive lineages because many either have the same barcode or do not possess adaptive mutations. (D) These samples of evolved isolates were all pooled together with control strains representing the ancestral genotype. (E) Barcoded fitness competition experiments were then performed on this pool in each of the 12 evolution conditions. Fitness was measured by tracking changes in each barcode's frequency over time relative to control strains. Two replicates per condition were performed. (F) The overall goal is to investigate fitness tradeoffs for hundreds of adaptive lineages. For example, the adaptive lineage depicted in dark blue has higher fitness than the ancestor in some environments (HR, HF) but lower fitness in others (DMSO, ND). We were able to investigate fitness tradeoffs for 774 adaptive lineages. We excluded lineages when we did not observe their associated barcode at least 500 times in all 12 environments. In other words, we only included lineages for which we obtained high-quality fitness estimates in all 12 environments.

Figure supplement 1. Twelve barcoded evolution experiments track ~300,000 lineages as they adapt to different drug concentrations and combinations.

Figure supplement 2. Chosen drug concentrations do not dramatically reduce yeast's maximum growth rate.

Figure supplement 3. Twenty-four fitness competitions track evolved lineages as their barcodes change frequency.

Figure supplement 4.
Fitness measurements are reproducible between replicates and closely related conditions.

Figure 3. Two different classes of FLU-resistant mutants with unique tradeoffs. (A) This panel describes the 100 mutant lineages with the highest fitness relative to the control strains in the high FLU environment (8 μg/ml FLU). The vertical axis depicts the fitnesses (log-linear slopes relative to control strains) for these 100 strains in four selected environments, including the high FLU environment (boxed). Boxplots summarize the distribution across all 100 lineages for each environment, displaying the median (center line), interquartile range (IQR) (upper and lower hinges), and highest value within 1.5 × IQR (whiskers). (B) The 100 lineages with highest fitness in high FLU were most often sampled from evolution experiments in which FLU was present. In this pie chart, colors correspond to the evolution conditions listed in Table 1 and the blue outer ring highlights evolution conditions that contain FLU. The size of each slice of pie represents the relative frequency with which these 100 lineages were found in each evolution experiment. (C) Similar to panel A, this panel describes the 100 mutant lineages with the highest fitness relative to the control strains in the high RAD environment (20 µM RAD). (D) The 100 lineages with highest fitness in high RAD were most often sampled from evolution experiments that did not contain FLU. (E) A pairwise correlation plot showing that all 774 mutants, not just the two groups of 100 depicted in panels A and C, to some extent fall into two groups defined by their fitness in high FLU and high RAD. The contours (black curves) were generated using kernel density estimation with bins = 7. These contours describe the density of the underlying data, which is concentrated into two clusters defined by the two smallest black circles. The 100 mutants with highest fitness in high FLU are blue, those with highest fitness in high RAD are red, and the seven that overlap between the two aforementioned categories are black.

Figure supplement 1. The two types of adaptive mutants depicted in Figure 3 sort into different clusters on the UMAP.

Figure 4. Clustering evolved lineages with similar fitness profiles. (A-C) Simulated data showing potential fitness profiles when (A) all mutants have similar responses to environmental change and thus a similar fitness profile, (B) every mutant has a different profile (five unique profiles are highlighted in color), or (C) every mutant has one of a small number of unique profiles (two unique profiles are depicted). (D) Every point in this plot represents one of the barcoded lineages colored by cluster; clusters were identified using a Gaussian mixture model. The 774 adaptive lineages cluster into 6 groups based on variation in their fitness profiles; the control lineages cluster separately into the leftmost cluster in light green. (E) The fitness profiles of each cluster of adaptive lineages. Boxplots summarize the distribution across all lineages within each cluster in each environment, displaying the median (center line), interquartile range (IQR) (upper and lower hinges), and highest value within 1.5 × IQR (whiskers). The online version of this article includes the following figure supplement(s) for figure 4:

Figure supplement 1. Bayesian information criteria (BIC) scores suggest the 774 mutants cluster into between 6 and 13 groups.

Figure supplement 3.
Clusters are robust to a hierarchical clustering method.

Figure supplement 4. Clusters are robust to principal component analysis.

Figure 5. Evolved lineages comprising cluster 1 have different genotypes and phenotypes from neighboring clusters. (A) The three clusters on the top half of the UMAP differ in their genetic targets of adaptation, with cluster 1 being unique in that it does not contain mutations to PDR1 or PDR3. Cluster 1 is also unique in that it contains lineages that predominantly originated from the low fluconazole evolution condition; the pie chart depicts the fraction of lineages originating from each of the 12 evolution environments with colors corresponding to Table 1. (B) Evolved lineages comprising cluster 1 do not have consistent fitness advantages in conditions containing RAD, while lineages comprising clusters 2 and 3 are uniformly adaptive in medium and high RAD. Boxplots summarize the distribution across all lineages within each cluster in each environment, displaying the median (center line), interquartile range (IQR) (upper and lower hinges), and highest value within 1.5 × IQR (whiskers). (C) Lineages comprising cluster 1 are most fit in low concentrations of FLU, and this advantage dwindles as the FLU concentration increases. Lineages comprising clusters 2 and 3 show the opposite trend. (D) In low FLU (4 μg/ml), cluster 1 lineages (UPC2 and SUR1) grow faster and achieve higher density than lineages from cluster 3 (PDR). This is consistent with bar-seq measurements demonstrating that cluster 1 mutants have the highest fitness in low FLU. (D) Cluster 1 lineages are sensitive to increasing FLU concentrations (SUR1 and UPC2). This is apparent in that the dark blue (8 μg/ml FLU) and grey (32 μg/ml FLU) growth curves rise more slowly and reach lower density than the light blue curves (4 μg/ml FLU). But this is not the case for the PDR mutants. These observations are consistent with the bar-seq fitness data (Figure 4E).

Figure 6. Evolved lineages in clusters 2 and 3 have characteristic differences despite similarities at the genetic level. (A) This panel shows the similarities between clusters 2 and 3. The upper right inset displays the same UMAP from Figure 4D with only clusters 2 and 3 highlighted and with lineages possessing mutations to the PDR genes depicted as blue diamonds. The line plot displays the same fitness profiles for clusters 2 and 3 as Figure 4E, plotting the average fitness for each cluster in each environment and a 95% confidence interval. Dotted lines represent the same data, normalized such that every lineage has an average fitness of 0 across all environments. These line plots show that the fitness profiles for clusters 2 and 3 have a very similar shape. Pie charts display the relative frequency with which lineages in clusters 2 and 3 were sampled from each of the 12 evolution conditions; colors match those in the horizontal axis of the line plot and Table 1. (B) This panel shows the differences between the new clusters 2 and 3 created after all fitness profiles were normalized to eliminate magnitude differences. The upper right inset displays a new UMAP (also see Figure 6-figure supplement 1) that summarizes variation in fitness profiles after each profile was normalized by setting its average fitness to 0.
The line plot displays the fitness profiles for the new clusters 2 and 3, which look different from those in panel A because 37% of mutants in the original clusters 2 and 3 switched identity from 2 to 3 or vice versa. The new clusters 2 and 3 are depicted in slightly different shades of blue and orange to reflect that these are not the same groupings as those depicted in Figure 4. Pie charts display the relative frequency with which lineages in the new clusters 2 and 3 were sampled from each of the 12 evolution conditions; colors match those in the horizontal axis of the line plot and Table 1.

Figure supplement 1. UMAP on data that were normalized to account for magnitude differences (row means set to 0).

Figure 7. Evolved lineages in clusters 4 and 5 differ in response to combined drugs. (A) Adjacent clusters 4 and 5 each contain a small number of sequenced isolates depicted as diamonds; diamond colors correspond to the genes containing adaptive mutations in each sequenced isolate. (B) Cluster 5 (red) has an unexpected fitness disadvantage in the HRLF multidrug environment relative to cluster 4 (green), given that cluster 5 lineages do not have a fitness disadvantage in the relevant single drug environments. Boxplots summarize the distribution across all lineages within each cluster in each environment, displaying the median (center line), interquartile range (IQR) (upper and lower hinges), and highest value within 1.5 × IQR (whiskers). (C) Pie charts display the relative frequency with which lineages in each cluster were sampled from each of the 12 evolution conditions; colors match those in Table 1. (D) The maximum exponential growth rate for a single lineage isolated from each of clusters 4 (green) and 5 (red), relative to the ancestor. The growth rate of each lineage in each condition was measured twice by measuring changes in optical density over time. The tested lineage from cluster 4 (in green) has a mutation to GBP2 (S317T) while the lineage from cluster 5 (in red) has a mutation to HDA1 (S600S). The online version of this article includes the following figure supplement(s) for figure 7:

Figure supplement 1. Unexpected tradeoffs in evolved lineages in clusters 4 and 5 in response to combined drugs.

Figure 8. Evolved lineages in cluster 6 have higher fitness than other lineages in the absence of FLU and RAD. (A) Same UMAP as Figure 4D with clusters 4, 5, and 6 highlighted and sequenced isolates in these clusters represented as diamonds. Diamond colors correspond to the targets of adaptation in the sequenced isolates. Pie charts display the relative frequency with which lineages in cluster 6 were sampled from each of the 12 evolution conditions; colors match those in Table 1. The grey outline depicts conditions lacking RAD and FLU. (B) Of the three clusters on the bottom half of the UMAP, cluster 6 lineages perform best in conditions without any drug and in the highest concentration of FLU. Yet they perform worst in the lowest concentration of FLU. Boxplots summarize the distribution across all lineages within each cluster in each environment, displaying the median (center line), interquartile range (IQR) (upper and lower hinges), and highest value within 1.5 × IQR (whiskers).

Table 1. A list of the environments included in this study and the symbol used to represent them in subsequent figures.
24,117.8
2024-09-10T00:00:00.000
[ "Medicine", "Biology", "Chemistry" ]
Trends in cancer incidence in Lithuania between 1991 and 2010

Corresponding author: Giedrė Smailytė, Lithuanian Cancer Registry, Institute of Oncology, Vilnius University, P. Baublio 3B, LT-08406 Vilnius, Lithuania. E-mail: <EMAIL_ADDRESS>

Background. Analysis of time trends in cancer incidence provides an estimate of the burden of cancer in a certain population and is a useful tool for planning cancer control. Identification of changing epidemiological patterns in cancer is crucial in formulating future healthcare clinical tools, evaluating prognostic and therapeutic models, and generating new hypotheses on disease aetiology and prevention.

Materials and methods. Patients diagnosed with cancer in Lithuania between 1991 and 2010 were included in the analysis. Crude rates and age-standardized incidence rates for both sexes were calculated, as well as the annual percent change with 95% confidence intervals for selected cancer sites using the Joinpoint Regression Analysis.

Results. With the major exceptions of male lung cancer and stomach cancer in both sexes, cancer incidence has increased for most cancer sites in the last two decades in Lithuania. The strongest rises in incidence were seen for prostate cancer in men and thyroid cancer in women. Overall cancer incidence in men was strongly influenced by newly diagnosed prostate cancer cases.

Conclusions. This up-to-date analysis provides a basis for establishing priorities for cancer control actions in Lithuania. These results show an increase in incidence rates in Lithuania of all cancers combined among both men and women. Trends in cancer incidence rates for males were heavily influenced by trends in prostate cancer, which is the most common cancer among men. Increasing cancer incidence requires targeted interventions on risk factor control, early diagnosis, and improved management and pharmacological treatment for selected cancer sites.

INTRODUCTION

Cancer control is a term that encompasses all elements of prevention, early detection, treatment, rehabilitation and palliation. The World Health Organization recommends that cancer control activities are best planned and delivered through a national cancer control plan, and notes that population-based cancer registries are a core component of cancer control strategy (1).

Information on cancer is available for analysis from individual cancer registries and from the International Association of Cancer Registries. The Cancer Incidence in Five Continents series, started in the 1960s, brings together incidence data meeting acceptable quality criteria from population-based cancer registries throughout the world. The aim of the series is to make available data on cancer incidence for comparison from as wide a range of geographical locations as possible. This is the classical role of descriptive statistics: to allow formulation of hypotheses that might explain the observed differences (geographically, over time, in population subgroups) and that can be tested by further studies (2). Classically, such descriptive studies are said to be 'hypothesis generating', providing clues to aetiology that can be followed up in studies that focus on specific risk factors (3).
Cancer registries are a vital source of information on cancer epidemiology and cancer services. The idea of recording information on all cancer cases in defined communities dates from the first half of the twentieth century. Originally, cancer registries were concerned with describing cancer patterns and trends; later on, many were able to follow up the registered patients and calculate survival. In the last 20 years the role of registries has expanded further to embrace the planning and evaluation of cancer control activities, and the care of individual cancer patients (3).

The most basic function of a cancer registry in relation to cancer control is to assess the current magnitude of the cancer burden and its likely future evolution. Various statistics are available for assessing the "burden" of cancer, and of different types of cancer, in the population. Incidence is clearly a fundamental measure for this because it describes the stream of new cases that will require some kind of medical attention. It is the relevant measure when considering (primary) prevention, the objective of which is to prevent disease occurrence. Measurement of incidence is the most basic function of population-based cancer registries (4).

In this paper, we present an analysis of long-term trends of cancer incidence in Lithuania using modern statistical methods of trend analysis. The analysis is based on data from the population-based Lithuanian Cancer Registry, which contains data on the incidence of cancer in Lithuania since 1978.

Cancer registry

The Lithuanian Cancer Registry is a population-based cancer registry which contains personal and demographic information (place of residence, sex, date of birth, vital status), as well as information on diagnosis (cancer site, date of diagnosis, method of cancer verification) and death (date of death, cause of death) of all cancer patients in Lithuania, where the population size is around 3 million residents according to the 2011 census (5).

The national Cancer Registry was founded in 1984, but collection of data on cancer incidence had already started in 1957. In 1993, the Lithuanian Cancer Registry became a full member of the International Association of Cancer Registries (IACR) in Lyon, France. Since the period 1988-1992, the Registry data have been included in 'Cancer Incidence in Five Continents' (6).

The principal sources of information on cancer cases are primary, secondary and tertiary health care institutions in the country that are responsible for filling in a notification when cancer is diagnosed. All physicians, all hospitals and other institutions in the country must send a notification to the Lithuanian Cancer Registry of all cancer cases that come to their attention. Some pathological laboratories send the respective laboratory notification, automatically extracted from laboratory data systems, using a standard format. The notifications, supplemented by death certificate information, are built into a database suitable for statistical use. This database contains information on all cancer cases diagnosed in Lithuanian residents since 1978.

In the current analysis, patients diagnosed with cancer in 1991-2010 were considered. Cancers were classified according to the 9th (up to 1997) and the 10th (from 1998 onwards) editions of the International Classification of Diseases.
Statistical methods We calculated crude rates (CRs) and age-standardized incidence rates (ASRs) for four periods (1991-1995, 1996-2000, 2001-2005 and 2006-2010) by sex and cancer site. ASRs were adjusted using the European standard population, with a total of 18 age groups, each a 5-year band from 0-4 years up to 85 years and older. Additionally, the annual percent change (APC) was calculated for trends by means of a generalized linear model using the Joinpoint software, version 3.4.3 (7). The Joinpoint Regression Analysis identifies the best-fitting points ('joinpoints') at which a significant change in the linear slope (on a log scale) of the trend is detected. For each of the identified trends, we also fitted a regression line to the natural logarithm of the rates, using calendar year as the regression variable. 95% confidence intervals for the APC were calculated as well. Annual percent changes were considered statistically significant if p < 0.05.

RESULTS The most common cancer sites for men and for women in the periods 1991-1995 and 2006-2010 are shown in Fig. 1. Among men, in 2006-2010 the most common cancers were prostate (34%), lung (14%), non-melanoma skin (8%), stomach (6%), rectum and anus, and colon (both 4%), while among women cancers of the breast (18%), non-melanoma skin (16%), corpus uteri (7%) and cervix uteri (6%) comprised almost half of new diagnoses. The most striking changes in cancer incidence were seen in men, where the proportion of prostate cancer cases rose dramatically from 8% in 1991-1995 to 34% in 2006-2010, while the proportions of lung and stomach cancer dropped almost two-fold over the analysis period.

Numbers of cancer cases and crude and age-standardised incidence rates for selected cancer sites in the four periods (from 1991-1995 to 2006-2010) are presented in Tables 1a and 1b, for men and women, respectively. In total, there were 260,659 primary cancer diagnoses (non-melanoma skin cancer ...). The ASR for all cancer sites but skin rose markedly from 387.0 in 1991-1995 to 568.1 per 100 000 in 2006-2010 in men, while the increase was substantial but less pronounced in women, from 240.8 in 1991-1995 to 293.0 per 100 000 in 2006-2010. Among men, the strongest rise in ASR was for prostate cancer, for which the ASR was 37.5 in 1991-1995 and increased to 209.8 in 2006-2010. A large increase was also observed for kidney cancer (+13.9 units) between these two periods; on the other hand, decreases in ASR of more than 10 units were seen for lung and trachea (-19.9) and for stomach (-14.3) cancer. Among women, a more than two-fold rise in ASR was seen for thyroid cancer (+11.1 units). Breast cancer incidence rose by +18.1 units, while the biggest decrease in ASR among women was seen for stomach cancer (-6.5 units).

Annual percent changes with 95% confidence intervals for selected cancer sites in the period 1991-2010, and in line segments selected by the Joinpoint Regression Analysis, are shown in Tables 2a and 2b, for men and women, respectively. For all cancer sites combined (non-melanoma skin cancer excluded), the incidence rate increased significantly during the study period for both sexes and more markedly in men, with an APC of 2.4% per year in men and 1.3% per year in women. Cancer incidence during the study period increased significantly in 13 out of 20 cancer sites in men and in 12 out of 22 in women. A significant decrease in cancer incidence was observed in only 5 and 2 cancer sites for men and women, respectively.
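The age-standardisation and trend calculations described in the Statistical methods subsection above can be illustrated with a minimal Python sketch. It assumes the conventional (old) European standard population weights for the 18 five-year age groups and estimates a segment APC from a simple log-linear fit; the joinpoint search itself (locating significant changes in slope) is performed by the dedicated Joinpoint software and is not reproduced here. All input numbers below are illustrative, not Registry data.

import numpy as np

# European standard population weights (per 100 000) for the 18 five-year
# age groups 0-4, 5-9, ..., 80-84, 85+ used for direct standardisation.
EURO_STD = np.array([8000, 7000, 7000, 7000, 7000, 7000, 7000, 7000, 7000,
                     7000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 1000])

def age_standardised_rate(cases, person_years):
    """ASR per 100 000: age-specific rates weighted by the standard population."""
    rates = np.asarray(cases, float) / np.asarray(person_years, float) * 100_000
    return float(np.sum(rates * EURO_STD) / EURO_STD.sum())

def annual_percent_change(years, asr):
    """APC from the log-linear fit ln(rate) = a + b*year; APC = 100*(exp(b) - 1)."""
    slope, _intercept = np.polyfit(np.asarray(years, float), np.log(asr), 1)
    return 100.0 * (np.exp(slope) - 1.0)

# Illustrative ASR series rising by about 4% per year:
asr_series = [380.0, 395.2, 410.9, 427.4, 444.5]
print(round(annual_percent_change(range(1991, 1996), asr_series), 1))  # -> 4.0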
The site-specific results of the Joinpoint Regression Analysis of cancer incidence in men showed continuous increase without any changes during 20 years for cancer of colon, rectum and anus, brain and nervous system and thyroid, as well as melanoma of skin, testicular cancer, non-Hodgkin's lymphoma and multiple myeloma.The most rapid increase in incidence was observed for prostate cancer, overall by +11.7% per year.Incidence rose by +22.5% between 2001 and 2007, but increase was seen before 2001 as well, by +8.1% per year.Strong decrease though not significant was seen for prostate cancer incidence from the year 2007.Continuous decrease in cancer incidence among men was observed for lung and stomach cancer, and for Hodgkin's lymphoma (all statistically significant). Trends in cancer incidence among women had changed only for few cancer sites during the period of analysis, and these are cancers of kidney, bladder and thyroid, as well as cervical cancer.The incidence of cervical cancer levelled off in 2004, where increase by +2.9% per year was seen prior and slight insignificant decrease was seen afterwards.Overall, APC of thyroid cancer was the biggest among women (+8.5% per year), but statistical rise was seen just before 2000, afterwards increase was not meaningful. Incidence of all cancers combined statistically increased among men from 1991 to 2007 and then meaningful decrease was observed.No such pattern was observed among women, where incidence of all cancers combined increased during all the observation period. Age-standardised incidence rates and regression lines fitting these rates for selected cancers are demonstrated in Fig. 2. The strongest increase in cancer incidence is clearly visible for prostate cancer with the peak in 2007, and at the same time, increase for all cancers combined was observed among men.A small gap between age-standardised incidence rates of colon cancer between men and women was notable in 1991, and it became wider in 2010 with colon cancer being more common in men than in women. DISCUSSION We analysed trends in cancer incidence in Lithuania between 1991 and 2010 based on data from the population-based Lithuanian Cancer Registry and found an increase in cancer incidence for almost all cancer sites with a few exceptions in two decades for both sexes.Based on 2012 data, the estimated age-standardized incidence rate for all cancer sites combined is slightly higher for men in Lithuania than in Europe, while these estimates are very similar for women (8). In the 90s, lung cancer was the most common cancer diagnosis among men in Lithuania.The incidence rate in Lithuanian men is quiet high and similar to those obtained in other Central and Eastern European countries (8)(9)(10).Decrease (8,9,11), and was likely associated to decrease of smoking prevalence among men (12).Another site with decreasing incidence in most populations in the world is stomach cancer (13), nevertheless, incidence rates in Lithuania are still markedly higher than those in Western or Northern Europe (8).The decrease in incidence in Lithuania was also observed, but reasons for this worldwide decline in incidence are not fully understood, and are partly explained by changes in diet, reduction in chronic H. 
pylori infection due to improved hygiene and use of antibiotics (14)(15)(16).Currently, prostate cancer is the most frequent cancer among males in Europe (17), as well as in Lithuania.Increasing incidence of prostate cancer was observed in other European countries, where prostate-specific antigen (PSA) testing has become widespread (18,19).It remains unclear to what extent the rising trends in incidence rates could be attributed to an increased risk of developing this tumor or to an overdiagnosis due to opportunistic screening practices.In Lithuania, PSA testing is offered to healthy asymptomatic men as a screening test in the population-based Early Prostate Cancer Detection Programme since 2006.The extra ordinary rise of prostate cancer incidence in Lithuania following introduction of PSA screening was observed (20), and there is strong evidence that these changes are the result of increased detection rates, especially in men of eligible age for screening (20,21).Such a huge number of newly diagnosed prostate cancer cases heavily influenced overall cancer incidence changes in male population. Breast cancer is the leading cancer site in Europe among women, but incidence rate in our country is lower.In 2012 in Europe the estimated ASR was 94.2 per 100 000 women (8).Although the breast cancer screening program was started in Lithuania in 2005, our results have not found any changes in cancer incidence trends yet.Despite of the presence of the breast screening program in some countries, the differences of breast cancer incidence in Europe are likely due to variations in external risk factors across populations, such as age at birth of the first child and low parity (22). Rising incidence of colorectal cancer might be due to changes in people eating and behaviour habits.Obesity, physical inactivity, smoking, heavy alcohol consumption, a diet high in red or processed meats, and inadequate consumption of fruits and vegetables, are also factors associated with economic development or westernization (23).Lithuania is not a long-standing economically developed country and we expect to see further increase in colorectal cancer incidence.What is more, the screening program based on the faecal occult blood test was started in Lithuania in 2009 and it is still too early to see the impact on incidence. Possible stabilization in changes in cancer incidence might be seen for bladder cancer.Sig nificant rises of ASRs were seen until 2001 and 1999 in men and women, respectively, with no meaningful changes later on.Bladder cancer is becoming rarer in Western communities over the last decades (24) and we could expect the same changes in the future in Lithuania.This stabilization in cancer incidence is partly due declines in the smoking prevalence together with reduced occupational exposure to carcinogens (12). 
Cervical, endometrial, and ovarian cancers are relatively common, and cause significant cancer morbidity and mortality worldwide.In Lithuania, the incidence of cervix and corpus uteri cancers increased slightly during the study period and was stable for ovarian cancer.Cervical cancer trends in a given country mainly depend on the existence of effective screening programmes and time changes in disease risk factors, notably exposure to human papillomavirus (25).An organized screening program for cervical cancer using the Pap smear was started in Lithuania in 2004, however, inciden ce of invasive cervical cancer did not start to decrease in recent years.Endometrial cancer affects postmenopausal women almost exclusively.Endometrial cancer risk has been previously associated with several host factors, including high body mass index, nulliparity or low parity, early age at the first birth, history of type 2 diabetes mellitus (noninsulin dependent), and family history of cancer, particularly endometrial cancer (26,27).The aetiology of ovarian cancer is not well understood.Established risk factors for ovarian cancer include age and having a family history of the disease, while protective factors include increasing parity, oral contraceptive use, and oophorectomy (25).Many of the causes of ovarian cancer are yet to be identified.Additional research is needed to better understand the aetiology of this disease. Improvements in diagnosis may contribute to the rising incidence of kidney and thyroid cancer.It is known that thyroid cancer is more common among women (28), but increase in in cidence among men is also observed which is most likely due to new diagnostic technologies (29).Considering kidney cancer, both incidence of late-stage renal cell carcinoma and mortality have also been increasing, implying that risk factors are contributing to this upward trend (30,31).Among this cancer risk factors there are not only life-style risk factors like smoking, diet and obesity, use of some drugs etc., but environmental risk factors like occupational exposure to different chemicals, radiation, renal dialysis as well, participating probably in the aetiology of kidney cancer. While considering haematological cancers, incidence in Lithuania had the same trends as in most European countries, where incidence of non-Hodgkin's lymphoma rose between 1% and 5% per year in both sexes in most European countries, alongside a decrease in Hodgkin's lymphoma (32).The reasons for these trends in non-Hodgkin's lymphoma and Hodgkin's lymphoma incidence are still largely unknown.This analysis in time trends for cancer incidence provides an estimate of the burden of cancer in a certain population which is a useful tool for planning cancer control.Identification of changing epidemiological patterns in cancers is crucial in formulating future health care clinical tools, evaluating prognostic and therapeutic models, and generating new hypotheses on disease aetiology and prevention. 
CONCLUSIONS This up-to-date analysis provides a basis for setting priorities for cancer control actions in Lithuania. The results show an increase in the incidence rates of all cancers combined among both sexes in Lithuania. Trends in cancer incidence rates for males were heavily influenced by trends in prostate cancer, which is the most common cancer among men. With the major exceptions of male lung cancer and stomach cancer in both sexes, cancer incidence has increased in Lithuania for most cancers over the last two decades. Increasing cancer incidence requires targeted interventions on risk factor control, early diagnosis, and improved management and pharmacological treatment for selected cancer sites.

Results (Lithuanian summary, translated): Over the past decades the incidence of cancers of most sites has increased; marked decreases were seen only for lung cancer in men and for stomach cancer in both sexes. The strongest increases were in prostate cancer and, among women, thyroid cancer. The overall rate for all cancer sites among men was strongly influenced by the increased number of newly diagnosed prostate cancer cases.

Fig. 1. The most common cancer sites in men (a) and in women (b) in 1991-1995 and 2006-2010.
Fig. 2. Age-standardized incidence trends for selected cancers by sex in Lithuania in 1991-2010.
Table 1a. Number of cancer cases (N), crude (CR) and age-standardized incidence (ASR) rates in men, by cancer site, between 1991-1995 and 2006-2010.
Table 2a. Results of the Joinpoint Regression Analysis of cancer incidence trends by cancer site in men, 1991-2010.
Table 2b. Results of the Joinpoint Regression Analysis of cancer incidence trends by cancer site in women, 1991-2010.
* Annual percent change is statistically significant.
4,379.2
2014-02-07T00:00:00.000
[ "Medicine", "Biology" ]
Variation in quartzite exploitation during the Upper Palaeolithic of Southwest Iberian Peninsula The Upper Paleolithic of SW Iberia is marked by the presence of chopper and flake assemblages in quartzite. Detailed characterization at regional and chronological levels of these assemblages is of the utmost importance because, in the most Paleolithic recent phases, they can be found without type-fossils associated or in non-datable deposits. In this study, we used 24 quartzite assemblages from SW Iberia, to test the diagnostic character of this raw material through attribute analysis and refitting. Results indicate that Gravettian, Solutrean and Magdalenian can be distinguished on their quartzite assemblages, enabling, by itself, the differentiation of the Upper Paleolithic key-sequence . They also indicate that Gravettian and Magdalenian assemblages are technologically closer to each other than to Solutrean, a pattern possibly related with the adaptation to the Last Glacial Maximum. INTRODUCTION Chopper and flake assemblages on non-flint raw materials are worldwide known and omnipresent through all diachrony of hominid tool production.In the case of the Portuguese Upper Paleolithic and Epipaleolithic, their crude aspect and often non prepared reduction is a clear deceiver of the intrinsic complexity of their production.This was shown, for instance, by refitting at Barca do Xerêz (Araújo and Almeida 2007), Lagar Velho (Zilhão and Almeida 2002), Gato (*) Núcleo de Arqueologia e Paleoecologia (NAP).Faculdade de Ciências Humanas e Sociais, Universidade do Algarve (Ulag) -Campus Gambelas.8005-139 Faro.Portugal.La página web de Ualg está en actualización.El NAP podrá ser iden-Preto, Cabeço de Porto Marinho or Anecrial Cave (Almeida 2000).Therefore, a detailed project dedicated to their comprehensive characterization through the Upper Paleolithic was of most importance in order to recognize regional and chronological variations.They represent one of the most important distinctiveness of the Western Iberian Paleolithic, a phenomenon noted right from the beginning of archaeological investigation in Portugal (Ribeiro 1871).The Portuguese Upper Paleolithic is represented by several sub-periods, distinguished by the presence of very different leptolithic technologies and diagnostic tools (Zilhão 1997;Bicho 2000), with flint dominating the quarry sites but with considerable important quantities of quartz and quartzite in both residential and logistic campsite.Elucidation of the Upper Paleolithic sub-divisions was the result of 150 years of intensive research in several Portuguese regions.Typological and technological analyses revealed that both blank production and retouchedtool types closely followed the traditional Western European patterns and tool-kits.Blades were mostly produced on flint through prismatic cores which preparation included descortification, configuration and the creation of a crest from where the blanks begun to be extracted; reshape was common.Bladelets are usually in flint and quartz.This type of blank was extracted not only through prepared prismatic cores, but also through thick endscrapers and carinated burins.In the first case, the result tended to be twisted blanks, while in the second the result was the creation of thick blanks, also known as burin spalls.These studies brought about the local recognition of several sub-stages of the Early Gravettian, Middle, Late and Terminal Gravettian; Proto and Upper Solutrean, Early, Middle, Final and Terminal Magdalenian .The combination 
of stratigraphy, artifact seriation with absolute dating allowed the clarification of their temporal organization, resulting in the creation of a fairly accurate Western Iberian Upper Paleolithic key-sequence (Heleno 1956, Marks et al. 1994;Bicho 1996Bicho , 2000Bicho , 2001Bicho , 2004;;Zilhão 1997;Gameiro and Almeida 2001;Zilhão and Almeida 2002;Almeida et al. 2002;Almeida et al. 2004: Almeida et al. 2008;Aubry 2009;Cascalheira 2010). During the Upper Paleolithic, quartzite components are mostly regarded as composed of choppers and flakes, presenting cortex and usually lacking any kind of preparation.Elongated products, diagnostic tool-types (which are those in which most of the traditional key-sequence is based), and weaponry are scarce or nonexistent, and, traditionally seen as without unique technological strategies or diachronic differences .This is completely different from the scenario prior to the emergence of the Upper Paleolithic, when non-flint raw materials are present in cores, blanks, retouched and shaped diagnostic tools.Traditionally, this differentiated use of quartzite in the Upper Paleolithic had two different interpretations.Some considered it as reflecting an opportunistic use of local raw material in order to substitute or preserve flint (Zilhão 1997), while others (Bicho 2000) suggested that it might have been used to perform some specific domestic functions, although, not specifying what.Both perspectives seem to agree on the fact that the reason behind the difference between flint/quartz and quartzite assemblages might have been the availability or/and the low quality of the latter for the production of blades and bladelets.This would be explained by the relatively low frequency of these quartzite blanks in the assemblages of such period (Bicho 1996(Bicho , 2000;;Zilhão 1996Zilhão , 1997)).However, very thin, long and sometimes exquisite products such as handaxes in the Micoquian or Levallois blanks, including elongate flakes and blades in the Mousterian are frequent.Independently of the reasons, despite the presence of a large number of sites dominated by chopper and flake industries that lack the traditional type-fossils (Breuil andZbyszewski 1942, 1945;Zbyszews ki 1966;Raposo and Silva 1984;Raposo 1986;Meireles 1992), the impasse on decoding the post-Middle Paleolithic quartzite components resulted in the decrease of the study of such assemblages . The recent discovery of new sites, some of which are close to important flint outcrops, in which quartzite and greywacke are abundant and often dominating the assemblages (Almeida et al. 2002;Almeida et al. 2008;Araújo and Almeida 2007;Aubry 2009), has re-opened discussion on the interpretation of quartzite exploitation as merely opportunistic.At the same time, study of these new data revealed that the reduction sequences were quite complex. METHODS AND MATERIALS To test the importance and variability of coarse raw materials in the tool-kits from Anatomically Modern Human communities, we used SW Iberia as a study case, by applying a method that combines two lithic technology approaches (attribute based analysis and refitting), statistical enquiry and refitting.The objectives were: 1. To infer if the importance of quartzite as raw material for the anatomically modern human groups in this region. 2. If its exploitation and use changed between the beginning of the Gravettian and the end of the Magdalenian . 3. 
To construct, if possible, a consistent model for quartzite exploitation during the Upper Paleolithic in SW Iberia.

Dated to the Gravettian are Caldeirão Cave (layers Jb, Ja, I, H), Terra do Manuel, Fonte Santa and Alecrim Rockshelter. The Solutrean assemblages also came from Caldeirão Cave (Fa, Fb, Fc, H) and Casal do Cepo. Finally, the Magdalenian ones came from Caldeirão Cave (layers Ea, Eb-top, Eb-bottom), Picareiro Cave (layers E, F, G, I, J, K, L), Bocas (layer Bottom) and Bairrada. Available absolute dates are presented in table 2, where '(r)' marks a date refused by the original author. Detailed descriptions and inventories of each context can be found elsewhere (Bicho 1997; Zilhão 1997; Quelhas 1999; Bicho et al. 2003; Pereira 2010).

The analyses follow the common lithic technology concepts used in Western Europe (e.g., Inizan et al. 1999). Assemblages were divided as shown in table 1, and the relative frequency of each raw material in each context is presented in table 3 (sites approximately organized from older, left, to recent, right; '-' indicates non-available data). Core and blank analysis considered the attributes presented in tables 4 and 5, respectively, plus measurements (Tab. 4 and 5). The original morphology of the core blanks followed the Rapp and Hill (1998: 42) classification. Weight was measured on cobbles, pebbles and cores only. Fire-cracked rocks were not considered for the metrical analysis. In the traditional morphological classification of cores, which followed Brézillon (1983), cores on cobbles and pebbles intended for the production of flakes were classified as choppers. To avoid this simplistic classification, we adapted the approach of Benito del Rey and Benito Álvarez (1998: 64-108). Cores were first divided into extensive, intensive and pre-determinate exploitation categories. Extensive exploitation targeted the production of big flakes or fragments that were afterwards used as cores or massive tools. Intensive exploitation is related to the production of many small and pre-determined flakes, as described by Inizan et al. (1999: 61). 'Stepped cores' are those that in the François Bordes typology would fit the category of chopper; the term chopper is not used here because they are cores and not retouched tools. For the typological classification of retouched tools, we used the most common type-list for the Upper Paleolithic (Sonneville-Bordes and Perrot 1954, 1955, 1956), which has been adapted for the Portuguese territory (Zilhão 1997). Since the main object of this study was the quartzite macrolithic component, we combined the Sonneville-Bordes and Perrot list with the type-list for the Lower and Middle Paleolithic (Bordes 1961) (Tab. 6). This option allowed a more detailed description of the tool components, especially concerning the variation within sidescrapers. Intensive refitting was performed on all assemblages, sometimes with considerable results (Fig. 2), which reinforced the results of the attribute analysis.

As complementary information, it should be noted that orthoquartzite and metaquartzite can be found in the same secondary deposits. Both represent a compact and hard rock formed by fine grains of strongly united quartz that fractures conchoidally or sub-conchoidally through the grains rather than around them (Costa 1998; Farndon 2006). Since both represent a very distinct raw material when compared to flint or quartz, we will refer to them simply as quartzite. No microscopic study was carried out to separate the quartzites by granulometry.
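As an illustration of how the inventories and attribute records described above can be organised for analysis, the following Python sketch tabulates a hypothetical artefact list; the column names, contexts and values are invented for the example and do not reproduce the authors' actual recording system. It follows the two rules stated in the text: weight is recorded only for cobbles, pebbles and cores, and fire-cracked rocks are excluded from the metrical analysis.

import pandas as pd

# Hypothetical inventory; every name and value here is illustrative only.
artefacts = pd.DataFrame({
    "context":      ["Caldeirao Jb", "Caldeirao Jb", "Picareiro E", "Picareiro E", "Bairrada"],
    "raw_material": ["quartzite", "flint", "quartzite", "quartz", "quartzite"],
    "category":     ["core", "flake", "flake", "bladelet", "fire-cracked rock"],
    "length_mm":    [62.0, 34.5, 28.0, 21.0, None],
    "weight_g":     [740.0, None, None, None, None],  # weight only for cobbles/pebbles/cores
})

# Relative frequency (%) of each raw material per context (cf. table 3).
raw_material_freq = pd.crosstab(
    artefacts["context"], artefacts["raw_material"], normalize="index") * 100

# Metrical analysis excludes fire-cracked rocks, as stated in the text.
metrics = (artefacts[artefacts["category"] != "fire-cracked rock"]
           .groupby("raw_material")["length_mm"]
           .agg(["count", "mean"]))

print(raw_material_freq.round(1))
print(metrics)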
RESULTS The description of the results is illustrated by the tables, figures and graphics.Pebbles and cobbles are water-worn and cortex represents a very thin cap as a regular surface, which do not seem to influence the impact of hammer in the platform of percussion or quality of the cutting edges.That seems to be corroborated by the absence of preparation of the cores, including their configuration.This resulted in the predominance of cortical butts and sticking platforms.There are, however, some cases of maintenance, namely, the removal of problems such as the irregularity of the flaking surface front resulting from stepped detachments or pronounced irregularities in the overhang (Fig. 3). Six different reduction strategies were identified (Tab.4).The Extensive concept seems to have been intended to split big round cobbles, often weighing between 2 and 4 Kg.The resulting big and thick flakes, the remaining core and, sometimes, bigger fragments that resulted from this knapping were subsequently used as base for the recurrent obtainment of flakes that fall into the standard dimensions .The Stepped reduction aimed the production of standard flakes according to an intensive concept.This strategy tended to start in a cortical platform, and follow the thickness of the volume.Detachments were oblique, orthogonal and sometimes, usually the last ones that failed and, therefore, definitely jeopardized the continuation of the reduction can even be obtuse.Reduction was unipolar, unifacial, unidirectional parallel or convergent sequence.The parallel or convergent orientation of the detachments depended in the platform being more or less flat.During the knapping process, these tended to lateralized progressively, especially in the pebbles that had a more discoidal shape. Prismatic reduction was aimed for the production of flakes, blades and bladelets.It also started with cortical platforms, and followed the thickness of the volume in a unipolar, unifacial and unidirectional parallel sequence.Pebbles used as support tended to be thicker and more angular than those from the Stepped examples.Platforms also tended to be flatter and more regular.Reduction usually started in a natural crest or, in other words, a natural apex of the block.A perpendicular platform that creates crossed detachments may be present in the final moments of exploitation of the blocks, with negatives usually overlapping but rarely alternating with the prior.This situation represents a rotation of the core in order to follow the same strategy in a new platform and not a deliberate variation of the prismatic concept with alternate or alternant crossed detachments . 
The production of small flakes was the objective of the Centripetal reduction.The supports for the development of this concept were thick flakes or fragments, often resulting from the division of the big cobbles.It is rarely present in small pebbles .Some bigger cores can be recognized during the Gravettian, usually related with the exploitation of the flakes or fragments that resulted from the sub-division of big cobbles .Nevertheless, they tend to be rather smaller than the stepped and prismatic ones.In the case of flake supports, the detachments tended to be concentrated on the bulb.That is not surprising, since this is the place on the flake where exists a higher quantity of mass available .Due to the location of the detachments, some of these cores where the detachments are more concentrated in the bulb and not so much on the perimeter of the flake, can fall in the category of Kombewa cores.The goal was to reduce the volume's surface, through centripetal unifacial or, less frequently, bifacial detachments . Based on their frequency, one can say that the other concepts, such as the Random and Polyhedral ones, can be considered as result of punctual circumstances that demanded a better use of volumes of raw material, often, big fragments or cobbles with major imperfections.During the Gravettian, the initial moments of the stepped reduction could have been slightly different, starting in one of the most angular corners of the initial blank, with the detachment of several overlapping flakes, sometimes alternating with others struck from the top of the front of débitage in use.During this phase, the number of flakes with cortical and flat butts can be even; however, before this area is exhausted, the exploitation expands laterally, following a standard unipolar unifacial parallel sequence.The recognition of this variance was only possible through refitting (Fig. 2: 5).Extensive cores to produce smaller cores, resulting in a high core to cobble ratio, are only present during this period (Fig. 2: 1-3).During the Solutrean the blanks used for the cores were always pebbles with spherical or sub-prismoidal shapes, but never cobbles and the reduction sequences remained relatively stable, starting in the most suitable sector and progressively reducing it to exhaustion.Stepped reduction rarely presented any kind of variation.The Centripetal cores were much smaller and, sometimes, made in quartzite similar to that found in the Solutrean pre-determined flakes (see below).During the Magdalenian, the cobbles/pebbles chosen to serve as cores tended to be as large as those from the Gravettian but were more angular, with a wide and flat platform.The Prismatic exploitation usually started along a natural well marked crest.The intention of the Magdalenian knapper to produce elongated blanks is clear, and not circumstantial as in the previous phase (Fig. 4A).Massive tools, sometimes in the shape of big thick flakes with strong distal cutting-edges, often presenting edge damage are common (Fig. 4B).The Centripetal reduction is usually evident on very small cores, on very fine quartzite (Fig. 5). A diachronic comparison of the core frequency (Fig. 
6) shows that during the Gravettian the core assemblages are clearly dominated by the Stepped reduction (53.8%), followed at some distance by the Prismatic (18.3%) and the Centripetal (17.2%); Random strategies are rare. During the Solutrean, core assemblages are dominated in equal measure by the Prismatic and Stepped reductions (41.2% each), while the frequencies of the Centripetal and of the Random/Polyhedral strategies are low (8.8% each). Finally, in the Magdalenian core assemblage, the Prismatic reduction takes the lead (41.4%), followed by the Stepped (37.6%) and, at some distance, by the Centripetal (14.3%); Polyhedral and Random cores are rare (3%).

The diachronic analysis of the relative frequency of core types shows that from the Gravettian to the Magdalenian the Stepped reduction decreases and the Prismatic increases, and that these two always dominate the assemblages (Fig. 6B). The curve for the Centripetal cores shows a considerable reduction during the Solutrean and seems to be mirrored by that of the Random cores. A similar inverse relation also seems to occur between the Extensive and Polyhedral concepts, despite the different objectives of each.

The χ2 test of the null hypothesis of no chronological difference between the core assemblages indicates that the Gravettian differs significantly from both the Solutrean and the Magdalenian, but that the Solutrean and the Magdalenian do not differ significantly (Tab. 7). This is corroborated by the cluster analysis based on the Euclidean distance between groups (Fig. 7A). This result might be related to the very low number of cores in the Solutrean and should therefore be treated with caution: first, because it concerns only the typological variability; second, and more importantly, because it is not corroborated by the same tests performed on the technological data obtained from the core assemblages. In fact, the χ2 test (Tab. 8) and the cluster analysis (Fig. 7B) of the technological attributes of the core assemblages group the Gravettian with the Magdalenian, meaning that these two are technologically more similar to (i.e. have more in common with) each other than with the Solutrean. Finally, the typological similarity and clustering of the core assemblages is not corroborated by the same χ2 test performed on the entire assemblages (Tab. 9), nor by the χ2 test (Tab. 10) or cluster analysis (Fig. 7C) performed on the assemblages after pebbles, fragments, chips and fire-cracked rocks were excluded. This likeness between the Gravettian and Magdalenian quartzite assemblages is not a surprise, since it follows the pattern already known for flint and quartz.

The combination of technological analysis with refitting shows that quartzite exploitation was intended to produce flakes, which always represent more than 92% of the blank assemblage. A progressive but still slight increase in the number of bladelets exists towards the Magdalenian, while there is a slight peak in the frequency of blades during the Solutrean (Fig. 8).
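A minimal Python sketch of the two comparisons used above, the χ2 test between period core assemblages and the hierarchical clustering on the Euclidean distance between group profiles, is given below. The counts are invented for illustration only; the actual contingency tables are those referred to as Tab. 7-10.

import numpy as np
from scipy.stats import chi2_contingency
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical core counts per period (columns: Stepped, Prismatic, Centripetal, Other).
periods = ["Gravettian", "Solutrean", "Magdalenian"]
counts = np.array([
    [50, 17, 16, 10],   # Gravettian
    [14, 14,  3,  3],   # Solutrean
    [50, 55, 19,  9],   # Magdalenian
])

# Chi-square test of the null hypothesis of identical core-type composition.
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Hierarchical clustering of the period profiles (relative frequencies,
# Euclidean distance, average linkage between groups).
profiles = counts / counts.sum(axis=1, keepdims=True)
Z = linkage(pdist(profiles, metric="euclidean"), method="average")
tree = dendrogram(Z, labels=periods, no_plot=True)  # set no_plot=False to draw the tree
print(tree["ivl"])  # leaf order of the dendrogram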
The attribute analysis on the flakes seems to show a general standardization during the three periods (Tab.5).The majority of the flakes do not present cortex on the dorsal face and only a few have more than 50% of cortex .This element is often located along a lateral edge and present mostly in the bigger flakes.Unidirectional parallel scars dominate the dorsal patterns and tend to increase in time by opposite to cortical and unidirectional convergent .On-axis detachments increase considerably during the Magdalenian .Flat profiles are slightly more abundant during the Solutrean, while triangular are slightly more abundant during the Gravettian and the Magdalenian .Feathered ends decrease from Gravettian to Magdalenian, overpass and fractured in- crease, while stepped are more abundant during the Solutrean.Lipping is much less abundant during Solutrean.Edge morphology varies considerably.The most common shape is the irregular (especially during the Solutrean) and divergent (especially in the Gravettian, when convergent are also very frequent).Parallel are slightly more abundant towards the Magdalenian.Biconvex along with concave-convex shapes are rare.Pro-nounced bulbs are considerably more abundant during the Solutrean than during the other two periods.As it happens with the cores, the cluster analysis performed over the results from the technological analysis grouped the Magdalenian assemblage with the Gravettian, meaning that flake assemblage from pre and post Last Glacial Maximum has more in common in between than with Solutrean or this one with any of the others (Fig 9A ); this is corroborated by the Principal Coordinates analysis (Fig. 9B). Flakes tend to be small, usually not bigger than 40 mm in length and 30 mm in width, with the Solutrean ones slightly larger and standardized (Fig. 10A).Again, the cluster analysis on the length, width and thickness of the three flake assemblages, clusters Gravettian with Magdalenian (Fig. 10B).This result that is corroborated by the Principal Coordinates analysis (Fig. 10C). In some Solutrean contexts, we have found a small number of flakes (with 5 to 10 cm length -~8 cm), that are relatively thin, on-axis, with lateral sharp edges, feathered ending, and produced in types of quartzite different from those present in the remaining assemblages.None was refitted, albeit some seem to come from the same block.Their characteristics do not fit any type or individual from the core assemblages and they are clearly distinct from the remaining flake assemblages.They were not recovered in Casal do Cepo (Middle Solutrean), but were found in Caldeirão (Middle and Upper Solutrean, layers Fa, Fb) and in Layer A of Vale Boi (Upper Solutrean), and in the latter case along with a fragment of Parpalló point (Fig. 11). 
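The cluster and Principal Coordinates analyses applied above to the flake attributes and dimensions can be sketched in a few lines of Python. Classical PCoA (metric multidimensional scaling) double-centres the squared distance matrix and keeps the leading eigenvectors; the dimension values below are invented solely to illustrate the Gravettian-Magdalenian proximity and are not the measured means.

import numpy as np

def pcoa(distances, k=2):
    """Classical Principal Coordinates Analysis of a distance matrix."""
    n = distances.shape[0]
    centering = np.eye(n) - np.ones((n, n)) / n
    gram = -0.5 * centering @ (distances ** 2) @ centering   # double-centred matrix
    order = None
    eigval, eigvec = np.linalg.eigh(gram)
    order = np.argsort(eigval)[::-1][:k]                     # largest eigenvalues first
    return eigvec[:, order] * np.sqrt(np.clip(eigval[order], 0.0, None))

# Hypothetical mean flake length, width and thickness (mm) per period.
labels = ["Gravettian", "Solutrean", "Magdalenian"]
X = np.array([[36.0, 27.0, 9.0],
              [41.0, 30.0, 10.5],
              [37.0, 27.5, 9.5]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # Euclidean distances
print(dict(zip(labels, pcoa(D).round(2).tolist())))
# With these toy values the Gravettian and Magdalenian points fall close together,
# apart from the Solutrean, mirroring the clustering reported in the text.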
The quantity of tools varies considerably and, as with the cores, they are considerably fewer in the Solutrean. Typical Upper Paleolithic retouched tools are more abundant during the Magdalenian, as are sidescrapers (Tab. 6). Percussion tools are very common in quartzite assemblages and frequently occur in association with massive retouched tools such as choppers. During the Gravettian, retouched tools are dominated by notches and denticulates. Sidescrapers represent 12.9% of the assemblage, with the simple sidescraper group more frequent than the double and convergent ones. Typical Upper Paleolithic tools are very rare. During the Solutrean, choppers and chopping-tools disappear. Notches, along with retouched flakes and fragments, were the most common. While the denticulates decreased substantially, sidescrapers increased (17.4%), again with simple forms being the most frequent. Considering all the Solutrean quartzite assemblages from SW Iberia, it is striking that only two Laurel Leaf points were identified: one from the Middle Solutrean site of Vale Almoinha and the other from the Upper Solutrean layer of the Slope area of Vale Boi. Finally, during the Magdalenian, tools are more frequent than in any other period. Despite this, the assemblages are still dominated by notches, retouched flakes or fragments and denticulates, with sidescrapers decreasing slightly (15.3%). Again, the cluster analysis of the tool assemblages groups the Magdalenian with the Gravettian, indicating that these two have more in common with each other than with the Solutrean (Fig. 12A). This result is corroborated by the Principal Coordinates analysis (Fig. 12B).

DISCUSSION Traditionally, European Upper Paleolithic research has paid more attention to lithic assemblages made on flint than to those in other raw materials. This is because flint is the raw material most used for the diagnostic retouched tools and elongated blanks. This over-attention to flint artifacts denied importance to other raw materials such as quartz and quartzite, resulting in a truncated interpretation. Consequently, the role of quartzite exploitation was always down-played in the Upper Paleolithic and has never been the focus of in-depth research.

The constant presence of quartzite and greywacke in the SW Iberian Upper Paleolithic assemblages, independent of the distance to flint outcrops, clearly indicates that it had a more relevant role than previously thought. Considering the archaeological evidence, it seems clear that the quartz and flint assemblages are more similar to each other than to those in quartzite. Both present, systematically, three different strategies of bladelet production: one based on the reduction of prismatic cores to produce flat-profile blanks, another based on the reduction of thick-nosed endscrapers for twisted blanks and, finally, a third based on the reduction of burins to obtain thick bladelets, usually known as burin spalls (Zilhão 1997; Bicho 2000; Marreiros 2009; Mendonça 2009; Cascalheira 2010). These blanks were often retouched into similar spectra of retouched tools, most of them widely interpreted as hafted elements used in many activities but many of them clearly as hunting gear (Zilhão 1997; Bicho 2000), as shown by some use-wear analyses (Bicho et al.
2009;Igreja 2009;Marreiros 2009).A very important flake component is also present in both raw materials.The major difference between these assemblages is the relative presence of blades, retouched tools on blades and shaped tools, such as Laurel Leaves.The resemblance between flint and quartz bladelet and flake assemblages is most probably related with the similarity in the physical properties of these two raw materials, since both present a large capacity to produce sharp edges.Furthermore, since the most readily available quartz in SW Iberia tends to split in smaller chunks, the presence of quartz blades is rare.However, when this raw material presents good knapping quality, which is usually associated with macro-crystalline quartz, the resemblance increases considerably because quartz blades can occur (Zilhão 1997;Bicho 2000;Almeida et al. 2004;Marreiros 2009;Mendonça 2009;Cascalheira 2010).On the contrary, as it is shown in this paper, quartzite assemblages present a completely diverse strategy of exploitation, with different pattern of blank and retouched tool production aimed for the production of flakes.Usually, chopper-like cores and flakes compose these assemblages, both with considerable amounts of cortex.The retouch, despite rare, tend to be simple, forming notches, denticulates, retouched flakes and sidescrapers. The reasons beneath such difference were traditionally related with the physical properties of this raw material.According to some authors (Bicho 1996(Bicho , 2000;;Zilhão 1996Zilhão , 1997)), the coarse trait of quartzite disable it from the knapping ability to produce elongate, thin, sharpedged blanks and highly shaped tools.A detailed observation of the bibliography on the Paleolithic lithic assemblages from SW Iberia gives us strong clues on why this assumption can no longer be considered as valid . Consequently, the almost absence of quartzite retouched tools (except in some random occasions, probably related with unexpected situations) has to be interpreted as an anthropologic result and a cultural option.In other words, tools were not produced in quartzite either because SW Iberian anatomically modern humans did not want to, or because they lacked the ability to work it properly.If the second hypothesis is true, then what could have been the reason for these populations to use it through all the Upper Paleolithic and continued to use it until knapped stone tools were definitely discarded from the tool-kits? 
An important indication of the role of quartzite in SW Iberia might come from Central Portuguese region of Estremadura and the Southern coast region of Algarve .Estremadura is highly abundant in very good quality quartz, quartzite and flint, which appear in both primary and secondary deposits.Despite the flint quality and abundance, most Upper Paleolithic sites present a considerable amount of quartzite even when the occupations are located less than 5 Km from flint outcrops.The presence of quartzite is clearly evident especially in those interpreted as residential sites, such as Cabeço de Porto Marinho (Bicho 1996(Bicho , 2000;;Zilhão 1996Zilhão , 1997)).Similarly, Algarve is also highly abundant in quartz, greywacke and flint, however all of lower quality than in Estremadura.Quartzite occurs in very small, rounded and friable pebbles, but there is another coarse raw material that is very abundant in cobbles, pebbles and slabs: greywacke.This raw material appears in all sites that are not flint outcrops and it is particularly abundant through all sequence and in all loci of Vale Boi, a residential site with ~10,000 m 2 .Reduction patterns of this raw material aimed for the production of flakes in similar quantities and characteristics as those in quartzite on Estremadura (Marreiros 2009;Mendonça 2009;Bicho et al. 2010;Cascalheira 2010;Pereira 2010). The almost absence of quartzite from the hunting gear of SW Iberian Upper Paleolithic seems to indicate that the absence of use of this raw material in the hunting gear was likely a cultural option.In fact, many hunting tools were pre-determined and sometimes just slightly retouched as is the case of the Vale Comprido and Casal Filipe points.At the same time, quartzite retouched tools fit in what is widely known as domestic tools (notches, denticulates, marginal trimmed flakes, sidescrappers, anvils, etc.).This is congruent with the few available use-wear analysis preformed in some Portuguese sites, where quartzite is associated with the procession of vegetable, wood, hide, antler and bone, but not hunting (Carvalho 2007;Bicho et al. 2009;Igreja 2009;Pereira et al. 2011).Quartzite is also highly abundant in sites where intensive carcass and hide processing occurred, such as the Gravettian EE15 occupation layer of Lagar Velho Rockshelter (Almeida et al. 2009) or the Epipaleolithic multi-component site of Barca do Xerêz (Araújo and Almeida 2007). Together, the available data seem to point to the possibility of coarse-grained raw materials such as quartzite and greywacke to have had a specific role in the everyday SW Iberian Upper Paleolithic activities different from that of just save or replace flint, as it was firstly suggested (Zilhão 1997).This role seems to have been that related with domestic activities and not hunting as argued before by Bicho (1992).Hunting tools are always in flint and quartz.This seems to indicate that was probably quartz and not quartzite that was used to replace flint as previously suggested (Bicho 2000); moreover because flint and quartz allow the production of edges that are more similarly sharp than those in quartzite.This suggests that occupation in fluvial terraces similar to that of Barca do Xerêz (Araújo and Almeida 2007), rich in quartzite chopper and flakes, lacking or poor in flint and quartz hunting gear, might also represent butchering and/or hide process sites.Such dichotomy between the use of flint and quartz in the hunting gear vs. 
quartzite and greywacke in processing activities seem, at this point, to be explained only with the cultural background of the modern human populations living in the Western Iberian sector between the South margin of the Douro River and Algarve .This situation is clearly different from that seen in most of Iberia, especially Eastern and Mediterranean, where flint dominates over 90% of the collections.Flint is less present is other Iberian regions, but that seems to be related with its low availability .North of Douro valley and most of Galicia, quartz and quartzite are abundant and flint is rare.This geologic fact couples with the topographic features of this territory (highly mountainous) which event today limits the circu-lation of people through the landscape.Archaeological lithic assemblages in this territory are almost all in quartz and quartzite .Presently, it is assumed that, during the Paleolithic, distance and orography were probably major facts that, together, jeopardized the long distance acquisition of flint (Straus 1980;Utrilla 1981;Arrizabalaga 1999;Llana and Villar 1996). Another example came from Asturias, a region where flint, quartz and quartzite are abundant but where the Upper Paleolithic assemblages are considerably different from those of SW Iberia.Here, quartzite is present in curated tools (namely projectiles), such as the Solutrean Concave Base points along with domestic tools such as notches, denticulates or retouched flakes (Straus 1980(Straus , 1983(Straus , 1986;;1996;Cabrera Valdés 1984;Straus and Clark 1986;Rasilla Vives 1989;Bernaldo de Quiros and Cabrera Valdes 1996). The diachronic variability on the reduction sequences in quartzite identified during the Upper Paleolithic of SW Iberia deserves some discussion.First, from a geological point of view, it seems that there were any changes in the quality, morphology or availability of quartzite during this time span that could explain the existence of such variations .Secondly, the setting of each occupation (open air, rockshelter or cave) might have had a direct relation with the functionality of the occupations and, therefore, with the frequency of each raw material or the spectrum of tools, as it was shown by the butchering sites discussed above.However, it does not seem to have had any influence on the technologic patterns on which the quartzite assemblages were produced.These two facts putted together suggest that the diachronic changes recognized in the quartzite Upper Paleolithic assemblages from SW Iberia have to be related with either the adaptation to raw material constrains or with cultural grounds .The fact that Estremadura is highly rich in good quality flint, quartz and quartzite, seems to set aside the first hypothesis. 
Techno-typological approaches to lithic assemblages dated from the Upper Paleolithic of Western Iberian have been performed since early 1990s .They had arrant results on the recognition of idiosyncrasies that enable the distinction of Gravettian, Solutrean and Magdalenian assemblages in both quartz and flint.Therefore, it should not be a surprise that such idiosyncrasies would also be recognized in quartzite.The fact that, statistically, both technological traits and retouched tools variability in this raw material always clustered Gravettian with Magdalenian indicates that these resemble each other more than with those from Solutrean.A considerable similarity between Gravettian and Magdalenian assemblages was already recognized for the lithic assemblages in quartz and flint, in such a way that they were only possible to distinguish after absolute dates (3).At the same time, a recent approach to Solutrean assemblages (Cascalheira 2010) indicates that during this period there appears to have been a higher standardization on the lithic production, especially in quartz and flint.The higher standardization recognized in the quartzite flakes is congruent with that.Solutrean quartzite flake assemblages also stand out from the Gravettian and Magdalenian ones for being slightly bigger .Together, the analysis of quartzite flake attributes clearly indicates a consistence shift from Gravettian to Solutrean and from Solutrean to Magdalenian .The fact that quartzite exploitation variability during the three major periods of the SW Iberian Paleolithic closely followed the shifts recognized in the other most used raw materials (flint and quartz) indicates that quartzite use was not occasional but an organic part of the everyday lifestyle. CONCLUSIONS With this study, we were able to accomplish the three objectives proposed.We showed that quartzite had a major importance for the anatomically modern human groups from SW Iberia.This statement is supported by several data.First, quartzite artifacts rarely fit into the hunting-gear and present characteristics that seem to putt such assemblages closer to the domestic activities .This idea seems to be reinforced by the few usewear studies that associate its use to processing activities.Furthermore, the lack of preparation of both cores and blanks, indicate that tools and blanks were probably aimed to be used in the hand and not hafted or as projectiles.Consequently, its use seems to have been complementary to the flint and quartz assemblages; these two with a clear hunting goal despite some domestic ones as well, while quartzite was aimed only towards the domestic tasks.This means that it was not used only in an opportunistic fashion or to save flint, but it was part of the daily tool-kit. Our results also seem to show that the production of these core-and-flake assemblages followed different strategies through time, with consequent different results .Those differences are such that it is now possible to fit each assemblage in the three specific time phases that correspond to Gravettian, Solutrean and Magdalenian without the need of the traditional type-fossils or other dating methods.This is particularly relevant for large areas of Western Europe, from where many sites with rich quartzite (or other coarse raw material) component, lacking traditional type-fossils and that were never dated are known. 
Distinction between the Upper Paleolithic sub-periods can be made on pebble/cobble selection, core type, reduction sequences and tool assemblage.Upper Paleolithic quartzite assemblages from SW Iberia can be organized as follow: Gravettian: pebbles/cobbles chosen for reduction tend to be big, angular, with the later splintered prior the intensive reduction, within the extensive concept.Stepped strategy dominates the assemblages.Extensive cores are only present in this period.This is the period when elongate blanks are rarer, despite that bladelets are more common than blades and where quartzite flakes are smallest; Solutrean: cores are always in spherical or sub-prismoidal pebbles, never in cobbles.Prismatic and Stepped strategies are dominant.Predeterminate flakes, with ~8 cm in length, thin, on-axis, with lateral sharp edges and feathered ending can be considered as chronological marker for Solutrean quartzite exploitation.This is the period when elongate blanks are most common, with a relative higher abundance of blades than bladelets.This is the period with more relative frequency of quartzite blades during the all Upper Paleolithic and where quartzite flakes are biggest; Magdalenian: cobbles/pebbles tend to be large and angular, with a wide platform.Prismatic and Stepped strategies dominate the assemblages.Typical prismatic cores for bladelets occur.Massive tools and flakes with intense edge damage are common.Polyhedric cores are only present in this period.Elongate blanks are rare, but bladelets are more common than blades .Also, this is the period with more relative frequency of bladelets and where some quartzite flakes are clearly bigger that average, usually presenting intensive edge damage . The use of different raw materials in regions with mixed geological features is a transversal phenomenon to geography and chronology.It represents both a simple and effective way to exploit available resources.The results of our study show that the combination of detailed attribute analysis, refitting and use-wear allow to overpass the absence of traditional type-fossils, usually associated with the hunting-gear.Therefore, it should represent a methodological approach to all territories and assemblages, especially to those where such artifacts are often absent.In addition, the possibility of differentiated use of raw materials should be kept open and further tested in order to research to get a deeper understanding of past population behavior.That should include both not archaeological and actualistic studies, once present hunter-gatherer communities are still used as proxy to Pre-historic periods.The possibility of differentiated use of raw materials towards specific goals might, eventually in the future, allow the recognition of some paleoethnographic sub-regions. . 3. Relative frequency of the raw materials in each archaeological context studied (2). Fig. 6 . Fig. 6.Quartzite exploitation during the Upper Paleolithic of Southwest Iberian Peninsula: A. Diachronic variability of cores in number; B. Diachronic variability of cores in percentage. Fig. 7 . Fig. 7. Quartzite exploitation during the Upper Paleolithic of Southwest Iberian Peninsula.Cluster analysis based on the Euclidean distance between groups using: A. Cores: Typologic criteria; B. Cores: Technologic criteria; C. Entire assemblages: after pebbles, fragments, chips and firecracked rocks were excluded. 
Fig. 8. Quartzite exploitation during the Upper Paleolithic of Southwest Iberian Peninsula. Diachronic frequency of blades and bladelets.
Fig. 9. Quartzite flake production during the Upper Paleolithic of Southwest Iberian Peninsula: A. Cluster analysis based on the Euclidean distance between groups using technologic criteria; B. Principal Coordinates analysis using technologic criteria.
Fig. 10. Quartzite flake production during the Upper Paleolithic of Southwest Iberian Peninsula: A. Box-plot of flake dimensions (in mm); B. Cluster analysis based on the Euclidean distance between groups using measurements; C. Principal Coordinates analysis using measurements.
Fig. 12. Quartzite tools during the Upper Paleolithic of Southwest Iberian Peninsula: A. Cluster analysis based on the Euclidean distance between groups using tool assemblages; B. Principal Coordinates analysis using tool assemblages.
[Table header fragment: Complex, Site, Layer, Reference; raw-material counts (Flint, Quartz, Quartzite, Other, Total) with and without chips.]
Tab. 5. Attribute analysis of the flakes from the sites studied (continuation).
Tab. 6. Inventory of the tools from the sites studied.
8,664.6
2012-12-12T00:00:00.000
[ "Geology" ]
The Opening of the Stock Market of Angola and the Challenges for Companies at the Level of the Financial Reporting System and Corporate Governance This research aims to analyze the main requirements in terms of financial reporting and corporate governance mechanisms established by the Capital Market Commission (CMC) of Angola to companies operating on the capital market of Angola. Using a qualitative investigation approach, we conclude that, overall, Angola is providing itself with a legal framework that is in convergence with the major orientations of the international organizations, regarding the requirements made to entities operating in the capital market, in terms of governance and oversight of those corporations, of their financial reporting process. However, there is an urgent need for the CMC of Angola to orient or even advocate the mandatory adoption of the IASB’s international accounting standards for entities operating (or intending to operate) in the capital market of Angola. At the Corporate Governance level, we found a fair convergence of the principles and recommendations of the CMC regarding the OECD Corporate Governance principles. Introduction The increasing globalization of the Angolan economy, with the resulting increase in competition, as well as the high external financing needs associated with the large industrial and infrastructure projects, recommends the adoption of models of DOI: 10.5281/zenodo.1196478 organization maximizing business increased competitive efficiency of enterprises, as well as strengthening the external credibility of the Angolan economy and business organizations (Bom 2014). On the other hand, the startup of the capital market in Angola in December 2014, with the opening of Stock Exchange and Debt Values of Angola, and the opening of the stock exchange on 15 November 2016, which initially will only Transact public debt securities, and forecast for brief from the beginning of the acceptance of stock trading, impose the adoption, by the Angolan companies, of good practices of Corporate Governance (Bom 2014). Like in many developing countries, it is, therefore, necessary build a proper legal framework, sensitize civil society and Angolans business groups and implement Corporate Governance processes tailored to the ambitions of economic and social development of Angola (Fan & Wong 2005;Bom 2014). In addition to the participating entities of the capital markets of emerging countries is essential to enhance the quality of financial reporting of these entities by establishing greater demands in terms of accounting and financial reporting standards (including the increased disclosure of accounting and financial information) through the placing of financial statements to external audits (Fan and Wong 2005). Ball (2006) notes that the adoption of international accounting standards and financial reporting in emerging countries is a very important step to enhance the quality of financial reporting of companies based there. Then, this study aims to analyze the system level requirements of financial reporting and Corporate Governance mechanisms established by the Capital Market Commission of Angola to companies wishing to place their shares to trading on the Stock Exchange and Debt Values of Angola. Methodology In methodological terms, this study is an investigation of a qualitative character. 
The qualitative approach emerged as an alternative to the positivist paradigm, which at times proves ineffective for analyzing and studying the subjectivity related to the behaviors and activities of persons and organizations (Sousa & Baptista, 2011; Lee & Humphrey, 2006). Interpretive researchers strive to provide a thorough understanding of the environment in which the problems studied are experienced, adopting the perspective of those who live them (Paiva et al. 2011). The research conducted in this study was exploratory and followed a qualitative case-study methodology, since it seeks to understand not only the phenomenon studied but also the situation in which it developed (Charoux, 2006). To achieve the general objective of the study, bibliographical and documentary research were used. The bibliographical research supported the theoretical component and was based essentially on books and articles published in scientific journals dealing with the implementation and development of capital markets in emerging countries. The documentary research served as the data-collection instrument. Most of the documentation was obtained from the websites of the Capital Markets Commission (CMC), the Stock Exchange and Debt Values of Angola, the Order of Accountants and Expert Accountants of Angola, and international bodies linked to capital markets and accounting, among other sources, from which data relevant to the present research were gathered.

Case study

Requirements at the level of Governance and Oversight Structures

Companies listed on the Stock Exchange and Debt Values of Angola and the Corporate Governance Code of Angola

Angola already had a Commercial Companies Code (CCC) (hard law), which obliges companies limited by quotas and open limited-liability companies to comply with requirements related to Corporate Governance practices. With the opening of the capital market, however, this code proved insufficient and partly at odds with some of the international Corporate Governance practices suggested and required of entities with listed securities. 1 The CCC was therefore amended (some legal provisions in force were modified or revoked) and the Securities Code of Angola was introduced (supported in part by the CCC). 2 In this way, the legal provisions necessary for effective compliance with internationally accepted Corporate Governance practices were established, to be followed by entities wishing to have securities admitted to official listing on the Stock Exchange and Debt Values of Angola. Thus, in addition to the provisions of the Commercial Companies Code (CCC), entities intending to have securities traded on the Stock Exchange and Debt Values of Angola must also comply with the Securities Code. For entities with securities listed on the Stock Exchange and Debt Values of Angola, the Securities Code provides for a Board of Directors composed of an odd number of members, at least three. 3 Regarding the supervision of listed companies, the Securities Code provides for a Supervisory Board composed of independent members, of which at least one must be an expert accountant or accountant. 4 Table 1 summarizes the structural model of governance and surveillance of open societies in Angola.
Table 1. Structural model of governance and surveillance of open societies in Angola.

Company governance:
• The management of a listed company is exercised by a Board of Directors consisting of an odd number of members, at least three. 5
• Persons who are not shareholders may be appointed or elected to the Board of Directors. 6
• The security (bond) provided by the directors of a listed company, which may take the form of an insurance contract in favor of those entitled to compensation, may not be less than Kz 30,000,000.00. 7
• The Board of a listed company is responsible for establishing and maintaining internal control systems appropriate to the size of the company and the nature of its activity. 8

Supervision of the company:
• The supervision of an open society must be exercised by the Supervisory Board. 9
• The Supervisory Board shall consist mostly of independent members and must include at least one member who is an expert accountant or accountant. 10
• An independent person is one not associated with any interest group of the company and not in any circumstance likely to affect the impartiality of their opinion, in particular by holding, or acting in the name or on behalf of holders of, a participation equal to or greater than 5%, or by having been re-elected for more than two terms, consecutive or not. 11

Source: Own Elaboration (based on the SC and CCC). 3 Expression that replaced the term previously used, "public society" (articles 3, 136 and 112, paragraph 1 (a), (b), (c), (d) and (e) of the Securities Law).

Recommendations of the CMC (soft law)

1 - Basis that ensures an effective Corporate Governance structure: the promotion of fair and transparent markets and efficient resource allocation should be consistent with the rule of law and supported by effective supervision and enforcement. The Annotated Guide of Good Practice of the CMC recommends procedures that ensure solid Corporate Governance structures based on the CCC and SC of Angola, in line with the OECD principles (recommendations Nos. 30 to 43).

2 - Rights and equal treatment of shareholders and key ownership functions: the Corporate Governance structure should protect and facilitate the exercise of shareholders' rights and ensure the equal treatment of shareholders, including minority and foreign shareholders. Companies must treat their shareholders in an equal and even-handed manner with respect to their interests. Information should be handled in a reserved manner, ensuring no privileged access by partners holding any qualified participation in the company (recommendations Nos. 13, 14).

3 - Institutional investors, the stock market and other intermediaries: there must be solid economic incentives throughout the investment chain, with a focus on institutional investors. The Guide contains no information specific to institutional investors, referring instead to investors as a whole and from the perspective of information disclosure (recommendation No. 14).

4 - The role of stakeholders in Corporate Governance: recognize the rights of stakeholders established by law or through mutual agreements and encourage active co-operation between corporations and stakeholders in creating wealth, jobs and financially sound companies. The annual report on Corporate Governance should disclose information about the relationship between the company and its stakeholders (recommendation 3 (b)).
5 -Disclosure and transparency -Ensure the timely and accurate dissemination of information on all relevant matters relating to the company, including financial and operational performance, the goals of the entity, information about majority shareholdings in the capital of society, information on related parties, risk factors, compensation, members of the governance of the society. Source: Own Elaboration (Based on OECD and CMC) In conjunction with the Executive Management, the Board of Directors is responsible for the proper execution of the model of Government in force in society, and to ensure that, in respect of their specific characteristics, such as your size, complexity, nature of the risks inherent to the main business and other relevant factors, are fulfilled the recommendations of corporate governance of CMC (recommendation No. 1). 6 -The responsibilities of the Board of Directors -it is up to the administrative review of the corporate strategy, make the selection and setting the compensation of managers, oversee the large corporate acquisitions and divestments, and ensure the integrity of the entity' s financial reporting system. The Board of directors should answer to the General Assembly for compliance with the best practices in government business and, if applicable, to the sectoral regulators, in respect for the comply or explain principle (recommendation 2). Source: Own Elaboration (Based on OECD and CMC) Analyzing the information in table 2, the Annotated Guide of good practice of Corporate Governance of CMC does not specify the institutional base which ensures a good Corporate Governance structure, but best practices internationally for the best capital markets (CMC, 2015). Thus, in the case of Angola, that institutional quality assurance Corporate Governance structure, is provided for in Security Code (hard law), which addressed the tasks of the supervisory body of the capital market and related to the supervision and direct supervision of open societies. The biggest difference that denotes when comparing the principles of Corporate Governance of the OECD with the Annotated Guide of Good Practice of Corporate Governance of CMC, reflects the level of reference to institutional investors: while the OECD States that there should be incentives solid economic investment chain, with focus on institutional investors, CMC only refers to investors in a universal sense, and in the perspective of disclosure of information. Requirements at the level of the accounting system and Financial Reporting Open companies listed on Stock Exchange of Debt and Values of Angola The General Accounting Plan of Angola was inspired based on international accounting standards of the International Accounting Standard Board (IASB), however, currently shows up in the face of the evolution suffered by misfit international accounting standards, becoming imperative that your review with a view to bringing international practices of the IASB (Landu 2014). Caliatu and Soares (2015) that, considering the increasing development of the country and the increased private investment and foreign, Angola is too late to adjust their accounting standards to international standards, and that should be created an independent body of the Ministry of finance, to work on this adjustment, involving accounting professionals, associations and bodies that directly or indirectly relate to the story. 
Barroso (2014) states that international accounting harmonization has become a necessity for the development of capital markets, and that the absence of such harmonization has consequences such as higher capital costs for businesses, greater difficulty for firms in appearing credible to investors and lenders, and the costs incurred by companies listed on international capital markets in restating their accounts. Barroso (2014) adds that harmonizing accounting systems with the international standard faces obstacles related to the culture and history of each country and to the powers and size of the standard-setting bodies, but offers benefits related to the expansion of international transactions, to greater transparency, comparability and understandability of the financial statements presented by companies, and to easier decision-making by international investors and other stakeholders.

Conclusion

With respect to Angola, we conclude that, in general, the country is establishing a legal framework that converges with the main guidelines of international organizations regarding the requirements for governance and corporate oversight and for the financial reporting process of entities operating in capital markets. However, there is an urgent need for the Capital Market Commission to guide, or even mandate, the adoption of the IASB's international accounting standards by entities that operate, or intend to operate, on the Stock Exchange and Debt Values of Angola. In terms of Corporate Governance, a fair convergence was found between the principles and recommendations of the Capital Market Commission and the OECD Corporate Governance principles.
3,229.8
2018-02-25T00:00:00.000
[ "Business", "Economics" ]
en-ended , worm-like and graphene-like structures from layered spherical carbon materials † A study of the effects of size dispersion of Au@SiO2 spheres and silica sphere templates for the synthesis of hollow carbon structures was evaluated using a chemical vapor deposition (CVD) nanocasting method. The diameter of the template, the presence of the gold nanoparticles and the polyvinylpyrrolidone (to cap the Au particles) were found to determine the size, thickness and shape of the synthesized carbon nanostructures. The Au@monodispersed small-sized silica sphere (80–110 nm) template covered with carbon followed by removal of silica produced broken hollow carbon spheres, whereas an equivalent Au@monodispersed large-sized silica sphere (110–150 nm) template produced hollow carbon spheres with a complete carbon shell. Monodispersed and polydispersed pristine silica spheres without Au produced hollow carbon spheres with complete and deformed carbon shells, respectively. Polyvinylpyrrolidone addition to polydispersed SiO2 spheres, followed by carbonization with toluene (1 h) and SiO2 removal, produced wormlike carbon structures. Carbonization (and SiO2 removal) of Au@polydispersed silica spheres for a short carbonization time (1 h) gave a layered carbon nanosheet while at intermediate and longer carbonization times (2–4 h) gave nanotube-like (or worm-like) carbon structures. Raman spectra confirmed the formation of the graphitic nature of the carbon materials. These results highlight the potential use of Au@carbon coreshell structures for the generation of few layered graphene-like unusual nanostructures. As a proof of concept, the wormlike carbon structures were incorporated in organic solar cells and found to give a measurable photovoltaic response. Introduction Layered graphene structures can in principle be made by two approachesby a bottom-up approach from carbon building blocks or by top-down approaches from layered carbon materials.The bottom-up approach entails the synthesis of graphene structures by epitaxial growth of graphene sheets through Diels-Alder polymerization, 1,2 layer by layer assembly, 3 solvothermal 4 and chemical vapor deposition methods.The chemical vapor deposition (CVD) methodology can entail depositing carbon on a metal template with some or limited carbon solubility such as Cu, [5][6][7] Ni, 8,9 and Co 10 among others.The metal template acts as a catalyst for graphene layer formation and growth.2][13][14][15] The top-down approach involves the exfoliation of graphite by mechanical, electrochemical or chemical means to give graphene. 16,17Indeed, the classical methodology to make graphene is from graphite. 18In principle any carbon source (planar, non-planar geometries) that contain layers of carbon atoms could be converted to an open layered carbon material.For instance, unzipping multiwalled carbon nanotubes can result in the formation of graphene oxide nanoribbons 19 and graphene nanoribbons. 20A recent report has indicated that C 60 can be converted into graphene quantum dots showing the possibility of creating graphene like structures from spherical carbon materials. 213][34] In addition, these nanostructures could be used as electron acceptors in polymer based solar cells due to their high interfacial area. Hollow carbon spheres (HCSs) contain graphite like structures in their carbon shells.Because of these features they have been used extensively in fuel cells, 35,36 as catalysts support, 37,38 and in supercapacitors. 
39,40This is due to their large pore volume, high surface area and the ability to tailor the diameters of the carbon shells.These nanostructured materials have been synthesized by hydrothermal carbonization, 41 templating, 42,43 Kirkendall effect, Ostwald ripening and the galvanic replacement methods among others. 44A hard templating method offers the ability to modify surface properties of templates, to manipulate the morphology and structure of the nal product and to use readily available precursors. 45The carbon shell thickness can be controlled by varying the surfactant to precursor ratio, 46,47 the amount of the carbon source 48 and the carbonization time. 49To date, the carbon shell of a HCS is generally retained in the synthesis strategies employed.][51] Organic solar cell (OSC), devices that convert solar energy into electrical energy oen use semiconducting polymers made from carbon.Carbon is cheap, readily available, easily processable and is hydrophobic.This latter property ensures its compatibility with organic polymers used in the active layer when common organic solvents are used to make solar cell devices.In a bulk heterojunction (BHJ), the active layer comprises of an interpenetrating network of the p-type donor (poly(3-hexyl-thiophene-2,5-diyl), P3HT) and an n-type acceptor ( [6,6]-phenyl-C 61 -butyric acid methyl ester, PCBM) respectively.The key component here is the fullerene that is a single layered spherical carbon material.Other hollow carbon nanostructures can also act as the electron accepting materials and be used to form a ternary blend of the active layer.3][54][55][56][57][58] These early studies indicated that a key feature was the graphitic carbon network.To further explore the role of graphitic carbon structures in solar cell devices the use of "collapsed" HCSs has been explored.This is overcomes the issue associated with large HCSs that can conform to the size of the typical active layer dimensions of the OSC device and can lead to shorting.3][64][65] To our knowledge, the use of open ended worm like carbon nanostructures to form an active layer composite in organic solar cells has to date not been reported. In this report, carbon has been deposited on monodispersed and polydispersed silica templates in the presence of Au nanoparticles embedded in the silica to produce graphene-like layered structures (Au@HCS).Hollow carbon spheres with broken spherical shells were produced from small and monodispersed SiO 2 spheres and unusual open ended worm like layered structures were made from polydispersed silica sphere templates.In essence, the conversion of spherical core shell materials to an open ended worm like carbon nanostructures (3D) with unique morphologies by a CVD nanocasting method (a hard template route).The effect of Stober sphere diameter, polydispersion and carbonization time on the morphology of the carbon spheres produced as well as the role of the Au nanoparticles and the polyvinylpyrrolidone (PVP) used to cap the Au particles were studied.The obtained wormlike carbon nanostructures and the broken hollow/deformed carbon structures were employed in a ternary blend active layer to fabricate an organic solar cell. 
Synthesis of silica spheres and hollow carbon spheres (HCSs)

Monodispersed SiO2 spheres were synthesized by mixing 90 mL of ethanol, 63 mL of distilled water and 5 mL of NH4OH and stirring the solution for ten minutes. Then 16 mL of TEOS was added rapidly and the mixture stirred for 1 hour. In contrast, polydispersed SiO2 spheres were obtained by adding 2 mL of TEOS slowly to ethanol (40 mL) and 2 mL of NH4OH and allowing the mixture to stir for 1 h. The solutions were centrifuged at 4500 rpm for 20 minutes and the product dried at 80 °C for 12 hours. Carbonization of both the monodispersed and polydispersed SiO2 spheres was carried out by a bubbling method using toluene as the carbon source and argon as the carrier gas in a chemical vapor deposition reactor for 1 h, 2 h and 4 h respectively 66 (Table 1). The carbonized silica was then etched with 10% HF for 24 hours at room temperature and dried to give the hollow carbon spheres.

Synthesis of Au nanoparticles

HAuCl4·3H2O (0.01 M, 5 mL) was stirred at reflux for 30 minutes, trisodium citrate dihydrate (50 mL of 0.01 M) was added to the Au solution and the mixture stirred for 30 minutes. Polyvinylpyrrolidone (PVP; 0.5 g) was dissolved in 20 mL of distilled water, added dropwise to the Au solution and stirred at room temperature for 12 hours. Centrifugation at 12 000 rpm for 15 minutes gave a suspension of gold nanoparticles (d = 14 ± 4 nm) in solution (Fig. S1†).

Synthesis of Au@SiO2 spheres

Gold@silica spheres were synthesized using the conditions shown in Table 2. Gold nanoparticles (2 mL, 0.00016 M) were mixed with 20 mL of ethanol and 0.5 mL of ammonia solution (25 wt%). The solutions were stirred for 20 minutes, then 1 mL of TEOS was added rapidly and the solutions stirred separately for 30 minutes and 1 hour respectively. The two solutions were then centrifuged for 20 minutes at 3500 rpm and the collected solids dried at 80 °C for 12 hours to give monodispersed Au@SiO2 A and Au@SiO2 B spheres respectively. Polydispersed Au@SiO2 particles were made by adding gold nanoparticles (20 mL) to 40 mL of ethanol and 2 mL of ammonia solution (25 wt%). The solution was stirred for 20 minutes, 2 mL of TEOS was added slowly, and the solution was stirred for 1 hour, centrifuged for 20 minutes at 3500 rpm and the collected solid dried at 80 °C for 12 hours to give Au@SiO2 C.

Synthesis of Au@hollow carbon structures (Au@HCSs)

Au@hollow carbon spheres (Au@HCSs) were synthesized by carbonization of the synthesized Au@SiO2 spheres in a horizontal chemical vapor deposition reactor. In separate reactions, the three Au@SiO2 samples (0.06 g) were uniformly spread onto a quartz boat which was placed in the center of a quartz tube. The furnace was heated to 900 °C at 10 °C min−1 under an Ar atmosphere (Ar, 200 sccm). Once the desired temperature was reached, Ar (200 sccm) was bubbled through toluene for 1 h for Au@SiO2 A and Au@SiO2 B, and for 1 h, 2 h and 4 h for Au@SiO2 C (to give Au@SiO2 C1, Au@SiO2 C2 and Au@SiO2 C4) respectively. After this, the gas flow was stopped and the system was left to cool down to room temperature under an inert atmosphere (Ar, 200 sccm). The quartz boat was then removed from the reactor, the silica was removed with a 10% HF solution (24 hours) and, after thorough washing with distilled water, the product was dried at 80 °C for 12 hours to give the five Au@HCS samples (Table 3).
Characterization The morphology of the synthesized gold nanoparticles, Au@SiO 2 , Au@HCSs, pristine SiO 2 and the HCSs was ascertained by transmission electron microscopy (TEM) using a FEI Technai G2 spirit electron microscope operating at 120 kV.Graphitic domains in Au@HCS were determined using a JEOL JEM 2100 High Resolution TEM (JEOL, Japan) tted with a LaB6 gun.Images were captured at 200 kV using a Gatan Ultrascan camera (Gatan, USA).Samples were made by placing a droplet of suspended nanoparticles in ethanol on carbon coated grids and allowed to dry at room temperature.A Jobin Yvon T64000 Raman spectrometer equipped with an Ar ion laser (514.5 nm) and a laser power of 5 mW was used to establish the graphitic nature of the carbon found in the Au@HCSs and HCSs. Silica spheres and hollow carbon spheres (HCSs) Stober silica spheres were synthesized by classical routes. 67The silica spheres were made using two different reactant concentrations and reaction times to give monodispersed and polydispersed SiO 2 with different sizes as shown in Fig. S2.† Monodispersed SiO 2 spheres (400-500 nm) were obtained when TEOS was added quickly while polydispersed SiO 2 spheres (90-310 nm) were obtained when TEOS was added slowly.A quick addition of TEOS results in the creation of nucleation sites at the same rate and time whereas a slow addition creates new nucleation site with each TEOS portion added, analogous to an interrupted particle growth mechanism. 68,69These spheres were carbonized with toluene for 1 h, 2 h and 4 h and the SiO 2 was removed with HF to give HCSs (Table 1).The TEM images of the six different HCSs are shown in Fig. 1.It is noted that the HCSs were smaller in diameter than the SiO 2 spheres due to the shrinkage of the silica spheres. 37,70or all types of silica spheres used, carbon coverage on SiO 2 was observed prior to etching.The carbon shell thickness increased with increasing carbonization time as shown in Table 1.Aer treatment of the SiO 2 @C materials with HF, it is seen that the carbon shells in monodispersed HCSs retained their spherical shape (Fig. 1a-c).In contrast, HCSs with deformed and interconnected carbon shells were obtained aer treatment of the polydispersed SiO 2 @C materials with HF (Fig. 1d and e).However, HCSs with complete carbon shells were obtained aer 4 h carbonization of polydispersed SiO 2 materials and SiO 2 removal (Fig. 1f).Also to note: the carbon shell thickness of the polydispersed HCSs was thinner than that of the monodispersed HCSs.Though both mono and polydispersed SiO 2 are chemically the same, SiO 2 polydispersity was found to reduce the number of carbonization layers.This could be attributed to the packing of polydispersed particles which restricts toluene inltration between SiO 2 spheres during carbon shell growth on SiO 2 . 3.2 Au@SiO 2 and Au@hollow carbon spheres Au@SiO 2 was made by classical procedures by dispersing Au particles in a solution containing TEOS (Table 2).The sizes of encapsulated gold nanoparticles were almost the same in all the Au@SiO 2 spheres with a slight increase in size observed in Au@HCSs obtained aer carbonization and SiO 2 removal (Table 3).Fig. 2 show the TEM images of Au@SiO 2 A and Au@SiO 2 B templates with their respective HCS morphologies obtained aer 1 h carbonization.The longer reaction time (1 h versus 0.5 h) used in the formation of the template gave a larger HCS as expected. 
71,72Carbonization of the smaller sized silica spheres (Au@SiO 2 A) produced broken hollow carbon spheres aer etching away the SiO 2 (Fig. 2c) while the large sized silica sphere (Au@SiO 2 B; 110-150 nm) gave more unbroken spherical carbon shells aer HF etching (Fig. 2d).Though, the thickness of the carbon shells was similar aer 1 h carbonization time; the smaller HCSs were more prone to break during the etching procedures, due to the large strains induced by the larger curvature which weaken upon SiO 2 removal.Carbon shell thickness dependent collapse of Au@SiO 2 A is thus unavoidable due to the smaller diameter size. 50,51ig. 3a shows TEM images of polydispersed Au@SiO 2 C spheres aer synthesis.Aer 1 h carbonization a thin carbon shell (11 AE 2 nm) covering the silica (Fig. S3 †) is formed which led to a layered carbon nanostructure (Fig. 3b) aer HF etching.This contrasts with the morphologies found for Au@HCSA and Au@HCSB as the collapse led to large sheets of carbon shells instead of a hemispherical curved surface.The carbonization of silica spheres to give thin carbon shells occurred where the silica cores interconnected and breaching presumably occurred where the silica spheres intersected.A short carbonization time (1 h) produced a layered carbon nanosheet like morphology due to limited nucleation densities and surface coverage.Carbonization for longer times (2 & 4 h) led to the formation of long nanotube/worm like morphology (length > 500 nm) (Fig. 3c and d).Table 3 shows that an increase in carbon shell thickness of Au@HCSC occurs with an increase in carbonization time. Comparison of polydispersed Au@HCSC (Fig. 3) and polydispersed HCSs (Fig. 1) showed that in the absence of gold, HCSs with deformed and interconnected carbon shells were observed and no carbon nanosheet like morphology was noted.This implicates the presence of Au in the formation of the peculiar carbon structures.A more detailed analysis of Au@HCSC1 was obtained from HRTEM studies (Fig. 4).HRTEM images of the Au@HCSC1 revealed that the sample could be viewed as a carbon sheet (Fig. 4a) made of overlapping carbon layers formed from the extended array of hollow carbon structures (Fig. 4b).The extended array was in the form of discontinuous curved features portraying the wavy nature of the carbon sphere edge. 73The carbon structure appears to follow the curvature of the unzipped spheres (Fig. 4c).Further analysis of the 'lm' indicates that the carbon 'shell' shows the presence of graphitic domains (Fig. 4d; d interlayer spacing ¼ 0.344 nm) with short range ordering (Fig. 4e) over large parts of the carbon structure.The structure also indicates regions of amorphous carbon as conrmed by selected area electron diffraction (SAED) data (ESI Fig. S4 †). The factors responsible for the formation of the wormlike carbon structures are described below. (i) Polydispersity.5][76][77] This is presumably due to the closer contact between the SiO 2 spheres made possible when small and large SiO 2 spheres are mixed.The carbon inltrates less effectively between the mixed size spheres and this leads to the formation of thinner and hence weaker carbon structures.These are then more easily ruptured. 
(ii) Carbonization time.The carbonization time determines the layer structure and wormlike structures aer silica etching.The carbon shell thickness has an effect on the completeness of the HCS shell and a thin carbon wall is subject to high stresses which lead to fracture during HF etching.A long carbonization time results in higher nucleation density and diffusivity along the edges leading to a near uniform thickness of the wormlike structures.Continued carbonization results in further coverage and hence formation of thicker open ended and breached nanotube like structures. (iii) Role of polyvinylpyrrolidone on carbon morphology.The presence of PVP (with a high molecular weight) on SiO 2 synthesis has been reported to cause a broad silica size distribution. 78In this study, the volume of PVP used to make Au@SiO 2 A and Au@SiO 2 B (2 mL PVP) was less than that used to make Au@SiO 2 C (20 mL PVP) hence leading to modied SiO 2 -SiO 2 surface interactions during carbonization leading to different carbon deposits along the external SiO 2 surfaces. To conrm the inuence of PVP on the morphology of the obtained worm like and open ended carbon nanostructures, polydispersed SiO 2 spheres (Fig. S2b) and PVP (20 mL) were mixed together and stirred for 1 hour and 12 hours respectively (see ESI †).The concentration of PVP was similar to that used in the synthesis of Au@SiO 2 C (for comparison purposes).Fig. S5a and S5b † shows the silica spheres mixed with PVP before carbonization conrming the PVP surface coverage on silica.The polydispersed silica spheres mixed with PVP aer 1 h stirring time, followed by carbonization and SiO 2 removal produced worm like hollow carbon nanostructures (Fig. S5c †).In contrast, a 12 hours stirring time, followed by carbonization and SiO 2 removal gave broken hollow carbon spheres with a spherical morphology (Fig. S5d †).The carbon shell thickness was 7 AE 5 nm in the wormlike hollow structures and 9 AE 7 nm for the broken/deformed hollow carbon spheres.The wormlike carbon nanostructures had a unique hierarchical structure with the coexistence of mesopores and macropores within the nonspherical cavity as shown by the nitrogen adsorption-desorption isotherm (ESI Fig. S9 †). In the study, 900 C was used for carbonization of the silica.0][81] Some of the PVP was lost due to sublimation at the reaction temperature.The surface morphology is dependent on the amount of PVP bound on the silica surface.When a high concentration of PVP was used on Au@SiO 2 and polydispersed SiO 2 spheres, a thin carbon nanosheet and wormlike hollow structures were obtained. (iv) Presence of Au.The HRTEM images of the Au@HCS indicate the HCSs have been broken and show folded edges of the carbon sheet on the planar structure (Fig. S6a †).The folded edges have a higher interlayer spacing (0.371 AE 0.002 nm) compared to the d spacing of pure graphite (0.335 nm) 82 (Fig. S6b †).This indicates an 11% strain in the carbon material around the Au and folding along the edges due to the carbon grown on the interconnected spheres.A difference in the curvature energies of thin carbon walls is thus expected for carbon in the presence/absence of Au particles.On removal of the silica, the carbon atoms can now relax and this leads to collapse/unzipping of the walls, leading to the formation of worm like carbon structures.The unzipping effect is not only related to the thinness of the carbon structure.If this was the case, the polydispersed HCSC1 (Fig. 
1d) with a shell thickness of 8 AE 2 nm would also be expected to form the worm like graphitic carbon structures.This is not seen.There is clearly less surface strain on the carbon wall aer silica removal and hence HCSs with deformed and interconnected carbon shells are formed despite the presence of a thin carbon shell. Fig. 5 shows a schematic diagram that summarizes the formation of the different carbon nanostructures for the Au@monodispersed and Au@polydispersed silica sphere templates.A small sized Au@monodispersed SiO 2 sphere template results in Au@HCSs with breached shells while a large sized Au@SiO 2 monodispersed sphere template gives Au@HCSs with complete shells.In contrast, the Au@polydispersed SiO 2 sphere template gives graphene-like, wormlike and open ended tube like structures. Raman spectra of hollow carbon spheres and Au@hollow carbon spheres Table S1 † shows the I D /I G ratios of monodispersed and polydispersed HCSs.The corresponding Raman spectra for HCSs and Au@HCSs are shown in Fig. S7 and S8.† In all cases, a strong D band was exhibited between 1342 cm À1 and 1381 cm À1 due to the breathing mode of sp 2 and sp 3 carbon atoms. 83,84In addition, a G peak was observed between 1575 cm À1 and 1597 cm À1 which is a characteristic of bond stretching of sp 2 atoms.An increase in structural defects with increasing carbonization time was observed as seen from I D /I G ratios.This is expected as an increase in carbonization time increases the number of carbon atoms nucleating on the silica template leading to thicker carbon shells and thus more structural defects.Au@HCSA and Au@HCSC1 had fewer defects than Au@HCSB resulting in lower I D /I G ratios (Table S2 †).A comparison of the Raman spectra of polydispersed Au@HCSs and HCSs shows a broad band between 2700 cm À1 and 2900 cm À1 characteristic of a 2D band.The broadness of the peak indicates the presence of large defects with small graphitic domains. 85,86In Au@HCSC1, D and G bands were observed at 1370 cm À1 and 1590 cm À1 respectively with a I D /I G ratio of 0.91, indicating less graphitic character.A broad 2D band was observed in Au@HCSC1 showing the presence of both graphitic and non-graphitic domains in agreement with the HRTEM results. Application of worm like hollow carbon nanostructures and broken HCSs in organic solar cells The hollow carbon nanostructures (wormlike and broken HCSs) were mixed with P3HT and PCBM to form a ternary blend active layer (see ESI †).The pristine active layer (PCBM:P3HT blend) had a thickness of 190 AE 10 nm.Fig. S11 † shows the AFM images of P3HT:PCBM:wormlike carbon nanostructures and P3HT:PCBM:broken HCSs with a surface roughness of 81.5 nm and 91.8 nm respectively.A PCBM absorption peak was observed at 336 nm and 338 nm for P3HT:PCBM with broken HCSs and wormlike HCSs respectively (Fig. 6). 87The absorption intensity of P3HT peaks were observed at 517 nm and at 553 nm/555 nm, with a shoulder (#) at 604 nm/605 nm in the lms containing broken HCSs/wormlike HCSs respectively due to P3HT interchain p-p* interactions. 88,89The absorption intensity of the lm comprising of P3HT:PCBM:wormlike HCSs was higher than that made of P3HT:PCBM:broken HCSs due to enhanced scattering of light.An increase in absorption intensity in the wormlike HCSs led to an increased short circuit current density (J sc ).The device performance is determined by the structural organization of the interpenetrating network, interface energy and self-assembly of active layer composites. 
90,91Theoretical studies and experimental results of organic photovoltaic devices have shown that P3HT chains can selfassemble to wrap around the carbon based nanostructures and change the conjugation length of the P3HT to modify the device charge transfer properties. 91,92It is proposed that the large interface area provided in the open ended worm like structure could increase the interconnectivity to the P3HT chains and thus, alter the charge transfer properties in the active layer composite. Table 4 shows the current-voltage characteristics of the ternary blend active layer based organic solar cell under illumination and in the dark.An attempt to use hollow carbon spheres with a complete shell led to shorting of the photovoltaic devices due to their large diameter size.Hence, wormlike and broken HCSs were used.The current density of the device with wormlike HCSs is slightly higher than in the device with broken HCSs.This can be attributed to a reduced charge-transport distance, a slight increase in absorption intensity and less surface roughness in comparison to that of a device with broken HCSs.4][95] In addition, a possible charge carrier recombination is further corroborated by an S curve kink (see arrow in Fig. 7a) and by a high leakage current (Fig. 7b).The photovoltaic efficiency of the ternary solar cell fabricated using worm like structures was 0.11% while with broken hollow carbon spheres it was 0.14%.While these values are low they show an improvement with reference to the unbroken HCSs (see Fig. 7).The increase in shunt resistance, open circuit voltage and ll factor (FF) in P3HT:PCBM:broken HCSs relative to that of the device with wormlike HCSs were responsible for the slight improvement in photovoltaic efficiency. Conclusions This study provides insight into the effect of Au@polydispersed silica sphere templates and polydispersed SiO 2 sphere templates towards the formation of hollow carbon nanostructures by a CVD nanocasting method, a study rarely performed.Au@HCSs and HCSs were successfully synthesized using a CVD nanocasting method with Au@SiO 2 and SiO 2 spheres as templates.The size of the Au@SiO 2 and SiO 2 templates used were found to play a key role in the synthesis of the hollow carbon spheres and nanostructures.Monodispersed and large sized Au@SiO 2 spheres gave unbroken HCSs, whereas polydispersed Au@SiO 2 spheres led to the formation of a range of hollow carbon nanostructures (thin carbon nanosheets and open ended nanotube like carbon).Modication of the surface chemistry of the polydispersed SiO 2 using PVP was found to contribute to the worm like carbon nanostructures.Raman analysis conrmed the presence of the graphitized carbon in all the samples synthesized.The Au@HCS core shell layered materials have been used for the generation of graphene like open structures.The polydispersed SiO 2 sphere functionalized with PVP generated worm like hollow carbon structures.The application of the wormlike nanostructures in organic solar cells opens new studies into the electronic properties of these materials.However, functionality of the P3HT on the wormlike nanostructures through selfassembly could lead to higher exciton dissociation.Further studies to check the effect of PVP concentration and addition to SiO 2 spheres on carbon sphere morphology to explore the formation and growth control of these carbon nanostructures and their application in organic solar cells with optimized parameters is underway. Fig. 
4. HRTEM images of Au@HCSC1: (a) the carbon film of Au@HCSC1 and that of the copper grid; (b) three overlapping areas of the carbon sheet (black spots are Au particles); (c) a folded region and a sheet-like region, with magnified images of (d) the folded region and (e) the sheet-like region.
Table 1. Effect of carbonization time on the carbon shell thickness.
Table 3. Effect of SiO2 sphere diameter in the Au@SiO2 template on the carbon morphology.
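The roughly 11% strain quoted above for the folded carbon edges around the Au particles follows directly from comparing the measured interlayer spacing with that of pristine graphite. A minimal sketch of that arithmetic is given below (Python; the function name is illustrative and not from the paper, and the values are those quoted in the HRTEM discussion):

```python
# Estimate of interlayer strain at the folded carbon edges relative to pristine graphite.
# Values taken from the HRTEM discussion above; the helper name is illustrative only.

D_GRAPHITE_NM = 0.335  # interlayer spacing of pure graphite, nm

def interlayer_strain(d_measured_nm: float, d_reference_nm: float = D_GRAPHITE_NM) -> float:
    """Return the relative strain (d_measured - d_reference) / d_reference."""
    return (d_measured_nm - d_reference_nm) / d_reference_nm

if __name__ == "__main__":
    d_folded = 0.371  # measured spacing at the folded edges, nm (+/- 0.002 nm)
    strain = interlayer_strain(d_folded)
    print(f"Interlayer strain at folded edges: {strain:.1%}")  # ~10.7%, i.e. about 11%
```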
6,176.8
2016-02-16T00:00:00.000
[ "Materials Science", "Chemistry" ]
Formation of the Traffic Flow Rate under the Influence of Traffic Flow Concentration in Time at Controlled Intersections in Tyumen, Russian Federation : Present experience shows that it is impossible to solve the problem of traffic congestion without intelligent transport systems. Traffic management in many cities uses the data of detectors installed at controlled intersections. Further, to assess the traffic situation, the data on the traffic flow rate and its concentration are compared. Latest scientific studies propose a transition from spatial to temporal concentration. Therefore, the purpose of this work is to establish the regularities of the influence of traffic flow concentration in time on traffic flow rate at controlled city intersections. The methodological basis of this study was a systemic approach. Theoretical and experimental studies were based on the existing provisions of system analysis, traffic flow theory, experiment planning, impulses, probabilities, and mathematical statistics. Experimental data were obtained and processed using modern equipment and software: Traficam video detectors, SPECTR traffic light controller, Traficam Data Tool, SPECTR 2.0, AutoCad 2017, and STATISTICA 10. In the course of this study, the authors analyzed the dynamics of changes in the level of motorization, the structure of the motor vehicle fleet, and the dynamics of changes in the number of controlled intersections. As a result of theoretical studies, a hypothesis was put forward that the investigated process is described by a two-factor quadratic multiplicative model. Experimental studies determined the parameters of the developed model depending on the directions of traffic flow, and confirmed its adequacy according to Fisher’s criterion with a probability of at least 0.9. The results obtained can be used to control traffic flows at controlled city intersections Introduction One of the unsolved problems for city transport systems to date is the problem of increasing the efficiency of traffic management in terms of preventing traffic congestions [1]. Their formation in the road network inevitably entails a number of negative consequences, the most tangible of which for the urban population is an increase in the time of movement within the city due to an increase in transport delays [1], excessive fuel consumption by cars [1], environmental deterioration [1,2], and a decrease in the level of social comfort and quality of life [1,3]. In this regard, this problem is relevant and represents a serious challenge for most of the administrations of large cities, engineering, and science. In many previous works, the main reason for the formation of traffic congestion is said to be the combination of a high level of motorization and a lag in the development of the road network. [1,4,5]. In other words, there is a significant numerical difference between transport demand and transport supply, which is quantitatively comparable in a ratio of 4 to 1. It is obvious that when vehicles move across the city, various sections of the road will conditional locations where transport demand is either formed or satisfied [1,6]. Consequently, the formation of traffic congestion is a consequence of a decrease in the traffic flow rate on the section of the road network serving traffic flows in relation to the section of the road network that forms traffic flows [7]. Thus, traffic flow rate should be taken as a target indicator of the study. 
Currently, in science and world practice, there is no one way to solve the problem of increasing the efficiency of traffic management in cities in terms of preventing traffic congestion on the road network. Conventionally, three global approaches can be distinguished: the road-building approach [1,8], the organizational and administrative approach [1,[8][9][10][11][12][13][14][15][16][17][18][19], and the approach consisting of the use of intelligent transport systems [8,[20][21][22][23][24]. The names of the approaches reflect a set of key measures that are proposed to resolve transport problems. The road-building approach consists of improving the existing and designing a new road network and its infrastructure facilities [1,8]. The essence of the organizational and administrative approach lies in all kinds of restrictions on the movement of cars and the development of public transport, including alternative ways of moving around the city [1,[8][9][10][11][12][13][14][15][16][17][18][19]. The third approach, mentioned above, involves introducing intelligent transport systems, as well as their subsystems at various levels [8,[20][21][22][23][24]. It should be noted that today, each of these approaches is used both individually and in combination, and can be effective depending on the goals and objectives of the researcher, engineer, or city manager, as well as the available resources. Current expertise shows that the creation of an effective traffic-management system is impossible without the use of modern intelligent technologies. Traffic flow management on the road network in many cities and towns, as well as metropolitan cities in the USA, Japan, many European countries, Russia, and other developed countries of the world, is carried out by means of automated traffic control systems based on constantly updated data on the traffic flow rate. Information about the traffic flow rates on sections of the road network is obtained by means of vehicle identification detectors [25]. However, for a complete understanding of the traffic situation on the investigated section of the transport network, it is impossible to restrict ourselves only to data on the flow rate. For an objective assessment of the state of the traffic flow, it is necessary to compare the actual number of moving vehicles with the measure of the flow concentration either in space or in time [25,26]. Until the end of the last century, traffic flow theory used a spatial measure of concentration-traffic flow density [26]. However, modern scientific research proposes to switch to the use of traffic flow concentration in time-lane occupancy. According to a number of researchers, the process of measuring the temporal concentration of traffic flows requires much lower economic and labor costs, and the data obtained are more valid [25][26][27][28][29][30][31][32][33]. Regardless of the city and even the country where the processes aimed at increasing the efficiency of traffic management are implemented, the priority for any state is to ensure safety, preserve the life, and maintain the health of citizens [1,8,25]. Therefore, it is not surprising that against the background of the growth of motorization in cities, there is an increase in the number of traffic lights on the road network. The Russian Federation is no exception because, according to Russian legislation, a large-scale installation of traffic-light control devices is carried out on the sections of the road network with high traffic congestion and increased accident rates. 
The current regulatory and technical documents clearly stipulate the rules and conditions for the use of traffic lights, which in three out of four cases are directly related to the traffic flow rate. The purpose of a traffic light as a technical means of organizing traffic is to increase the level of road safety. The presence of traffic light control means is one of the significant factors limiting the maximum possible value of the traffic flow rate, which directly depends on the ratio of traffic light signal durations in the control cycle [34][35][36][37]. Therefore, the purpose of this work is to establish the regularities of the influence of the concentration of traffic flow in time on the traffic flow rate at controlled city intersections. This paper presents an analysis of the state of the issue of the indicated problem using the example of Tyumen, Russian Federation, based on which the purpose of the study was formulated. The methodological basis of the research necessary to achieve this purpose is presented. The structuring and definition of the boundaries of the system under study are shown. The results of selecting the most significant factors influencing the traffic flow rate are presented. The results of modeling the investigated process are given, which made it possible to formulate a working hypothesis about the type of model that reflects the process of changing the traffic flow rate under the influence of the selected factors. The results of experimental studies carried out in order to confirm or refute the working hypothesis are presented. Analysis of the State of the Issue The results of the analysis of statistical data [38] showed that over the past 20 years, the level of motorization in Tyumen has increased almost threefold, and according to various estimates, is now approaching the mark of 590 cars/1000 people ( Figure 1). The study of the structure of the motor vehicle fleet showed that more than 80% of all cars in the city belong to the category of light passenger vehicles ( Figure 2), which, according to the authors' assumption, are more likely to be categorized as private vehicles. [38]. In his work [39], Goltz G.A. predicted that when the level of motorization in cities reaches 380 cars/1000 people, a spiral of automobile dependency is formed, which is a cyclical reproduction of the problems of the transport system of large cities at a higher level than the previous cycle. If the level of motorization reaches the threshold value of 500 cars/1000 people (i.e., on average, every two residents of the city have a car), the capacity of the road network is considered exhausted. With that, congestion is formed throughout the city's transport network. Thus, the results of the analysis of the state of the current situation have shown that the problem of traffic congestion is also urgent for the city of Tyumen. Taking into account the continued growth rates of motorization, traffic-light control devices are required in almost all key sections of the road network in the cities of the Russian Federation. Figure 3 shows a graph of changes in the number of traffic light units for the last nine calendar years in Tyumen [40]. The number of intersections equipped with traffic lights has also almost doubled over the past two decades, and has approached 400. The total number of traffic lights does not decrease, and is steadily increasing by an average of 11 units per calendar year. 
Placing traffic lights on the road network is one of the fastest and most effective ways to improve road safety. However, traffic-light control is also a factor that reduces the capacity of an intersection, which additionally limits the maximum value of the traffic flow rate at city intersections [34-37] and exacerbates the situation. The formation of a traffic jam can be considered both a stochastic and a largely deterministic process, caused, on the one hand, by the random value of transport demand, weather and climatic conditions, emergencies and other factors and, on the other, by the geometric features of the road network, the presence of traffic lights on its sections, the quality of the road surface and other quite predictable phenomena [41]. Therefore, in cases where vehicle detectors record a decrease in traffic flow rate, the following uncertainty arises: the situation can be caused either by the real formation of a traffic jam (due to a traffic accident, a vehicle breakdown, a high load on the considered section of the road network, etc.) or by an actual absence of vehicles in the studied lane. In the fundamental theory of traffic flows, this uncertainty is resolved by comparing the data obtained on the traffic flow rate and its concentration [26].

The most common measure for assessing the concentration of traffic is its density p. Density p means the number of vehicles per unit length of a section of the road network [25,26]. Until the second half of the 20th century, most mathematical models and studies were based solely on the traffic-density indicator [41]. However, with the widespread use of vehicle detectors, which are one of the main elements of automated traffic control systems within intelligent transport systems, lane occupancy has been proposed as the main measure of traffic concentration. Lane occupancy $\lambda$ is the fraction of the total duration of the measurement during which vehicles were in the control area of the detector. Lane occupancy $\lambda$ is determined as follows [25,26]:

$$\lambda = \frac{1}{T}\sum_{i=1}^{n} t_i, \qquad (1)$$

$$\lambda = \frac{1}{T}\sum_{i=1}^{n} \frac{L_i + d}{u_i}, \qquad (2)$$

where $\lambda$ is lane occupancy; $t_i$ is the time spent by the $i$-th vehicle in the control zone of the detector, s; $L_i$ is the length of the $i$-th vehicle passing through the control zone of the detector, m; $d$ is the length of the detector frame, m; $u_i$ is the speed of the $i$-th vehicle in the flow; $n$ is the number of vehicles recorded; and $T$ is the duration of the measurement, s.

The need to switch from the traffic density indicator to the lane occupancy indicator, and the advantages of this transition, are supported by a number of arguments (Table 1).

Table 1. Arguments for the transition from traffic flow density to lane occupancy (fragment):
- Density: objective data can be obtained only with detectors consisting of two inductive loops, whose installation, maintenance and repair involve complex road construction works; the service life of such a system is one year.
- Lane occupancy: measurements are taken using a single video detector, with a longer lifespan and greater mobility.

First of all, the feasibility of using the lane-occupancy indicator lies in its physical meaning. Both measures of traffic concentration are specific, but unlike density, occupancy is a temporal indicator. Of course, the length and area of roads and streets in cities are limited, and the territory that could be adapted for the development of the road network is in significant deficit. However, if necessary, traffic flows can be separated in space through the construction of underground and aboveground transport infrastructure. Time, however, is an inherently indivisible resource, and its reserve is the same for all road users.
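Formulas (1) and (2) map directly onto a simple computation over per-vehicle detector records. The sketch below (Python; the data structure, field names and numerical values are illustrative and not taken from the paper) shows how lane occupancy could be computed either from measured pulse durations or from vehicle lengths and speeds, assuming the detector reports one record per vehicle:

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class VehicleRecord:
    """One vehicle passing the detector's control zone (illustrative structure)."""
    pulse_s: float    # t_i: time the vehicle occupied the control zone, s
    length_m: float   # L_i: vehicle length, m
    speed_mps: float  # u_i: vehicle speed, m/s

def occupancy_from_pulses(records: Sequence[VehicleRecord], T_s: float) -> float:
    """Formula (1): fraction of the measurement interval T during which
    the control zone was occupied."""
    return sum(r.pulse_s for r in records) / T_s

def occupancy_from_kinematics(records: Sequence[VehicleRecord], T_s: float,
                              detector_len_m: float) -> float:
    """Formula (2): the same quantity expressed via vehicle length, detector
    frame length d and speed."""
    return sum((r.length_m + detector_len_m) / r.speed_mps for r in records) / T_s

if __name__ == "__main__":
    # Hypothetical 60 s measurement with three vehicles.
    recs = [VehicleRecord(0.9, 4.5, 12.0),
            VehicleRecord(1.4, 11.0, 9.0),
            VehicleRecord(0.8, 4.2, 13.5)]
    T, d = 60.0, 2.0
    print(f"occupancy (pulses):     {occupancy_from_pulses(recs, T):.3f}")
    print(f"occupancy (kinematics): {occupancy_from_kinematics(recs, T, d):.3f}")
```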
In this case, as can be seen from Formula (2), lane occupancy also takes into account the impact of the length of each vehicle. In addition, for accurate measurement of traffic density, a more complex system is required, consisting of two control areas of the detector, based on the principle of operation of an inductive loop. Currently, this system is virtually not used for a number of reasons; more practical video detectors are used instead. Video detectors determine density by calculations performed on the basis of data on the flow rate and speed, which, in the event of a traffic jam, also creates uncertainty [7,[25][26][27][28][29][30][31][32][33]. In this regard, it becomes necessary to use lane occupancy as the main measure of the traffic flow concentration. However, the results of the analysis did not reveal an accurate and unambiguous pattern of the influence of this indicator on the traffic flow rate, which determined the need for research in this direction. Research Area The city of Tyumen, a socially and economically developed regional center with a population of more than 815,000 people [38], located in Western Siberia, was chosen as the object for the study. As a transport hub, Tyumen is an important link and transport corridor for traffic flows not only from east to west, but also from north to south (and in the opposite directions). These routes intersect exactly in the central business district of the city. Despite the fact that trucks are prohibited in this part of the city, and dedicated lanes are allocated for urban passenger public transport, as in many other cities in different countries of the world, the central areas of the city experience excessive traffic congestion. This is especially noticeable in the morning, afternoon, and evening rush hours on weekdays. Therefore, a controlled intersection of Respubliki and M. Toreza streets was chosen as the investigated section of the road network. It is the center of the intersection of the transport routes from north to south, from west to east, and in the opposite directions. Methodological Basis The methodological basis of this study is a systemic approach that considers the objects under study as systems. More specific methods, on the basis of which this research was performed, are the existing, proven, and tested provisions of system analysis, traffic flow theory, experiment planning, impulses, probabilities, and mathematical statistics. Preliminary Selection of Factors and Structuring of the System under Study In order to implement a systemic approach as the methodological basis of this study, we decided to further study the process under consideration at the system level. To establish the boundaries of the system, we needed to determine the factors that are most significant in terms of the degree of influence on the target indicator of management-traffic flow rate. For this, based on previously performed studies, a complete list of factors was compiled, and a preliminary selection of the most significant of them was made. The factors influencing the change in the traffic flow rate were consolidated into the following groups [1,2,4,6,7,9,17,[25][26][27][28][29][30][31][32][33][34][35][36][37][41][42][43][44]: Traffic flow state;  Traffic conditions. Table 2 presents a list of the main factors affecting the change in the traffic flow rate in cities, as well as their main characteristics. 
In order to implement a systemic approach, the system under study was structured, the already established and scientifically confirmed relationships were identified, and the assumed connections between the elements of the system were indicated. Figure 4 shows the designated elements of the system and the assumed connections between them in an enlarged form.
Regularities of the Influence of Environmental Factors
Key transport nodes (crossroads, intersections, junctions, etc.) located in the central part of cities, as well as other centers of attraction, considered the most problematic and of the greatest interest for further research, are part of the route network of urban passenger public transport. In accordance with the current legislative framework and national standards of the Russian Federation, measures are taken on these sections of the road network to ensure the safe operation of public transport, which implies more efficient snow removal, treatment of the road surface with reagents and other anti-icing agents, and other additional measures that change the influence of environmental factors by an undefined value. Therefore, the authors believe it is impossible to objectively assess the influence of environmental conditions on the change in the characteristics of the transport flow in this part of the study. In this regard, this work introduced a restriction for weather conditions and the state of the road surface. Further research was carried out under the condition that vehicles were moving on a dry road surface, in the absence of ice, precipitation, and fog.
Regularities of the Influence of the Factors of the Traffic Flow State
On the issue of the relationship between the indicators of traffic flow concentration in space and time, a number of studies have formed hypotheses about a possible linear relationship between traffic density and lane occupancy [27]. However, there is also an opinion that this relationship is not always experimentally confirmed in practice [25,26,30]. Additional analytical studies were carried out to confirm or refute the possibility of a relationship between traffic concentration indicators. Formulas (1) and (2) presented earlier in this work can be considered equivalent. Formula (2) reveals the physical meaning of the lane occupancy indicator, and Formula (1) gives a more detailed idea of the occupancy measurement process and displays the principle of the detector operation. The numerator of Formula (1) is the sum of pulse durations [25] recorded by the detector when vehicles pass through its control zone. Therefore, to determine the relationship between the indicators of the traffic flow concentration, the temporal and spatial structures of the pulse measurement process for a homogeneous, uniformly moving flow of cars were considered. Based on the fundamental provisions of the impulse theory and taking into account the indicated conditions, it is possible to represent Formula (1) in the form of a value inverse to the average duty cycle [45]:

O = 1/S, (3)

where t̄ is the average duration of the recorded pulse, s; and S is the average duty cycle (the ratio of the average pulse repetition period to t̄). Meanwhile, when studying the spatial structure of this process, the movement of cars in an enlarged form can be described by the value of the dynamic envelope [25,26,37,44]:

L_d = L + d_a, (4)

where L_d is the dynamic envelope, m/car; L is the length of the vehicle, m; and d_a is the safe distance between moving cars, m. If the specified conditions are met, the variables L, d_a, and L_d in this case will be constant values.
Figure 5. The temporal structure of the process of recording pulses (a) and the spatial structure of the process of car movement (b). Comparing Figure 5a,b helped to expand the understanding of the presence of a possible relationship between the indicators of spatial and temporal traffic concentration (Figure 6). The results of studying the spatial-temporal structure of the process of forming lane occupancy make it possible to represent this process in the following mathematical equation:

O = (L + d)/L_d. (5)

For L_d, the dimension is valid both in specific (m/car, km/car) and in absolute values (m, km). In the latter case, L_d should be interpreted as the length of the part of the lane in question containing the vehicle, including the distance provided by the driver for emergency braking. Therefore, having limited the considered section of the road network to the boundaries of the movement of one vehicle (from the rear bumper of the previous car to the rear bumper of the next one), it seems possible to express the dynamic envelope L_d in terms of the traffic density p:

L_d = 1/p. (6)

Comparing (6) and (5) with (3), it becomes possible to establish the relationship between the occupancy of the lane and the density of the traffic flow:

O = (L + d)·p. (7)

Thus, the results of additional analytical studies have confirmed the validity of the assumptions about the presence of a linear relationship of traffic concentration indicators. In the case of extrapolating Formula (7) to the entire traffic flow and measuring O in percent and p in cars/km, Formula (7) is transformed as follows:

O = k_O·p, (8)

where O is lane occupancy, %; p is the traffic flow density, cars/km; and k_O is the parameter of the relationship between the flow density and lane occupancy.
Regularities of the Influence of the Factors of Traffic Conditions
The mechanism of the influence of traffic light control on the traffic flow rate is realized through the traffic light control cycle [25,26,34-37]. Currently, the literature lacks an unambiguous formulation and a unified approach to determining the saturation flow. In a number of sources, the saturation flow is understood as the maximum possible traffic flow rate with the maximally saturated queue at the crossing point, which is partly comparable with the definition of capacity [25,26,34-37]. In some cases, we can assume these definitions to be identical. Then, the saturation flow can be understood as the maximum possible capacity that can be achieved in the complete absence of the influence of the traffic light control cycle, i.e., when the share of the permitting signal in the cycle equals unity. Comparing these formulations, the capacity of the approach in the j-th phase is limited by the ratio of the effective phase duration to the cycle duration, where t_ej is the effective duration of the j-th phase of the traffic light control cycle, s, and T_c is the duration of the traffic light control cycle, s. The geometric parameters of the road (width and number of lanes) and the direction of the traffic flow (turning radius) affect the maximum value of M_s. Currently, there are various techniques for determining the value of the saturation flow. In accordance with the current industry regulatory documents of the Russian Federation, the geometric parameters of the road and the traffic direction are taken into account at the stage of designing controlled intersections and adjusting the existing modes of operation of traffic lights. Due to the fact that determining the saturation flow was not the goal of this study, to determine the value of M_s, a calculation method was adopted that also reflected the influence of the geometric characteristics of the road network.
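To make the reconstructed occupancy-density relation of Formulas (7)-(8) concrete, the short sketch below converts density to occupancy. The unit-conversion factor 0.1 (for L and d in metres, p in cars/km, O in %) and the example vehicle and detector lengths are our assumptions, not values from the paper.

```python
# Hedged sketch of the occupancy-density relation: O = (L + d) * p, or, with
# O in % and p in cars/km, O[%] = k_O * p with k_O = 0.1 * (L + d) for L, d in metres.

def k_occupancy(avg_vehicle_len_m: float, detector_len_m: float) -> float:
    """Relationship parameter k_O for O in % and p in cars/km (assumed unit handling)."""
    return 0.1 * (avg_vehicle_len_m + detector_len_m)

def occupancy_percent(density_cars_per_km: float, k_o: float) -> float:
    return k_o * density_cars_per_km

if __name__ == "__main__":
    k_o = k_occupancy(avg_vehicle_len_m=4.5, detector_len_m=2.0)  # illustrative values
    print(occupancy_percent(40.0, k_o))  # occupancy in % at 40 cars/km
```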
Key controlled intersections of greatest interest for further research are part of the urban passenger public transport route network. In accordance with the current legislative framework of the Russian Federation, these sections of the road network require measures for the safe operation of public transport, which implies ensuring a high quality of the road surface (roadway). In this regard, the influence of the quality of the roadway was not considered in this work. Ultimately, for further research, we decided to develop a model based on the principle of macromodeling, considering the traffic flow as a whole [26,41,44]. This approach can be used more effectively to solve problems in the field of traffic management [7,26,41]. In accordance with the basic provisions of the traffic flow theory, the fundamental macroscopic model describing the movement of a single-lane traffic flow is the Lighthill-Whitham-Richards hydrodynamic model [26,44]. The Lighthill-Whitham-Richards model provided significant clarity with minimal effort in the research conducted with its use [26,41]. This model was developed in the middle of the 20th century, but at the same time it has significant weight and value, and has a relationship with many modern models; for example, with the Prigogine-Herman synergetic model [46]. The main criticism of the Lighthill-Whitham-Richards model is its inoperability at low densities, which is reflected in the discrepancy between the theoretical distribution curve and real data on traffic flow. Based on Table 1, it makes sense to assume that the discrepancy between theoretical and experimental data can be justified primarily not by the lack of adequacy of the mathematical model, but by the impossibility of obtaining correct measurements of the traffic flow concentration in space, which is density. As stated earlier, data on traffic concentration in time, which is characterized by lane occupancy, are more valid. In this regard, this work provides no weighty justification for refusing to use the indicated mathematical model. In the Lighthill-Whitham-Richards macroscopic hydrodynamic model, the traffic flow is likened to the flow of a liquid, namely water. The model itself is based on two key postulates:

u(t, x) = u(p(t, x)), (12)

∂p/∂t + Q′(p)·∂p/∂x = 0, (13)

where u(t, x) is the speed of the vehicle at a time t in the vicinity of a point on the road with a coordinate x, km/h; p(t, x) is the traffic density, cars/km; u(p) is the function of the dependence of the speed of the traffic flow on its density, km/h; and Q(p) = p·u(p) is the function of the dependence of the traffic flow rate on its density, cars/hour. Function (12) is comparable with the earlier obtained Greenshields model [26,42], and is also known as the equation of state of the traffic flow, which is described by a linear model u(p) = u_f·(1 − p/p_j), where u_f is the free movement speed, km/h, and p_j is the density at which a traffic jam occurs, cars/km. Taking into account the established relationship of the indicators of traffic flow concentration in space and time in (8), Function (12) can be represented as follows:

u(O) = u_f·(1 − O/O_j), (14)

where O and O_j are, respectively, the actual and the maximum lane occupancy at which a traffic jam occurs, %; and u(O) is the function of the dependence of the traffic flow speed on lane occupancy, km/h. The function of the dependence of the traffic flow rate on its density (13) is also known as the fundamental traffic flow diagram [26,44].
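The following minimal sketch evaluates the reconstructed equation of state (14) and the corresponding occupancy form of the fundamental diagram. The parameter values (u_f = 60 km/h, O_j = 100%, k_O = 0.65) are illustrative assumptions, not the study's fitted constants.

```python
# Greenshields-type equation of state used in the LWR framework, rewritten in
# terms of lane occupancy via O = k_O * p (reconstructed Formulas (12)-(14)).

def speed_from_occupancy(occupancy_pct: float, u_f: float = 60.0, o_jam: float = 100.0) -> float:
    """u(O) = u_f * (1 - O/O_j): linear speed-occupancy relation, km/h."""
    return u_f * (1.0 - occupancy_pct / o_jam)

def flow_from_occupancy(occupancy_pct: float, k_o: float, u_f: float = 60.0, o_jam: float = 100.0) -> float:
    """Q(O) = p * u(O) = (O / k_O) * u(O): fundamental diagram in occupancy terms, cars/h."""
    density = occupancy_pct / k_o          # cars/km, from O[%] = k_O * p
    return density * speed_from_occupancy(occupancy_pct, u_f, o_jam)

if __name__ == "__main__":
    for o in (10, 25, 50, 75, 90):
        print(o, round(flow_from_occupancy(o, k_o=0.65)))  # flow peaks near O = O_j/2
```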
The resulting Equations (8) and (14) allow us to transform the fundamental diagram in (13):

Q(O) = (u_f/k_O)·O·(1 − O/O_j), (15)

where Q(O) is the function of the dependence of the traffic flow rate on lane occupancy, cars/hour. In Equation (15), the value of the free movement speed u_f on the considered section of the road network is constant and is limited only by the road safety conditions. In accordance with the current traffic rules of the Russian Federation, the maximum permitted speed within the city limits is 60 km/h, provided there are no additional technical means to regulate the speed limit. The critical value of O_j theoretically corresponds to 100%. It is also assumed that the coefficient of interrelation of the concentration indicators k_O under the same traffic conditions and conditions of the study takes on a constant value. Consequently, u_f, O_j, and k_O are constants, which ultimately allows us to make an assumption: the process of changing the traffic flow rate under the influence of lane occupancy is described by a one-factor quadratic model:

Q(O) = a_1·O − b_1·O², (16)

where a_1, b_1 are model parameters, cars/(hour·%).
Development of a Mathematical Model of the Influence of Traffic Light Control on Traffic Flow Rate
The influence of the controlled intersection on the traffic flow rate is carried out by switching the signals of traffic lights, as well as by their duration as part of the cycle [4,25,26,34-37]. In the absence of traffic light control devices, as well as in situations where the effective phase duration t_e remains unchanged and becomes equal to the duration of the entire traffic light cycle T_c (which is typical for traffic lights with a calling phase for pedestrian traffic, installed on sections of the road network with high transport demand), the maximum value of the traffic flow rate in the complete long-term absence of pedestrians takes on a value equal to the value of the saturation flow M_s. The total duration of the signals prohibiting traffic and of the intermediate time step of the traffic light control cycle is wasted time for the traffic flow; in other words, the time during which there is no movement, and the flow rate at the exit of the controlled intersection equals zero. Consequently, the maximum possible number of vehicles at the exit of the controlled intersection will depend on the effective phase duration t_e, s, and the total duration of the traffic light control cycle T_c, s. The results of earlier studies [34][35][36][37] assume t_e ≈ t_o, where t_o is the duration of the permitting signal of the main time step of the traffic light control cycle, s. This assumption is also accepted in this work. To determine M_s in the study, a computational method was adopted, which also allows taking into account the influence of the geometric parameters of the considered section of the road network. Thus, traffic light control on a section of the road network will additionally limit the maximum possible value of the traffic flow rate through the ratio of the duration of the permitting signal to the total duration of the traffic light cycle. The process of changing the traffic flow rate at the exit of the controlled intersection is described by a one-factor linear model:

Q(λ) = M_s·λ, λ = t_e/T_c, (17)

where λ is the share of the permitting signal in the traffic light cycle. After establishing the type of the one-factor models that reflect the process of changing the traffic flow rate under the influence of key factors, it becomes possible to develop a multi-factor model by combining them.
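For illustration only, the sketch below evaluates the two reconstructed one-factor models, the quadratic occupancy model (16) and the linear traffic-light model (17). The coefficient values are placeholders and are not the fitted parameters reported later in Tables 5-8.

```python
# One-factor models of the traffic flow rate (our reconstruction of (16)-(17)).

def q_occupancy(o_pct: float, a1: float, b1: float) -> float:
    """One-factor quadratic model (16): Q(O) = a1*O - b1*O**2, cars/hour."""
    return a1 * o_pct - b1 * o_pct ** 2

def q_signal_share(lam: float, saturation_flow: float) -> float:
    """One-factor linear model (17): flow limited by the permitting-signal share."""
    return saturation_flow * lam

if __name__ == "__main__":
    print(q_occupancy(30.0, a1=80.0, b1=0.8))           # illustrative: 1680 cars/h
    print(q_signal_share(0.45, saturation_flow=1800.0))  # illustrative: 810 cars/h
```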
For this, it is necessary to predetermine the type of the proposed multi-factor model, which can be presented in a multiplicative or additive form [47]. To decide on the type of model, the nature of the studied regularity was analyzed according to the following algorithm: the presence of extrema in the studied function; the behavior of the response function; the points of the factor space necessarily belonging to the graphic display of the response function; and the nature of the influence of factors on the response function. As a result of the arrangement, a two-factor mathematical model was obtained:

Q(O, λ) = M_s·λ·(a·O − b·O²), (18)

where Q is the traffic flow rate, cars/hour; O is the lane occupancy, %; a, b are model parameters, 1/%; M_s is the saturation flow, cars/hour; and λ is the share of the permitting signal in the traffic light cycle. Ultimately, a hypothesis was put forward: the process of changing the traffic flow rate under the influence of lane occupancy and the traffic light control cycle is described by a two-factor quadratic multiplicative model.
Experiment Planning
To confirm the developed hypothesis, to determine the numerical values of the parameters and statistical characteristics of the mathematical model, and to test it for adequacy, experimental studies were planned and carried out (Table 3). The measurements were carried out in the absence of precipitation and fog, with clear visibility. Data on the traffic flow rate, the occupancy of lanes, and the mode of operation of traffic lights were collected at the intersection of Respubliki and M. Toreza streets, Tyumen. Experimental data were obtained using the equipment shown in Figure 7. Primary information about the mode of operation of traffic lights, including the geometric characteristics of the controlled intersection, the order of priority of passage and directions of movement of vehicles, and the location of the elements of the road traffic light unit at the intersection of Respubliki and M. Toreza streets, was obtained based on documents provided by subordinate divisions of the administration of the city of Tyumen. Experimental data on the duration of traffic light signals as part of the regulation cycle were recorded through a traffic light controller installed at the intersection and then fed to the automated workplace of the engineer of the traffic control center in Tyumen. The total sample of the initial data was 1184 measurements. In order to exclude the uncontrolled influence of the controlled intersection mode, the experimental data were preliminarily redistributed relative to the time of day corresponding to the operating time of a certain traffic light mode. The obtained data were processed using the STATISTICA 10 software package. The statistical method of grouping by mean values was used as a grouping method. Initial experimental data were obtained and processed using modern equipment and software: Traficam video detectors, SPECTR traffic light controller, Traficam Data Tool, SPECTR 2.0, and STATISTICA 10.
Results of the Study of the Regularities of the Influence of Traffic Flow Concentration in Time on Traffic Flow Rate
For each operating mode of traffic lights, the experimental data were grouped according to Sturges' formula [48] for the number of intervals k with a range Δ of lane occupancy O, %. The grouping results are presented in Table 4. Within the range, the type of distribution was determined [47], which, in most cases, corresponded to the normal distribution (Figure 8a).
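A minimal sketch of the grouping step is given below, assuming Sturges' rule k = 1 + log2(n) and simple averaging of flow within each occupancy interval; this is our illustration, not the STATISTICA 10 workflow actually used.

```python
# Grouping (occupancy, flow) measurements into Sturges intervals and averaging
# within each interval (illustrative reimplementation of the grouping step).

import math
from collections import defaultdict

def sturges_bins(n_samples: int) -> int:
    return 1 + math.ceil(math.log2(n_samples))

def group_by_occupancy(occupancy_pct, flow_cars_h):
    k = sturges_bins(len(occupancy_pct))
    lo, hi = min(occupancy_pct), max(occupancy_pct)
    width = (hi - lo) / k or 1.0
    groups = defaultdict(list)
    for o, q in zip(occupancy_pct, flow_cars_h):
        idx = min(int((o - lo) / width), k - 1)
        groups[idx].append((o, q))
    # mean occupancy and mean flow per interval
    return {i: (sum(o for o, _ in v) / len(v), sum(q for _, q in v) / len(v))
            for i, v in sorted(groups.items())}
```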
Using the least-squares method [47], the regularity of the influence of lane occupancy on the traffic flow rate was established, the graphical representation of which is the regression line (Figure 8b); the numerical values of the parameters of model (16) are presented in Table 5. In addition, for model (16), statistical characteristics were determined and are presented in Table 6. The numerical values of the determination coefficients were in the range of 0.9 to 0.96, and the correlation coefficients were from 0.95 to 0.98, which indicated the presence of a very high-strength relationship between the variables. The excess of the calculated value of the Student's t-test over the tabular one confirmed the significance of the obtained correlation, and the Fisher's variance ratio F exceeding the tabular value of the Fisher's F-test testified to the adequacy of the model.
Results of the Study of the Regularities of the Influence of Traffic Light Control on Traffic Flow Rate
In a similar way, the initial data on the process of changing the traffic flow rate under the influence of various characteristics of the traffic light control cycle were obtained (Table 7). The numerical values of saturation flows were determined by the calculation method based on the obtained measurements of the geometric characteristics of the intersection. The initial data on the geometric characteristics of the intersection and the approaches to it were obtained by measuring the distances made to scale and plotted on the layout of the technical means of traffic organization at the intersection. The layout was provided by the Department of Road Infrastructure and Transport of the administration of the city of Tyumen in electronic form in the ".dwg" data file format. The graphical representation of the regularity described by model (17) is shown in Figure 9. For the mathematical model (17), the numerical values of its parameters and statistical characteristics were determined (Table 8). The numerical values of the determination coefficients of the model of the influence of the traffic light control cycle on the traffic flow rate for the left-turn and forward directions of the traffic flow were 0.74 and 0.71, respectively, and the correlation coefficients were 0.86 and 0.84, respectively, which also confirmed the high strength of the relationship between the variables. The excess of the obtained calculated value of the Student's t-test over the tabular one confirmed the significance of the obtained correlation, and the Fisher's variance ratio F exceeding the tabular value of the Fisher's F-test testified to the adequacy of the proposed model. Table 9 presents the parameters and statistical characteristics determined for the mathematical model (18) describing the combined effect of the lane occupancy and the traffic light cycle. The numerical values of the coefficients of determination and correlation indicated a high relationship between the variables. The excess of the obtained calculated value of the Student's t-test over the tabular one confirmed the significance of the obtained correlation. The average approximation error was within acceptable limits, and the Fisher's variance ratio F exceeded the tabular value of the Fisher's F-test, which together testified to the adequacy of the proposed model.
Results of the Study of the Regularities of the Combined Influence of Traffic Flow Concentration in Time and Traffic Light Control on Traffic Flow Rate
The surface shown in Figure 10 is a graphic representation of the combined effect of lane occupancy and the share of the permitting signal in the traffic light cycle on the traffic flow rate.
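As a hedged illustration of the parameter-estimation step (not the authors' STATISTICA 10 procedure), the fragment below fits the reconstructed quadratic model (16) to grouped occupancy-flow data by ordinary least squares and reports the coefficient of determination.

```python
# Least-squares fit of Q(O) = a1*O - b1*O**2 and R^2 for the fitted curve.

import numpy as np

def fit_quadratic_no_intercept(o_pct, q_obs):
    o = np.asarray(o_pct, dtype=float)
    q = np.asarray(q_obs, dtype=float)
    X = np.column_stack([o, -o**2])              # design columns for a1 and b1
    (a1, b1), *_ = np.linalg.lstsq(X, q, rcond=None)
    q_hat = X @ np.array([a1, b1])
    ss_res = np.sum((q - q_hat) ** 2)
    ss_tot = np.sum((q - q.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return a1, b1, r2
```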
Analysis of the Results Obtained The obtained study results showed that traffic flow concentration in time, the measure of which is lane occupancy, can vary in the range of values from 0 to 100%. The minimum value of occupancy indicates the complete absence of vehicles in the lane, and the maximum value indicates that the movement of vehicles is completely stopped due to various reasons: vehicle breakdown, traffic accident, etc. It is also possible to assert that there is an optimal value of lane occupancy at which the maximum possible value of the traffic flow rate is achieved, comparable to the value of the saturation flow. Therefore, further management of traffic flows at controlled city intersections should be associated with the optimization of lane occupancy in all directions of traffic flows. In turn, as the values of lane occupancy increase from zero to the optimal one, the traffic flow rate increases from zero to the maximum. In this case, there is a surplus in the capacity of the traffic lane under consideration, which can be used as a resource for adjusting the traffic light control cycle. After traffic flow concentration in time reaches the optimal value, traffic flow rate decreases from maximum to zero, followed by an increase in lane occupancy to a critical 100% value. In this case, there is a deficit in the capacity of the traffic lane under consideration, which first indicates first the risk of traffic congestion, then the transition of the traffic flow to a congestion state, and ultimately the complete cessation of traffic on the investigated section of the road network. The results of experimental studies have also shown that the optimal value of lane occupancy changes depending on the directions of traffic flow along the lanes. The authors believe that the change in the optimal value of lane occupancy depending on traffic directions is due to the fact that for the purpose of traffic safety in turning directions, drivers are forced to reduce the speed of movement. As a result, vehicles spend more time in the control zone of the detector than when driving in a forward direction. This phenomenon also determines the formation of the saturation flow value in turning directions, which was taken into account by the authors in this work when studying the influence of the traffic light control cycle on the maximum possible value of the traffic flow rate. In turn, the change in the share of the permitting signal in the traffic light cycle directly proportionally affects the maximum possible value of the traffic flow rate. An increase and decrease in the share of the permitting signal in the traffic light cycle additionally increases or limits the maximum possible value of the traffic flow rate, respectively, which was also experimentally confirmed in this work. Discussion At present, for many developed countries of the world, the most significant problem in the field of urban traffic organization is the formation of traffic congestion on the urban road network [1]. Previous studies note that this phenomenon is formed due to the high level of motorization [1,4,5]. Based on the results of the analysis of the available world expertise in solving this problem, three main approaches were identified: the road-building approach [1,8], the organizational and administrative approach [1,[8][9][10][11][12][13][14][15][16][17][18][19], and the approach consisting of the use of intelligent transport systems [8,[20][21][22][23][24]. 
According to the authors, the most promising one is the approach consisting of the use of intelligent transport systems to control traffic flows. It should be noted that in this matter, the authors of this work do not state it is the ultimate truth, and rightly emphasize that each of the conventionally formed approaches to solving the problems of traffic management has the right to exist, and can be used and be effective depending on the situation, available resources, and the goals and objectives of the researchers. To select a key indicator of traffic management, the authors studied the process of traffic congestion on the city road network. It has been established that the formation of traffic jams is directly related to the process of changing the traffic flow rate [1,6,7]. This indicator was designated as a target for further research. The results of the analysis also showed that situations when the value of the traffic flow rate tends to zero arise not only in the event of a traffic jam, but also in the case of a complete absence of vehicles on the road network. To resolve this uncertainty, it is necessary to additionally take into account the data on the concentration of the traffic flow and compare them with the number of passing vehicles [26,27]. Based on previous studies, it was established that at present, for a number of reasons given by the authors as arguments, it is advisable to use the concentration of traffic flow in time, the measure of which is lane occupancy [7,[27][28][29][30][31][32][33]. The results of the analysis of previous studies also showed that the presence of traffic lights at city intersections is certainly necessary to improve road safety and minimize the number of road accidents, but it significantly reduces the maximum possible traffic flow rate, which further exacerbates the situation and contributes to the formation of traffic congestion [34][35][36][37]. Unfortunately, the combined effect of the concentration of traffic flow in time and the means of traffic light control on the traffic flow rate in cities has not been fully studied, which served as the basis for setting the goal of this work. To describe the regularity of the process of changing the traffic flow rate under the influence of lane occupancy, a one-factor quadratic mathematical model was developed based on a macroscopic hydrodynamic model of traffic flow [26,44]. Despite the existing criticism regarding the inoperability of the selected model under low-concentration conditions, the authors of this work adhere to the position that the indicated drawback of the selected model is associated with the density indicator, which has been used until now as the main measure of traffic flow concentration. The discrepancy between theoretical positions and experimental data can be associated with the problem of measuring density and the impossibility of obtaining correct information under normal conditions, which was noted by the authors based on previous studies. At the same time, the question of the relationship or its absence between the indicators of the concentration of the traffic flow in time and space currently also remains unanswered. On the one hand, there are previous studies that indicate the possibility of a relationship between these indicators [27]. On the other hand, it has not yet been possible to experimentally confirm the existence of this relationship in full [25,26,30]. 
Within the framework of this work, the authors carried out additional studies that also confirmed at least the existence of a theoretical relationship between traffic density and lane occupancy. Subsequently, the results obtained made it possible to transform the initial hydrodynamic model, taking into account the effect of lane occupancy on the traffic flow rate, and also to form a working hypothesis that subsequently was confirmed experimentally. Conclusions Theoretical and experimental studies performed by the authors of this paper in accordance with the designated methodology on the example of one of the most significant street intersections in the city of Tyumen in the Russian Federation confirmed the existence of regularities in the process of changing traffic flow rate under the influence of traffic concentration and traffic light control. The results also showed that this process is characterized by the presence of an optimum; i.e., the optimal lane occupancy value at which the maximum traffic flow rate is achieved. A deviation from the optimum indicates the irrational use of the road network resource, which in turn indicates the need to optimize road traffic by redistributing traffic flows. Experimental studies also confirmed the adequacy of the developed mathematical model of the process under study, and made it possible to determine its parameters depending on the directions of traffic flow. The analysis of the experimental results showed that depending on the direction of movement of vehicles, the optimal value of the lane occupancy also changes. The authors believe that this phenomenon is justified by a decrease in the speed of the traffic flow in order to safely perform the turn. Thus, when driving in a cornering direction, the maximum value of the traffic flow rate is formed with a larger value of lane occupancy. The obtained results of the study can be directly used to improve the algorithms for the operation of controlled intersections as part of urban automated traffic control systems. The developed mathematical model will make it possible to predict the maximum value of the traffic flow rate at the exit of the controlled intersection, taking into account not only the current operating mode of traffic lights, but also the influence of lane occupancy. In the future, this will make it possible to determine the deficit or surplus of the time required to meet the actual transport demand at the controlled intersection, and thereby make a more accurate adjustment of the traffic light cycle. Therefore, further promising areas of research will be the development of a practical methodology aimed at increasing the efficiency of traffic management, taking into account the traffic flow concentration in time, as well as clarifying the regularities of the influence of lane occupancy under different conditions of the road surface and weather conditions. Author Contributions: Conceptualization, V.M.; methodology, S.I.; validation, V.M. and S.I.; V.M. analyzed the state of the issue to solve the indicated problem; developed the hypothesis and theoretical and experimental parts of the study; analyzed the results of the study; and wrote the text of the paper. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding.
10,925.6
2021-07-26T00:00:00.000
[ "Engineering", "Environmental Science", "Geography" ]
Stability Analysis of a Ratio-Dependent Predator-Prey Model
Introduction
Recently, the predator-prey models have been studied by many authors [1][2][3][4][5][6][7][8]. In general, a predator-prey model has the following forms: where x(t) and y(t) are the densities of the prey and predator population at time t, respectively. The function f(x) represents the growth rate of the prey population, g(y) represents the growth rate of the predator population, and p(x) represents the functional response function of the predator population to the prey population. In [1], Xu et al. used the function p(x) = x^2/(x^2 + my^2) as the functional response function of the predator population to the prey population. The time delay due to the gestation of the predator is discussed in [1]. It is noted that in model (1), each individual prey admits the same risk of being attacked by predators and each individual predator admits the same ability to feed on prey. This assumption seems not to be realistic for many animals. In the natural world, there are many species whose individuals pass through an immature stage. Stage structure is a natural phenomenon and represents, for example, the division of a population into immature and mature individuals. In the last two decades, stage-structured models have received great attention [3-7, 9]. Based on the above discussion, we study the following predator-prey model: where x_1(t) and x_2(t) are the densities of the immature and mature prey at time t and y_1(t) and y_2(t) are the densities of the immature and mature predators at time t. In model (2), all parameters are positive constants. τ ≥ 0 is the time delay due to the gestation of the predator. x_2^2/(x_2^2 + my_2^2) is the ratio-dependent functional response. Model (2) is subject to the following initial conditions: The organization of this study is as follows. In Section 2, we discuss the local stability of the nonnegative boundary equilibrium and the positive equilibrium of models (2) and (3). The existence of a Hopf bifurcation for models (2) and (3) at the positive equilibrium is also established. Sufficient conditions are derived for the global stability of the nonnegative boundary equilibrium and positive equilibrium of models (2) and (3) in Section 3, respectively.
Local Stability and Hopf Bifurcation
In this section, by analyzing the corresponding characteristic equations, we study the local stability of each of the nonnegative equilibria and the existence of a Hopf bifurcation at the positive equilibrium of models (2) and (3).
Global Stability
In this section, by using an iteration technique, we discuss the global stability of the nonnegative equilibria E_1 and E_+ of models (2) and (3), respectively. Theorem 2. Let the stated condition hold; then, the nonnegative boundary equilibrium E_1 of model (2) is globally stable. Proof. It follows from the positive solution of model (2) that we can obtain By Lemma 2.2 of [5] and comparison, we have Therefore, there is a positive number t_1, for sufficiently small positive number ε, such that as t > t_1, x_1(t) ≤ x_1′ + ε. Hence, for t > t_1 + τ, we derive that By Lemma 2.2 of [5] and comparison, we can obtain Therefore, there is a positive number t_2. For t > t_2, we derive from model (2) that By Lemma 2.2 of [5] and comparison, we have (23) By model (2), it follows that By Lemma 2.4 of [3] and comparison, we obtain that which together with (19) and (21) yields Hence, the equilibrium E_1(x_1′, x_2′, 0, 0) of model (2) is globally stable. , L_{y_i} = liminf_{t→+∞} y_i(t), (i = 1, 2). (28) By the first two equations of model (2), we can obtain that By Lemma 2.2 of [5] and comparison, we have So, for sufficiently small positive number ε, there exists a positive number t_1, such that if t > t_1, then For t > t_1 + τ, by the last two equations of model (2), we get By Lemma 2.2 of [5] and comparison, we obtain (32) Therefore, for sufficiently small positive number ε, there is a positive number t_2. For t > t_2, by the first two equations of model (2), we have (34) By Lemma 2.4 of [3] and comparison, we derive that Hence, for sufficiently small positive number ε, there is a positive number t_3. For t > t_3 + τ, it follows from the last two equations of model (2) that By Lemma 2.4 of [3] and comparison, we can obtain Therefore, for sufficiently small positive number ε, there is a positive number t_4 ≥ t_3 + τ, such that if t > t_4, y_2(t) ≥ N_{y_2}^1 − ε. In this case, by the first two equations of model (2), we have Therefore, for sufficiently small positive number ε, there is t_5 ≥ t_4, such that if t > t_5, From the last two equations of model (2), we obtain that for t > t_5 + τ, By Lemma 2.2 of [5] and comparison, if a_2 r_2 > d_4(r_2 + d_3) holds, we have Hence, for ε > 0 sufficiently small, there is a t_6 ≥ t_5 + τ, such that if t > t_6, y_2(t) ≤ M_{y_2}^2 + ε. Again, for sufficiently small positive number ε and t > t_6, by the first two equations of model (2), we have (43) So, there is a positive number t_7 ≥ t_6, for t > t_7, For sufficiently small positive number ε and t > t_7 + τ, from the last two equations of model (2), we can derive
1,635.2
2022-03-17T00:00:00.000
[ "Mathematics" ]
Hybrid graphene-manganite thin film structure for magnetoresistive sensor application An increasing demand of magnetic field sensors with high sensitivity at room temperatures and spatial resolution at micro-nanoscales has resulted in numerous investigations of physical phenomena in advanced materials, and fabrication of novel magnetoresistive devices. In this study the novel magnetic field sensor based on combination of a single layer graphene (SLG) and thin nanostructured manganite La0.8Sr0.2MnO3 (LSMO) film—hybrid graphene-manganite (GM) structure, is proposed and fabricated. The hybrid GM structure employs the properties of two materials—SLG and LSMO—on the nanoscale level and results in the enhanced sensitivity to magnetic field of the hybrid sensor on the macroscopic level. Such result is achieved by designing the hybrid GM sensor in a Wheatstone half-bridge which enables to employ in the device operation two effects of nanomaterials—large Lorentz force induced positive magnetoresistance of graphene and colossal negative magnetoresistance of nanostructured manganite film, and significantly increase the sensitivity S of the hybrid GM sensor in comparison with the individual SLG and LSMO sensors: S = 5.5 mV T−1 for SLG, 14.5 mV T−1 for LSMO and 20 mV T−1 for hybrid GM at 0.5 T, when supply voltage was 1.249 V. The hybrid GM sensor operates in the range of (0.1–2.3) T and has lower sensitivity to temperature variations in comparison to the manganite sensor. Moreover, it can be applied for position sensing. The ability to control sensor’s characteristics by changing technological conditions of the fabrication of hybrid structure and tuning the nanostructure properties of manganite film is discussed. Introduction The detection of magnetic fields with increased spatial resolution to micro-nanoscales is very important for magnetometry, magnetic storage, biosensing and other applications [1][2][3][4][5]. Moreover, the increasing use of smart devices with incorporated chip-based sensors are becoming very promising for growing automotive and Internet of things industries and Intelligent transport [6][7][8]. It is of great interest to have lowdimension sensors with increased sensitivity and extended capabilities operating at room temperatures. The discovery of Hall effect in semiconductors and magnetoresistive effects (anisotropic AMR, tunneling TMR, giant GMR and colossal CMR) in magnetic structures encouraged fundamental research [9][10][11] leading to a number of laboratory-scale and commercially available devices [12][13][14]. However, each application has its specific requirements for sensitivity, temperature and magnetic field ranges of operation, accuracy, etc. Therefore, the choice of material with special properties and design of sensing element becomes very important. Recently, it has been demonstrated that nanostructured (polycrystalline with nanosize grains) lanthanum manganite films can be used for the development of magnetic field sensors operating in wide range of temperatures (4-320 K) and magnetic fields (from mT up to megagauss) [15][16][17]. Manganite films reveal paramagnetic-ferromagnetic phase transition at a Curie temperature and exhibit negative colossal magnetoresistance phenomenon which in simplified theoretical explanation is related to the double-exchange mechanism leading to the increase of material conductivity when magnetic moments of manganese ions are aligned in an external magnetic field [18]. 
Many research groups are interested in so-called extrinsic magnetoresistance [19] phenomena related to spin-polarized tunneling transport across grain boundaries in polycrystalline manganites, since these promise large magnetoresistance values in low magnetic fields [20,21]. Magnetoresistive sensors based on nanostructured manganite films have large sensitivity [22,23] in a wide range of temperatures, however, it decreases with increase of temperature in a paramagnetic state and with increase of magnetic field due to magnetoresistance saturation. The other advantage of nanostructured manganite films -they are relatively insensitive to magnetic field direction at high fields, what makes it possible to design so-called B-scalar sensors [14,24,25]. One of the most recently studied materials for the development of magnetoresistive sensors is graphene [26], which is a two-dimensional Dirac semimetal with a very high mobility of charge carriers. The operation of graphene sensor is based on Lorentz force induced positive magnetoresistance phenomenon (Gauss effect). It was shown [27] that such sensors are 100 times more sensitive to magnetic field than silicon equivalent and graphene magnetoresistance does not saturate up to very high fields (62 T) [28]. Graphene sensor can achieve very large magnetoresistance values at intermediate and high magnetic fields (5-15 T) [29][30][31][32], however, at low fields (<1 T) the sensitivity of such sensors is low due to classical quadratic MR dependence on magnetic flux density B [33]. Thus low sensitivity of graphene based magnetic field sensor in the low field range up to few tesla and saturation of the sensitivity of manganite based sensor at high magnetic fields are the main disadvantages of these devices. Moreover, the zero magnetic field resistivity of both manganite film and graphene layer at temperatures around room temperature decreases with increase of temperature (semiconducting state), what makes the sensor's response sensitive to the ambient temperature variations. In this paper, we suggest the magnetic field sensor based on combination of graphene layer and thin nanostructured manganite film-hybrid graphene-manganite (GM) sensor, which in comparison to an individual graphene or manganite sensors, has significantly larger sensitivity to magnetic field and lower sensitivity to ambient temperature variations. Graphene sensing element fabrication The commercially available single layer graphene (SLG) grown on Cu foil by chemical vapor deposition method and covered with Poly(methyl methacrylate) (PMMA) polymer was used in the present investigations. The graphene grown on Cu foil was transferred to target substrate Si/SiO 2 -100 nm with already formed Ag contacts by applying wet chemical etching procedure. Firstly, Cu foil was chemically dissolved from the bottom of Cu/Graphene/PMMA structure and, as a result, the SLG+PMMA flake was floating on the top of etching solution surface. Afterwards rinsing with deionized water was performed and the floating flake was 'catched' and transferred on the top of target substrate with already formed Ag contacts. The PMMA layer was left as the protection of graphene layer in order to prevent additional oxygen contamination and altering of electric and magnetic properties of graphene during the time. 
Nanostructured manganite film fabrication The La 1-x Sr x MnO 3 (LSMO) films with a thickness of 350 nm were deposited using a pulsed-injection metal-organic chemical vapor deposition technique [34] onto a polycrystalline Al 2 O 3 substrate. Such substrate was chosen in order to grow nanostructured films with nanosize crystallites. After flash evaporation (at ∼270°C) of the micro-doses, the resulting vapor mixture was transported by an Ar+O 2 (3:1) gas flow towards the heated substrate. During the growth, the temperature of the substrate was kept at 750°C and injection frequency was 2 Hz. A total pressure of Ar+O 2 gases in reactor was 10 Torr. Morphology and microstructure characterization The surface morphology and microstructure of the LSMO film was investigated by scanning electron microscope (SEM) and transmission electron microscope (TEM). The Sr content x in the grown films was estimated by energy-dispersive x-ray spectroscopy measurements and was found almost constant: x=0.2±0.01. Electrical transport and magnetoresistance measurements The dependence of manganite film and graphene layer resistances on temperature was investigated in the range from 5 to 320 K using closed cycle helium cryo-cooler (JANIS). The magnetoresistance of individual graphene or manganite sensors as well as hybrid sensor's response to magnetic field was investigated using electromagnet which was able to generate DC magnetic field up to 2.35 T. The response to magnetic field from hybrid graphene-manganite sensor was recorded by measuring the change of voltage drop ΔV res across the manganite film, when the circuit was supplied by source voltage of V S =1.249 V. The investigation of the sensor's response to magnetic field direction was performed by rotation of the electromagnet. One can see that the 350 nm thick film is nanostructured with well-pronounced crystalline columns spread throughout the entire thickness of the film in the direction perpendicular to the substrate plane. The crystallite columns having width of 50-70 nm were separated by 5-7 nm thick vitreous grain boundaries (evaluated from high-resolution TEM). Results and discussion The hybrid GM sensor was designed as a Wheatstone half-bridge (voltage divider, see figure 1(d)) with three terminal (1, 2, 3) electrical circuit which consisted of a power supply of voltage V s and two magnetic field sensing elements connected in series: SLG (4) and nanostructured manganite film LSMO (5) represented in the figure 1(d) as resistors R SLG and R LSMO , respectively. The response voltage V res in applied magnetic field was measured across the manganite film. An increase of sensitivity of the proposed device was expected as a result of opposite signs of magnetoresistance phenomena in graphene and manganite film. The schematic structure of both sensing elements SLG and LSMO is shown in figures 1(e), (f). For the proof-of-concept, two designs of hybrid GM sensor were proposed and investigated: coplanar and perpendicular (see figure 1(g), (h)). In case of coplanar design (g), the graphene layer and manganite film were placed and connected in the same plane. In this case the applied magnetic field was directed with the same angle in respect to the plane of both sensing elements. For the perpendicular design (h), the manganite film plane was perpendicular to the graphene layer plane. In such case the magnetic field applied perpendicular to the graphene layer was directed parallel to the manganite film plane. 
It has to be noted that the magnetoresistive properties of individual sensors, fabricated only from a single graphene layer (figure 1(e)) or a manganite film (figure 1(f)), were also investigated. In this case, the graphene layer or manganite film was connected in a voltage divider circuit in series with a ballast resistor replacing R LSMO or R SLG, respectively, in the circuit presented in figure 1(d). The value of the resistance of the ballast resistor was chosen the same as that of the graphene or manganite film in order to ensure the same conditions as for the hybrid GM sensor in zero magnetic field.
Magnetoresistance and sensitivity of the hybrid GM sensor
The magnetoresistance of the sensing elements (figure 2) was defined as

MR = [R(B) − R(0)]/R(0) × 100%, (1)

where R(B) and R(0) are resistance values at magnetic field B and zero field, respectively. One can see that the graphene layer exhibits a positive while the nanostructured manganite film a negative magnetoresistance phenomenon. In order to compare the influence of the magnetoresistive properties of the individual sensors based on SLG or LSMO films on the sensitivity of the hybrid structure, the SLG was chosen with MR values similar to those of the LSMO film in the investigated magnetic field range. One can see that at low field (B<1.5 T) the MR dependence is quadratic and thus the SLG sensor has low sensitivity to magnetic field in this range. At higher fields (B>1.5 T) the MR behavior changes to linear [28,29] and the slope of this dependence is very important for the sensitivity of a magnetic field sensor. Also, it has to be pointed out that the MR of SLG is maximal when B is perpendicular to the layer plane and zero if it is applied in the plane, which is typical for Lorentz force magnetic field sensors. On the contrary, the value of the MR of the manganite film only slightly depends on the direction of the magnetic field (MR anisotropy MRA = (MR|| − MR⊥)/MR||) with respect to the film plane (B parallel || or perpendicular ⊥). The MRA significantly decreases with increase of magnetic field and at 2 T is of the order of 10% (in fields higher than 10 T it is less than 2%). Such behavior of the MR is a result of the special nanostructure of the manganite film. Due to the demagnetization (shape) effect related to the aspect ratio of the thin film geometry, the direction of the easy axis of magnetization is aligned with the film plane [35][36][37]. In our case of the nanostructured LSMO film, such an effect is partly compensated by the column-like crystallite structure, in which the easy axis of magnetization in a single columnar crystallite is directed along its axis perpendicular to the film plane. Moreover, depending on the thickness of the crystalline columns and the properties of the grain boundaries with reduced magnetization and crystalline order [38], it is possible to tune the magnetoresistance magnitude of the nanostructured films [34] and change the compensation level of the demagnetization field [23,37]. As a result, the magnetoresistance anisotropy can be minimized [23,34]. It has to be noted that figure 2 presents only one case: the results of the La 0.8 Sr 0.2 MnO 3 film whose nanostructure is presented in figure 1(c). By changing the film's nanostructure through technological conditions [34] and the aspect ratio of the film's geometric shape (decreasing planar dimensions down to the film thickness), it is possible to minimize the demagnetization field and thus to achieve similar MR values for both perpendicular and parallel field directions.
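For illustration only (our helper, not code from the paper), the fragment below computes the magnetoresistance per the reconstructed Equation (1) and the MR anisotropy defined in the text. The R(B) values used in the example are back-calculated placeholders consistent with the reported zero-field resistances and MR magnitudes, not raw measurements.

```python
# Magnetoresistance and MR anisotropy from resistance readings (in ohms).

def magnetoresistance_pct(r_field: float, r_zero: float) -> float:
    """MR = [R(B) - R(0)] / R(0) * 100%."""
    return (r_field - r_zero) / r_zero * 100.0

def mr_anisotropy(mr_parallel: float, mr_perpendicular: float) -> float:
    """MRA = (MR_parallel - MR_perpendicular) / MR_parallel."""
    return (mr_parallel - mr_perpendicular) / mr_parallel

if __name__ == "__main__":
    print(magnetoresistance_pct(r_field=596.3, r_zero=670.0))  # about -11 % (LSMO-like)
    print(magnetoresistance_pct(r_field=305.4, r_zero=283.0))  # about +7.9 % (SLG-like)
```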
Two effects of nanomaterials-large positive magnetoresistance of graphene at high (>1 T) magnetic fields [28][29][30][31] and large negative magnetoresistance of nanostructured strontium manganite films [17] at room temperature from low fields up to intermediate fields-allowed us to propose hybrid graphene-manganite sensor expecting to increase the sensitivity to magnetic field in a wide magnetic field range. The absolute sensitivity of the sensors to magnetic field was defined as S=δΔV res /δB. The dependences of S versus B for two GM sensor configurations obtained from data presented in figure 3(a) are shown in figure 3(b). One can see, that in the range of (0-1.5) T the sensitivity of individual LSMO is higher in comparison to the individual SLG sensor. At higher fields, the sensitivity of the SLG sensor becomes higher. As a result, the total sensitivity of the hybrid GM sensor is always higher in the measured magnetic field range in comparison to the individual LSMO or SLG sensors. Figure 3(b) shows that at low magnetic field the sensitivity of manganite film is significantly higher if magnetic field is oriented along the film plane (perpendicular configuration). Due to demagnetization (shape) effect in thin manganite film, the S versus B dependence (lower graph in figure 3(b)) consists of two regions with different slopes demonstrating different nonlinearity of ΔV res versus B characteristic. Thus the design in which graphene layer and manganite film planes are perpendicular each to the other is preferable for magnetic field sensor as in this case the nonlinearity of ΔV res versus B characteristic in all measured magnetic field range is described by one and the same law (ΔV res ∼B 2 ). This also makes less complicated calibration procedure of the sensor in comparison to the coplanar case. It has to be noted, that for sensor applications the constant sensitivity is preferable. However, it usually can be achieved only in narrow magnetic field range (ΔB) linear . For example, GMR or TMR sensors have linear response and constant sensitivity in mT range [39]. In such case the sensitivity is defined as MR/(ΔB) linear which can be increased several times by decreasing the saturation field and measurement range. In the case of graphene, the linear response can be achieved at much wider range up to very high magnetic fields (from 1 T up to 62 T at room temperature [28]). However, at lower fields the MR has B 2 dependence [28,29]. Therefore, for sensors applications in a wide magnetic field range, nonlinear characteristics are not a problem: one can use modern electronics and signal conditioning circuits with stored in advance calibration data [14]. In our proposed device the magnetoresistance change of graphene and manganite elements has different signs, thus we evaluated the sensitivity of the hybrid GM structure and compared it with the sensitivities of individual elements as response voltage change relative to the magnetic field change. For comparison, the S value in hybrid GM sensor of perpendicular configuration in respect to the individual graphene layer sensor increases approximately 13 times at B=0.1 T and 4 times at B=0.5 T ( figure 3(b), upper graph). However, when both graphene and manganite planes are in parallel, the S value changes from 7 to 3.7 times ( figure 3(b), lower graph), respectively. This demonstrates that sensor in which graphene and manganite planes are perpendicular each to the other exhibits larger sensitivity in comparison to coplanar design. 
However, if the nanostructure of the LSMO film and the aspect ratio of its geometric shape were optimized, it would be possible to minimize the demagnetization field and to achieve high sensitivity of the hybrid GM sensor at low magnetic fields using the more technologically convenient coplanar configuration LSMO⊥+SLG⊥, when both layers are in one plane. It is important to note that the sensitivity S of the GM sensor also depends on the supply voltage V s of the measurement circuit (see figure 1(d)). Therefore, in the general case we have to consider a relative voltage change of the sensor's response with respect to the supply voltage: ΔV res /V S . For example, at 0.5 T the voltage normalized sensitivity S V = S/V S when V S = 1.249 V is the following: 0.0044 V/VT, 0.0116 V/VT, and 0.016 V/VT for the individual SLG, LSMO and hybrid GM sensors, respectively. In comparison, the Hall sensor based on silicon achieved S V = 0.1 V/VT, and that based on graphene (0.35-3) V/VT (see, for example, a comparison table in [40]); however, these sensors were operating only in the mT range. It is obvious that the maximal sensitivity of the hybrid GM sensor depends on the intrinsic properties of both the manganite film and the graphene layer in magnetic field (magnetoresistance dependences on B, see figure 2) as well as on the resistance values of both elements in zero magnetic field. Therefore, we expressed the relative voltage change in the form of its dependence on the absolute values of the magnetoresistances MR SLG and MR LSMO as well as the resistances R SLG (0) and R LSMO (0) of the SLG and the manganite LSMO, respectively. According to the electrical circuit shown in figure 1(d), the hybrid GM sensor's response change is the following:

ΔV res = V s [R LSMO (B)/(R LSMO (B) + R SLG (B)) − R LSMO (0)/(R LSMO (0) + R SLG (0))]. (2)

It could be expressed in the form:

ΔV res /V s = 1/(1 + r·m) − 1/(1 + r). (3)

Here the following parameters r and m are introduced:

r = R SLG (0)/R LSMO (0), (4)

m = (1 + MR SLG )/(1 − |MR LSMO |), (5)

so that

ΔV res /V s = 1/[1 + r·(1 + MR SLG )/(1 − |MR LSMO |)] − 1/(1 + r). (6)

It has to be noted that the MR SLG and MR LSMO in equation (6) are measured not in percentage (as MR in equation (1) and figure 2), but in parts of the unit (MR/100%). It was shown (see figure 2) that the magnetoresistance of graphene is positive, while manganite exhibits negative MR; for this reason the sign in front of the absolute value of the MR LSMO in equation (5) is negative. Equation (3) can be used to determine the required parameters of LSMO and SLG in order to obtain the maximal relative response of the hybrid GM sensor. For example, the zero field resistances of our fabricated individual LSMO and SLG sensors were 670 Ω and 283 Ω, respectively, while the magnetoresistance magnitudes at 2 T were 11% and 7.9%, respectively (see figure 2). Therefore, from equations (4), (5) we obtain the following values of parameters: m=1.21, r=0.42, and from equation (3) the absolute value of the relative response ΔV res /V s =0.04. When V s =1.249 V, using equation (3) we determine ΔV res =0.051 V, which is in agreement with the measured value ≈52 mV (see figure 3(a)). From equation (3) it follows that the maximal sensitivity of the hybrid GM sensor in the investigated magnetic field range can be obtained when the resistances of the SLG and the LSMO film are of similar value. Moreover, the relative response can be increased by increasing the parameter m, i.e. by increasing the MR of graphene and manganite through optimizing the fabrication technology of both layers. For example, if the MR of LSMO were increased up to 16% at 2 T by using two sources of precursor supply during the PI MOCVD technology proposed in [41] and the MR of graphene were increased up to 100% using optimized graphene preparation technology on a BN substrate (see [42]), the parameter m=2.38.
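As a hedged numerical check of the equations reconstructed above (our reconstruction, not code from the paper), the short fragment below evaluates the relative response of the series SLG-LSMO divider from the zero-field resistance ratio r and the magnetoresistance parameter m, reproducing the values quoted in the text.

```python
# Relative response of the hybrid GM voltage divider (Equations (3)-(5)).

def relative_response(r: float, m: float) -> float:
    """Equation (3): delta(V_res)/V_s = 1/(1 + r*m) - 1/(1 + r)."""
    return 1.0 / (1.0 + r * m) - 1.0 / (1.0 + r)

def m_parameter(mr_slg: float, mr_lsmo_abs: float) -> float:
    """Equation (5): m = (1 + MR_SLG) / (1 - |MR_LSMO|), MRs as fractions of unity."""
    return (1.0 + mr_slg) / (1.0 - mr_lsmo_abs)

if __name__ == "__main__":
    r = 283.0 / 670.0                    # Equation (4) with the reported 283 and 670 ohm
    m = m_parameter(0.079, 0.11)         # reported MR magnitudes at 2 T
    dv = abs(relative_response(r, m)) * 1.249   # supply voltage 1.249 V
    print(round(m, 2), round(dv, 3))     # about 1.21 and 0.052 V (~52 mV), as quoted
```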
Temperature dependence
A very important parameter of a magnetic field sensor is the sensitivity of its zero-field resistance to temperature variations. Figure 4 shows the sheet resistance as a function of temperature T for the graphene layer (SLG) and the thin manganite film (LSMO). The metal-insulator transition temperature of the LSMO film, which corresponds to the temperature of the resistivity maximum (T_m), is 250 K. In the temperature range from 275 K to 320 K both the graphene and the manganite exhibit semiconductor-type resistance versus temperature behaviour, with resistance temperature coefficients (RTC) of −24 Ω/K and −4 Ω/K, respectively. This means that the element of the hybrid graphene-manganite sensor most sensitive to temperature variations is the manganite film. However, when the two active elements (LSMO and SLG) are connected into the hybrid GM sensor, the change of the voltage response caused by a change of the ambient temperature is less pronounced. Figure 4(b) shows the absolute value of the zero-field voltage response change ΔV_res(0), taken relative to its value at 250 K, as a function of temperature in the semiconducting state (250-320 K) for the sensor consisting of the individual manganite film (red circular dots) and for the hybrid graphene-manganite sensor (black squares). In the temperature range from 290 to 320 K the voltage response change is about 17% smaller for the hybrid GM sensor than for the LSMO sensor. The sensitivity to temperature variations can be decreased significantly by using a manganite film with a lower RTC value; a technological method to decrease the RTC of manganite films, based on connecting two LSMO films with different metal-insulator transition temperatures, was proposed by the co-authors in an EU patent [43]. Moreover, calibration data of the sensor, prepared in advance over the whole operating range of temperature and magnetic field, can be stored in a modern electronics module and used during measurements to convert the measured response voltage change into magnetic flux values [14].
Application for position sensing
The hybrid graphene-manganite sensor is a combination of two sensors: a Lorentz force sensor, whose signal strongly depends on the direction of the magnetic field (it is zero if B is parallel to the graphene plane), and a CMR-B-scalar sensor, which has only a small sensitivity to the orientation of the magnetic field. This makes it possible to use the hybrid GM sensor also as a position sensor for various applications [44][45][46]. Figure 5(a) shows the response of the GM sensor for two magnetic field orientations: (1) B parallel to both the graphene and the manganite planes (red dashed curve) and (2) B parallel to the manganite film plane and perpendicular to the graphene layer plane (black solid curve). The difference between the solid and dashed curves shows the influence of the magnetoresistance of the graphene layer (the SLG magnetoresistance is zero in the in-plane configuration) on the total response of the sensor. Figure 5(b) demonstrates how the hybrid graphene-manganite sensor can be applied to position sensing.
When B is parallel to the graphene layer, the response of the sensor reflects only the magnetic field magnitude, which depends on the distance to a permanent-magnet field source (the response comes only from the manganite element). In this case the GM sensor operates as a proximity (position) sensor. After a defined distance is reached, the object carrying the GM sensor can be rotated with respect to the permanent magnet, and an additional response signal (due to the graphene layer) appears that changes with the angle θ. In the latter case the GM sensor operates as an angle sensor. The curve presented in figure 5(b) shows the dependence of the total response of the hybrid GM sensor at 0.7 T on the angle of the magnetic field with respect to the graphene plane. The sensitivity of the sensor can be evaluated as follows: the change of the trace with rotation angle is (2 ± 0.07) mV at 0.7 T, which amounts to approximately 22% of the basic signal (∼9 mV). It is difficult to compare the sensitivity of such a sensor with that of commercial GMR or AMR angle and position sensors operating at much lower magnetic fields (for example, a relative sensitivity of 0.112 mV/VOe was obtained for a GMR sensor in the range of ±50 Oe [46]). Compared with other sensors, the proposed GM sensor has the advantage that it can detect objects with a larger tolerance for the air gap (it does not saturate at higher magnetic fields and has a wider operating range). Moreover, the use of a single sensor offering both position and angle sensing options instead of several sensors is an advantage in many applications.
Conclusions
In conclusion, we have demonstrated a novel magnetic field sensor based on a hybrid structure of an SLG and a thin nanostructured manganite film, whose sense layer thickness, depending on fabrication conditions and design configuration, can be minimized to micro- and nanoscale dimensions. The hybrid graphene-manganite sensor is designed as a Wheatstone half-bridge in which both sensing elements, SLG and LSMO, are connected in series; the combination of two effects of these nanomaterials, the large positive magnetoresistance of graphene and the large negative magnetoresistance of the nanostructured manganite film, allowed us to significantly increase the sensitivity of the hybrid GM sensor compared with the individual SLG and LSMO sensors. Moreover, the hybrid graphene-manganite sensor has a lower sensitivity to temperature variations than the manganite sensor and can also be applied for position sensing.
Advanced TexSy-C Nanocomposites for High-Performance Lithium Ion Batteries
This study expands the family of lithium-tellurium sulfide batteries, which have been recognized as a promising choice for future energy storage systems. Herein, a novel electrochemical method is applied to engineer a micro/nano-structured Te x S y material, and it is found that Te x S y phases combined with multi-walled carbon nanotubes endow the as-constructed lithium-ion batteries with excellent cycling stability and high rate performance. During material synthesis, sulfur was successfully embedded into the tellurium matrix, which improved the overall capacity. Te x S y was characterized and verified as a micro/nano-structured material containing less Te and more S. Compared with the original pure Te particles, the capacity is greatly improved and the volume expansion is effectively suppressed. The assembled Li-Te x S y battery shows stable electrical contact, rapid lithium-ion transport and excellent electrochemical performance.
INTRODUCTION
Lithium-tellurium (Li-Te) batteries have attracted increasing attention owing to their high theoretical volumetric capacity (Liu et al., 2014; Ding et al., 2015; Li et al., 2017; Li G. et al., 2018; Yin et al., 2018; Wenjie Han et al., 2021), excellent electronic conductivity (He et al., 2017), and relieved shuttle effects compared with Li-sulfur and Li-selenium batteries (Yang et al., 2013; Eftekhari, 2017; Fan et al., 2019; Wang et al., 2020; Yu et al., 2020; Dai et al., 2021; Sun et al., 2021; Xiao et al., 2021). However, the huge volume expansion of Te severely limits its practical application in these newly emerged battery systems. Therefore, alleviating or eliminating this volume variation is of great importance for realizing the promising properties of Te. Since our first introduction of the Li-Te x S y battery, there has been hope of opening a new path to overcome the volume expansion challenge by incorporating sulfur inside the tellurium lattice. Although our prepared Li-Te x S y cathode materials were not perfectly composed of a pure Te x S y phase, in situ TEM observations demonstrated that this Te x S y phase is surprisingly stable and survives repetitive cycling without obvious volume variation. Many related works have tried to map the Te x S y phase diagram, covering compositions such as Te 0.92 S 0.08 , Te 0.04 S 0.96 and Te-n-S (where n represents the mass ratio) (Xu et al., 2018; Li et al., 2019a; Li et al., 2019b; Ge and Yin, 2019; Lee et al., 2019; Zhang et al., 2020). Sulfur incorporation leads to lattice distortion and d-spacing enlargement of the Te phase, endowing the composite Te x S y with fast transport of ions and electrons as well as excellent structural stability during lithiation/delithiation. Together with the superior electronic conductivity and enhanced reaction kinetics inherited from Te, Li-Te x S y batteries exhibit extraordinary energy storage performance and a bright outlook for next-generation battery systems. In this work, we have attempted to design new types of Te x S y phases and to fill some of the blanks in the Te x S y phase diagram by applying different kinds of sulfur sources during the nonlinear electrochemical synthesis of Te x S y (Li et al., 2019a).
The experimental results suggested that different sulfur sources give rise to distinguished lattice distortions of Te, and thus different types of Te x S y phases, among which, Na 2 S-derived Te x S y ball milled with multi-walled carbon nanotubes endows Li-Te x S y batteries profound volumetric capacity performance and high cycling stability. Synthesis of Te x S y Micro-nano Materials Sodium sulfide (Na 2 S·9H 2 O), tellurium ingot (Te) and sodium hydroxide (NaOH) were all purchased from Aladdin. The sintered Te rod, platinum wire and calomel electrode (Hg/ HgCl 2 ) was used as the working, counter and reference electrode, respectively. The three-electrode system was employed in an equilateral triangle manner with a distance of 1.8 cm. Before experiments, the working and counter electrodes were cleaned with ultrasonic cleaner (Branson 1510, United States) for 10 min, and then rinsed with distilled water. The temperature of reaction cell was maintained at 25.0°C. The electrochemical synthesis experiments were carried out at the CHI 660e Electrochemical Workstation (Shanghai Chenhua). A typical solution preparation is to dissolve 0.5 mol L −1 NaOH first, and then add 0.5 mol L −1 Na 2 S·9H 2 O (other sulfur sources with different concentrations were specified) to the solution to get a clear solution. The voltage window of 0-1.5 V was set by cyclic voltammetry (CV) with a scan rate of 0.1 mV s −1 , and the electrochemical reaction was carried out by 3 CV cycles. The black solid products were finally collected, cleaned and centrifuged, which was later identified as Te x S y micro-nano materials. Synthesis of Te x S y -C Nanocomposites The above as-prepared Te x S y materials were mixed with certain mass ratio of multi-walled carbon nanotubes (purchased from XFNANO, 50 µm in length, 8-15 nm in diameter, purity>95%) by using a ball milling machine (QM-3C, Nanjing University). After fully mixing for 20 h, the composites of Te x S y -multi-walled carbon nanotubes (Te x S y -C) were obtained. Characterization Scanning electron microscopy (SEM) was performed on a Nova Nanosem 200 system with an acceleration voltage of 15 kV. Transmission electron microscopy (TEM) and high resolution transmission electron microscopy (HRTEM) were conducted on JEM-2100F. Energy dispersive X-ray energy spectrum (EDX) and TEM measurements were performed simultaneously. Raman spectroscopy (INVIA, Renishaw, United Kingdom) was carried out at an ambient temperature with a 514 nm laser excitation. X-ray photoelectron spectroscopy (XPS) was performed in the spectrometer from Kratos axis Ultradld, using Mono Al Ka radiation power of 120 W (8 mA, 15 kV). X-ray diffraction (XRD) was tested by using a Cu-Ka radiation (A 0.15406 nm) on the Bruker D8 Advanced Diffractometer with a data acquisition range of 10°-80°and sweep rate of 0.02°s −1 . Thermogravimetric analysis (TGA) was performed on Perkin-Elmer PRIS1 TGA/Clarus SQ 8T at a heating rate of 5°C min −1 . Electrochemical Test The electrochemical properties of Te x S y -C nanocomposites were studied by using the 2025 coin battery on the Neware-battery testing system. The working electrode was prepared by pasted a mixture of 70 wt% Te x S y -C nanocomposites, 15 wt% acetylene black and 15 wt% polyvinylidene fluoride (PVDF) on the aluminum foil. The mass loading of the active material on the electrode was 1-2 mg cm −2 , and the lithium metal wafer was used as the counter electrode. 
The electrolyte contained 1 mol L −1 lithium bis(trifluoromethane)sulfonimide (LiTFSI) and 2% LiNO 3 in a mixture of 1,3-dioxolane (DOL) and 1,2-dimethoxyethane (DME) (volume ratio 1:1). The battery was assembled in a glove box filled with pure argon gas.
RESULTS AND DISCUSSION
In this work, we tried various sulfur sources to fabricate different types of Te x S y phases via a nonlinear electrochemical approach. By changing the actively reducing sulfur species used for the production of Te x S y phases, namely sodium dimethyldithiocarbamate (C 3 H 6 NNaS 2 ), thiourea (CH 4 N 2 S), sodium hydrogen sulfide (NaHS) and sodium sulfide (Na 2 S), distinctive micro/nano-structured Te x S y materials were engineered (Figure 1) via control of the nonlinear electrochemical dynamics (Supplementary Figure S2). The scanning electron microscope (SEM) images indicated that the presence of Na 2 S leads to a distinct morphology (flakes) compared with the other products (rods). More importantly, the Raman spectra in Supplementary Figure S3 revealed that the sulfur content was maximized in Te@Na 2 S, the notation used for Te x S y phases prepared with Na 2 S. The optimal concentration of Na 2 S for the construction of nano-flaked Te x S y phases was determined to be 0.5 mol L −1 . As the concentration of Na 2 S was increased, the nano-flaked Te x S y phases were broken into randomly downsized nanoparticles, as shown in Supplementary Figure S4. Surprisingly, the increasing concentration of Na 2 S also caused the Raman peak intensity of sulfur to exceed that of tellurium (Supplementary Figure S5). Transmission electron microscopy (TEM) characterization of the Te x S y material was obtained for a Na 2 S concentration of 2.0 mol L −1 . Figures 2A,B show that the downsized nanoparticles have very poor crystallinity; a typical lattice fringe emerges in a selected area, with a d-spacing of 0.334 nm corresponding to the Te (011) plane, indicating that the as-obtained materials are not composed of pure Te x S y phases. Further evidence comes from the XRD analysis in Supplementary Figure S5B, where the XRD peaks broaden as the concentration of Na 2 S increases, indicating that the incorporation of sulfur into the Te crystal lattice causes the poor crystallinity. Nevertheless, Te x S y phases dominate in the as-prepared materials, as strongly supported by the homogeneous distribution of the Te and S elements in Figure 2C. In order to further study the composition of the Te x S y component and the formation mechanism of the Te-S bond, the product synthesized with 2.0 mol L −1 Na 2 S solution was selected for XPS characterization (Figure 3). Figure 3A presents an XPS survey spectrum of the Te x S y components, showing Te and S as the main elements, and Figure 3B shows the corresponding high-resolution spectra. The electrochemically dissolved tellurium species are then chemically reduced by the different organic or inorganic sulfides used in this study to form Te x S y . The overall reaction follows an electrochemical-chemical (EC) reaction pathway, similar to the first two steps of our previous study. The distinct reducing abilities of the organic and inorganic sulfides enabled the self-assembly of Te x S y with different nano/micro morphologies and chemical compositions, providing Te x S y with varied physicochemical properties from which promising electrochemical performance can be sought.
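The HRTEM d-spacing quoted above can be cross-checked against the diffraction data in a straightforward way. The sketch below applies Bragg's law with the Cu Kα wavelength stated in the Characterization section (0.15406 nm) to estimate where a 0.334 nm spacing should appear in the XRD pattern; this is a generic consistency check added here for illustration, not part of the authors' analysis.

```python
import math

# Bragg's law consistency check (illustrative only): relate the HRTEM d-spacing
# of 0.334 nm (Te (011)) to the expected Cu K-alpha XRD peak position.
wavelength_nm = 0.15406   # Cu K-alpha, as given in the Characterization section
d_spacing_nm = 0.334      # lattice spacing measured by HRTEM

theta = math.asin(wavelength_nm / (2.0 * d_spacing_nm))   # first-order reflection (n = 1)
two_theta_deg = math.degrees(2.0 * theta)

print(f"expected 2-theta = {two_theta_deg:.1f} deg")      # ~26.7 deg
```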
However, compared with our previous study, the as-prepared Te x S y phases without any confinement from a carbon host failed to deliver promising electrochemical performance in lithium-ion batteries. As shown in Supplementary Figure S6, they exhibited high charge-transfer resistance, poor cycling stability and poor rate performance. Therefore, in this work we applied multi-walled carbon nanotubes (MWCNTs) as a carbon host to confine the Te x S y phases. Interestingly, ball milling of the Te x S y phases with MWCNTs gave the Te x S y (Na 2 S)/MWCNT composite an unusual thermal degradation behaviour: the Te x S y phases actually reacted with the MWCNTs, resulting in a two-stage thermal degradation of Te x S y (Supplementary Figure S8B). In comparison with Te x S y , the Te x S y -C possessed distinct thermal degradation kinetics, where lower Te x S y :C mass ratios such as 5:5 and 3:7 led to lower decomposition temperatures (∼630°C, as shown in Supplementary Table S1) than those of pure Te x S y and of the high Te x S y :C ratio of 7:3 (∼700°C), indicating that a rearrangement of the Te x S y phases occurred in the presence of appropriate amounts of MWCNTs. Afterwards, the Te x S y (Na 2 S)/MWCNT composite materials with different mixing ratios were further evaluated in battery performance tests, as shown in Figure 4. Figure 4A shows the rate performance of the Te x S y (Na 2 S)/MWCNT composites with different mixing ratios; it is worth noting that the material with a composite ratio of 5:5 has a relatively better rate performance than the other two materials. Electrochemical impedance spectroscopy (EIS) of the Te x S y (Na 2 S)/MWCNT composite batteries with different mixing ratios was also performed (Figure 4B). An increased amount of multi-walled carbon nanotubes worsened the charge transfer, suggesting an integrated interfacial effect between Te x S y and the MWCNTs. In addition, Figure 4C shows the charge-discharge curves of the lithium-ion battery with a mixing ratio of 5:5 at different current densities; the first charge-discharge curves indicate a relatively high initial Coulombic efficiency. Figures 4D-F show that the 5:5 Te x S y (Na 2 S)/MWCNT composite battery exhibits promising cycling behaviour at 1.0, 2.0 and 5.0 A g −1 , with a larger current density leading to better cycling stability, indicating a potential fast-charging application in rechargeable batteries. When the current density was set to 5.0 A g −1 , the first specific capacity was 406.56 mAh g −1 , and the capacity retention after 500 cycles was 45.03%. At current densities below 5.0 A g −1 the cycling performance was much worse than at 5.0 A g −1 , demonstrating that the Te x S y (Na 2 S)/MWCNT composite material is more suitable for high-rate operation in lithium-ion batteries. The as-prepared lithium-tellurium sulfide battery may thus help to overcome the low rate performance of lithium-sulfur batteries as well as the large volume expansion and low capacity of lithium-tellurium batteries.
CONCLUSION
In summary, we designed a promising electrochemical method to control the synthesis of Te x S y micro-nano structured composites, verified the formation mechanism and qualitatively evaluated the influence of the chemical composition on the battery performance.
The morphology and composition ratio of Te x S y (Na 2 S)/MWCNT were controlled by the types of sulfur sources, concentration and synthetic voltage. In addition, MWCNTs as an ideal carbon host were used for the confinement of the dissolution of tellurium and sulfur, which significantly improved the electrochemical performance of the Te x S y (Na 2 S)/MWCNT composited battery. The nonlinear electrochemical synthetic method and ball milling aftertreatments provide a new way for the sustainable development of high-performance Li battery manufacturing. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors. AUTHOR CONTRIBUTIONS JL, HJ, and SW designed the experiments. GL and CY performed the material synthesis, characterization and battery tests. GL analyzed the data and drafted the manuscript. JL made the major revision. All authors participated in discussions. FUNDING This work was supported by the National Natural Science Foundation of China (51872209, 51972239, and 52072273), the Zhejiang Provincial Natural Science Foundation of China (Z21E020002) and Natural Sciences and Engineering Research Council of Canada (NSERC).
Short Chain Fatty Acids Modulate the Growth and Virulence of Pathosymbiont Escherichia coli and Host Response
Short chain fatty acids (SCFA), principally acetate, propionate, and butyrate, are produced by fermentation of dietary fibers by the gut microbiota. SCFA regulate the growth and virulence of enteric pathogens, such as enterohemorrhagic E. coli (EHEC), Klebsiella and Salmonella. We sought to investigate the impact of SCFA on the growth and virulence of pathosymbiont E. coli associated with inflammatory bowel disease (IBD) and colorectal cancer (CRC), and their role in regulating host responses to bacterial infection in vitro. We found that under ileal conditions (pH = 7.4; 12 mM total SCFA), SCFA significantly (p < 0.05) potentiate the growth and motility of pathosymbiont E. coli. However, under colonic conditions (pH = 6.5; 65 to 123 mM total SCFA), SCFA significantly (p < 0.05) inhibit growth in a pH-dependent fashion (up to 60%), and down-regulate virulence gene expression (e.g., fliC, fimH, htrA, chuA, pks). Functional analysis reveals that colonic SCFA significantly (p < 0.05) inhibit E. coli motility (up to 95%), infectivity (up to 60%), and type 1 fimbria-mediated agglutination (up to 50%). In addition, SCFA significantly (p < 0.05) inhibit the activation of NF-κB, and IL-8 production by epithelial cells. Our findings provide novel insights on the role of the regional chemical microenvironment in regulating the growth and virulence of pathosymbiont E. coli and opportunities for therapeutic intervention.
Introduction
Short-chain fatty acids (SCFA), primarily acetate, propionate, and butyrate, are produced by microbial fermentation of undigested carbohydrates and dietary fibers [1,2]. The amount and type of SCFA in the intestine are influenced by dietary intake, particularly non-digestible carbohydrates, protein and fat [3,4], and the composition of the gut microbiota [3,5]. The main producers of SCFA are Firmicutes and Bacteroidetes, the two most abundant phyla in the human intestine [2,4]. Bacteroidetes produce mainly acetate and propionate, while Firmicutes produce butyrate [6,7]. Acetate is the most abundant SCFA in the gut and is produced from acetyl-CoA through glycolysis; butyrate and propionate are produced from both carbohydrate metabolism (glycolysis) and the metabolism of fatty acids.
Table 2 footnotes: a, NC101 induces IBD and cancer in IL10−/− mice; b, because of their cytotoxicity, these strains were not tested for AIEC characteristics.
2.1.1. Ileal SCFA (i-SCFA) Promote E. coli Growth
SCFA are detected in the distal ileum at total concentrations of 10 to 20 mM [10]. The pH in the distal ileum is 7.4 in health and disease [45,46]. The influence of SCFA on E. coli growth under ileal conditions (12 mM total SCFA, pH = 7.4; see Table 1) was examined in complex medium (Luria-Bertani, or LB) and chemically defined M9 medium (see Materials and Methods). To simulate the enteric luminal environment, bacteria were grown under microaerophilic conditions for 24 h at 37 °C. i-SCFA enhanced (p < 0.05) the growth of E. coli (Figure 1A,B) in both media: 5/9 strains in LB (Figure 1A) and 16/19 strains in M9 medium (Figure 1B). The degree of stimulation ranged from 10 to 60% and was strain specific. Three of the four mouse strains (CUMSL1, CUMSL6, and CUMT8) grew much better (>50%; p < 0.05) in the presence of i-SCFA compared to NaCl controls (Figure 1B). In addition, i-SCFA-stimulated E. coli growth was media independent, and unaffected by the origins and disease association of E. coli.
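Growth effects such as the 10-60% stimulation quoted above are compared in this study as areas under the OD600 growth curves (see Materials and Methods). The following minimal sketch shows one way such a comparison can be computed from raw plate-reader readings; it is an illustration only (the study used GraphPad Prism), and the trapezoidal integration and the example OD600 arrays are assumptions.

```python
import numpy as np

# Illustration only: the study computed growth-curve AUCs in GraphPad Prism;
# the trapezoidal rule and the synthetic OD600 readings below are assumptions.
time_h = np.arange(0, 24.25, 0.25)                               # OD600 read every 15 min for 24 h
od_control = np.clip(0.05 * np.exp(0.4 * time_h), 0.05, 1.2)     # hypothetical NaCl control culture
od_scfa    = np.clip(0.05 * np.exp(0.5 * time_h), 0.05, 1.5)     # hypothetical i-SCFA-treated culture

auc_control = np.trapz(od_control, time_h)                       # area under each growth curve
auc_scfa    = np.trapz(od_scfa, time_h)

percent_change = 100.0 * (auc_scfa - auc_control) / auc_control
print(f"growth change relative to NaCl control: {percent_change:+.0f}%")
```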
2.1.2. Colonic SCFA (c-SCFA) Inhibit E. coli Growth
Colonic concentrations of SCFA range from 60-150 mM in a healthy gut [9,10]. The physiological pH of the colon is between 5.6 and 6.7 [45,46]. To investigate the effect of SCFA on E. coli growth under colonic conditions, we evaluated total SCFA at a concentration of 123 mM (65 mM acetate, 29 mM propionate, and 29 mM butyrate) and pH = 6.5 to simulate the in vivo milieu [10]. In both complex (LB, Figure 2A) and chemically defined (M9, Figure 2B) media, c-SCFA inhibited the growth of E. coli (p < 0.05) from humans, mice, and dogs (except canine GC CUKD2 in M9 media, p = 0.194). The degree of inhibition varied by strain and was independent of media type. Non-pathogenic DH5α, canine GC-AIEC CUDC1 and CUDL1, and murine AIEC CUMSL1 and CUMSL6 were highly sensitive to c-SCFA (Figure 2B), with average growth inhibited >80% (p < 0.05) under the same conditions.
Figure 2. Colonic SCFA inhibit E. coli growth in vitro. E. coli were cultured in LB (A) or M9 (B) media containing either 123 mM NaCl (Control) or SCFA at pH = 6.5. Growth conditions were the same as for Figure 1. Data from three independent experiments; Mean ± SE. * p < 0.05; ** p < 0.01; *** p < 0.001; ns = not significant.
Further analysis with different C- and N-sources in an M9 background revealed that the inhibition of E. coli growth by c-SCFA was independent of the C- and N-source (data not shown).
2.1.3. Inhibition of Growth by c-SCFA is pH-Dependent
Colonic pH ranges from 5.6 near the cecum to 6.6 in the left colon vs. 7.4 in the distal ileum [46].
To determine the influence of regional pH on the inhibitory effect of c-SCFA on E. coli growth, we used buffered LB broth (containing 100 mM HEPES for pH = 7.4, 100 mM MOPS for pH = 6.5, or 100 mM PIPES for pH = 6.2, respectively) ± c-SCFA. At pH = 7.4, none of the 10 E. coli strains was inhibited by c-SCFA (Figure 3A); rather, 50% grew better in the presence of c-SCFA (p < 0.05) (Figure 3A). In contrast, the growth of pathosymbiont E. coli strains was inhibited (p < 0.05) at pH ≤ 6.5 (Figure 3B,C), with the exception of AIEC CU576-1 (p = 0.061) in medium at pH = 6.5 (Figure 3B). The growth of symbiont E. coli CUT75 was less affected at pH = 6.5 (p = 0.119) but was reduced to 79% of control (p < 0.01) at pH = 6.2 (Figure 3C).
c-SCFA Inhibit Virulence Gene Expression in E. coli
Based on their effects on E. coli growth (Figures 2-4), we speculated that c-SCFA may also impact virulence gene expression in pathosymbiont E. coli. We evaluated a panel of 11 virulence genes (Table 3) in CD-AIEC and CRC-pks−/+ E. coli ± c-SCFA. To achieve an adequate yield of bacteria for total RNA isolation (see Methods) at mid-log phase, we used SCFA at a sub-maximal inhibitory concentration (65 mM vs. 123 mM) with the same molar ratio (Table 1).
Virulence gene expression, determined by qRT-PCR (Table 3; primers in Table 4), showed that c-SCFA treatment down-regulated numerous virulence genes (p < 0.05), especially those associated with motility (fliC), adhesion and invasion (fimH, ompC, yfgL, and lpfA), stress (htrA) and genotoxicity (pks), for the majority of E. coli strains. For instance, the motility gene fliC was down-regulated in 11/15 E. coli strains (p < 0.05), and the adhesin gene fimH in 9/15 strains (p < 0.05) (Table 3).
SCFA Modulate E. coli Motility
Bacterial motility is involved in the virulence of pathogens [47] and directly correlates with the ability of AIEC to invade epithelial cells [28]. Recent studies of CD-E. coli have linked motility to the AIEC pathotype [28,48]. The flagellin protein FliC plays an essential role in bacterial motility. The functional consequence of the down-regulation of fliC by c-SCFA (Table 3) was evaluated by motility assays on soft agar containing different concentrations of total SCFA (Figure 5A-D). The non-motile, non-AIEC symbiont CUT75 was excluded from the analysis. The motility of CD-associated E. coli was reduced significantly (p < 0.05) compared to the NaCl controls at pH 6.5, even at the 60 mM level of c-SCFA (Figure 5A,D), with the exception of AIEC CU578-1 (p = 0.215). At 120 mM, c-SCFA reduced E. coli motility to <5% (Figure 5A,D), except for AIEC CU24LW-1 (<25%). Prototypical AIEC LF82 was greatly impacted (Figure 5A), with a >90% reduction of motility at 60 mM c-SCFA. We observed similar dose-dependent inhibition for CRC-E. coli (Figure 5B) under the same assay conditions. At 30 mM c-SCFA, more than 30% inhibition was obtained across all strains (Figure 5B), while the motility of CRC-E. coli was completely inhibited at 123 mM c-SCFA (data not shown). In contrast, i-SCFA stimulated (p < 0.05) E. coli motility (Figure 5C, and the left column in Figure 5D), particularly in AIEC CU576-1 and CU578-1. At physiological concentrations (6 to 12 mM), i-SCFA stimulated the motility of AIEC CU576-1 up to 1.5-fold, and that of CU578-1 up to 2.5-fold (p < 0.05).
c-SCFA Inhibit Type 1 Pili FimH-Mediated Yeast Agglutination
E. coli type 1 fimbrial protein, FimH, binds to mannose residues on the yeast cell surface and subsequently initiates yeast cell agglutination [49]. This activity of FimH also mediates the adherence and colonization of E. coli to intestinal epithelial cells [50][51][52] and stimulates TLR4 [53].
We visualized the functional consequences of c-SCFA on fimH gene expression with yeast agglutination assays. After pretreatment with 123 mM c-SCFA or NaCl (control) in LB broth at pH 6.5, E. coli were mixed with an equal amount of yeast cells. Pretreatment of E. coli with c-SCFA reduced agglutination (Table 5 and Figure 6) compared to NaCl controls. c-SCFA induced a 2-fold or greater reduction in yeast cell agglutination for the majority (5/8) of E. coli strains (Table 5). The biggest reduction was seen with CD-AIEC CU541-15 and CRC-pks− HM288 (Table 5; Figure 6).
* The yeast agglutination scores were based on the degree of aggregation in Figure 6.
Figure 6. c-SCFA inhibit yeast agglutination by E. coli. E. coli were grown in LB media (pH = 6.5) with either 123 mM NaCl (control) or SCFA at 37 °C for 18 h. After centrifugation, the cell pellets were resuspended in PBS and mixed with an equal volume of yeast suspension at the same optical density (OD600 = 5.2) (see Methods).
c-SCFA Inhibit E. coli Adhesion and Invasion of Intestinal Epithelial Cells
c-SCFA down-regulated a number of virulence genes (fimH, ompC, nlpL, and lpfA) involved in the process of adhesion and invasion [54][55][56][57] (Table 3), suggesting that c-SCFA would reduce the infection of intestinal epithelial cells by E. coli. In the presence of 65 mM c-SCFA, the ability of E. coli to adhere to and invade Caco-2 epithelial cells was significantly reduced (p < 0.05; Figure 7A,B). The impact of c-SCFA was greater (p = 0.0073) on invasion than on adhesion compared with controls (Figure 7A vs. Figure 7B), particularly for CD-AIEC CU524-2, CU541-1, CU541-15, CU576-1, CU578-1, and LF82. Under identical conditions, the average inhibition by c-SCFA was 71% for adhesion and 93% for invasion (Figure 7A vs. Figure 7B).
Figure 8. (A) HEK-Blue cells were infected with E. coli in the presence of 123 mM NaCl or SCFA at pH 6.5 (see Methods). At 24 h post infection, the supernatant of the infected cells was used to detect production of the reporter protein (secreted alkaline phosphatase, SEAP). (B) Caco-2 cells were infected by E. coli for 3 h in the presence of either NaCl or c-SCFA (65 mM) at pH = 6.5. The supernatant of infected Caco-2 cells was used for IL-8 detection by ELISA. Data from three independent experiments; Mean ± SE. * p < 0.05; ** p < 0.01; *** p < 0.001; ns = not significant.
c-SCFA Inhibit NF-κB Signal Transduction
The nuclear factor NF-κB is a family of regulators controlling multiple cellular inflammatory responses through signal transduction. It is present in all cell types and can be induced by pathogen infection [58]. To determine the effect of c-SCFA on host epithelial responses to E. coli infection, we used HEK-Blue KD-TLR5 cells to detect activation of the NF-κB pathway [59]. In the presence of 123 mM c-SCFA, the activation of the NF-κB pathway by most E. coli strains was inhibited (p < 0.05), except for the non-pathogenic CD-E. coli CUT75 and the pks-negative CRC-E. coli HM288 (Figure 8A). The inhibition ranged from 17 to 70% compared to controls for all strains except non-pathogenic E. coli CUT75, which was minimally able to activate NF-κB.
c-SCFA Inhibit IL-8 Secretion by Epithelial Cells
SCFA interact with intestinal epithelial cells and modulate immune responses in the gut [13]. IL-8 is a pivotal chemokine produced by gut epithelial cells during pathogen infection, and its production is downstream of the NF-κB signal transduction pathway [59].
We measured the levels of IL-8 produced by Caco-2 cells after infection by E. coli in the presence or absence of c-SCFA under colonic conditions. c-SCFA inhibited IL-8 secretion (p < 0.05) induced by all E. coli strains tested (Figure 8B), including CD-AIEC and CRC-associated E. coli. The inhibition ranged from 20 to 60%, depending on the strain (Figure 8B).
Discussion
Intestinal SCFA, predominantly acetate, propionate, and butyrate, are by-products of bacterial fermentation [1]. Concentrations of SCFA are 10-fold higher in the colon than in the ileum, with concordant differences in luminal pH of 7.4 in the ileum and 5.6-6.7 in the colon [9,10,45]. We sought to determine the effects of SCFA, in the context of region-specific variations in the chemical microenvironment (SCFA and pH), on the growth and virulence of pathosymbiont E. coli isolated from people with Crohn's disease (AIEC pathotype) and CRC (pks genotoxicity), dogs with granulomatous colitis (AIEC pathotype), and mice with intestinal inflammation (AIEC pathotype). We also examined the impact of SCFA on host-E. coli inflammatory responses. We found that SCFA affect the growth of pathosymbiont E. coli in a concentration- and pH-dependent fashion, with colonic [SCFA] at colonic pH suppressing growth, and ileal [SCFA] at ileal pH favoring growth. The effect was largely independent of E. coli pathotype, disease association, species of origin, and type of media, supporting a direct effect of SCFA and pH. The concentrations and proportions of SCFA and the pH levels we used to model the ileal and colonic microenvironment were selected to be physiologically relevant for people: i-SCFA (12 mM with a molar ratio of 8:2.5:1.5 for acetate, propionate, and butyrate, respectively), c-SCFA (60 to 123 mM with a ratio of 65:29:29 for acetate, propionate, and butyrate, respectively), and pH = 6.2-7.4 [9,10,40]. MIC values for acetate, propionate, and butyrate were 20 to 40, 10, and 10 mM, respectively. To further simulate the enteric microenvironment, E. coli were cultured under microaerophilic conditions at 37 °C. A mechanistic understanding of the interactions between the chemical microenvironment (e.g., SCFA, pH, bile acids), the resident microbiota, and host immune responses in the intact GI tract in health and disease remains to be elucidated. Lower fecal concentrations of butyrate and propionate in patients with IBD vs. controls [60], and of acetate in CD vs. UC [60], may reflect decreased microbial production, increased utilization, or a combination of these processes. Region-specific differences in the chemical and microbial microenvironment in the ileum and colon may underlie the phenotypic variation of IBD. Crohn's ileitis is consistently linked to dysbiosis characterized by an overabundance of E. coli and depletion of Firmicutes (e.g., Faecalibacterium prausnitzii) [18,61], and E.
coli with an AIEC pathotype have been more frequently isolated from ileal (36.4%) than colonic (3.6%) mucosa of CD patients in some studies [18,62]. Our finding that i-SCFA (12 mM, pH = 7.4) promote the growth of AIEC whereas c-SCFA (123 mM, pH = 6.5) suppress it, suggest region specific differences in the chemical environment may influence colonization by pathosymbiont E. coli. SCFA propionate and acetate have recently been implicated in regulating the growth, colonization, and virulence of AIEC [28][29][30]63]. Propionate-adapted AIEC LF82 more proficiently colonized the colon and ileum, but not the cecum, of mice fed propionate (at levels selected to simulating human gut: 20 mM) than non-propionate adapted LF82 [30]. Acetate utilization has also been linked to enhanced colonization by AIEC NRG857 (LF82-like, B2 O83) in mice, and E. coli from CD patients were better able to grow on acetate (K-acetate 0.4% w/v in M9 media), but not complex media, than E. coli from healthy controls [28]. In contrast to the growth enhancing effects of propionate and acetate, we found that propionate at ≥5 mM and acetate ≥10mM suppressed the growth of AIEC LF82 at colonic pH = 6.5. These differences may reflect the pH dependency of the effects of SCFA we observed, with stimulation of growth with c-SCFA at pH = 7.4 and repression at pH = 6.5. However, since the previous studies were conducted in mice, which have a mean intestinal pH of mice < pH = 5.2, and regional pH = 4.8-5.2 in the ileum, 4.4-4.6 cecum and 4.4-5.02 colon [64], substantially lower than that of humans and the conditions we simulated, it is difficult to reconcile these different outcomes. Differences in methodology, such as growth in the microaerophilic conditions and composition of media, may play a role. In addition to the effects of SCFA on bacterial growth, their regional concentration and composition throughout the gastrointestinal tract may serve as environmental cues that differentially regulate motility and virulence gene expression [15,65]. We found that i-SCFA (12 mM, pH = 7.4) promote the motility of pathosymbiont E. coli, whereas c-SCFA (120 mM, pH = 6.5) suppress motility, virulence gene expression, adhesion and invasion of cultured cells, and pro-inflammatory responses. Our findings parallel those with EHEC and Salmonella. For example, genes involved in EHEC flagella biosynthesis and motility are upregulated by SCFA simulating the small intestine, but down regulated by SCFA simulating the large intestine [15]. Similarly, the expression of virulence genes in Salmonella encoding invasion of epithelial cells and survival within macrophages are increased by ileal SCFA, but inhibited by colonic SCFA [65]. Transcriptional analysis of E. coli grown in c-SCFA revealed consistent down-regulation of virulence genes involved in motility (fliC), adhesion and invasion (fimH, ompC, lpfA, and nlpL), iron acquisition (chuA), stress protein (dsbA and htrA), and colibactin protein (pks). Reductions in transcription correlated with reduced functions, e.g., reduced fliC gene transcription with decreased motility; reductions in fimH, yfgL, ompC and nlpL with decreased yeast agglutination, adhesion and invasion of intestinal epithelial cells. c-SCFA also reduced the ability of CD-and CRC-E. coli induced activation of NF-kB, and the secretion of IL-8 by Caco-2 epithelial cells. 
NF-κB regulates a large array of genes associated with immune and inflammatory responses [58] and controls in part the secretion of IL-8, which is upregulated in patients with IBD and CRC [66,67]. Our findings support a direct role of the chemical microenvironment (SCFA, pH) in modulating crosstalk between pathosymbiont E. coli and the epithelium and pro-inflammatory signaling. The efficacy of SCFA against enteropathogens is exemplified by the use of propionate to suppress Salmonella associated disease in poultry [68,69]. In the context of the colonic environment (65-123 mM SCFA, pH = 6.5), we found that SCFA mixtures containing 29 mM propionate markedly suppressed parameters associated with virulence of pathosymbiont E. coli associated with intestinal inflammation across species. However, in the context of the ileal environment, we found that i-SCFA (containing 2.5 mM propionate, 8 mM acetate, 1.5 mM butyrate) stimulated growth and motility of pathosymbiont E. coli, including LF82. Previous studies have shown that propionate can enhance the ability of AIEC to adhere to (2/5 AIEC strains) and invade (3/5 AIEC) Caco-2 cells [30], and increase transcription of the eut operon and ability to utilize ethanolamine [29], which is linked to virulence in a number of enteropathogens [70,71]. Motility has been reported to correlate with the degree of invasion in vitro by AIEC, murine colonization by AIEC NRG857, and the isolation of E. coli from CD patients vs healthy controls [28]. These findings point to complex multifactorial interactions of the metabolism, growth, and virulence of pathosymbiont E. coli, region specific luminal microenvironment and host. Our investigations of the effects of SCFA on virulence extended to the genotoxic effects of E. coli associated with CRC. The polyketide synthase gene (pks) is responsible for the formation of colibactin, which is mutagenic [35,72]. We found that c-SCFA down-regulate pks transcription in E. coli NC101, which induces inflammation-associated CRC in mice [35], and E. coli isolated from patients with CRC [43]. These findings suggest that the chemical environment of the healthy colon may restrict the ability of pks+ CRC-E. coli to grow and produce colibactin. Recent studies in patients with CRC reveal a correlation between the loss of Bifidobacterium and reduced levels of total SCFA, especially butyrate, in CRC patients vs healthy controls [73]. While it remains to be established if reduced SCFA leads to proliferation and colibactin production by E. coli, it suggests the potential for therapeutic intervention with SCFA, which are known to be protective against the development of CRC [2,12]. The mechanism by which SCFA inhibit bacterial growth is postulated as intracellular acidification caused by uptake of these free acids at acidic pH [17,74]. At acidic pH, undissociated SCFA can freely diffuse through cell membrane and concentrate in bacterial cytoplasm, resulting in reduction of the intracellular pH [17]. Acidified bacterial cells have reduced transmembrane potentials and disrupted cellular biological activities (such as DNA replication), thereby exhibiting low growth phenotype. This explains the results that c-SCFA inhibit E. coli growth only at pH ≤ 6.5, but not at pH = 7.4. Sorbara et al. reported that E. coli and Klebsiella failed to replicate at internal pH = 7 or 7.25, respectively, and an internal pH = 6.75 or 6.5 is bactericidal [17], indicating the importance of luminal pH of the host in bacterial fitness. 
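The intracellular-acidification argument above can be made quantitative with the Henderson-Hasselbalch relation. The short sketch below, added here as an illustration and not taken from the paper, estimates the fraction of each SCFA present as the undissociated, membrane-permeant acid at the ileal and colonic pH values used in this study; the pKa values are standard textbook figures and are assumptions. The roughly eight-fold larger undissociated fraction at pH 6.5 compared with pH 7.4 is consistent with growth inhibition appearing only at pH ≤ 6.5.

```python
# Illustrative estimate (not from the paper): fraction of each SCFA present as
# the undissociated, membrane-permeant acid, via the Henderson-Hasselbalch
# relation  [HA]/([HA] + [A-]) = 1 / (1 + 10**(pH - pKa)).
# The pKa values below are standard textbook values and are assumptions here.
PKA = {"acetate": 4.76, "propionate": 4.87, "butyrate": 4.82}

def undissociated_fraction(pka: float, ph: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (7.4, 6.5):            # ileal vs. colonic pH used in the study
    for name, pka in PKA.items():
        frac = 100.0 * undissociated_fraction(pka, ph)
        print(f"pH {ph}: {name:10s} {frac:.2f}% undissociated")
```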
In the same report [17], the authors also found that at concentrations ≥10 mM, acetate, butyrate, and propionate (>10 mM) were able to reduce the intracellular pH of K. pneumonia and E. coli to pH < 6.7 at low medium pH (pH = 5.75), which resulted in slow growth of these bacteria. These results raise the speculation that when the levels of c-SCFA are reduced due to inflammation-associated loss of commensal bacteria, the luminal pH would rise, and ultimately the luminal microenvironment would change concurrently. These changes could be in favor of pathosymbiont (like AIEC) growth and virulence gene expression, and consequently potentiate inflammation in the intestine. Bacterial Strains In this study, we used 22 E. coli strains from different origins, including 1 laboratory strain (DH5α), 13 human, 4 mouse, and 4 dog strains ( Table 2). The human strains were from the intestinal mucosa of patients with IBD (9 strains) [32,41,42] and CRC (4 strains) [72]. All the IBD strains, except CUT75, have an AIEC pathotype. CUT75 is a non-pathogenic strain from a CD patient [32] and was used as a non-AIEC control in this study. Prototypical AIEC LF82 (kindly provided by Arlette Darfeuille-Michaud) [40] was used as a positive control for AIEC. The 4 CRC-associated E. coli strains ( Table 2) were kindly provided by Dr. Jonathan Rhodes [72]. The mouse strain NC101 was isolated from the feces of a healthy mouse, and it induces CRC in IL10-/-AOM treated and monocolonized mice [33,74]. CUMSL1 and CUMSL6 were isolated in our laboratory from Agr2 −/− mouse ileum provided by Dr. Steven Lipkin. CUMT8 was isolated from mouse ileitis tissue in our laboratory [25]. The 4 dog strains CUDC1, CUDLU1, CUKD1 and CUKD2 were isolated in our laboratory from dog colons with granulomatous colitis (GC) [24,38]. All E. coli strains were stored at −70 • C. Bacterial Culture A single colony from a fresh Luria-Bertani (LB) agar plate was used to prepare a stock culture for each experiment. Two types of liquid culture media were used in this study, LB and M9 minimal media. M9 minimal medium was made of 1 × M9 salts ( (20 mM) was used as the carbon source. All E. coli strains were first grown in LB broth overnight, then diluted 1:100 in media (LB or M9) supplemented with SCFA or NaCl at concentrations specified under each condition. For all controls, equal amount of NaCl was added in place of the SCFA used in the experiments described. Chemicals and Stock Solutions Sodium acetate, sodium propionate, sodium butyrate, and M9 salts were purchased from Sigma-Aldrich (St. Louis, MO). For consistency and reproducibility, total SCFA solutions were premixed as 1 M stock solutions with a molar ratio of 65:29:29 for acetate, propionate, and butyrate, respectively for c-SCFA, or 8:2.5:1.5 for acetate, propionate, and butyrate, respectively for i-SCFA. Standardized Growth Analysis E. coli were grown in LB broth overnight at 37 • C with shaking. Overnight cultures were diluted 1:100 into fresh LB broth or M9 medium containing NaCl (control) or SCFA at specified concentration in a 100 well-plate (Growth Curve, USA). On top of the growth medium in each well, 75 µL of mineral oil was gently added to achieve a microaerophilic growth environment. Growth of E. coli was monitored Antibiotics 2020, 9,462 15 of 20 24 to 48 h at 37 • C in a BioScreen C system (Growth Curve, USA). The OD 600 was taken every 15 min by the machine. Growth curves were generated with OD 600 as the function of time. 
For easier comparison between SCFA-treated and untreated control samples, the area under each growth curve (AUC) was calculated with Graphpad Prism7.03. Transcriptional Analysis of Virulence Genes E. coli was grown in media with either c-SCFA (65 mM) or NaCl (65 mM, Control) to mid log phase. Total RNA was extracted using the Qiagen RNAProtect-RNeasy Kit per manufacturer's protocol. Total RNA was treated with TURBO DNA-Free Kit (Ambion), followed by a two-step qRT-PCR analysis, using Qiagen's QuantiTect Reverse Transcription Kit and QuantiNova SYBR Green PCR Kit. Eleven genes associated with virulence of IBD-and CRC-associated E. coli (see Table 3) were selected for transcriptional analysis with or without SCFA treatment. Primers for these virulence genes are listed in Table 4. E. coli mdH was used as the reference gene. Each qPCR reaction contained 1 µL of cDNA, 0.7 µL of each forward and reverse primers (10 µM), 5 µL of 2× SYBR Green Master Mix, 1 µL of QN ROX Reference Dye and 2.3 µL of nuclease-free water to make the total volume of 10 µL. The reaction was run with ABI7000 (Applied Biosystems). The comparative quantification (∆C t ) method was used to determine the up-or down-regulated genes. The relative change of a targeted gene expression was calculated by using the equation RQ = 2 −∆∆CT . Motility Assay E. coli was grown overnight at 37 • C in LB broth. Soft agar plates (1% tryptone, 0.5% NaCl, 0.25% agar) were prepared the day before assay. Sterile 1 M SCFA or NaCl stock solution was added into the agar right before pouring the plates. The control plates contain the same amount of NaCl as the total SCFA. There were three replicates per treatment. The overnight cultures of E. coli were transferred (3 µL) on to the center of each plate, followed by incubation of the plates at 37 • C for 10 h. E. coli motility was quantified by measuring the diameter of the circular swarming area formed by the growing motile bacteria. E. coli T75 and HM334 were found to be non-motile and excluded from this assay. Yeast Agglutination Assay E. coli was cultured in LB ± 123 mM c-SCFA for 18 h at 37 • C. After centrifugation at 3000× g for 15 min at 4 • C, the pellet was resuspended in PBS at OD 600 = 5.2. Yeast cells were suspended in PBS at OD 600 = 5.2 on the day of assay. The E. coli suspension (100 µL) was mixed with equal volume of the yeast cell suspension in a well of 48-well plate. The plate was kept on ice and rocked for 30 to 60 min at 20 rpm. E. coli Adhesion and Invasion of Cultured Epithelial Cells E. coli was cultured overnight in LB at 37 • C with shaking. Bacterial pellets were re-suspended in PBS before dilution in cell culture media ±65 mM c-SCFA to an m.o.i (multiplicity of infection) of 10. Caco-2 cells were infected with bacteria using the same procedures as described by Zhang et al. [59]. At 3 h post infection, cells were washed 3× with PBS, and lysed with 1% Triton X-100. Serial dilutions of the lysates were made in PBS and plated on LB agar. The total number of colonies recovered was used to calculate the number of adherent bacteria. For invasion assays, cells were treated with gentamicin (100 µg mL −1 ) for one hour after initial infection and 3× wash with PBS to kill extracellular bacteria. Cells were then washed 3× after gentamicin treatment, lysed, and plated as described above. NF-kB Activation Assay HEK-Blue KD-TLR5 cells were used to detect the induction of NF-kB by E. coli infection, as previously described by Zhang et al. [59]. 
Briefly, cells were seeded in 96-well plates at a density of 5 × 10 4 cells per well. E. coli was diluted into fresh cell medium containing either 123 mM NaCl (control) or SCFA at an m.o.i of 200 as 10× inocula, followed by addition of this inoculum (10 µL) into each well containing 100 µL of medium for a final m.o.i of 20. At 3 h post infection, the cell medium was carefully removed from each well, and replaced with 100 µL of fresh medium containing gentamycin (200 µg mL −1 ). At 24 h post infection, the spent medium was collected, and centrifuged at 12,000 rpm for 5 min to remove any particulate matter. QUANTI-Blue Kit (InvivoGen, San Diego, CA, USA) was used to detect the reporter protein SEAP (secreted alkaline phosphatase) following the manufacturer's instructions. The SEAP activity was detected as optical density at 620 nm. Proinflammatory Cytokine IL-8 Secretion Supernatants of Caco-2 (at 3 h post infection) cell cultures were collected and centrifuged to remove any cells or cell debris. The concentrations of IL-8 secreted by Caco-2 cells were analyzed by ELISA methods, using the Human IL-8 Antibody Pair Kit (Invitrogen) as per the manufacturer's instructions. Statistical Analysis Differences in growth, gene expression, motility, adhesion, invasion, and cytokine production between control and SCFA-treated samples were analyzed by 2-way ANOVA with Dunnett's test for multiple comparisons. All statistical analyses were performed with GraphPad Prism 7.03 software and p < 0.05 was considered significant. Conclusions In conclusion, our data reveal a multifaceted and previously unrecognized role of the regional chemical microenvironment (SCFA and pH) on growth and virulence of IBD-and CRC-associated E. coli, and on pro-inflammatory pathosymbiont-host interactions. Our findings provide novel insights and opportunities for therapeutic intervention in people and companion animals centered on restraining the growth and virulence of pathosymbiont E. coli through modification of the luminal SCFA and pH.
9,269.4
2020-07-30T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Enhanced bit error rate in visible light communication: a new LED hexagonal array distribution Due to the exponential growth of mobile devices and wireless services, a huge demand for radio frequency has increased. The presence of several frequencies causes interference between cells, which must be minimized to get the lower bit error rate (BER). For this reason, it is of great interest to use visible light communication (VLC). This paper suggests a VLC system that decreases the BER with applying a new LED distribution with a hexagonal shape using different techniques: on–off keying (OOK), quadrature amplitude modulation, and frequency reuse concept to mitigate the interference between the reused frequencies inside the hexagonal shape. The BER is measured in two scenarios line of sight (LoS) and non-line of sight for each technique that we used. The recommended values of BER in the proposed model for soft frequency reuse in case of LoS at 4, 8, and 10 dB signal to noise ratio, are 3.6 × 10–6, 6.03 × 10–13, and 2.66 × 10–18, respectively. Introduction VLC is a new technology in optical communications and is a green power to use. It is now required in this era for smart cities and smart homes in case of implementation in indoor and outdoor environments. In this paper, the focus is on the indoor only. VLC has numerous features. It prevents damage to the human body unlike the wireless communication. The Radio Frequency (RF) is harmful on the human body, where the RF radiation is absorbed by the body in a large enough amount. It can produce heat that leads to burn and causes a body tissue damage. Also, it can cause cancer due to magnetic wave radiation. On the other hand, the VLC uses LEDs, where their light radiation is a non-harmful source for the human body, yielding no human risk. VLC also provides the security needed to be implemented indoor (Haas et al. 2016). Using VLC is the main purpose of this paper to avoid frequency overlapping, where the user having own phone and is moving between two cells, he can find an interruption for the established communication (Haas et al. 2016). To mitigate the interference between users, it is a main objective to decrease the BER while the handover technique is presented, in this paper, by using the FR concept which is divided into two types: SFR and FFR. Both of these techniques make the frequency overlapping too low and beside the proposed LED distribution array for the hexagon array it makes it efficient without frequency overlapping and to get the higher signal to noise ratio (SNR) (Mahfouz et al. 2018;Khadr et al. 2019). In this paper, we propose a four techniques: OOK, QAM, SFR and FFR to improve the BER by using a hexagonal array. Each cell has two FoVs and has four frequencies: F1, F2, F3 and F4. We will use it, in each cell, to reduce the overlapping between two cells while the user is moving between two cells or more as shown in Fig. 1 (Mahfouz et al. 2018;El-Garhy et al. 2019). In addition, adequate communication and illumination to multiple roaming devices must be achieved using VLC technology for a large indoor environment. Multiple LEDs need to be installed which act as access points (APs) in the ceiling of a room. The existence of the interference inside the cell, especially in adjacent cells, degrades SNR available to a cell-edge user. This significantly affects seamless wireless service and leads to high outage probability consequently; illumination is required in the cell (Khadr et al. 2019;El-Garhy et al. 2019). 
However, inside the two cells there is an inner side and an outer side. The main priority is to give maximum coverage at the edge of the cell if the user is moving, and a soft handover is needed to avoid interruption. The block diagram in Fig. 2 illustrates each process of the VLC system (El-Garhy et al. 2019). In Fig. 2, a typical single-user VLC system is shown, where the transmitter typically consists of the channel encoder and a modulator followed by the optical front end. The electrical signal modulates the intensity of the optical carrier to send the information over the optical channel. At the receiver, a photodiode receives the optical signal and converts it into an electrical signal, followed by the recovery of the data (Shinwasusin 2015; Chowdhury et al. 2018). In VLC, this is achieved by fast switching of the light on and off (also called OOK), where bit 0 is OFF and bit 1 is ON (Shinwasusin 2015). In previous work, the BER issue in VLC was addressed by creating circular cells only, with four-frequency reuse, which causes an interruption in transferring data between the sender and the receiver (Huynh et al. 2012). The remainder of the paper is organized as follows. In Sect. 2, the VLC communication system is illustrated using a new distribution array with a hexagonal shape and the FR concept. Then, in Sect. 3, the simulation results for the BER in the LoS and Non-LoS cases are displayed and discussed using MATLAB. Finally, the paper is concluded in Sect. 4. System model The system of interest is constructed for the indoor environment to have satisfactory coverage for this model. The entire coverage area depends on the presented model, which in turn depends on the design of the LED lighting. This area is divided into seven cells forming a hexagonal shape for the array of lighting in the ceiling. The proposed hexagonal shape is implemented, in this paper, to achieve the best results for BER with different techniques. In Fig. 3, there are three cells, and they use one frequency in the center of each circular cell. There is also a gap between the three cells, represented by a triangle with a red color, which may cause interruption to the data transfer if the user moves between all these cells on the ground. This operation is called the handover frequency (Melikov 2011). The handover frequency is used to implement a hexagonal shape for the LED array as illustrated in Fig. 4 (Sharma 2011, 2014). This shape helps in obtaining the best result for transferring data, including the BER and the SNR. In Fig. 4, there are three cells. They use one frequency in the center of each hexagonal cell. There is no gap between the three cells. This enhances the data transfer if the user moves between all these three cells on the ground (Novlan et al. 2010). This hexagonal cell enhances the BER in order to avoid data interruption (Sharma 2014). As shown in Fig. 1, we have developed an indoor VLC simulation model in a 5 × 5 × 3 m³ room as in Nguyen et al. (2010) and Yang et al. (2009). The hexagonal array (Sharma 2010, 2014) has seven cells. Each cell has two FoVs. The outer shape is the hexagonal shape and the inner FoV inside the hexagonal shape is a circular cell in the center. Four frequencies are implemented in each cell. The outer side contains F1, F2, F3, and F4. 
The inner FoV as a circular array has F0 (Mahfouz et al. 2018; Khadr et al. 2019). By creating all these frequencies in all these cells, FR is implemented to achieve the best possible result by using SFR and FFR with the hexagonal array (Sharma and Dewangan 2014). The modulation techniques implemented with this array are OOK, QAM, and the FR concept. Our proposed model aims to enhance the BER with a higher SNR in the designed room compared to the model in Huynh et al. (2012). The simulation parameters are listed in Table 1. The number of LEDs per array is 60 × 60 for the outer-FoV hexagonal array and 5 × 5 for the inner FoV, as illustrated in Fig. 4. The FoV for the cell center reaches up to 60° and up to 70° for the cell edge to cover all users at the cell edge. The area of the photodetector (PD) is 1 cm² and the proposed number of users in the room is 35, divided into 28 users at the cell edge of the designed room and 7 users at the cell center, with a carrier frequency of 700 THz. The concentrator FoV is 60° and the thermal noise is −160 dBm/Hz (Huynh et al. 2012). As shown in Fig. 5, the VLC channel has two different types of paths: LoS and Non-LoS. The mathematical derivation of the LoS and Non-LoS channel links with the reflected path and the total received power calculation is explained briefly as follows. The received signal Y(t) from the PD is represented by (Huynh et al. 2012; Yang et al. 2009)

Y(t) = R X(t) * h(t) + N(t),

where R is the photodetector sensitivity, * is the convolution operator, h(t) is the channel impulse response, X(t) is the input optical power from the LED to the PD with X(t) ≥ 0, and N(t) is the additive white Gaussian noise (AWGN). The optical transmitted power P_t is calculated as the time average of X(t) (Huynh et al. 2012). By definition, the received optical power P_r at the PD is

P_r = H(0) P_t,

where H(0) is the channel direct-current (DC) gain (Novlan et al. 2011). For the LoS link,

H(0) = ((m + 1) A / (2πD²)) cos^m(ϕ) T_s(ψ) g(ψ) cos(ψ), for 0 ≤ ψ ≤ ψ_C,

where A is the PD area and D is the distance between transmitter and receiver (Huynh et al. 2012). The order of Lambertian emission m is

m = −ln 2 / ln(cos ϕ_1/2),

where ϕ_1/2 is the transmitter FoV semi-angle at half power, ϕ is the angle of irradiance, ψ is the angle of incidence, T_s(ψ) is the signal transmission coefficient of the optical filter, g(ψ) is the gain of the optical concentrator of the PD, and ψ_C is the receiver FoV (Huynh et al. 2012; Ghassemblooy et al. 2013). The concentrator gain g(ψ) is given by

g(ψ) = n² / sin²(ψ_C) for 0 ≤ ψ ≤ ψ_C, and g(ψ) = 0 for ψ > ψ_C,

where n is the refractive index that determines the gain of the medium. To calculate the SNR, we take into consideration the received signal power (Huynh et al. 2012; Ghassemblooy et al. 2013)

S = R² P_r,signal², (7)

and the SNR is expressed as SNR = S / σ_total². The variance of the Gaussian noise, σ_total², is the sum of the shot noise, the thermal noise and the intersymbol interference arising from the optical path difference (Huynh et al. 2012):

σ_total² = σ_shot² + σ_thermal² + R² P_rISI², (8)

σ_shot² = 2qR (P_r,signal + P_rISI) B_en + 2q I_bg I_2 B_en, (9)

σ_thermal² = (8πk T_k / G) C_pd A I_2 B_en² + (16π²k T_k E / g_m) C_pd² A² I_3 B_en³, (10)

where q is the electronic charge, B_en is the equivalent noise bandwidth, I_bg is the background current, I_2 is the noise bandwidth factor, k is Boltzmann's constant, T_k is the absolute temperature, G is the open-loop voltage gain, C_pd is the fixed capacitance of the photodetector per unit area, E is the field-effect transistor (FET) channel noise factor, g_m is the FET transconductance, I_3 = 0.868, and P_rISI is the received power due to intersymbol interference, obtained from the part of the impulse response falling outside the symbol duration (Huynh et al. 2012; Ghassemblooy et al. 2013). Then, the SNR in its final form is (Huynh et al. 2012)

SNR = R² P_r,signal² / (σ_shot² + σ_thermal² + R² P_rISI²).

Accordingly, the BER for OOK is (Chowdhury et al. 2018)

BER = Q(√SNR). (13)

Our modulation techniques are then applied on top of the input channel DC gain to obtain the BER for the hexagonal shape. Considering the power due to the Non-LoS paths, the DC channel gain of the reflected path is given in Chowdhury et al. (2018); it depends on the angle of irradiance to the wall, the angle of irradiance from the reflective area of the wall towards the receiver, the distance D_1 between the transmitter and the wall, the distance D_2 between the wall and a point on the receiving surface, and the size dA_wall of the reflective area. Now, if we consider both multipath propagation and the LoS component, for the more general case, the total received power P_r is the sum of the LoS contribution and the contributions collected over the reflected paths (Chowdhury et al. 2018), and the SNR follows from the same expression as above. BER with M-QAM M-QAM orthogonal frequency division multiplexing (OFDM) can be employed to enhance the transmission capacity. This subsection discusses the spectral efficiency of a single-user VLC system employing M-QAM OFDM (Fig. 6). The original data are first processed by serial-to-parallel (S/P) conversion and mapped onto an M-level QAM constellation before being transformed by the Hadamard block, which applies the Hadamard transform and reduces the correlation of the input data sequences (Haitham et al. 2018). Hermitian symmetry is imposed with the inverse fast Fourier transform (IFFT) to obtain a real-valued signal (Haitham et al. 2018). When applying the IFFT, oversampling is used to make the amplitude distribution of the discrete IFFT output signals close to that of continuous signals. After the parallel-to-serial (P/S) operation, the cyclic prefix (CP) is added to eliminate the intersymbol interference. Because the OFDM time-domain signal must be both real and positive, a DC bias is introduced. Intensity modulation (IM) is employed at the transmitter: the forward signal Y(t) drives the LED, which converts the magnitude of the input electric signal Y(t) into optical intensity. At the receiver side, direct detection (DD) is used, and the BER is evaluated via the Q-function Q(·). The values of M in M-QAM are 8, 16, 64 and 256; increasing M offers a higher data rate, but the BER also rises with M, so the BER needs to be enhanced as the M-QAM order grows. Fractional frequency reuse (FFR) FFR was introduced to handle inter-cell interference (ICI) in OFDMA-based wireless networks. Here, we use this technique in optical networks, specifically in our proposed model for indoor VLC, where the frequencies used in the cell center region are partitioned from the frequencies used at the cell edge. This reduces the ICI because cell border regions use orthogonal frequencies (Novlan et al. 2010; Svahn et al. 2019). The use of FFR in optical networks improves the data rate and coverage for cell-edge users as well as the overall network throughput and spectral efficiency (Melikov 2011; Novlan 2011).
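As a numerical illustration of the LoS link budget developed above (Lambertian order m, channel DC gain H(0), concentrator gain g(ψ), and the OOK bit error rate Q(√SNR)), the following Python sketch evaluates these expressions for one transmitter-receiver geometry. The geometry, emitter power, responsivity and noise variance are illustrative assumptions and are not the Table 1 simulation parameters of the proposed model.

```python
# Minimal numerical sketch of the LoS link budget and OOK BER expressions above.
# Geometry, emitter power, responsivity and noise variance are illustrative assumptions,
# not the Table 1 simulation parameters of the proposed model.
import math

def lambertian_order(phi_half_deg: float) -> float:
    """m = -ln(2) / ln(cos(phi_1/2))."""
    return -math.log(2.0) / math.log(math.cos(math.radians(phi_half_deg)))

def los_dc_gain(area_pd, distance, phi_deg, psi_deg, psi_c_deg, m, ts=1.0, n=1.5):
    """H(0) = (m+1)A/(2*pi*D^2) * cos^m(phi) * Ts(psi) * g(psi) * cos(psi), for psi <= psi_C."""
    if psi_deg > psi_c_deg:
        return 0.0
    g = n ** 2 / math.sin(math.radians(psi_c_deg)) ** 2      # concentrator gain g(psi)
    return ((m + 1) * area_pd / (2 * math.pi * distance ** 2)
            * math.cos(math.radians(phi_deg)) ** m * ts * g * math.cos(math.radians(psi_deg)))

def ook_ber(snr_linear: float) -> float:
    """BER = Q(sqrt(SNR)), with Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear) / math.sqrt(2.0))

m = lambertian_order(60.0)                            # 60 deg semi-angle at half power
h0 = los_dc_gain(area_pd=1e-4, distance=2.0, phi_deg=20.0, psi_deg=20.0, psi_c_deg=60.0, m=m)
p_r = h0 * 10.0                                       # received power for an assumed 10 W emitter
snr = (0.5 * p_r) ** 2 / 1e-9                         # R = 0.5 A/W, assumed total noise variance
print(f"m = {m:.2f}, H(0) = {h0:.3e}, SNR = {10 * math.log10(snr):.1f} dB, BER = {ook_ber(snr):.2e}")
```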
In the proposed model, for a user served by the LED light as a source, the associated signal-to-interference-plus-noise ratio (SINR) is the ratio of the received power from the serving source to the total noise plus interference power (Nguyen et al. 2010):

SINR_y = P_t h_ty D_ty / (σ_total² + Σ_{x∈X} P_t h_xy D_xy),

where y is a user, x is an LED source, P_t is the transmitted power from the source, h_ty is the channel fading power, D_ty is the path loss between the user and the LED source, σ_total² is the total noise power, and X represents all the interference from the LED arrays. Then, the BER is given by Eq. (13). Soft frequency reuse (SFR) The SFR is used in the proposed model to obtain accurate data transmission and reception in the indoor VLC. It relies on each of the seven cells per cluster being partitioned into a center zone and an edge zone. The frequency band is divided into four sub-bands. The edge zones of the cells in a cluster are allocated to different sub-bands, and the center zone uses the sub-band selected for the neighboring cell's edge zone (Novlan 2010). While SFR is more bandwidth efficient than FFR, it results in more interference to both cell-center and cell-edge users, and the SINR is given in Novlan (2010), where Q_i consists of all interfering base stations transmitting to cell-interior users on the same sub-band as user y, Q_e consists of all interfering base stations transmitting to cell-edge users on the same sub-band, and a power control factor β ≥ 1 is introduced to the transmit power P_t to create different power classes, P_interior = P for the cell center and P_exterior = βP for the cell-edge user. Similar and detailed analyses are performed in the literature concerning the phase distribution of a hexagonal array at fractional Talbot planes (Guo et al. 2007), LED diffused-transmission freeform surface design for uniform illumination (Zhu 2019), and the software design of SMD LEDs for homogeneous distribution of irradiation in a dark-room model (Liner et al. 2014). Simulation results Our proposed model uses a new LED distribution with a hexagonal shape, dividing the area into the seven cells of Fig. 4. Figure 7 displays the SNR per single LED for the designed room dimension (5 × 5 × 3 m³), showing an improved SNR distribution. The maximum SNR at the peak is 38.9 dB, which is favourable for our proposed model in order to obtain an improved BER. Figure 8 displays the luminance of the proposed model for the hexagonal array shown in Fig. 1, giving good coverage to the end user. We apply OOK to the proposed model in two scenarios, LoS and Non-LoS, to check the BER after applying the LED distribution with the hexagonal shape of Fig. 1. In Fig. 9, we present the BER of our proposed model in the LoS case using OOK; as the SNR increases, the BER decreases. Figure 10 shows the BER of our proposed model in the Non-LoS case using OOK; again, as the SNR increases, the BER decreases. The BER at different values of SNR is summarized in Table 2, which also compares the LoS and Non-LoS scenarios. As expected, the LoS case achieves a better (lower) BER. After applying OOK, we applied the M-QAM technique with M = 8, 16, 64 and 256 to obtain the BER in the LoS and Non-LoS cases. In Fig. 11, the BER is plotted for the LoS case at different M-QAM values to show the enhancement; the results are summarized in Table 3. In Fig. 
12, the BER is shown in case of Non-LoS at different M-QAM values. in LoS and Non-LoS, the final proposed schemes for SFR and FFR are presented in Figs. 13 and 14, respectively. In Fig. 13, we have BER for the LoS scenario for the FR concept (for SFR and FFR). The result is enhanced after applying the hexagonal array using FR as compared to the previous work (Huynh et al. 2012). Table 5 shows the difference between the SFR and FFR at the same values of SNR. Figure 14 shows the FR concept for SFR and FFR BER for the Non-LoS scenario under reflection and is summarized in Table 6. Finally, one can conclude that this paper presents the best results shown in the following Conclusions VLC is a promising technology for the near future that uses LEDs for illumination and communication simultaneously. In order to achieve the best results for BER, the higher SNR is required. In this paper, the performance of BER was evaluated with different techniques in two scenarios LoS and Non-LoS using hexagonal array to mitigate the interference between cells. The optimal results in this proposed model were found in SFR specifically in LoS unlike the Non-LoS and the BER in case of LoS was decreased by 47% compared in case of Non-LoS due to a reflections. The BER performance for SFR in case of LoS for the proposed model and comparing it to the previous work (Huynh et al. 2012) the BER is enhanced by 90%. OOK modulation technique in the proposed model was measured in LoS and Non-LoS and the BER is enhanced by 70% compared to the Non-LoS state. For M-QAM modulation technique in the proposed model was measured at different values of M and at different SNR values in the previous scenarios: LoS and Non-LoS. The BER in case of LoS is enhanced by 60% compared to the Non-LoS state. To conclude the optimal result is found in SFR specifically in case of LoS due to applying the hexagonal shape and using FR concept to have a good result for BER. It is recommended for the future work to implement it in a practical way. Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). The authors have not disclosed any funding. Conflict of interest The authors have not disclosed any conflict of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
5,160.4
2022-07-12T00:00:00.000
[ "Engineering", "Computer Science" ]
Transient Receptor Potential Channel A1 (TRPA1) Regulates Sulfur Mustard-Induced Expression of Heat Shock 70 kDa Protein 6 (HSPA6) In Vitro The chemosensory transient receptor potential ankyrin 1 (TRPA1) ion channel perceives different sensory stimuli. It also interacts with reactive exogenous compounds including the chemical warfare agent sulfur mustard (SM). Activation of TRPA1 by SM results in elevation of intracellular calcium levels but the cellular consequences are not understood so far. In the present study we analyzed SM-induced and TRPA1-mediated effects in human TRPA1-overexpressing HEK cells (HEKA1) and human lung epithelial cells (A549) that endogenously exhibit TRPA1. The specific TRPA1 inhibitor AP18 was used to distinguish between SM-induced and TRPA1-mediated or TRPA1-independent effects. Cells were exposed to 600 µM SM and proteome changes were investigated 24 h afterwards by 2D gel electrophoresis. Protein spots with differential staining levels were analyzed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and nano liquid chromatography electrospray ionization tandem mass spectrometry. Results were verified by RT-qPCR experiments in both HEKA1 or A549 cells. Heat shock 70 kDa protein 6 (HSPA6) was identified as an SM-induced and TRPA1-mediated protein. AP18 pre-treatment diminished the up-regulation. RT-qPCR measurements verified these results and further revealed a time-dependent regulation. Our results demonstrate that SM-mediated activation of TRPA1 influences the protein expression and confirm the important role of TRPA1 ion channels in the molecular toxicology of SM. Introduction The chemical warfare agent sulfur mustard (SM) causes severe damage to the skin, eyes, and the respiratory system [1,2]. Although SM and the associated injuries have been intensively investigated over decades, the molecular toxicology is still not understood in detail. In aqueous environments, SM forms a highly reactive sulfonium and subsequent carbenium ion [3]. A plethora of nucleophiles including the N7 atom of guanine bases in the DNA helix are targeted by SM. Monofunctional DNA alkylation and in particular DNA crosslinks were regarded as the exclusive mechanism of toxicity. However, Stenger et al. demonstrated that the alkylating substances CEES (2-chloroethyl-ethyl sulfide, a mono-functional SM analogue) and SM activate transient receptor potential ankyrin 1 cation channels (TRPA1) in vitro, thereby affecting cell viability [4]. TRPA1 channels belong to the TRP channel superfamily and are located in the plasma membrane of different human cell types, predominantly of neuronal cells [5]. They usually form homotetramers, but heterotetramers with TRPV1 have also been described [6,7]. TRP channels share the overall architecture of voltage-gated ion channels with six transmembrane domains (TMs). TM5 and TM6 form the pore region that is permeable for monovalent K + , Na + and bivalent Ca 2+ or Mg 2+ cations [8]. The intracellular N-terminus of TRPA1 possesses multiple characteristic ankyrin repeat domains that contain free cysteine residues that are important for channel activity [9]. The physiological function of TRPA1 is the perception of sensory stimuli like pain and cold but also of certain reactive chemicals such as acrolein, a highly reactive substance present in tear gas or vehicle exhausts [10][11][12]. The activation of TRPA1 by reactive compounds is assumed to rely on covalent modification of cysteine residues in the ankyrin repeat sequence [9,10,12]. 
Reactive oxygen species (ROS), hypochlorite and protons were also identified as TRPA1 activators [13][14][15][16][17][18]. The latter seem to interact with an extracellular interaction site of TRPA1 and not via modification of intracellular cysteines [13]. The highly reactive SM and CEES were also identified as distinct TRPA1 activators with a not yet identified binding site [4,19]. Both chemicals provoked a TRPA1-dependent increase of intracellular calcium levels ([Ca 2+ ] i ) that could be efficiently prevented by pre-incubation with the TRPA1-specific blocker AP18 [4,19]. There is some evidence that TRPA1 activation is involved in the molecular toxicity of alkylating compounds [4,20,21]. However, the cellular consequences of an SM-induced and TRPA1-mediated elevation of [Ca 2+ ] i have not been investigated in detail so far. In the present study we analyzed TRPA1-dependent effects after SM exposure in human TRPA1-overexpressing HEK cells (HEKA1). Proteome changes were analyzed by 2D gel electrophoresis (2D-GE) with subsequent matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS(/MS)) and nano high performance liquid chromatography electrospray ionization tandem mass spectrometry (nanoHPLC-ESI MS/MS). AP18 was used to distinguish between TRPA1-dependent and -independent effects on protein expression. Results were validated by RT-qPCR in HEKA1 cells and as well as in human A549 lung epithelial cells endogenously expressing TRPA1 channels [22][23][24]. Cell Culture HEK293 wild-type cells (introduced as HEKwt) and HEK293-A1-E cells (introduced as HEKA1) with a stable expression of human TRPA1 (hTRPA1) were kindly donated by the Walther-Straub-Institute of Pharmacology and Toxicology (Ludwig-Maximilians-Universität, Munich). Cells were grown in DMEM containing 4.5 g/L glucose, Earl's salts and L-glutamine. This medium was supplemented with 10% FBS (v/v) and 1% P/S (v/v). Cells were cultured in a humidified atmosphere at 5% (v/v) CO 2 and 37 • C (standard conditions). HEKwt cells were split every 2-3 days while HEKA1 cells were subcultivated every 3-4 days. Cells were detached using trypsin-EDTA for 3 min and resuspended in the respective medium. A549 cells were grown in DMEM (Biochrom, Berlin, Germany) supplemented with FBS (Biochrom, Berlin, Germany) and gentamycin (5 µg/mL). Cells were split every 2-3 days detached by trypsin-EDTA for 5 min. Sample Preparation HEKwt, HEKA1 or A549 cells were exposed to 600 µM SM according to Stenger et al. [4,19]. A concentration of 25 µM AITC was used to stimulate HEKwt and HEKA1 cells. Cell lysates of SM-exposed or AITC-treated cells were generated at 24 h for 2D-GE. Pre-incubation with the TRPA1-specific inhibitor AP18 (2 µM, application 5 min prior to SM or AITC exposure) was also performed according to Stenger et al. for all groups [4]. Controls were incubated without AITC or SM but with medium and, if applicable, with AP18. After the respective incubation time, cells were washed with PBS first and then harvested with trypsin-EDTA for 3 min and resuspended in 10 mL DMEM. Cell number was determined using a Neubauer counting chamber (NanoEnTek, Seoul, Korea). Cells were lysed in lysis buffer (7 M urea, 2 M thiourea, 4% w/v CHAPS, 2% v/v IPG buffer, 40 mM DTT) for 2D-GE. Samples were sonicated (4 cycles with 10 s) on ice. Supernatants were collected after centrifugation (30 min, 4 • C and 21,130 RCF) and subsequently cleaned up using the 2D Clean-Up Kit according to the manual of the provider. 
Protein concentration was determined using the 2D-Quant Kit. Samples were aliquoted and frozen at −80 • C. For RT-qPCR, HEKA1 cells were investigated 1, 3, 5, or 24 h after exposure while HEKwt and A549 cells were analyzed after 24 h only. Approx. 10 × 10 6 cells were collected in 1 mL of RNA protection reagent from QIAGEN (Hilden, Germany) at the respective time points. RNA was extracted using the RNeasy Mini Protect Kit (QIAGEN, Hilden, Germany) according to the instructions given by the manufacturer. In brief, cell pellets were lysed in 600 µL RLT lysis buffer and homogenized using a QIA shredder (QIAGEN). RNA was precipitated in 600 µL 70% (v/v) EtOH and purified by washing several times in different buffers according to the manufacturer's protocol. The concentration of RNA was measured using the NanoDrop 8000 Spectrophotometer from Thermo Scientific (Schwerte, Germany). Protein spots with significant different staining levels were identified using Progenesis SameSpots software v5.0.0.7 (Nonlinear Dynamics, Newcastle, UK). Threshold levels were defined with a fold change > 2.0 and an ANOVA p value < 0.05. Spots were filtered to identify only those which applied to both criteria. At least 3 biological replicates were investigated for each group. EtOH solvent control gels were chosen as reference. MALDI-TOF MS(/MS) or NanoHPLC-ESI MS/MS Analysis Relevant protein spots were excised and proteolyzed in-gel using the trypsin profile IGD kit (Sigma-Aldrich). In brief, the gel piece was covered with 200 µL destaining solution and incubated at 37 • C for 30 min. The gel piece was dried before 20 µL (0.4 µg of trypsin) of the prepared trypsin solution and 50 µL of the trypsin reaction buffer were added. It was incubated overnight at 37 • C. Following tryptic cleavage, peptides were desalted and concentrated using ZipTip-C18 pipette tips (Merck Millipore, Darmstadt, Germany). First, ZipTip was equilibrated using 10 µL methanol and 10 µL 0.1% (v/v) TFA. Afterwards, sample was loaded by pipetting the digested protein up and down for 10 times. ZipTip was washed with 10 µL 0.1% (v/v) TFA before sample was eluted with 10 µL of acetonitrile/0.1% (v/v) TFA (80/20 v/v). Using the dried-droplet technique, samples were spotted onto a polished steel target by mixing 1 µL each of sample and CHC (5 mg/mL in a 1:2 mixture of ACN and 0.1% v/v TFA). MALDI-TOF MS(/MS) measurements were performed in the positive reflector ion mode using an Autoflex III smartbeam mass spectrometer (Bruker, Billerica, MA, USA) equipped with a modified pulsed all-solid-state laser 355 nm (Bruker Daltonics). A peptide mass fingerprint (PMF) was recorded in a mass range from m/z 900-3400 with the following settings: Ion source I, 19 kV; ion source II, 16.5 kV; lens, 8.3 kV; reflector I, 21 kV; reflector II, 9.75 kV. MS/MS experiments were executed in the LIFT mode with the following parameters: Ion source I, 6 kV; ion source II, 5.3 kV; lens, 3.0 kV; reflector I, 27 kV; reflector II, 11.6 kV; LIFT I, 19 kV; LIFT II, 4.2 kV. Mass spectra were recorded using the flex control software v.3.0 (Bruker, Billerica, MA, USA) and further processed by flex analysis v.3.0 and BioTools v.3.1.2.22 (both Bruker). Identification of proteins was achieved via the SwissProt protein database using MS ion search of the Mascot search engine (Matrix Science, London, England) with following search criteria: Taxonomy Homo sapiens (human), enzyme trypsin, fragment mass tolerance 0.1%, significance threshold p < 0.05, maximum number of hits 20. 
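The spot-selection rule applied above in Progenesis SameSpots (fold change > 2.0 and ANOVA p < 0.05, with both criteria required) amounts to a simple conjunctive filter on the spot table. A minimal Python sketch is shown below; the spot identifiers and values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the spot-selection filter described above
# (fold change > 2.0 AND ANOVA p < 0.05); values are hypothetical placeholders.
import pandas as pd

spots = pd.DataFrame({
    "spot_id":     [101, 102, 103, 104],
    "fold_change": [2.8, 1.4, 3.1, 2.2],
    "anova_p":     [0.01, 0.004, 0.20, 0.03],
})

selected = spots[(spots["fold_change"] > 2.0) & (spots["anova_p"] < 0.05)]
print(selected)   # only spots meeting BOTH criteria are kept for identification
```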
Protein spots that could not be identified by MALDI-TOF MS(/MS) were analyzed by more sensitive nanoHPLC-ESI MS/MS (proteome factory AG, Berlin, Germany). The LC MS/MS system consisted of an Agilent 1100 nanoHPLC system (Agilent, Waldbronn, Germany), PicoTip electrospray emitter (New Objective, Woburn, MA, USA) and an Orbitrap XL mass spectrometer (ThermoFisher Scientific, Bremen, Germany). Peptides were first trapped and desalted on the enrichment column (Zorbax 300SB-C18, 0.3 × 0.5 mm, Agilent, Santa Clara, CA, USA) for five minutes (solvent: 2.5% ACN/0.5% formic acid). Then, they were separated on a Zorbax 300SB-C18, 75 µm × 150 mm column (Agilent) using a linear gradient from 15% to 40% B (solvent A: 0.1% formic acid in water, solvent B: 0.1% formic acid in ACN). Ions of interest were data-dependently subjected to MS/MS according to the expected charge state distribution of peptide ions. MS/MS data were matched against the SwissProt protein database using MS/MS ion search of the Mascot search engine (Matrix Science, London, UK) with following parameters: Enzyme trypsin, fixed modifications carbamidomethyl (C), variable modifications deamidated (NQ) and oxidation (M), mass values monoisotopic, peptide mass tolerance 3 ppm, fragment mass tolerance 0.6 Da, significance threshold p < 0.05, taxonomy Homo sapiens (human). Real-Time qPCR Extracted RNA (500 ng) was transcribed into complementary DNA (cDNA) using the RT 2 First Strand Kit. Transcription was performed according to the manufacturer's protocol. In brief, 10 µL of a reverse transcriptase mixture was added to the RNA samples. The mixture was incubated for 15 min at 42 • C and then for another 5 min at 95 • C. From each resulting cDNA sample, 675 µL were mixed with 675 µL of RT 2 SYBR Green/ROX qPCR Mastermix. A volume of 25 µL from each sample was transferred into a specially designed RT 2 Custom Profiler PCR 96-well plate using a TECAN freedom evo (TECAN, Crailsheim, Germany). 96-well plates were pre-spotted with specific primers according to the results of 2D-GE. Plates were sealed with cap s-trips and placed into the Mastercycler 2S (Eppendorf, Hamburg, Germany). The qPCR was carried out with the following PCR program: 10 min at 95 • C followed by 40 cycles of 15 s at 95 • C, and 1 min at 60 • C. At the end of the PCR program, a melting profile of the DNA amplifications was measured with the following settings: 95 • C for 15 s, 60 • C for 15 s and a final temperature gradient from 60 • C to 95 • C over 20 min. PCR data were analyzed with the realplex software from Eppendorf and with an online software from QIAGEN [25]. 2D Gel Electrophoresis and Mass Spectrometry Analysis of HEKA1 cells exposed to SM and investigated after 24 h revealed differential detection of 22 protein spots compared to the control group ( Figure 1A) in CBB-stained 2D gels. Three of these spots were identified with a threshold level of a fold change > 2.0 together with a p value < 0.05 and to be dependent on TRPA1 ( Figure 1B-D). Dependency on TRPA1 was proven as pre-incubation with AP18 prevented SM-induced effects. Up-regulation of one ( Figure 1B) and down-regulation of two protein spots ( Figure 1C,D) were observed. The up-regulated protein was identified by MALDI-TOF MS peptide mass fingerprint ( Figure 2A) and subsequent MS/MS analysis of characteristic protein-derived peptides as heat shock 70 kDa protein 6 (HSPA6, UniProtKB-P17066). 
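Identifications of this kind are commonly summarized by the sequence coverage, i.e., the percentage of residues of the database sequence matched by the identified peptides (a coverage of 35.5% is reported for HSPA6 below). The following is a minimal Python sketch of that calculation; the protein and peptide sequences are short hypothetical examples, not the actual HSPA6 data.

```python
# Minimal sketch: percent sequence coverage of a protein by matched peptides.
# The sequences below are short placeholders, not the real HSPA6 sequence or peptides.
def sequence_coverage(protein: str, peptides: list[str]) -> float:
    covered = [False] * len(protein)
    for pep in peptides:
        start = protein.find(pep)
        while start != -1:                       # mark every occurrence of the peptide
            for i in range(start, start + len(pep)):
                covered[i] = True
            start = protein.find(pep, start + 1)
    return 100.0 * sum(covered) / len(protein)

protein  = "MSAAKGIDLGTTNSAVAHQGKVEIIANDQGNRTTPSYVAFTD"   # hypothetical sequence
peptides = ["GIDLGTTNSAV", "TTPSYVAFTD"]                    # hypothetical matched peptides
print(f"coverage = {sequence_coverage(protein, peptides):.1f}%")
```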
Fragmenting of the ion at m/z 1487.5 is exemplarily shown in Figure 2B and documents the internal peptide (39)(40)(41)(42)(43)(44)(45)(46)(47)(48)(49)(50)(51). The overall sequence coverage was 35.5% ( Figure 2C). The Mascot probability score was calculated to be 86.6. Identification of the two down-regulated protein spots was not successful by the MALDI-TOF technique. Therefore, spots were analyzed by the more sensitive nanoHPLC-ESI MS/MS which identified 4 proteins for each spot all with a high probability score. Table 1 gives an overview on the detected proteins. Values for the respective molecular weight were taken from UniProt database. Silver-stained 2D gels identified 28 additional protein spots (12 up-and 16 down-regulated) that were affected after SM exposure ( Figure S1). AP18 pre-incubation did not influence the SM-induced changes, thereby excluding the involvement of TRPA1 (data not shown). Table 1. Experiments were carried out with n = 3 per group. Molecular weight is indicated on the left and the pI value on top of the gels. RT-qPCR The results of 2D-GE analysis were confirmed by independent RT-qPCR experiments ( Figure 3). Accordingly, genes for HSPA6, CAPRIN1, ELAVL1, FHL1, GPHN, NOSIP, NCL, SFXN1 and STRN4 were chosen as targets. Effects on transcription of these genes were assessed 24 h after SM exposure. In HEKA1 cells, SM significantly increased HSPA6 mRNA ( AITC treatment, also with AP18 pre-incubation, of HEKA1 was conducted to elucidate the role of TRPA1 activation in more detail ( Figure S2). As expected, AITC resulted in a pronounced increase of HSPA6 mRNA levels 24 h after treatment that was minimized to less than 50% by AP18. AP18 alone without AITC did not affect HSPA6 mRNA. Also, no changes of HSPA6 mRNA levels were observed in HEKwt cells after AP18 or SM exposure ( Figure S2). Levels of FHL1, NOSIP or STRN4 mRNA showed some slight SM-induced changes, but levels were not in the range of ±1.5-fold compared to controls. CAPRIN1, ELAVL1, GPHN, NCL and SFXN1 mRNA levels were down-regulated 24 h after SM exposure. However, AP18 was unable to increase these mRNA levels. A summary of the fold change values is given in Table S1. Discussion Cell damage caused by alkylating compounds is assumed to rely on DNA mono-adducts and particularly on DNA crosslinks or the biological consequences thereof [28][29][30]. However, cytotoxic effects of alkylating agents are strongly attenuated by cellular DNA repair processes [29,[31][32][33]. Therefore, additional complex mechanisms have been proposed including PARP signaling, nitric oxide and oxidative stress and activation of multiple cellular pathways that contribute to cytotoxicity [28,[34][35][36][37][38][39]. In this context, chemosensing TRPA1 channels were described as targets of SM and related alkylating compounds [4,19]. A distinct increase of [Ca 2+ ] i occurred after the activation of TRPA1 by SM. Some biological effects thereof, e.g., influence on cell viability, have already been described [4]. Additional SM-induced and TRPA1-mediated effects have not been studied so far and were investigated in this study. HEKA1 cells, overexpressing human TRPA1 channels, as well as human A549 lung epithelial cells, endogenously expressing TRPA1, were chosen as the in vitro model. Both cell types were used in several studies before and were found very well suited for the investigation of TRPA1-related effects [23,40,41]. 
Several genes and proteins have been reported to be specifically up-regulated in mouse skin and in human keratinocytes after exposure to SM [42,43]. Thus, we focused on proteome changes after SM exposure with special focus on the involvement of TRPA1. 2D-GE with subsequent protein identification by MS were used to detect changes of protein levels in HEKA1 cells. Cell lysates from controls, which were only treated with the solvent EtOH, were selected as control group. SM-treated or cells pre-incubated with the specific TRPA1 inhibitor AP18 [44], were examined to unambiguously identify SM-induced and TRPA1-regulated proteins. Our results indicated 22 differentially expressed protein spots after SM exposure in HEKA1 cells compared to un-exposed controls ( Figure 1A). It should be noted that we have chosen 7 cm first dimension gel strips covering pH-ranges between 4-7 or 6-11 and proteins were visualized after SDS-PAGE separation by CBB staining. CBB staining detects high-abundant proteins with a very good chance of success for the identification by MALDI-TOF MS(/MS) while changes in low-abundant proteins may be undiscovered. Additional silver staining experiments were conducted and identified 28 further protein spots. However, AP18 pre-incubation had no effect on these spots, thereby excluding a role of TRPA1. Nevertheless, we successfully identified three SM-induced and TRPA1-regulated protein spots 24 h after exposure with one up-regulated and two down-regulated proteins ( Figure 1B-D). The up-regulated protein was unequivocally identified as heat shock 70 kDa protein 6 (HSPA6) by MALDI-TOF MS peptide mass fingerprint and further MS/MS fragmentation of prominent peptide ions (Figure 2A,B). Identification of the down-regulated protein spots by MALDI-TOF MS was not successful, most probably due to insufficient protein amounts. Therefore, nanoHPLC-ESI MS/MS was chosen as an alternative method. Using this highly sensitive MS/MS method, multiple proteins with high probability scores were unambiguously verified (Table 1). All proteins that were assigned to the respective spot revealed a similar MW and a pI, in line with the 2D-GE results. It is not uncommon in 2D-GE that protein spots, especially of high-abundant proteins, do not represent a single protein. Instead proteins with similar MW and pI can overlap which is also the case in our experiments. The identity of proteins was confirmed by RT-qPCR. SM exposure resulted in down-regulation of all investigated mRNA except STRN4 ( Figure 3C). Some effects were weak and failed to meet the criteria of a ±1.5-fold change (STRN4, FHL1, NOSIP) while CAPRIN1, GPHN, NCL and ELAVL1 mRNA were down-regulated to some extend ( Figure 3C). However, AP18 did not significantly influence mRNA levels in any case. Our results indicate that TRPA1 has no major effect on mRNA transcription of these genes. Effects on translation, post-translational modification or degradation of target proteins that could explain the obtained results in 2D-GE may be present but have not been elucidated so far. SM-affected proteins identified in our study are involved in several steps of gene transcription or mRNA translation. Caprin-1 is discussed to mediate the transport and translation of mRNAs of proteins involved in cell proliferation and migration in multiple cell types [45]. Striatin-4 binds calmodulin in a calcium-dependent manner and may function as scaffolding or signaling protein [46]. 
Nucleolin is a nucleolar phosphoprotein involved in fundamental aspects of transcription regulation, cell proliferation and growth [47]. It is thought to play a role in pre-rRNA transcription and ribosome assembly and in the process of transcriptional elongation [48]. GPHN and FHL1 are proteins involved in organization of the cytoskeleton and protein-cytoskeleton interactions [49,50]. In addition, FHL1 is involved in nuclear gene regulation processes [50]. ELAVL1 is an RNA-binding protein that binds to the 3'-UTR region of mRNAs and increases their stability [51,52]. Only for SFXN1, a protein that might be involved in the transport of a component required for iron utilization into or out of the mitochondria [53,54], and NOSIP, a ubiquitin-protein ligase that negatively regulates nitric oxide production by inducing NOS1 and NOS3 translocation to actin cytoskeleton and inhibiting their enzymatic activity [55], a direct function in protein biosynthesis has not been described yet. Whether the identified proteins are indeed involved in the molecular toxicology of SM or related compounds has to be proven but is not part of this study. Exposure of human keratinocytes with CEES (a monofunctional analog of SM) increased HSPA6 levels [56]. In addition, CEES was identified as an activator of TRPA1 [4]. Results obtained in our study suggest a link between the expression of HSPA6 and TRPA1 activation by alkylating compounds: SM increased HSPA6 mRNA levels beginning 3 h after exposure, which was significantly prevented by AP18 pre-treatment ( Figure 3A) and induced HSPA6 protein formation after 24 h ( Figure 1B). HEKwt cells did not respond to SM while human A549 lung epithelial cells, endogenously expressing TRPA1, revealed similar results with regard to HSPA6 mRNA levels compared to HEKA1 cells ( Figure 3B). HSPs are molecular chaperones that regulate the folding, degradation and assembly of proteins [57]. After cellular stress such as intense heat, heavy metal exposure, UVB light, oxidative stress or inflammation, HSPs are up-regulated to protect cell proteins against aggregation [58,59]. HSPA6 is especially responsible for the correct folding and activation of many proteins [60]. It is well known that SM exposure results in the formation of reactive oxygen species (ROS) or reactive nitrogen species in vitro [34,35]. ROS can induce protein damage, instability, aggregation and can even provoke cell death but have also been shown to induce HSPs [61]. Therefore, it is reasonable to assume that HSP induction may also be the consequence of SM-induced oxidative stress in our experiments. It was previously postulated that elevation of [Ca 2+ ] i through TRP channel activation (in particular TRPA1 and TRPV1) activates the mitochondrial tricarboxylic acid cycle, which generates ATP as well as ROS [62,63]. In a study by Gould et al., CEES was proven to affect mitochondrial function in lung cells resulting in ROS formation [64]. Ray et al. demonstrated that SM induced an increase of [Ca 2+ ] i localized to mitochondria [65]. However, TRP channels were not considered as potential mediators of the observed phenomena. Our work suggests a distinct role of TRPA1 beyond an immediate effect of SM on mitochondria. Inhibition of TRPA1 by AP18 is suggested to attenuate the SM-induced increase of [Ca 2+ ] i and thus, the generation of ROS and HSPA6 subsequently. 
However, AP18 was insufficient to completely prevent HSPA6 induction in our experiments indicating that TRPA1 is not exclusively responsible for HSPA6 up-regulation. In addition to TRPA1-mediated ROS formation, SM-induced depletion of antioxidants, lipid oxidation or direct effects on mitochondria can cause severe oxidative stress which is independent of TRPA1 and may also trigger HSPA6 expression [66][67][68]. Other biological effects of SM-induced increase of [Ca 2+ ] i such as phospholipase A activation and subsequent arachidonate release, terminal differentiation of human keratinocytes, induction of cell death through caspases, in addition to the above discussed Ca 2+ /ROS/HSPA6 cascade, were described [4,65,[69][70][71]. A synopsis depicting the interaction of cellular events after SM-exposure with focus on TRPA1 and HSPA6 is given in Figure 4. Our results suggest that SM causes an increase of [Ca 2+ ]i and subsequent induction of HSPA6, which is in part mediated by TRPA1 channels. Increase of ROS, presumably originating from mitochondrial stress, is a feasible cause for the observed HSPA6 induction. An inhibition of TRPA1 in HEKA1 and A549 cells attenuated the SM-induced expression of HSPA6 and thereby pointing to a distinct role of TRPA1 ion channels. Whether induction of HSPA6 after SM exposure is a protective cellular defense mechanism as it may protect against stress-induced apoptosis [72] cannot be answered at this point and should be addressed in future research. Nevertheless, TRPA1 channels were proven to be part of the very complex molecular toxicology of SM and a step closer into the spotlight of SM research. Our results suggest that SM causes an increase of [Ca 2+ ] i and subsequent induction of HSPA6, which is in part mediated by TRPA1 channels. Increase of ROS, presumably originating from mitochondrial stress, is a feasible cause for the observed HSPA6 induction. An inhibition of TRPA1 in HEKA1 and A549 cells attenuated the SM-induced expression of HSPA6 and thereby pointing to a distinct role of TRPA1 ion channels. Whether induction of HSPA6 after SM exposure is a protective cellular defense mechanism as it may protect against stress-induced apoptosis [72] cannot be answered at this point and should be addressed in future research. Nevertheless, TRPA1 channels were proven to be part of the very complex molecular toxicology of SM and a step closer into the spotlight of SM research.
5,796
2018-08-31T00:00:00.000
[ "Chemistry", "Environmental Science", "Biology" ]
Major Membrane Protein TDE2508 Regulates Adhesive Potency in Treponema denticola The cultivation and genetic manipulation of Treponema denticola, a Gram-negative oral spirochaeta associated with periodontal diseases, is still challenging. In this study, we formulated a simple medium based on a commercially available one, and established a transformation method with high efficiency. We then analyzed proteins in a membrane fraction in T. denticola and identified 16 major membrane-associated proteins, and characterized one of them, TDE2508, whose biological function was not yet known. Although this protein, which exhibited a complex conformation, was presumably localized in the outer membrane, we did not find conclusive evidence that it was exposed on the cell surface. Intriguingly, a TDE2508-deficient mutant exhibited significantly increased biofilm formation and adherent activity on human gingival epithelial cells. However, the protein deficiency did not alter autoaggregation, coaggregation with Porphyromonas gingivalis, hemagglutination, cell surface hydrophobicity, motility, or expression of Msp which was reported to be an adherent molecule in this bacteria. In conclusion, the major membrane protein TDE2508 regulates biofilm formation and the adhesive potency of T. denticola, although the underlying mechanism remains unclear. Introduction Treponema denticola is a Gram-negative anaerobe that is classified as a spirochaete and has periplasmic flagella, which confer motility to enable the bacterium to move in a semisolid medium [1]. The bacterium is a member of the ''red complex'' bacteria, which are critical pathogens associated with human periodontal diseases [2], and is also believed to influence arteriosclerosis [3]. T. denticola colonizes and forms a biofilm in the gingival sulcus, further exacerbating inflammation and destruction of periodontal tissues [4]. The virulence factors of T. denticola have been reported and are summarized in reviews [1,5,6]. Msp (named from major sheath protein), the most abundant protein in the bacteria, acts as an adherent factor to bacteria and host tissues [7,8]. It has also reported to function as a porin [9,10]. Although the localization of Msp has been argued [11][12][13], Anand et al. recently demonstrated that it was localized in the outer membrane and exposed on the surface [10]. However, a substantial quantity of Msp also exists in the periplasm [10,11]. The chymotrypsin-like protease dentilisin is also a major virulence molecule in this pathogen [14,15]. Dentilisin is a complex consisting of several proteins, and affects various host functions, such as activation of the complement system and degradation of cytokines and other host proteins [1,5,6]. It is also reported that dentilisin functions as an adherent molecule to other oral bacteria [16] and host molecules [17]. Although other molecules involving in pathogenicity have been also reported, little is known about the pathogenic mechanisms of T. denticola, largely because of the difficulties in handling this organism; complicated media such as TYGVS and NOS are usually required for its culture [18]. Additionally, genetic manipulation of T. denticola, such as the construction of genetic mutants, is still challenging [19]. In this study, we found that T. denticola grew well in a medium that was formulated based on a commercially-available medium, and we also established a highly efficient method for genetic modification. 
Bacterial surface molecules are important for growth and pathogenicity because they directly interact with environmental factors such as other bacteria and host tissues [1]. They often play a critical role especially in biofilm formation and adhesion to host cells. T. denticola has an outer membrane at the outermost layer, but its composition is totally different from a general outer membrane of Gram-negative bacteria. The outer membrane of T. denticola does not contain lipopolysaccharide; rather, it has a lipid that is similar to lipoteichoic acid found in Gram-positive bacteria [12,20]. Although T. denticola has a unique outer membrane, few studies have conducted a comprehensive investigation of its surface molecules [21,22]. In this study, we analyzed the major membrane-associated proteins of T. denticola, identified an unknown protein TDE2508, and demonstrated that this protein regulated biofilm formation and adherence to host cells. T. denticola Strains and Culture Conditions We primarily used T. denticola ATCC 35405, and also used ATCC 33520 strain, which were provided by the RIKEN BRC through the National Bio-Resource Project of the MEXT, Japan. For the bacterial culture, we largely used Modified GAM (Nissui Pharmaceutical Co., Ltd., Tokyo, Japan) supplemented with 0.001% thiamine pyrophosphate and 5% heat-inactivated rabbit serum (herein referred to as mGAM-TS). We also used two additional media; TYGVS, which is widely used for the culture of T. denticola [18], and Modified NOS (mNOS), which is a relatively simple medium [23]. The bacteria were anaerobically and statically cultivated at 37uC. When needed, highly pure agar (Difco Agar Noble, Becton, Dickinson and Company, Franklin Lakes, NJ, USA) and antibiotics (described in detail below) were added to the media. For the osmotic pressure test, NaCl and KCl were added. T. denticola was generally cultivated in mGAM-TS until the late logarithmic phase for use in the experiments. Antibiotics and Antibiotic Sensitivity Test For the selection of transgenic mutants and antibiotic sensitivity testing, we used the following antibiotics: ampicillin, chloramphenicol, erythromycin, gentamicin, kanamycin, penicillin G, tetracycline, and vancomycin (all were obtained from Sigma-Aldrich, St. Louis, MO, USA). The minimum inhibitory concentration (MIC) was evaluated by employing the liquid dilution assay. Briefly, bacterial culture was inoculated in mGAM-TS broth at 0.1 of an optical density (OD) at 620 nm (OD620). After 5 days of anaerobic incubation, the turbidity was measured at 620 nm and the growth was determined. Subcellular Fractionation Subcellular fractionation was performed as described previously [24]. All procedures were performed under cold conditions. T. denticola cells were washed in a buffer consisting of 20 mM Tris, pH 7.5, supplemented with protease inhibitors (1 mM phenylmethylsulfonyl fluoride, 0.1 mM N-a-p-tosyl-L-lysine chloromethyl ketone and 0.1 mM leupeptin). The cells were disrupted in a French pressure cell by passing them three times at 100 MPa in the presence of 25 mg/mL DNase and RNase. The undisrupted cells were removed by centrifugation at 1,0006g for 10 min. The resultant whole cell lysate (WCL) was subjected to ultracentrifugation at 100,0006g for 60 min. The supernatant and sediment were collected as soluble and envelope fractions, respectively. For further fractionation of the envelope fraction, it was suspended in a buffer containing 0.5-8% Triton X-100. 
The soluble and insoluble fractions in the Triton X-100-containing buffer were separated by ultracentrifugation at 100,0006g for 60 min. The protein concentration was determined using a Pierce BCA Protein Assay kit (Thermo Scientific, Rockford, IL, USA). We also extracted a surface layer from intact cells of T. denticola in a similar manner as described previously [25]. Briefly, washed bacterial cells were gently suspended and incubated for 5 min at room temperature in phosphate-buffered saline (PBS), pH 7.4, supplemented with 0.1% Triton X-100, then centrifuged at 4,0006g for 15 min. The supernatant was filtrated with a 0.22-mm pore filter membrane and concentrated by ammonium sulfate precipitation. After dialysis, it was subjected to SDS-PAGE and Western blot analyses as described below. The remaining cell pellet was observed by electron microscopy to confirm disappearance of the cell surface layer and existence of the cell body. SDS-PAGE and Western Blot Analyses The samples were denatured in a buffer containing 1% SDS with 0.4 M 2-mercaptoethanol (2-ME) at 100uC for 5 min, unless otherwise noted. SDS-PAGE gels were stained with Coomassie brilliant blue R-250 (CBB). The protein concentration was estimated by comparison with the protein bands obtained for a known quantity of bovine serum albumin. For Western blotting, the protein bands in the gel were electrophoretically transferred to a PVDF membrane. The membrane was blocked with 5% skim milk in Tris-buffered saline (TBS), pH 7.5, with 0.05% Tween 20. Subsequently, the membrane was incubated with specific antisera as described below, followed by incubation with peroxidaseconjugated anti-rabbit IgG. Amersham ECL Prime Western Blotting detection reagent (GE Healthcare UK Limited, Buckinghamshire, UK) was used for development of the target bands. Mass Spectrometry Analysis CBB-stained protein bands were identified by MALDI-TOF MS [26]. After in-gel tryptic digestion, the peptides were extracted, desalted, and analyzed using a 4800 MALDI TOF/ TOF Analyzer (Life Technologies Corporation, Carlsbad, CA, USA). The identity of the proteins was deduced from the MS peaks by a comparative analysis of the mass with that in the Mascot database (http://www.matrixscience.com/). Annotation and homology searches were also performed using BLAST (http://blast.ncbi.nlm.nih.gov/Blast.cgi). Preparation of Antisera The tde2508 (denotes gene name in this paper) gene encoding the entire TDE2508 (denotes protein name) protein was amplified by PCR from the chromosomal DNA of ATCC 35405 using primers His-2508-F and 2508-R (Table 1). To add a hexahistidine tag, the His-2508-F primer included a DNA sequence encoding an amino acid sequence of MGSSHHHHHHSSG. The DNA fragment was cloned in a vector pCR-Blunt II-TOPO (Life Technologies Corporation) and the integrity of the nucleotides was confirmed. Although we tried to purify the His-tagged TDE2508 using a Ni-affinity column, our attempts were unsuccessful since the recombinant protein formed an insoluble inclusion body. Therefore, we purified the protein by extracting the corresponding band after separation by SDS-PAGE. The purified protein was confirmed to be TDE2508 by mass spectrometry. Anti-TDE2508 antiserum was obtained by immunizing a rabbit with the purified protein emulsified with Freund's complete adjuvant. We also prepared an antiserum to whole cells of T. denticola by immunizing a rabbit with whole cells of the bacteria. 
We confirmed that the antiserum to the whole cells reacted strongly with Msp and TmpC in this organism (data not shown). This study was carried out in strict accordance with the recommendations of the Regulations on Animal Experimentation at Aichi Gakuin University. The protocol was approved by the University Animal Research Committee (permit number: AGUD 065). All surgery was performed under sodium pentobarbital anesthesia, and all efforts were made to minimize animal suffering. Construction of a tde2508-deletion Mutant We prepared electrocompetent cells of T. denticola as follows. T. denticola culture was spread on mGAM-TS solidified with 2.5% agar and cultivated anaerobically at 37°C for 1 week. The cells were then collected from the surface of the agar plate with a cotton swab and suspended in chilled 1 mM HEPES, pH 7.4. The cells were washed three times with the HEPES buffer and once with 10% glycerol by centrifuging at 4,000 × g for 10 min at 4°C, and finally suspended in 10% glycerol. The cell concentration was adjusted so that the OD600 was 10, which corresponded to 5 × 10^10 cells/mL. The competent cells were prepared immediately prior to use to avoid freeze-thawing of the cells. The tde2508 gene was deleted by replacing it with an erythromycin-resistance gene (ermB) [27]. Briefly, a tde2508-deletion cassette was constructed using a PCR-based overlap-extension method. The relevant primers are shown in Fig. 1 and Table 1. The 824-bp upstream region (ending immediately before the translation initiation codon; primers 2508U-F/2508U-R) and the 839-bp downstream region (beginning immediately after the terminating nonsense codon; primers 2508D-F/2508D-R) of the tde2508 gene were amplified by PCR from the chromosomal DNA of ATCC 35405, and the 1036-bp region containing the ermB gene (primers ermB-F/ermB-R) was amplified from pVA2198 [28]. These three amplicons were fused into one piece by the overlap-extension method [29] using primers 2508U-F and 2508D-R (the reported fragment sizes are internally consistent; see the short check after this passage). The final 2,699-nucleotide fragment was cloned into the pGEM-T Easy vector (Promega, Madison, WI, USA) and sequenced to confirm that no errors had been introduced during PCR amplification. The plasmid construct (50 μg) was linearized by digestion with restriction enzymes, purified, and dissolved in 50 μL of TE buffer. The linearized plasmid (50 μL) and 50 μL of the competent cells were combined in an electroporation cuvette with a 0.2-cm gap, then pulsed at 1.8 kV for approximately 5 ms. The pulsed cells were immediately transferred into 2 mL of mGAM-TS that had been pre-warmed under anaerobic conditions, and anaerobically incubated at 37°C for at least 24 h. Semisolid mGAM-TS containing 0.8% agar supplemented with 40 μg/mL erythromycin was used for the selection of transformants. Semisolid media are often used for the isolation of T. denticola clones [30] because the bacteria hardly form colonies from single cells on a solid agar medium. The selection medium was kept melted at 40°C to maintain its condition. The culture, including the transformants, was gently mixed into the selection medium, poured into a dish, and anaerobically incubated at 37°C for 7 days. Analysis of Transcriptional Activity To investigate polar effects of the tde2508 mutation, we examined the transcriptional activity of tde2509, which is a downstream gene of tde2508 (Fig. 1).
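As a quick consistency check on the deletion-cassette assembly described above, the three amplicon lengths quoted in the text should sum (ignoring any primer-tail overlaps) to the reported length of the fused fragment. A minimal sketch in Python, using only the sizes given above:

```python
# Fragment sizes quoted in the text (bp); overlap extension joins them end to end.
upstream_bp = 824     # region ending immediately before the start codon
ermB_bp = 1036        # erythromycin-resistance cassette amplified from pVA2198
downstream_bp = 839   # region beginning immediately after the stop codon

fused_bp = upstream_bp + ermB_bp + downstream_bp
print(fused_bp)  # 2699, matching the reported 2,699-nucleotide fragment
```

The agreement only confirms that the reported fragment sizes are internally consistent; it says nothing about the overlap-extension chemistry itself.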
Total RNA was isolated using ISOGEN (Nippon Gene Co., Ltd, Tokyo, Japan) and treated with RNase-free recombinant DNase I (Takara Bio Inc., Otsu, Japan) to eliminate contaminating genomic DNA. The purified RNA (100 ng) was used to generate cDNA with a PrimeScript RT-PCR Kit (Takara Bio Inc.). The resulting cDNA was used as a template for PCR. The primers 2509-F and 2509-R are used to examine the transcription of tde2509 ( Fig. 1 and Table 1). After a standard 10, 15, 20, and 25-cycle PCR, the transcriptional activity was determined by analyzing the target band after agarose electrophoresis and ethidium bromide-staining of the gel. Chemical Crosslinking Assay The chemical crosslinking assay was performed as described previously [31]. T. denticola cells were suspended in PBS, pH 7.4. The cross-linker suberic acid bis N-hydroxysuccinimide ester, spacer arm length 11Å (Sigma-Aldrich) was added at a final concentration of 10-200 mM. After incubation of the reaction mixture at 4uC for 2 h, the crosslinking reaction was stopped by the addition of 1.0 M Tris buffer, pH 8.0. The reacted cells were disrupted by sonication and analyzed by Western blotting. Slide Agglutination Assay The slide agglutination assay was performed following a standard protocol. T. denticola ells were washed twice with PBS, pH 7.4, and the OD600 was adjusted to 0.5. The bacterial cells were mixed with the antisera to whole cells of T. denticola and TDE2508. Immunofluorescence Assay T. denticola culture was applied to a well of the filtration plate (MultiScreen-GV, 96-well membrane plate, 0.22 mm pore size, Millipore Corporation, Billerica, MA, USA). The solutions in the wells were removed through the membrane by centrifugation at 2,0006g for 3 min. The wells were blocked with TBS containing 3% bovine serum albumin at room temperature for 15 min and were incubated with antisera to whole cells of T. denticola or TDE2508 (1:1,000 dilutions) for 30 min at room temperature. After washing three times with TBS, Alexa Fluor 488-conjugated goat IgG fraction to the rabbit IgG secondary antibody (1:1,000 dilution; Life Technologies Corporation) was added and incubated for 30 min at room temperature in the dark. After washing, the cells were suspended in a small volume of TBS, placed on the slide glass and mounted using ProLong Gold antifade reagent (Life Technologies Corporation). The stained cells were examined by confocal laser scanning microscopy (LSM 710, Carl Zeiss, Oberkochen, Germany). Biotinylation of Cell Surface Proteins Cell surface labeling with bulky biotin reagent was performed as described previously [32]. T. denticola cells suspended in PBS, pH 8.0, and supplemented with 1 mM MgCl 2 were labeled with EZ-Link NHS-PEG12-Biotin (Thermo Scientific) at 4uC for 30 min. The reaction was stopped by the addition of 0.1 M glycine. The biotinylated cells were disrupted by sonication, subjected to SDS-PAGE and blotted onto a nitrocellulose membrane. Biotinylation was detected using peroxidase-conjugated streptavidin (Dako, Glosrup, Denmark) and Amersham ECL Prime Western Blotting detection reagent. Biofilm Assay The turbidity of the T. denticola cells in a fresh medium was adjusted so that the OD600 was 0.2. Aliquots (10 mL) were poured into a 60-mm dish (Iwaki, a brand of Asahi Glass Co. Ltd., Tokyo, Japan) and then anaerobically incubated at 37uC for 48 h. We also used plates coated with human collagen type I and human fibronectin (Iwaki). 
Unbound bacterial cells were removed by gently washing with PBS, pH 7.4, and the cells were then collected in 0.2 mL of PBS, pH 7.4, by scraping. The biofilm volume was evaluated by measuring the cell suspension at OD600. The biofilm was also investigated by scanning electron microscopy (SEM) as described below. Assay of Adherence to Gingival Epithelial Cells Human gingival epithelial cells, Ca9-22 (provided by the RIKEN BRC), were seeded into 8-well Lab-Tek II chamber slides (Thermo Scientific) and cultivated in DMEM with 10% fetal bovine serum, heat-inactivated under 5% CO 2 at 37uC for 24 h to near confluence at 5 610 5 cells per well. T. denticola cells were washed, and the OD600 of the cell concentration in DMEM with 10% fetal bovine serum was adjusted to 0.1 and 1.0 (corresponding to 5610 8 and 5610 9 cells/mL, respectively). Then 0.4 mL of each bacterial suspension was added to a well of the slide, corresponding to multiplicity of infection (MOI) values of 400 and 4,000, respectively. After incubation under 5% CO 2 at 37uC for 1 h, the chamber slides were gently washed three times with PBS, pH 7.4, and the cells were fixed with 4% paraformaldehyde in PBS. The fixed cells were washed three times with PBS and then permeabilized with PBS containing 0.1% Triton X-100 at 37uC for 30 min. The cells were washed again and then blocked with TBS containing 3% bovine serum albumin at room temperature for 30 min. The slides were incubated with anti-T. denticola whole cell antiserum (1:1,000 dilution) for 30 min at room temperature and washed three times with TBS. Then, Alexa Fluor 488conjugated goat IgG fraction to the rabbit IgG secondary antibody (1:1,000 dilution) and Alexa Fluor 568-conjugated phalloidin (1 mg/mL; Life Technologies Corporation) were simultaneously added and incubated for 60 min at room temperature in the dark, in order to label bacterial cells and actin filaments of Ca9-22 cells, respectively. After extensive washing with TBS, the chamber slides were mounted using ProLong Gold antifade reagent. The stained slides were examined by confocal laser scanning microscopy. T. denticola cells that adhered to the epithelia were counted in the captured images, in which a field corresponded to 0.045 mm 2 . Autoaggregation, Coaggregation and Hemagglutination Assays For all assays, T. denticola cells were washed twice with 20 mM phosphate buffer, pH 8.0 supplemented with 1 mM CaCl 2 , 1 mM MgCl 2 , and 150 mM NaCl, and the OD600 was adjusted to 0.5. For the autoaggregation assay, 1 mL of the cell suspension was placed in a cuvette and the OD600 values were intermittently monitored. For the coaggregation assay, we used another red complex bacteria, Porphyromonas gingivalis ATCC 33277. P. gingivalis was cultivated in mGAM (without any supplements) and prepared in the manner described for T. denticola. Each bacterial suspension was mixed in an equal volume and the OD600 values were monitored. The hemagglutination assay was performed in a microtiter plate as described previously [33]. Human (O-type), rabbit and chicken red blood cells were washed in PBS, pH 7.4. Bacterial cells were mixed with an equal volume of a 1% red blood cell suspension, incubated at 4uC for 1 h, and the hemagglutination was determined. Cell Surface Hydrophobicity Assay The hydrophobicity assay was performed as described previously [15]. Briefly, the washed cells were suspended in a phosphate buffer containing 0.03 M urea and 0.8 mM MgCl 2 , and the OD400 was adjusted to 0.5. 
Aliquots (1.2 mL) in tubes were vigorously mixed with n-hexadecane (0.6 mL) for 60 s. The OD400 of the aqueous phase was measured as an index of hydrophobicity. The relative hydrophobicity of the cell surface was calculated using the following formula: % hydrophobicity = [1 − (OD400 after mixing)/(OD400 before mixing)] × 100. Motility The OD600 of T. denticola cells in mGAM-TS was adjusted to 0.2. The cell suspensions (1 mL) were carefully placed on mGAM-TS agar plates that had been solidified by adding agar at final concentrations of 0.3% and 0.5%. The plates were anaerobically incubated at 37°C, and the turbid plaque was monitored as an index of bacterial motility. T. denticola does not spread on an agar medium, but does penetrate into the agar medium. Electron Microscopy T. denticola cells were negatively stained with 1% ammonium molybdate, pH 7.0. Whole cell samples were observed and photographed using a JEM-1210 transmission electron microscope (TEM, JEOL, Tokyo, Japan). We also observed T. denticola by SEM to investigate the biofilm. T. denticola cells were incubated on a cover glass as described for the biofilm assay. Then, the cells were fixed and dehydrated following a standard method. Cells were coated with 5 nm of platinum and observed using a scanning electron microscope (JXA-8530FA, JEOL). Statistical Analysis The data were evaluated using Student's t tests. Statistical differences were considered significant at P < 0.05. All experiments were repeated at least three times except for those presented in Fig. S3 in the supporting information. Cultivation of T. denticola in a Novel Medium For the cultivation of T. denticola, complicated media such as TYGVS and NOS are widely used [18]. However, Ruby et al. showed that T. denticola grew in a relatively simple medium consisting of brain-heart infusion broth and some supplements, which they called Modified NOS (mNOS, which is entirely different from NOS) [23]. This prompted us to find an alternative simple medium, such as one based on a commercially available medium. To this end, we tried Modified GAM (mGAM), which is widely used in Japan as a general medium for the cultivation of anaerobic bacteria. We added 0.001% thiamine pyrophosphate (an active form of thiamine) because T. denticola has a deficiency in thiamine synthesis [34][35][36]. Additionally, heat-inactivated rabbit serum was added to the media, since the addition of 10% serum is generally necessary for T. denticola cultivation. The addition of 5% and 10% sera to mGAM gave similar growth of T. denticola, while the addition of 2.5% decreased the growth (data not shown). Therefore, our final medium comprised mGAM supplemented with 0.001% thiamine pyrophosphate and 5% rabbit serum, and was named mGAM-TS. We compared the growth of T. denticola in mGAM-TS (5% serum), TYGVS (10% serum), and mNOS (10% serum) (Fig. 2). mGAM-TS provided a growth rate similar to that of the other media, although the plateau was slightly lower for mGAM-TS. After inoculation, the bacteria showed logarithmic growth between 24-48 h and then reached a stationary phase. We decided to use a 48-h culture (late logarithmic phase) for the other experiments in this study. The use of complicated media hampers the study of T. denticola. Therefore, mGAM-TS, which is based on a commercially available medium and is easily prepared, offers an advantage to the study of this organism. Another strain of T.
denticola, ATCC 33520, showed similar growth (data not shown), and mGAM-TS was also useful as a selection medium for transformants (described below). This indicates the versatility of this medium. Analysis of Major Membrane-Associated Proteins in T. denticola We analyzed the major proteins of the envelope (membrane) fraction in T. denticola ATCC 35405. Ten micrograms of the fraction was subjected to SDS-PAGE, then stained with CBB. The major proteins (i.e., intense bands containing more than 100 ng of protein or 1% of total protein) were analyzed by mass spectrometry, and 16 proteins were identified (Fig. 3 and Table 2). Flagellar proteins (#10, 11, 12 and 14) were detected in the envelope fraction although the flagella reside in the periplasmic space of the bacteria. In addition, cytoplasmic filament protein A (#3) was detected although it exists in the cytosol as a cytoskeletal protein [37]. However, it is reasonable that these were fractionated into the envelope fraction because they anchor to the inner membrane [37]. Indeed, the flagella of this bacterium have been reported to fractionate into the envelope fraction [21]. Among the uncharacterized proteins, including bands #7, 13, and 15, we chose to characterize the most intense band (#7, TDE2508) in this study. TDE2508 consists of 455 amino acids, and its molecular weight was calculated to be 50708.47 Da. A BLAST search showed that TDE2508 exists in other T. denticola strains including ATCC 33520, MYR-T, H1-T, AL-2, ASLM, US-Trep, F0402, and H-22, and there are also homologous proteins in T. socranskii and T. azotonutricium, but not in T. pallidum. SignalP (http://www.cbs.dtu.dk/services/SignalP/) predicted that the 22 N-terminal amino acids represent a signal peptide. The predicted signal sequence was consistent with the characteristics of the well-characterized signal peptides of T. denticola and other spirochaetes [38]. These findings suggest that TDE2508 is localized in the membrane or periplasm. Although we attempted N-terminal amino acid sequence analysis, the sequence could not be determined, likely because of N-terminal modification of the protein. Other online programs for subcellular localization prediction, such as SOSUI (http://bp.nuap.nagoya-u.ac.jp/sosui/) and PSORT (http://www.psort.org/), also predicted TDE2508 to be a membrane protein. Phyre, a protein structure prediction program (http://www.sbg.bio.ic.ac.uk/~phyre/), predicted it to be a porin, a class of proteins that generally exist as major outer membrane proteins in Gram-negative bacteria and function as pores permeable to substances such as nutrients and antibiotics [39]. Subcellular Localization and Complex Formation of TDE2508 We examined the subcellular localization of TDE2508. TDE2508 was detected in the envelope fraction, but not in the soluble fraction (Figs. 4A and B). The envelope fraction was further fractionated by differential solubilization in Triton X-100. Fig. 4C indicated that Msp was substantially dissolved in 1% Triton X-100, suggesting that the outer membrane of T. denticola was solubilized in 1% Triton X-100, because Msp is localized in the outer membrane [10,13]. However, TDE2508 was not sufficiently solubilized even in 4% Triton X-100, and was eventually solubilized in 8% Triton X-100 (Fig. 4D). Although these results seem to indicate that TDE2508 is firmly localized in the outer membrane, the reason for the lower solubility of TDE2508 is still unclear. Furthermore, we found TDE2508 in the surface layer fraction extracted from intact cells of T.
denticola by treatment with 0.1% Triton X-100. TEM observation showed that the extraction caused loss of the outermost surface layer (i.e., the outer membrane), whereas the cytoplasmic membrane (or cell body) remained (Fig. S1 in supporting information), indicating that only the surface layer was extracted. TDE2508 was detected in the surface layer fraction in SDS-PAGE and Western blot analyses (Fig. S2 in supporting information). This result supports that TDE2508 is primarily localized in the outer membrane. Next, we examined whether TDE2508 was exposed on the cell surface. The slide agglutination assay showed that anti-TDE2508 antiserum did not cause agglutination of intact T. denticola cells, while anti-T. denticola whole cell antiserum did (positive control). Furthermore, anti-TDE2508 antiserum did not label the intact cells in the immunofluorescence assay, while anti-T. denticola whole cell antiserum developed an intense signal (data not shown). We also tried to label the cell surface with biotin, but the band corresponding to TDE2508 was not biotinylated (data not shown). Thus, we did not obtain any evidence that TDE2508 was exposed on the cell surface. Further studies are required to define the cellular localization by other methods such as morphological analysis. We next examined TDE2508 complex formation. As the denaturation temperature decreased, the higher molecular weight bands of TDE2508 became more intense in Western blot analysis (Fig. 5A). The 2-ME present in the sample buffer did not affect the denaturation, and TDE2508 does not contain any cysteine residues. Treatment with a chemical cross-linker increased the higher molecular weight bands of TDE2508 in a dose-dependent manner (Fig. 5B). These results indicate that TDE2508 forms a complex, but it is unclear whether the complex consists of a homo- or hetero-multimer. Construction of a tde2508-deletion Mutant We first tried to generate a tde2508 gene-deletion mutant according to a method reported in a review by Kuramitsu et al. [40]. Briefly, competent cells of T. denticola were prepared from liquid culture media such as mGAM-TS as well as TYGVS, and then DNA constructs were introduced as described in the review. We used several genetic markers of antibiotic resistance, including the erythromycin-resistance genes ermF-ermB [40] and ermB [27], a modified gentamicin-resistance gene (aacCm) [41], a kanamycin-resistance gene (kanA) [42] and a chloramphenicol-resistance gene (cat) [42], which have been used for the selection of genetic mutants of T. denticola or other relevant bacteria. (Table 2: Major proteins of the envelope fraction, including the inner and outer membranes, of T. denticola; the columns give band number, gene ID, and description.) However, we could not obtain any transformants using this method. We have previously reported the successful transformation of Tannerella forsythia, an oral pathogenic bacterium, by electroporation [33]. In that study, although we could not obtain a genetic mutant by a standard protocol, we achieved the construction of a mutant when we prepared electrocompetent cells from bacterial cells grown on an agar medium. We think that collecting bacterial cells from the surface of a solid medium could reduce unfavorable ingredients and facilitate transformation. Therefore, we used the same protocol to prepare competent cells from bacterial cells developed on the surface of a solid medium in the present study. T. denticola grew on the surface of a solid medium containing 2.5% agar.
When we used these competent cells and a DNA construct in which tde2508 was replaced with the ermB gene, colony spots of potential transformants appeared. Successful gene replacement was confirmed by PCR in all of the clones tested (data not shown). We also confirmed the abolition of the TDE2508 protein in one of the transformants, as shown in Fig. 6; the band corresponding to TDE2508 disappeared in the tde2508-deletion mutant in SDS-PAGE (Fig. 6A) and Western blot analyses (Fig. 6B). Additionally, the high molecular weight bands were seen only in the wild type when the samples were denatured without heating, demonstrating that these bands were derived from TDE2508 and were not artifacts. Additionally, the mutant did not show down-regulation of the transcription of tde2509, which is a downstream gene of, and is co-transcribed with, tde2508 (data not shown). (Fig. 4 legend: the envelope fraction was further fractionated by differential solubilization in 0, 0.25, 0.5, 1, 2, 4, and 8% Triton X-100, and the soluble fractions (lanes 1-7, respectively) and insoluble fractions (lanes 8-14, respectively) were subjected to Western blot analysis with anti-T. denticola whole cell antiserum (C) and anti-TDE2508 antiserum (D). The grey, white and black arrowheads denote Msp, TmpC and TDE2508, respectively. M denotes a standard marker. doi:10.1371/journal.pone.0089051.g004) We have constructed other mutants using this same method (unpublished data), indicating that this method is widely useful. General Characterization of the TDE2508-deficient Mutant We examined the growth of the wild type and mutant of T. denticola, and observed that the mutant did not show any growth retardation in mGAM-TS (Fig. S3 in supporting information). The mutant also showed similar growth under high osmotic pressures, although the addition of NaCl and KCl decreased the growth of both strains (Fig. S3). TEM observation showed no obvious differences between the strains (Fig. S1A and B in supporting information); however, T. denticola produced a large number of vesicles (Fig. S1A and B and reference [43]), and therefore it is difficult to comment on the integrity of the outer membrane. Furthermore, we did not find an obvious difference in protein patterns between the strains (Fig. S4A). Since TDE2508 was predicted to be a porin as described above, and since porin deficiency sometimes influences antibiotic resistance [44], we examined the MIC of several classes of antibiotics. However, the wild type and mutant showed the same MICs, as follows: penicillin G, 0.013 μg/mL; ampicillin, 0.025 μg/mL; gentamicin, 0.78 μg/mL; kanamycin, 12.5 μg/mL; chloramphenicol, 25 μg/mL; tetracycline, 0.39 μg/mL; metronidazole, 10 μg/mL; vancomycin, 3.13 μg/mL. Although anaerobic bacteria are generally resistant to aminoglycoside antibiotics such as gentamicin and kanamycin, these MICs of T. denticola were considerably low. We also examined motility, which is one of the characteristic features of T. denticola. When the bacterial cultures were placed on the semisolid media containing 0.3% and 0.5% agar, the bacteria grew by diffusely penetrating into the media. The mutant showed a diffusion rate similar to that of the wild type, indicating that TDE2508 did not influence motility (Fig. S5 in supporting information). Adhesive Activity and Cell Surface Properties Although we did not obtain any evidence to indicate that TDE2508 is exposed on the cell surface, the protein likely localizes in the outer membrane. Therefore, we thought it might influence cell surface properties, including adhesive activity; the multiplicity-of-infection values used in the adherence experiments below are recapitulated in the short sketch after this paragraph.
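For orientation, the MOI values of 400 and 4,000 quoted for the adherence experiments follow directly from the cell numbers given in the Methods (roughly 5 × 10^5 epithelial cells per well and bacterial suspensions of 5 × 10^8 or 5 × 10^9 cells/mL, with 0.4 mL added per well). A minimal sketch of that arithmetic in Python; the variable names are illustrative and not taken from the paper:

```python
# Numbers quoted in the Methods: epithelial cells per well and bacterial suspensions.
epithelial_cells_per_well = 5e5          # Ca9-22 cells seeded to near confluence
bacteria_per_ml = [5e8, 5e9]             # suspensions corresponding to OD600 0.1 and 1.0
inoculum_volume_ml = 0.4                 # volume of bacterial suspension added per well

for density in bacteria_per_ml:
    bacteria_added = density * inoculum_volume_ml
    moi = bacteria_added / epithelial_cells_per_well
    print(f"{density:.0e} cells/mL -> MOI {moi:.0f}")
# Prints MOI 400 and MOI 4000, matching the values reported in the text.
```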
We first examined biofilm formation on a polystyrene plate. Surprisingly, the gene deletion remarkably increased biofilm formation (Fig. 7A). SEM observation confirmed that the mutant formed significantly more developed bacterial aggregates (Fig. 7B). We next examined the adherence to human gingival epithelial cells. The mutant also showed significantly higher numbers of adherent bacteria at both low (400) and high (4,000) MOI (Fig. 8A and B). In order to investigate the host ligands, we examined the bacterial adherence to human collagen type I and fibronectin, which are expressed on the epithelial cells and are often reported to interact with bacterial adhesins. It has been reported that T. denticola binds to fibronectin through Msp [7] and other molecules [45], and to collagens through Msp [46,47] and other proteins [48]. However, in our hands, no substantial adherence was observed for either the wild type or the mutant, and we could not compare them with a reliable model (data not shown). We next examined the aggregation activity and hydrophobicity of the cells. Both the wild type and the mutant showed only slight autoaggregation over 6 h (Fig. S6 in supporting information). It has been reported that T. denticola co-aggregates with P. gingivalis [16,49]. We confirmed the coaggregation with P. gingivalis, but no significant difference between the strains was observed (Fig. S6). T. denticola induces the hemagglutination of red blood cells in humans and other animals [50]. In this study, T. denticola indeed showed hemagglutination of human, rabbit, and chicken red blood cells, but there was no difference between the strains (data not shown). Cell surface hydrophobicity was also similar between the strains: wild type, 45.7 ± 3.2%; tde2508-deletion mutant, 46.4 ± 3.9%. We also examined the expression and subcellular localization of Msp, which is reported to be an adhesion factor of the bacteria [7,8]. However, the expression and localization of Msp were similar between the strains (Fig. S4). Conclusions We developed a simple medium for the cultivation of T. denticola, and improved a method for the construction of a genetic mutant. Our findings suggest that the preparation of competent cells is a critical factor in the genetic manipulation of T. denticola. We also showed that a major membrane protein, TDE2508, formed a complex and regulated biofilm formation and adhesion to epithelia, although the underlying mechanism needs to be clarified. Given that TDE2508 was not likely exposed on the cell surface, it may be involved in processing or modifying other adhesion factors. Figure S1 Transmission electron micrographs of T. denticola ATCC 35405 (Wild, A) and the tde2508-deletion mutant (Δ2508, B). (C) T. denticola cells treated with 0.1% Triton X-100, showing the disappearance of the surface layer. The bacterial cells were negatively stained with 1% ammonium molybdate, pH 7.0. Bars indicate 100 nm. (TIF) Figure S2 Detection of TDE2508 in the surface layer extract. The cell surface layer was extracted from intact cells of T. denticola ATCC 35405 by suspension in 0.1% Triton X-100. The extract was subjected to SDS-PAGE with CBB staining (A) and Western blot analysis with anti-TDE2508 antiserum (B). Lanes 1 and 2 in panel B are the whole cell lysate and the surface layer extract, respectively. The single black arrowheads denote the monomer form of TDE2508. (TIF) Figure S3 Growth curves of T. denticola ATCC 35405 (Wild, A and C) and the tde2508-deletion mutant (Δ2508, B and D).
The mGAM-TS medium was supplemented with NaCl (A and B) and KCl (C and D) at 0-300 mM. The strains were anaerobically incubated at 37°C and the optical density (OD) at 600 nm was monitored. The figures are representative of two independent experiments. (TIF) Figure S4 Msp expression. Whole cell lysates of T. denticola ATCC 35405 (Wild) and the tde2508-deletion mutant (Δ2508) were fractionated into soluble and envelope fractions. The envelope fractions were further fractionated by differential solubilization in 1% Triton X-100 into soluble and insoluble fractions. The samples were denatured by heating at 100°C for 10 min and subjected to SDS-PAGE with CBB staining (A) and Western blot analysis with anti-T. denticola whole cell antiserum (B). The odd and even lanes denote Wild and Δ2508, respectively. Lanes 1-2, 3-4, and 5-6 are the whole cell lysate, soluble, and envelope fractions, respectively. Lanes 7-8 and 9-10 are soluble and insoluble fractions in 1% Triton X-100, respectively. The black, grey and white arrowheads denote TDE2508, Msp and TmpC, respectively. M denotes a standard marker. (TIF) Figure S5 Motility test. T. denticola ATCC 35405 (Wild) and the tde2508-deletion mutant (Δ2508) were seeded on mGAM-TS agar plates which were solidified with 0.3% (left) and 0.5% (right) agar. The plates were anaerobically incubated at 37°C, and the turbid plaque was monitored for 2 weeks as an index of bacterial motility. The two strains showed almost the same motility at either concentration of agar over the entire period. Images of the plates at 7 (upper) and 14 (lower) days after the incubation are presented. The numbers on the rulers indicate centimeters. (TIF) Figure S6 Aggregation assay. For the autoaggregation test, T. denticola ATCC 35405 (open circles) and the tde2508-deletion mutant (open triangles) were set in a cuvette, and the optical density at 600 nm (OD600) was monitored. For the coaggregation test, T. denticola ATCC 35405 (closed circles) and the tde2508-deletion mutant (closed triangles) were mixed with P. gingivalis, and the OD600 was monitored. The open squares show autoaggregation of P. gingivalis. (TIF)
8,666
2014-02-21T00:00:00.000
[ "Biology", "Medicine" ]
Ringworm in calves: risk factors, improved molecular diagnosis, and therapeutic efficacy of an Aloe vera gel extract Background Dermatophytosis in calves is a major public and veterinary health concern worldwide because of its zoonotic potential and associated economic losses in cattle farms. However, this condition has lacked adequate attention; thus, to develop effective control measures, we determined ringworm prevalence, risk factors, and the direct-sample nested PCR diagnostic indices compared with the conventional methods of dermatophytes identification. Moreover, the phenolic composition of an Aloe vera gel extract (AGE) and its in vitro and in vivo antidermatophytic activity were evaluated and compared with those of antifungal drugs. Results Of the 760 calves examined, 55.79% (424/760) showed ringworm lesions; 84.91% (360/424) were positive for fungal elements in direct-microscopy, and 79.72% (338/424) were positive in culture. Trichophyton verrucosum was the most frequently identified dermatophyte (90.24%). The risk of dermatophytosis was higher in 4–6-month-old vs. 1-month-old calves (60% vs. 41%), and in summer and winter compared with spring and autumn seasons (66 and 54% vs. 48%). Poor hygienic conditions, intensive breeding systems, animal raising for meat production, parasitic infestation, crossbreeding, and newly purchased animals were statistically significant risk factors for dermatophytosis. One-step PCR targeting the conserved regions of the 18S and 28S genes achieved unequivocal identification of T. verrucosum and T. mentagrophytes in hair samples. Nested-PCR exhibited an excellent performance in all tested diagnostic indices and increased the species-specific detection of dermatophytes by 20% compared with culture. Terbinafine and miconazole were the most active antifungal agents for dermatophytes. Gallic acid, caffeic acid, chlorogenic acid, cinnamic acid, aloe-Emodin, quercetin, and rutin were the major phenolic compounds of AGE, as assessed using high-performance liquid chromatography (HPLC). These compounds increased and synergized the antidermatophytic activity of AGE. The treated groups showed significantly lower clinical scores vs. the control group (P < 0.05). The calves were successfully treated with topical AGE (500 ppm), resulting in clinical and mycological cure within 14–28 days of the experiment; however, the recovery was achieved earlier in the topical miconazole 2% and AGE plus oral terbinafine groups. Conclusions The nested PCR assay provided a rapid diagnostic tool for dermatophytosis and complemented the conventional methods for initiating targeted treatments for ringworm in calves. The recognized antidermatophytic potential of AGE is an advantageous addition to the therapeutic outcomes of commercial drugs. (Continued from previous page) Conclusions: The nested PCR assay provided a rapid diagnostic tool for dermatophytosis and complemented the conventional methods for initiating targeted treatments for ringworm in calves. The recognized antidermatophytic potential of AGE is an advantageous addition to the therapeutic outcomes of commercial drugs. Keywords: Calves dermatophytosis, Risk factors, Direct-sample nested PCR, Antifungal drugs, Aloe vera gel extract, Treatment Background Fungal infections associated with zoonotic transmission are an important public health problem worldwide [1]. 
Cattle dermatophytosis is a major public and veterinary health concern, not only because of its high zoonotic impact, but also because of economic losses in cattle farms attributed to hide damage, loss of weight, decimated meat and milk, contagiousness among animals, treatment costs, and difficulty to implement control measures [2,3]. Ringworm is usually enzootic in cattle herds and is more prevalent in calves [2]. This may be explained by stressors such as rapid growth, weaning, or parasite burden (which weaken their immunity and skin health), as well as close confinement, dietary factors (deficiencies), and production systems [4]. Importantly, Trichophyton verrucosum is the predominant zoophilic dermatophyte causative species of dermatophytosis in cattle and can occasionally spread to humans through direct contact with cattle or infected fomites, causing highly inflammatory skin and hair dermatophytoses [4][5][6]. Therefore, the development of a precise laboratory test for the identification of dermatophyte species is pivotal for the prevention and effective control of dermatophytoses [2]. In this context, research articles that addressed the prevalence, risk factors, and treatment of calves' ringworm in Egypt are scarce. Furthermore, literature on the direct molecular diagnostic assays that are used for the detection and identification of dermatophytes in animal clinical samples is lacking [7,8], and there is a need to surpass the time-consuming conventional methods based on microscopy and fungal cultures, which require weeks [8]. The nested polymerase chain reaction (PCR) technique is an effective practical diagnostic approach for dermatophytosis that has helped clinicians initiate rapid and targeted, as opposed to empirical, treatments of animal ringworm [7]. Dermatophytosis in animals remains difficult to eradicate because of antifungal resistance, the scarcity of accessible and authorized antifungal agents for use in veterinary practice, the restricted systemic treatment of livestock because of hepatotoxicity, and drug residues in products consumed by humans [2,9]. Thus, the discovery of natural, less-toxic, and more-specific therapeutic alternatives is gaining ground. However, the antidermatophytic potential of natural products is plagued by a lack of in vivo studies affirming the antifungal activity of bioactive compounds discovered using in vitro studies [9]. Aloe vera is a plant of the Liliaceae family that has multiple applications, including antifungal, antibacterial, antioxidant, and antiseptic properties and use in cosmetics industries [10]. Nevertheless, the investigations of the in vitro and in vivo antidermatophytic potential of Aloe vera and the determination of its bioactive compounds remain modest. Hence, this work was designed to investigate (i) the prevalence and risk factors of calves' ringworm in Egypt, (ii) the diagnostic indices of direct nested PCR for the detection and identification of dermatophyte species on hair and scale samples compared with those of the conventional microscopic and culture methods, (iii) the biological activity and phenolic composition of an Aloe Vera gel extract (AGE), (iv) the antifungal activity of AGE in comparison to the antifungal drugs, and (v) the application of AGE for the treatment of calves' ringworm. Prevalence of dermatophytosis among clinically examined calves On clinical examination, 55.79% of calves (424/760) showed grayish-white, crusty, circular, and circumscribed discrete lesions (Additional Fig. 
1A); moreover, alopecic, erythematous areas that remained after the removal of raised greasy crusts were observed occasionally (Additional Fig. 1B). The skin lesions were mostly found on the head and neck (46.69%) or all over the body (44.81%). Some cases (8.49%) also had lesions on the head, neck, and trunk. The degree of infection varied from moderate (55.18%) to severe (44.81%). Potential risk factors for calves' ringworm As revealed in Table 1, there was a highly significant (P < 0.001) association between ringworm infection and the investigated risk factors. Age had a significant effect on the likelihood of ringworm infection: the odds of infection were 2.201 times higher in 4-6-month-old animals than in younger calves, with a relative risk ratio of 1.481. Meanwhile, crossbred animals were more likely (5.558 times higher) to be infected than purebred ones, with a relative risk ratio of 1.724. The highest risk of calves' dermatophytosis was observed in summer and winter compared with the spring and autumn seasons (65.5 and 54% vs. 48%). During the winter, spring, and autumn seasons, the animals were less likely to be infected than in the summer season, with risk ratios of 0.824, 0.732 and 0.726, respectively. The risk of infection under an intensive breeding system, for newly purchased animals introduced to the farm, and under conditions of parasitic infestation was higher (2.971, 5.497, and 1.720 times, respectively) than under a semi-intensive breeding system, for animals born at the farm, and in the absence of parasitic infestation, with relative risks of 1.486, 1.713, and 1.245, respectively. Bad ventilation resulted in a significant increase (6.559 times) in the likelihood of ringworm. Infection was .896 times more likely in animals reared for meat production compared with animals reared for milk production, with a risk ratio of 1.88. Also, regular use of disinfectants significantly decreased the likelihood of ringworm infection (0.209 times) compared with an irregular regimen, with a risk ratio of 0.589. The random forest classification model and box plot (Fig. 1a and b) confirmed this observation, i.e. the age of calves was the most important risk factor, followed by the production system and the presence of parasitic infestation, with irregular use of disinfectants as the fourth most important risk factor. Nested PCR for the detection and identification of dermatophytes in clinical samples Pan-dermatophyte, one-step and nested PCR methods were evaluated in the context of dermatophyte identification in 75 samples that were direct-microscopy and culture-positive, 36 samples that were positive by microscopy alone, nine samples positively diagnosed by culture alone, and 30 negative samples. The pan-dermatophyte PCR specifically detected dermatophyte DNA in 58% of the 150 samples; one-step PCR did so in 62%; and nested PCR was positive in 72%, with 440-bp pchs-1 amplicons, ∼900-bp ITS+ amplicons, and 400-bp ITS-1 amplicons, respectively (Table 2; Fig. 2a, b, and c). Nested PCR increased the species-specific detection of dermatophytes by 20 and 10% compared with culture alone or the combination of culture and direct microscopy, respectively. Fungal culture identified dermatophytes in 56% (84/150) of samples, whereas direct microscopy identified dermatophytes in 74% (111/150). Out of the 66 samples that were negative for dermatophytes in culture, non-dermatophyte molds were cultured from 21 samples that were test-positive only by one-step PCR (how counts of this kind translate into the diagnostic indices reported below is outlined in the short sketch that follows).
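To make the diagnostic indices reported below concrete, here is a minimal sketch of how sensitivity, specificity, predictive values and the diagnostic odds ratio are obtained from a 2 × 2 comparison of an index test against a reference standard. The counts used are purely illustrative placeholders, not the study's actual cross-tabulation, which is given in its Table 3:

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Standard 2 x 2 diagnostic indices for an index test vs. a reference standard."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    dor = (tp * tn) / (fp * fn)             # diagnostic odds ratio
    return sensitivity, specificity, ppv, npv, dor

# Hypothetical counts for illustration only.
print(diagnostic_indices(tp=70, fp=6, fn=14, tn=60))
```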
In addition, non-dermatophyte molds were co-cultured with dermatophytes from six samples that were negative in the pan-dermatophyte, one-step, and nested PCRs. As depicted in Table 3, the performance of the nested-PCR assay was excellent for all diagnostic indices tested. Using fungal culture as the reference standard, sensitivities of 82.14 and 71.43% and specificities of 72.73 and 50% were recorded for pan-dermatophyte and one-step PCR, respectively, whereas the corresponding values were 92.86 and 54.55% for nested PCR. (Fig. 1 legend: (a) Random forest classification showing the most important risk factors (y-axis) as classifiers differentiating between diseased and non-diseased calves at clinical examination. The x-axis refers to the predictive accuracy of the studied risk factors. The mini heatmap shows the frequency distribution of each factor across the two outcomes (ringworm lesion and without lesion), and each dot refers to the mean decrease in accuracy for one risk factor. (b) Box plot of the distribution of age (as a continuous variable) across the examined calves (n = 760); each dot represents one case and the horizontal line refers to the median age.) In contrast, using the combination of culture and nested PCR as the gold standard, nested PCR was superior to the other methods as it achieved a sensitivity of 94.74%, whereas culture and direct microscopy exhibited sensitivities of 73.68 and 78.95%, respectively. Specificity and PPVs were 100% for nested PCR and culture, which is why this combination was considered the gold standard, while the corresponding values were 41.67 and 81.08% for direct microscopy. Nested PCR was very accurate (AUC = 96%), whereas pan-dermatophyte PCR (82%) and culture (80%) were moderately accurate. A lower diagnostic accuracy was recorded for direct microscopy and one-step PCR (50 < AUC ≤ 70%). The diagnostic odds ratio (DOR) of nested PCR was much higher than that of any other test, which implies that the diagnostic performance of nested PCR was the best, and its results were in strong agreement with those obtained using culture combined with nested PCR (Kappa value = 0.91 and P < 0.001). Confirmatory DNA sequencing of representative ITS+ and pchs-1 amplicons was performed, and the BLAST search of the resulting sequences produced hits that corresponded to T. verrucosum and T. mentagrophytes sequences available in GenBank. Susceptibility of dermatophytes to antifungal drugs The minimum inhibitory concentration (MIC) values of the five antifungal drugs for T. verrucosum and T. mentagrophytes are presented in Additional Table 1. The comparison of the values of the five antifungals for the two species tested revealed that those obtained for terbinafine were the lowest, followed by miconazole (MIC ranges, 0.03-0.25 and 0.03-1 μg/mL; 0.06-0.5 and 0.03-0.5 μg/mL, respectively). Moreover, the MIC50 and MIC90 values of terbinafine and miconazole were the lowest when compared with those of the other antifungals. The mean MIC values ± SD of the tested antifungal agents did not differ between T. verrucosum and T. mentagrophytes (P > 0.05). Fluconazole was the least effective drug, with an overall MIC range of 8-64 μg/mL. As shown in Table 4, the AGE yield was 1.02 g of extract per 100 g. The amount of total phenolic compounds in AGE was 111.78 mg of gallic acid equivalent (GAE) per g of gel, and the flavonoid content of the extract was 45.6 mg of quercetin equivalent (QE) per g of gel (these equivalents follow from the calibration curves given in the Methods, as illustrated in the short sketch after this passage).
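For context, GAE and QE values of this kind are read off the linear calibration curves reported in the Methods (y = 0.0228x + 0.0086 for gallic acid and y = 0.0142x − 0.007 for quercetin, where y is the absorbance and x the concentration), simply by inverting the line. A minimal Python sketch, with a purely illustrative absorbance value:

```python
def concentration_from_absorbance(absorbance, slope, intercept):
    """Invert a linear calibration curve y = slope * x + intercept to recover x."""
    return (absorbance - intercept) / slope

# Calibration coefficients quoted in the Methods section.
gallic_acid = dict(slope=0.0228, intercept=0.0086)   # total phenolics, ug GAE
quercetin = dict(slope=0.0142, intercept=-0.007)     # total flavonoids, ug QE

# Hypothetical absorbance reading, for illustration only.
a = 0.75
print(concentration_from_absorbance(a, **gallic_acid))  # ~32.5 ug GAE
print(concentration_from_absorbance(a, **quercetin))    # ~53.3 ug QE
```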
Flavonoids have a broad range of chemical and biological activities, including radical-scavenging properties. For this reason, the extract was analyzed for total phenolic and flavonoid content. The major phenolic compounds of AGE were identified by HPLC and are presented in Table 4. They included gallic acid, caffeic acid, chlorogenic acid, cinnamic acid, aloe-Emodin, quercetin, and rutin. All compounds increased and synergized the antidermatophytic activity of AGE. The 1,1-diphenyl-2-picrylhydrazyl (DPPH˙) radical-scavenging (antioxidant) activity of AGE was also analyzed. (Figure legend fragment: representative pchs-1 (a) and ITS+ (b) amplicons were sequenced to confirm the dermatophyte identification results; GenBank accession numbers of the nucleotide sequences are given in the Methods section.) Effectiveness of AGE in the eradication of T. verrucosum from calves In the treated calves, gradual improvement of the lesions was observed within 7-12 days post-treatment. Complete clinical recovery (full hair growth) was observed within 14-19 days after treatment for calves in G2 and G4 and within 21-28 days for animals in G1 and G3 (Additional Fig. 2). In contrast, the lesions detected on the control animals (G5) progressed and had not healed by day 42 of the study. Direct microscopic examination and fungal cultures yielded negative results within the 4th week of treatment. In contrast, samples from the untreated control calves repeatedly yielded positive mycological results during the investigation period. As revealed in Fig. 4, the clinical scores of the five animal groups did not differ significantly on days 0 and 7, while the treated groups showed significantly lower clinical scores than did the control (untreated) group (G5) on days 14, 21, 28, and 42 (P < 0.05). Neither recurrence nor gross side effects were observed throughout the study period or during the clinical follow-up. Discussion The enzootic character of animal dermatophytosis is the outcome of the confinement of animals during breeding and the viability of arthrospores in the environment for many months [2]. Prevention is difficult, but periodic surveys of the prevalence and risk factors of cattle ringworm may permit the adoption of increasingly effective prophylactic and control measures to prevent infection of both other animals and humans [2,11,12]. In this study, the prevalence rate of ringworm in calves aged 1-6 months was 55.79%, which was nearly identical to that reported in Iran (57.5%) [5]. In contrast, the prevalence rate was higher than the 1.6% documented in Pakistan [12], but lower than the 87.7% reported in the Tuscany region [4] and the 71.7% observed in nearby Umbria, in Italy [11]. This discrepancy among countries is perhaps attributable to cattle breed, production, breeding system, origin of the cattle on the farm, and climatic conditions [11]. To determine which potential risk factor is most important and would best differentiate between infected and non-infected clinically examined calves, we relied on the random forest classification model, which is well suited to this task (Fig. 1a), as demonstrated previously [13]. In accordance with other studies [4,11], the random forest classification and box plot model indicated that age was the most important risk factor, as the risk of infection was higher in calves aged 4-6 months than it was in younger suckling calves (60% vs. 41%), which could be attributed to the stressors of weaning and rapid growth (the approximate correspondence between these prevalences and the reported odds and risk ratios is shown in the short sketch after this paragraph).
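As a rough consistency check, the odds ratio (~2.2) and relative risk (~1.48) reported for the age factor follow almost directly from the quoted prevalences of about 60% in 4-6-month-old calves and 41% in younger calves; small differences arise because the exact group sizes are not used here. A minimal Python sketch, treating the prevalences as proportions rather than the study's raw counts:

```python
def odds(p):
    """Convert a prevalence (proportion) to odds."""
    return p / (1.0 - p)

p_older, p_younger = 0.60, 0.41   # approximate prevalences for 4-6-month-old vs. younger calves

relative_risk = p_older / p_younger           # ~1.46, close to the reported 1.481
odds_ratio = odds(p_older) / odds(p_younger)  # ~2.16, close to the reported 2.201
print(round(relative_risk, 2), round(odds_ratio, 2))
```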
Furthermore, we found a highly significant correlation between several risk factors found in the examined calf population and ringworm infection, mainly season, bad ventilation, overcrowding, and irregular use of disinfectants. This reinforces the broadly accepted concept that high humidity, close contact between calves, poor hygienic conditions in stables play a significant role in the increase in ringworm prevalence [4,5,11,12]. Hence, repeated topical treatment of all infected animals, together with good ventilation and thorough disinfection of stables, halters, fences, cleaning tools, and all of the materials that come into contact with the animals are the basis for the effective control of cattle ringworm [11,14]. Of interest, there was a highly significant (P < 0.001) association between the risk of dermatophytosis and the new introduction of animals to the farm. In support of this finding, Papini et al. [4] debated that calves that are newly introduced into a herd spread the infection to both calves and humans, as they are carriers of dermatophytes before the development of clinical signs. As described previously [2,5,11,15], the detected clinical signs of cattle dermatophytosis were crusty lesions on the head and neck regions and other parts of the body. However, a study in Tanzania [16] reported occasion detection of the widespread lesions of alopecia and erythema which were also observed here. The detection rates were 84.91% by direct microscopy and 79.72% by fungal culture. In this context, inadequate scraping of the lesions and the slow and poor growth of T. verrucosum which hampered its detection, are probable explanations for the falsenegative results of direct microscopy and culture, respectively [4]. According to previous studies [11,15], T. verrucosum is the main dermatophyte causing cattle ringworm, although T. mentagrophytes which is usually associated with the presence of small rodents in farm has also been isolated in this context. The present findings showed that calves' ringworm was caused by T. verrucosum in 90.24% and T. mentagrophytes in 9.76% of cases. Nevertheless, Aghamirian and Ghiasian [5] isolated T. verrucosum exclusively from 352 infected cows in Iran. To date, molecular assays have been used for the detection of dermatophytes in clinical samples, as well as confirmation tests of the results of culture [17,18]. Wollina and coauthors [17] have failed to cultivate T. verrucosum from the sample of a patient with severe tinea barbae. However, real-time PCR and ITS2 sequencing successfully detected T. verrucosum. No previous studies have attempted to identify dermatophyte species from hair samples of calves using a nested PCR assay. The findings obtained here revealed that one-step PCR correctly identified T. verrucosum and T. mentagrophytes in samples that were culture-positive (n = 72/150), with an amplicon size of 900 bp and 872 bp, respectively. In addition, nested PCR amplified ITS+ of both species in 108 samples and produced ITS-1 amplicons of 400 bp. This was in agreement with another study [7] that showed that the one step-PCR accurately identified Microsporum canis in hair samples from canines and felines with a band appearing at 922 bp, whereas ITS+ amplicons of 851-872 bp were obtained for T. mentagrophytes, T. terrestre, or M. gypseum; moreover, nested PCR achieved unequivocal identification of these species. 
The highly sensitive nested PCR method also exhibited a high specificity and PPV for detecting additional dermatophyte-positive samples that were missed by culture (n = 30/150) or by both microscopy and culture (n = 15). Other studies highlighted the incorporation of direct PCR in the laboratory diagnosis of onychomycosis for increasing the detection of dermatophyte-positive samples that are culture negative [19,20]. A possible explanation for the low specificity and accuracy obtained using one-step PCR compared with pan-dermatophyte and nested PCR is the use of the universal fungal regions of rDNA. Moreover, the low sensitivity of the culture method could be attributed to the overgrowth of nondermatophyte molds in the culture or the dermatophytes cultures were not yet positive after 4 weeks of incubation [4,7]. Other reasons are the presence of non-viable fungal material in specimens from treated calves or that the DNA extraction step ease overcoming the impediment of trapping fungus in the keratin [20]. As reported previously [21,22], terbinafine and miconazole were effective antifungal drugs for dermatophytes followed by itraconazole and griseofulvin; in contrast, fluconazole was the least-active antifungal agent. Recently, Pal [23] recommended the performance of further research for the development of cheap, safe, and potent chemotherapeutic agents for cattle dermatophytosis management. AGE is a cheap, easily obtainable, and safe natural product. Moreover, it is a limitless source of bioactive compounds with recognized antifungal activities that correlate with its antioxidant activities [24]. The results of the three assays used for measuring the antioxidant activity indicate that phenolic compounds have a high antioxidant capacity [25] because of their redox properties, which can play a significant role in the absorption and neutralization of free radicals, decomposition of peroxides, and quenching of singlet and reductive heavy metals with two or more valence states [26]. Phenolic compounds are the active antimicrobial constituents of various plants extract. However, the whole extract has a more noteworthy antifungal activity. Accordingly, AGE might be more advantageous than the isolated components, as the properties of a bioactive individual constituent can change its properties in the presence of other compounds [27]. The additive and synergistic effects of phenolic compounds account for their efficient bioactive properties, which explains why no single antimicrobial agent can supplant the combination of these natural components in achieving the antifungal activity [28]. The antidermatophytic activity of AGE recognized here was inconsistent with the findings of a previous report [29], which showed that the water extract of Aloe vera was effective against T. mentagrophytes. However, no reports exist of the activity of AGE against T. verrucosum. Nonetheless, most investigations were performed on fungal isolates, which hampers the extrapolation of the findings to real conditions. Therefore, additional in vivo studies are needed to ensure the reliability of the results [9]. The efficacy of the topical application of AGE for 2 weeks twice daily was compared with topical miconazole 2%, oral terbinafine with topical AGE and once-daily oral terbinafine in proven T. verrucosum infected calves. 
The clinical scores were significantly lower in all treated groups after 14 days of treatment compared with the untreated group (P < 0.05), whereas complete clinical recovery was achieved earlier in the miconazole group and AGE with oral terbinafine group vs. both the oral terbinafine group and AGE alone group. This indicates that the combination of AGE with oral terbinafine is effective for the treatment of dermatophytosis in calves. The results obtained were comparable with the findings of the treatment of calves with dermatophytosis with topical application of propolis and Whitfield's ointment [30], as well as a polyherbal lotion combined with levamisole and griseofulvin [31]. Conclusion This study highlighted the need for good hygienic conditions, regular disinfection of holdings, rapid treatment of infected calves, and examination of the incoming calves to prevent dermatophytic epizoonoses in calves and humans. The implementation of a nested PCR assay provided a rapid diagnostic tool for dermatophytosis and complemented the conventional methods for dermatophytespecies-specific detection for the initiation of targeted treatment, thus reducing the burden of the economic losses caused by ringworm infection. The recognized antidermatophytic potential of AGE is an advantageous addition to commercial drugs and the combination of AGE with oral terbinafine has a potential therapeutic value against ringworm in calves. Population and collection of clinical samples From May 2015 to December 2018, a total of 760 Holstein cow calves (597 weaning and 163 suckling calves) raised in different farms in Egypt, were clinically examined for evidence of ringworm infection. Data about age, breed, farm production, breeding system, production management system, and the origin of calves of the farm were obtained for each calf as potential risk factors. For the assessment of parasitic infestation, fecal samples were examined for enteric parasites and thin blood films were prepared, fixed in absolute methyl alcohol, and stained with freshly filtered and diluted 10% Giemsa stain. After cleaning the skin lesion of the suspected ringwormaffected calf with 70% ethanol, scales and dull hair samples from the margins were collected using a sterilized plastic hair brush and tweezers, respectively [4]. Portions of hair and scales were examined microscopically after clearing with 20% potassium hydroxide (KOH), cultured on Mycobiotic Agar (Remel™, Thermo Fisher Scientific) slants with 10% thiamine and inositol, incubated at 30°C for 4-6 weeks, and observed for growth at 3-day intervals. Dermatophyte isolates were identified according to their macro-and micromorphological characteristics [32]. Extraction of DNA from hair and scale samples and PCR amplification The direct molecular identification of dermatophytes was executed in 150 clinical samples that were selected based on the results of direct microscopy and culture analyses. For the high-throughput disruption of samples, 50 mg of hair and scales were placed in a 2 mL safe-lock tube and incubated overnight at 55°C with 360 μL of ATL buffer and 20 μL of QIAGEN protease (QIAamp DNA Mini kit, Qiagen, Germany, GmbH). Subsequently, tungsten carbide beads were added, and the tubes were placed into the TissueLyser adapter set for disruption using the TissueLyser for 2 min at 20-30 Hz twice. DNA extraction was performed using a QIAamp DNeasy Plant Mini kit (Qiagen, Germany, GmbH) according to the manufacturer's instructions. 
DNA was eluted with 50 μL of elution buffer and the concentration was assessed using a NanoDrop™ 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Nested PCR A nested PCR was applied to amplify 400 bp of a conserved region in the dermatophyte 5.8S gene from the ITS+ amplicons of the primary PCR using the DMTF18SF1 and DMTFITS1R (5′-CCGGAACCAAGAGATCCGTTGTTG-3′) primers [7]. PCR was performed in an amplification reaction containing 12.5 μL of EmeraldAmp Max PCR Master Mix (Takara, Japan), 1 μL of each primer (20 pmol), and 6 μL of DNA template in the case of the primary PCR or 1 μL of diluted product from the primary PCR (diluted 1:1 with molecular-grade water) in the case of the nested PCR; nuclease-free water was added up to 25 μL. T. verrucosum ATCC® 28203™ and an amplification reaction without DNA template were used as the positive and negative controls, respectively. Thermocycling conditions described previously [7] were used in an Applied Biosystems 2720 thermal cycler (Thermo Fisher Scientific, USA). The amplified products were electrophoresed on ethidium-bromide-stained 1.5% agarose gels (AppliChem, Germany, GmbH). A GelPilot 100 bp DNA ladder (Qiagen, GmbH, Germany) and a 100 bp DNA ladder H3 RTU (GeneDireX, Taiwan) were used to determine the amplicon sizes. A gel documentation system (Alpha Innotech, Biometra) was used to photograph the gels, and the data were analyzed using computer software. DNA sequencing and sequence analysis Thirty-seven representative ITS+ and pchs-1 PCR products were purified using the QIAquick PCR Product extraction kit (Qiagen, Valencia) and then sequenced using a BigDye Terminator v3 kit. Antifungal susceptibility testing of dermatophyte isolates The broth microdilution method (according to the CLSI M38-A2 guidelines [33]) was used for testing the sensitivity of the dermatophyte isolates to the most commonly used antifungal drugs. Fluconazole was obtained from Pfizer International (New York, NY, USA), itraconazole and miconazole were obtained from the Janssen Research Foundation (Beerse, Belgium), griseofulvin was purchased from Sigma Chemical Company (St. Louis, MO, USA), and terbinafine was purchased from Novartis (Basel, Switzerland). All drugs were dissolved in dimethyl sulfoxide (DMSO, Sigma-Aldrich), with the exception of fluconazole, which was dissolved in RPMI-1640 medium (Sigma Co., St. Louis, USA) buffered at pH 7.0 with 165 mM 3-(N-morpholino)propanesulfonic acid (MOPS; Sigma), and serially diluted two-fold to final concentrations of 0.125-64 μg/mL for fluconazole and 0.03-16 μg/mL for the other antifungal agents. The MIC values, MIC 50 , and MIC 90 were determined. Preparation of Aloe vera gel extracts (AGEs) Aloe vera leaves were obtained from the Agriculture Faculty, Zagazig University, Zagazig, Egypt. Aloe vera gel was obtained from the leaves by scraping. The aqueous extract of the gel was prepared using a magnetic stirrer (Fisher Scientific) and filtered through Whatman No. 1 filter paper. The extraction ratio was 1:5 (W:V, gel:solvent). The filtrate was freeze-dried (Thermo Electron Corporation, Heto PowerDry LL300 freeze dryer), and the extract was then weighed to determine the yield and stored at −20°C. Chemical characterization of AGE Determination of phenolic compounds The concentration of total phenols in the extract was measured with a UV spectrophotometer (Jenway UV-VIS Spectrophotometer 6705), based on the colorimetric reduction of the reagent by phenolic compounds, as described by Škerget et al. [34]. 
The total phenolic content, expressed as GAE, was calculated from the calibration curve y = 0.0228x + 0.0086 (R² = 0.9969), where x is the concentration (μg GAE) and y is the absorbance. Determination of total flavonoids The total flavonoid content, expressed as QE, in AGE at a final concentration of 1 mg/mL was calculated from the calibration curve y = 0.0142x − 0.007 (R² = 0.9994), where x is the concentration (μg QE) and y is the absorbance [35]. Antioxidant and biological activity of AGE DPPH˙ radical-scavenging activity The electron-donation ability of AGE was measured by bleaching of the purple-colored DPPH˙ solution (Sigma, St. Louis, MO, USA) using a UV spectrophotometer (Jenway UV-VIS Spectrophotometer 6705) [37]. The absorbance was determined against the control at 515 nm [38]. The percentage scavenging activity of the DPPH˙ free radical was calculated as follows: scavenging activity (%) = [(A control − A sample)/A control] × 100, where A control is the absorbance of the control reaction and A sample is the absorbance in the presence of the plant extract. Gallic acid and TBHQ (Sigma, St. Louis, MO, USA) (1 mg/1 mL of methanol) were used as positive controls. Samples were tested in triplicate. β-Carotene/linoleic acid bleaching The ability of AGE and the synthetic antioxidants (gallic acid and TBHQ) to hinder the bleaching of β-carotene (Sigma, St. Louis, MO, USA) was examined according to Dastmalchi et al. [39]. A control sample with no added extract was also analyzed. Antioxidant activity was calculated as follows: antioxidant activity (%) = [1 − (A 0 sample − A 120 sample)/(A 0 control − A 120 control)] × 100, where A 0 sample is the absorbance of the AGE or synthetic antioxidant at time 0, A 120 sample is the absorbance after 120 min, and A 0 control and A 120 control are the absorbances of the control at time 0 and after 120 min, respectively. Ferric reducing antioxidant power The reducing power of the extract was assessed [38]. Distilled water was used as a negative control, and gallic acid and TBHQ were used as positive controls. The absorbance of the mixture was measured at 700 nm using a UV spectrophotometer (Jenway UV-VIS Spectrophotometer 6705). A decrease in absorbance indicated the ferric reducing power capability of the sample. Testing the antidermatophyte activity of AGE The procedure of Silva et al. [40] was used to test the antidermatophyte activity of AGE. The freeze-dried AGE (3.5 g) was dissolved and serially two-fold diluted in RPMI-1640 broth to obtain a concentration range of 1000-20,000 μg/mL as TPC. Final concentrations of 50-1000 μg/mL were obtained by mixing 2 mL of this solution with 18 mL of liquefied Mycobiotic Agar medium (Remel™, Thermo Fisher Scientific) at 45°C in a sterile Petri dish. Subsequently, wells with a diameter of 3 mm were made in the center of each agar plate and filled with 10 μL of the fungal spore suspension (10⁶ CFU/mL) that was prepared from freshly cultured isolates. The plates were incubated for 5 days at 25°C. The assay was carried out in triplicate, and growth and drug controls were incorporated into the test. The lowest concentration that inhibited fungal growth was considered the MIC. Investigation of AGE effectiveness for the treatment of calf ringworm Seventy-five calves showing evident clinical signs of ringworm were used for the investigation of the effectiveness of AGE in comparison with antifungal drugs for the treatment of this condition, after obtaining informed consent from the farm owners. The enrolled calves were positive on mycological examination, and T. verrucosum was isolated from their clinical samples. 
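The calibration and activity calculations above reduce to a few arithmetic steps. The sketch below is illustrative only (it is not the authors' code): the helper names and example absorbance values are made up, while the calibration coefficients and the scavenging/antioxidant formulas are those stated in this section.

```python
# Illustrative helpers for the assays described above; example inputs are hypothetical.

def total_phenolics_gae(absorbance: float) -> float:
    """Total phenolic content (ug GAE) from y = 0.0228x + 0.0086 (R^2 = 0.9969), solved for x."""
    return (absorbance - 0.0086) / 0.0228

def total_flavonoids_qe(absorbance: float) -> float:
    """Total flavonoid content (ug QE) from y = 0.0142x - 0.007 (R^2 = 0.9994), solved for x."""
    return (absorbance + 0.007) / 0.0142

def dpph_scavenging_percent(a_control: float, a_sample: float) -> float:
    """DPPH radical-scavenging activity (%) relative to the control reaction."""
    return (a_control - a_sample) / a_control * 100.0

def beta_carotene_antioxidant_percent(a0_sample, a120_sample, a0_control, a120_control):
    """Antioxidant activity (%) in the beta-carotene/linoleic acid assay over 120 min."""
    return (1 - (a0_sample - a120_sample) / (a0_control - a120_control)) * 100.0

if __name__ == "__main__":
    print(round(total_phenolics_gae(0.50), 1))            # ug GAE for A = 0.50 (example value)
    print(round(dpph_scavenging_percent(0.90, 0.35), 1))  # % scavenging (example values)
```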
Sample size calculation at a 0.05 significance level and 80% power revealed that 15 calves per group (G) would be required. Calves exhibiting an equivalent severity of lesions distributed on the head, neck, and body were allocated randomly into five groups using a random number generator. Animals in G1 were treated orally with 250 mg/day of terbinafine (Lamisil®; Novartis, Basel, Switzerland). The crust on the skin lesions was removed with a brush, and topical miconazole (Janssen Research Foundation, Beerse, Belgium) (G2), AGE solution (500 ppm) (G3), or oral terbinafine in combination with AGE (G4) was applied twice a day for 2 weeks. Animals in G5 were left untreated (as controls). Calves were observed daily for 6 weeks. Before, during, and after the treatment, the clinical efficacy was assessed by scoring dermatophytosis lesions on a 0-3 scale using previously published criteria [41][42][43] (Additional Table 2). The scoring was performed by the same investigator, who was blinded to the treatment groups. The scores for each evaluated area (e.g., head, neck, and body) were averaged as follows: area score = sum of the scores assigned to all lesions on the area / number of lesions on that area; average total score of each animal = sum of the scores assigned to all evaluated areas / number of affected areas [44]. The lesions were assessed at every examination. The mycological examination was performed every week until two consecutive fungal cultures gave negative results [30,44]. The control animals were treated after the observation period. Each animal group was housed in a separate, well-ventilated, open-sided pen with a sheltered area. All the pens received similar management conditions. The area per calf in each pen was not less than 4 m² to avoid overcrowding. The pens were bedded with straw that was changed every 1-2 weeks, and antifungal disinfection of the entire pen and all materials with which the animals came into contact was performed using 0.2% enilconazole (Clinafarm® EC; Merck Animal Health, USA). Data analysis The various risk factors recorded for the whole set of 760 calves clinically examined for ringworm lesions were included as independent variables in a multiple stepwise logistic regression model (PROC LOGISTIC, SAS Institute Inc. [45]). An approximate measure of relative risk was determined using the odds ratio (the antilogarithm of the regression coefficient) with 95% confidence intervals. To confirm the results and to identify the most important risk factor as a classifier that differentiated between infected and noninfected calves, a random forest non-parametric classification analysis was performed using the MetaboAnalystR web server [46]. Briefly, the occurrence of each variable was first used to build a random forest classification model (an ensemble of 500 trees; out-of-bag (OOB) error = 0.6) for the respective outcome. The importance of each risk factor was determined by measuring the increase in the OOB error when the respective factor was permuted. The sensitivity, specificity, negative and positive predictive values, positive and negative likelihood ratios, and diagnostic odds ratio, which express the strength of the association between the test results and disease, were estimated with 95% confidence intervals for the direct-sample PCR assays. All diagnostic indices were determined using (a) culture and (b) culture plus nested PCR as the gold standard for the detection/identification of the dermatophytes causing calf ringworm. 
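The diagnostic indices listed above follow directly from a 2x2 table of PCR result versus the chosen gold standard. The following minimal sketch applies the standard definitions; the counts in the example call are hypothetical and do not reproduce the study's data.

```python
# Diagnostic indices from a 2x2 contingency table (test vs. gold standard).
import math

def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
    se = tp / (tp + fn)                      # sensitivity
    sp = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                     # positive predictive value
    npv = tn / (tn + fn)                     # negative predictive value
    lr_pos = se / (1 - sp)                   # positive likelihood ratio
    lr_neg = (1 - se) / sp                   # negative likelihood ratio
    dor = lr_pos / lr_neg                    # diagnostic odds ratio
    # 95% CI of the DOR via the standard error of the log odds ratio
    se_log_dor = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    ci = (math.exp(math.log(dor) - 1.96 * se_log_dor),
          math.exp(math.log(dor) + 1.96 * se_log_dor))
    return {"Se": se, "Sp": sp, "PPV": ppv, "NPV": npv,
            "LR+": lr_pos, "LR-": lr_neg, "DOR": dor, "DOR 95% CI": ci}

print(diagnostic_indices(tp=110, fp=5, fn=10, tn=25))  # hypothetical counts
```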
The kappa value was used to test the agreement between test results. Student's t-test was used to compare the mean MIC values (± SD) of each antifungal drug between the tested species. The Kruskal-Wallis test was used to analyze the differences in clinical score changes among the treated and untreated groups over time, after the Shapiro-Wilk test gave a significant result (indicating non-normally distributed scores) [47]. The differences in clinical scores between pairs of groups were assessed by the Mann-Whitney U test after a significant result on the Kruskal-Wallis test. Significance was set at P < 0.05.
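A short sketch of this nonparametric workflow (omnibus Kruskal-Wallis test followed by pairwise Mann-Whitney U tests) is shown below using SciPy; the score arrays are placeholders, not the study's clinical data.

```python
# Kruskal-Wallis omnibus test followed by pairwise Mann-Whitney U tests (SciPy).
from scipy import stats

group_scores = {  # hypothetical clinical score changes per group
    "terbinafine": [1.0, 1.3, 0.7, 1.1, 0.9],
    "miconazole": [0.5, 0.6, 0.4, 0.8, 0.5],
    "AGE": [1.2, 1.4, 1.0, 1.3, 1.1],
    "AGE+terbinafine": [0.4, 0.6, 0.5, 0.7, 0.4],
    "untreated": [2.3, 2.6, 2.4, 2.8, 2.5],
}

h, p_kw = stats.kruskal(*group_scores.values())
print(f"Kruskal-Wallis: H = {h:.2f}, P = {p_kw:.4f}")

if p_kw < 0.05:  # pairwise follow-up only after a significant omnibus test
    for name, scores in group_scores.items():
        if name == "untreated":
            continue
        u, p = stats.mannwhitneyu(scores, group_scores["untreated"], alternative="two-sided")
        print(f"{name} vs untreated: U = {u:.1f}, P = {p:.4f}")
```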
8,160.6
2020-11-04T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
The Imbalance in Serum Concentration of Th-1- and Th-2-Derived Chemokines as One of the Factors Involved in Pathogenesis of Atopic Dermatitis Atopic dermatitis (AD) is an inflammatory skin disease in which pathogenesis chemokines are partially involved. The aim of the paper was to assess the serum level of CXCL-9, CXCL-10, CXCL-11, CXCL-12, CCL-17, CCL-20, CCL-21, CCL-22, CCL-27, and IL-18 chosen in AD patients by ELISA assay. Forty patients (mean age 11.4 years old) with AD and 50 healthy controls were enrolled into the study. The patients and controls were divided into two age categories: under 10 years old (Group 1 and Control 1) and over 10 years old (Group 2 and Control 2). Significantly lower serum concentration of CXCL-9, CXCL-10, CCL-17, and IL-18 and higher concentration of CXCL-12 and CCL-27 were found in Group 1 when compared to Control 1. In Group 2 serum concentration of CXCL-12, CCL-17, CCL-22 was higher than in Control 2. The obtained results indicate the imbalance in chemokine serum levels in AD what suggests their role in the disease pathogenesis. Introduction Atopic dermatitis (AD) is a chronic and recurrent disease which concerns 10-20% of population [1]. The onset of AD is usually before 2 years old, and in approximately 60% of the patients skin lesions of different intensity remain for the whole life [2]. AD is characterized by typical morphology and distribution of skin lesions, severe pruritus, and familial atopic history. Clinical phenotype of the disease depends on multiple interactions between genetic and immunological disturbances, epidermal barrier impairment, and environmental factors [3,4]. Histopathologically skin lesions present mainly with a dermal infiltrate of mainly CLA+ memory T cells and Langerhans cells [5]. Leukocyte trafficking into the skin in AD patients is probably mainly regulated by adhesion molecules and chemoattractive proteins, so-called chemokines [6,7]. Chemokines are small secreted proteins involved in migration and activation of lymphocytes T. Recently published papers indicate their role in pathogenesis of multiple inflammatory skin diseases including atopic dermatitis [8]. In active skin lesions in the course of AD infiltrates of Th-2 lymphocytes releasing IL-4 and/or IL-13 are found. These cells express CCR4, CCR8, and CRTH2 receptors on their surface. Lymphocytes Th-2 migration is selectively induced by such chemokines as CCL17 and CCL22, which are highly overexpressed on the keratinocytes in AD patients epidermis. These phenomena lead in a consequence to development of local inflammation [9,10]. In the studies analyzing serum concentration of these chemokines, increased levels of CCL-17 and CCL-22 were found in AD patients, and their concentrations strongly correlated with disease activity [11,12]. Shimada et al. [13] found increased levels of Th-2 (CCL-17 and CCL-22) and Th-1 (CXCL-9) chemokines in AD patients. They also found a positive correlation between Th-2 chemokines serum level and total IgE concentration, moreover Th-2 chemokines correlated with Th-1 ones. In another study performed in infantile AD patients (mean age 4.5 months) also increased levels of CCL-17, CCL-20, CCL-27 were observed which strongly correlated with disease activity, however the most prominent correlation was observed for CCL-27 [14]. Interleukin (IL)-18 is involved in pathogenesis of type-2 helper cells-mediated diseases including atopic dermatitis. 
According to literature, its serum concentration is significantly elevated in AD patients and correlates with clinical severity of the disease [15,16]. Most literature data point out an important role of chemokine network imbalance in development of atopic dermatitis, however there are scarce data on the complex analysis of serum levels of Th-1-and Th-2-derived chemokines in AD patients. Thus, the aim of the paper was to assess the serum level of CXCL-9, CXCL-10, CXCL-11, CXCL-12, CCL-17, CCL-20, CCL-21, CCL-22, CCL-27, IL-18 in two AD patient groups, below and over 10 years old. Additionally we analyzed serum levels of the chosen chemokines in two groups of healthy volunteers, aged-matched, to note any agedependent variations in healthy population. Patients. Forty patients (mean age 11.4 years old; 23 F, 15 M) with AD and 50 healthy controls, age and sex matched were enrolled into the study. AD was diagnosed according to criteria proposed by Hanifin and Rajka [17]. The enrolled patients were divided into two age categories: under 10 years old (Group 1; n = 23) and over 10 years old (Group 2; n = 17). According to this criterion, the control group was divided as well (Control 1 under 10 years old n = 30; and Control 2 over 10 years old n = 20). We used criterion of 10 years old as a cut-off point because this age is believed to initiate adolescence life period. Clinical characteristics of the patients are presented in Table 1. Methods. Each patient or his/her parents gave written informed consent before entering the study, and all the experiments were approved by the Local Ethics Committee. The investigations were carried out in accordance with Declaration of Helsinki. Before entering the study the subjects underwent thorough physical examination, and scoring atopic dermatitis (SCORAD) index was assessed [18]. The patients enrolled to the study had moderate AD (mean SCORAD index 23 range 16-39). Serum samples were analyzed for CXCL-9, CXCL-10, CXCL-11, CXCL-12, CCL-17, CCL-20, CCL-21, CCL-22, CCL-27, IL-18 concentration with ELISA assay (R&D system, Mineapolis, Minn, USA) according to manufacturer's instructions. The concentrations were calculated from the standard curve generated by curve-fitting programme. Statistical Analysis Data were analyzed using the Mann-Whitney U test, and correlations coefficients were determined by using the Spearman rank correlation test. A P-value of <.05 was considered as statistically significant. Taking the whole AD group (Group 1 and 2) into statistical analysis we found no correlation between serum levels of analyzed parameters and patients' age, while doing the same analysis for whole control group (Control 1 + Control 2) we found negative correlations between age of the subjects and the following proteins: CCL-17, CCL-22 and IL-18 (r = −0.38, P = .01; r = −0.67, P = .000002; r = −0.3, P = .045; resp.). Recent data indicate increased serum level of CCL-27 in AD patients, correlating with disease activity, what suggests its role in inflammatory process [22,23]. Hon et al. [23] assessed CCL-27 serum level in children aged 1-11 years with mean SCORAD [29.7] and found its higher serum concentration, what is in line with our results, however contrary to the authors we found no correlation between CCL-27 and CCL-17 and CCL-22 concentrations. Also level of CCL-17 in Group 1, although statistically different than that in Control 1, was lower in AD patients than in healthy ones. 
Such discrepancy between our results and published ones in other papers, indicates that it is still unclear either increased or decreased CCL-17 level is a characteristic for AD patients, however its distinct level when compared to the control groups testifies its role in AD pathogenesis. In our study similar observations concern CXCL-9, CXCL-10, and IL-18. Interestingly, in Group 2 (patients over 10 years old) we found elevated serum levels of CCL-17, CCL-22, and CXCL-12 what also proves their role in AD pathogenesis. Other authors also showed significantly higher concentrations of CCL-17, CCL-22, and eotaxin in AD patients than in healthy control. They found positive correlations between serum level of CCL-17 and CCL-22 and total IgE concentration and as well these chemokines correlated positively with SCORAD index [24]. In our study CCL-17, CCL-27 and IL-18 serum levels correlated positively with total IgE in Group 2, and eosinophilia correlated positively with CCL-27. Hon et al. [25] revealed correlation between IgE and CCL-17 and eosinophiles count but not with CCL-27 what is only partially consistent with our results. In skin biopsies taken from AD patients CCL-20 expression is observed in basal layer of epidermis. This protein is a strong chemoattractant for immature dendritic cells and memory T cells via interactions with CCR6. CCL20 may be induced on keratinocytes under proinflammatory cytokines such: IL-1α or TNF-α. In healthy epidermis CCL20 is constitutively expressed in epidermal basal layer, however its expression is significantly lower than in inflammatory skin [26,27]. These data prove CCL-20 role in AD pathogenesis. Although we found no differences between CCL-20 serum concentration in AD patients and controls, its higher concentration was observed in Group 1 than in Group 2. Moreover, in younger population (Group 1) IgE and eosinophilia correlated with CCL-20. To our knowledge, there are no data in literature on the subjects, however observed association and data mentioned above provide its role in AD pathogenesis in younger population and its level normalization in line with the age. To our knowledge, there are no data on CCL-21 serum levels in AD patients. We examined this chemokine as it is strongly chemoattractive to lymphocytes T, enhances expression of LFA-1 on these cells, and mediates cell-tocell adhesion. Besides, CCL-21 and CCR7 receptor influence naive T cell migration to lymph nodes where antigen is presented. In healthy skin immunostaining for CCL21 is negative, however it is expressed on dermal endothelial cells in atopic dermatitis [28]. Weninger et al. [29] showed that CCL21 expression on blood vessels positively correlated with the presence of CD45RA+ T cells in the inflammatory infiltrate. Although CCL21 expression was found in inflammatory T cell-mediated diseases, its exact role in their pathogenesis is not elucidated. Our study in which we found no differences in CCL-21 serum concentration between AD patients and controls does not prove the role of the chemokine in AD pathogenesis. To assess chemokines serum levels depending on age in healthy population, we attempted to check differences between Controls 1 and 2. Our analysis revealed a distinct pattern in healthy population than that in AD groups. In younger healthy population we found increased levels of CXCL-9, CXCL-11, CCL-17, CCL-20, CCL-22, and IL-18 while comparing Groups 1 and 2 the only significance concerned CCL-20. In the study published by Furusyo et al. 
[30] no differences in serum level of CCL-17 between AD patients (children 0-5 years old) and age-matched healthy control were revealed. Moreover in healthy children they observed that serum CCL-17 concentration decreased with age while serum CCL-17 in AD patients did not differ in relation to age. The authors revealed strong dependence between CCL-17 serum concentration and AD course during childhood. These data are partially consistent with ours, as we also observed a decrease in CCL-17 with age in healthy population and no such age-dependence in AD group. To our knowledge there are no more data analyzing chemokines in this aspect. Lack of these naturally occurring changes in chemokine serum levels in AD patients provides their role in the disease pathogenesis. This hypothesis may be partially proven by our observation on the lack of correlation between age and chosen chemokines serum levels in AD patients and the presence of multiple negative correlations between CCL-17, CCL-22, IL-18, and age in the whole control group. Concluding, we may assume that in younger children with AD a decreased serum level of Th-1-derived chemokines is one of the factors involved in the disease development. The imbalance between Th-1 and Th-2 is probably involved in AD pathogenesis as well, what in our paper is especially emphasized by differences in chemokine concentration between two AD groups and two age-matched controls. Our study, similar to others, revealed significant changes between chemokine levels in AD patients and controls, however not always consistent with other authors what may result from two main reasons. The first one is a new aspect of AD pathogenesis, mostly focused now on the impairment of epidermal barrier and innate immune defense as the primary causative factors involved in AD; only these disturbances lead secondary to induction of adaptive immune response, inflammation development involving chemokines disturbances. The second reason may be the lack of objective and standardized method for AD clinical evaluation, thus the patients enrolled to the studies in different centers, although with the same SCORAD index, may have a little different clinical picture. Based on literature and our results we conclude that chemokine imbalance is involved in AD pathogenesis, however discrepancies obtained in many studies and relatively small number of the patients included in our study do not allow to draw equivocal conclusions. In our opinion further studies correlating chemokine serum levels, their expression in the skin, and AD clinical picture are required and probably will give new light on the disease pathogenesis.
2,825.2
2009-07-22T00:00:00.000
[ "Medicine", "Biology" ]
Predicting Network Behavior Model of E-Learning Partner Program in PLS-SEM: The Ministry of Education of Taiwan conducted an e-learning partner program to offer life accompaniment and subject teaching to elementary and secondary students through a network platform with cooperation from university undergraduates. The aim of the e-learning partner program was to improve the motivation and interest of the children after learning at school. However, the outcome of this program showed that the retention rate of the undergraduates was low over three semesters in the case universities. Therefore, the training cost for the program was wasted each semester, and it was necessary to solve the problem and improve the situation. The evaluation of self-efficacy directly affects a person's motivation for the job. This research examined the inner self-efficacy (teaching and counseling) and outer support (administration and equipment) that would contribute to and predict the success and persistence of the e-learning partner program. There were 94 valid self-evaluation records in the 2019 academic year. ANOVA, post hoc, and partial least squares (PLS) analyses were conducted. The results showed that year level, experience, and teacher education program background were significantly different in this study. The network behavior model was set up effectively to predict retention from four scopes. Participants with higher teaching self-efficacy had better passion and innovation scores than the others. Using the suggestions for improvement, decreasing the gap between undergraduates' expectations and promoting sustainability in the e-learning partner program can be achieved. Introduction From 2016 to now, the Ministry of Education (MOE) of Taiwan has conducted a digital application promotion plan in rural schools to improve digital skills, enrich digital competency (e-commerce and e-marketing), and promote enjoyment of mobile services as well as applications. The MOE e-learning partner program was one part of the plan [1]. Based on companionship, learning, and improving the motivation and interest of rural students, undergraduates used the internet to overcome barriers between urban and rural areas. The purpose was to promote equal learning opportunities for elementary and secondary students everywhere. In addition, the core values of this program included life accompanying life and living teaching living. There are two main targets: one is the learning of the children and the other is the teaching of the undergraduates. There was a total of ten weeks in one semester, with classes twice a week and two lessons each time (Chinese, English, or Mathematics). Each lesson lasted for 45 min with one-on-one online interaction. Regarding the partners, five primary and secondary schools have cooperated with the case university since 2017. 2.1. Self-Efficacy Self-efficacy, as developed by Bandura, indicates that people have confidence in a relevant area [41]. The effectiveness and self-efficacy of teaching have been discussed in many studies [42][43][44]. Self-efficacy was effective in different aspects such as classroom management, teaching methods and techniques, and the use of computers and instructional tools [45]. In addition, it also affected students' learning performance. Sun found that environmental transformation, teaching innovation, class management, parent-teacher communication, teaching execution, and teaching evaluation influenced teaching quality [44]. 
Xu found that teacher self-efficacy was positively related to the teacher's perceived pressure, class management style, teaching-thinking style, commitment to teaching, willingness to carry out teaching innovation, and students' learning performance [46]. In this study, considering teaching self-efficacy and retention (passion and innovation), teaching indicators that included teaching preparation, teaching methods, and teaching attitude were adapted from Danielson, Keller, Pan et al. and the Ministry of Education [47][48][49]. Secondly, counseling self-efficacy is defined as a belief in the ability of individuals to perceive counseling cases effectively [50]. According to Larson and Daniels, studies on counseling self-efficacy are mostly conducted to understand the self-efficacy of counselors, including trainees or graduate students, school counselors, psychologists, or mental health-related personnel. Regarding the factors that affect the effectiveness of counseling, Larson and Daniels found that counselor characteristics, sexual orientation, age, training background, work experience, and other variables were important factors in predicting the self-efficacy of counselors in multiple studies. Case studies based on self-assessment by trained counselors have shown that those with higher self-efficacy in counseling have higher self-evaluations [51,52]. There are three sections related to counseling training: knowledge, skill, and belief [53]. In this study, counseling self-efficacy is divided into two parts. One is the attitude when tutoring the child (this includes understanding the child's position, accepting the child, trusting the child's ability, and respecting the child). The other is the positive perception of counseling knowledge and attitude. Finally, the outer environment support and institutional resources, as well as administrative support and IT equipment were two important factors in the questionnaire that were designed by the MOE of Taiwan. The program hosts and assistants shared their experiences with the administrative support. With regard to the IT section, the platform of MOE, the e-learning interactive equipment (such as the software and the writing board) and the computers' network flow speed were employed to investigate the self-evaluation of the undergraduates during the semester at the case university. Retention According to [54], individual characteristics such as background and attitude affect students' dropout rates. Vianden and Barlow studied student loyalty with the assumption that students who develop positive attitudes toward their institutions are more likely to continue [55]. Furthermore, passion for long-term goals predicted retention among novice teachers [56,57]. Some researchers showed a strong link between talent retention and innovation [58,59]. Therefore, innovation and passion were two key factors related to retention in this paper. The control variables for the background used in this research were college, gender, subject, e-learning partner experiences, and teacher education program background. PLS PLS-SEM is a method used to estimate path models with latent variables and extend the principal component and canonical correlation analysis in statistics [60,61]. PLS can cope with smaller sample sizes than structural equation modeling (SEM) for the same effect size and model complexity, and it can more easily specify formative constructs [62][63][64]. 
In the formative model, it was necessary to assess the indicator weights and loads, and perform redundancy analyses. Chin provided redundancy analysis, in which each formatively specified construct correlated with its alternative measure [65]. The SmartPLS 3 software with a graphical user-interface was used to estimate the PLS-SEM models [66]. The first PLS software was published nearly ten years after LISREL III [67]. A relational predictive model is a research model that aims at determining the presence and the extent of the retention among the four self-evaluation variables. Methods In this context, the effect of the undergraduate retention toward using teaching, counseling, administration, and equipment evaluation on each other and their mutual relation are considered in this study. Samples and Procedure This research was conducted in two stages. The first was the factor analysis and the reliability and validity analyses of all items of the questionnaire to reduce the number of questions. ANOVA and post hoc analyses were conducted to determine the significant factors. The demographic features of the participants were gender, year level, college, teaching subjects, experience, and teacher education program background. Through ANOVA, we detected which of the background variables were significantly different in this study. Through post hoc analysis, multiple comparison analysis (over two levels) was explored. Furthermore, the PLS modelling in the final semester questionnaire was created to detect the correlation between retention (passion and innovation) and the four factors (teaching, counseling, administration, and IT equipment). Thereafter, it was extended to predict the future trend in this e-learning partner program to enhance the perseverance in the tutoring and teaching side. The students completed the first questionnaire in the fifth week of the semester and the second one in the tenth week. In the 2019 academic year (from 21 October 2019 to 20 December 2019), 94 valid self-evaluation records were obtained twice. There were 10 weeks in total, with classes twice a week, covering two lessons each time (Chinese, English, or Mathematics). Each lesson lasted for 45 min with one-on-one online companion and learning. The ethical rule was stated as a declaration in the beginning of the questionnaire, and it was answered online. All the participants voluntarily attended the scale implementation process. The demographic features of the participants were as follows. In terms of gender, 80.9% of the participants were female, and 19.1% were male participants. In terms of year level, 17% of the participants were in the first year, 38% in the second year, 24% in the third year, 13% in the fourth year, and 7% were masters students. In terms of college, 30% of the participants studied in the College of Humanities and Social Sciences, 6% in the College of Science and Engineering, 10% in the College of Design, 24% in the College of Management, and 30% in the College of Informatics. Considering the teaching subjects, 16% of the participants taught Chinese, 41% taught English, and 43% taught Mathematics. Considering experience, 76% were novices and 24% were experienced. Concerning teacher education program background, 34% studied in the program, and 66% did not. Measures and Variables The main variables of the teacher were college, gender, subject, e-learning partner experience, and teacher education program background. 
There were teaching, counseling, administrative, and equipment questions in the self-evaluation survey. With regard to retention, passion and innovation were two main variables. Students answered the questionnaire, using a five-point scale. The options were "strongly agree", "agree", "neutral", "disagree", and "strongly disagree". The scoring order was 5-1 points, respectively. The items on teaching, counseling, administration, and IT equipment are shown in Table 1. Factor Description Teaching T1 I can analyze students' prior knowledge and learning needs before teaching. T2 I can understand the physical and mental characteristics, learning experience, learning interests, and learning environment of students before teaching. T3 I can correctly grasp the teaching material of the subjects taught. T4 I can choose textbooks and copies that meet the needs of school children. T5 I can specify the learning goals and show the impact of teaching materials on the life or future of students. T6 I can provide new and old knowledge (such as information familiar to specific children, daily life experience, etc.) related to school children. T7 I will create a teaching environment that respects harmony and fun and makes my interaction with students effective. T8 I will effectively use a variety of teaching methods (such as questioning, multimedia, games, etc.) to maintain the attention of students, creating challenges and novelties. T9 I can use the teaching process from shallow to deep to provide students with a chance to succeed. T10 I can provide the students challenging, moderate, and meaningful learning lessons at the right time. T11 I can clarify to students the standards and implementation methods of learning evaluation. T12 I can provide learners the opportunity (such as online real-time quizzes) to demonstrate real results at the right time. T13 I will provide prompt and specific feedback (such as praise, drumming) and suggestions at the appropriate time T14 I will use adaptive multiple evaluations such as oral expression and student self-evaluation. T15 I have never used high-quality sound and light effects media (such as the use of video games) to stimulate students to learn. T16 I am passionate about helping school children move towards positive self-development. T17 I am willing to participate in knowledge training to improve the profession. Counseling C1 I believe that company is a meaningful thing. C2 I think the opinions of every student should be respected. C3 I am willing to accept the individual differences of school children in good faith. C4 I believe that school children have the potential to overcome difficulties in life. C5 I agree that students have an autonomy without harming themselves and others with their power of decision. Administration A1 The teacher with the class really helps to confirm the status of the elementary school. A2 The class teacher helps with network obstacles. A3 The teacher with the class really assists in the troubleshooting of hardware and software equipment. A4 The class teacher replied to the teaching diary. A5 The class teacher counsels me on the teaching problems and tracks them. A6 The education and training planned by our school team helped me. Equipment E1 The diary filling function meets the usage requirements (click on the screen after the university is accompanied by the classroom). E2 Textbook upload function meets the needs of use. E3 JoinNet operation platform is stable. E4 Computer classroom network connection is stable. 
E5 Computer classroom hardware equipment is stable. Output Innovation My teaching activities can motivate students to learn. Passion I often face my teaching work with enthusiasm and hope. The PLS Algorithm Procedure Partial least squares structural equation modelling can handle complex models with a comparatively small participant group. The bias-corrected and accelerated (BCa) bootstrapping procedure was used to assess the control variables' significance and interaction effects. The assessment of the significant control variables should use the f² effect size to consider their relevance. In the beginning, the reliability and validity (first stage) were examined. Then, the degree of collinearity of the indicators and the significance and relevance of the indicator weights were analyzed. Finally, a redundancy analysis was conducted. Results There were 94 valid self-evaluation records in the 2019 academic year (from 21 October 2019 to 20 December 2019). The post scores were significantly higher (p-value < 0.05) in T11, E5, and Innovation than those in the first survey. Teaching self-efficacy, administration, and equipment evaluations increased with time. Counseling self-efficacy decreased in the second stage; however, the decrease was not significant. Factor Analysis According to the factor analysis used to reduce the items, there are four main functions in this study. Items that did not score higher than 0.5 in loading weight were deleted from the study. Initially, the 17 teaching items were classified into four main factors (TP = teaching preparation, TC = teaching conference, TA = teaching attitude, and TW = teaching method): TP = {T1-T6, T11-T12, T15}, TW = {T7-T10, T13-T14}, TA = T16, and TC = T17. Moreover, E1 was deleted because its weight was lower than 0.5. Through the principal component analysis, a total of four functions were created in this study as shown in Table 2, and the significance was smaller than 0.05 in the KMO and Bartlett tests. For the rotation sums of squared loadings, the cumulative percentage was 76.52% across the four main factors. After the factor analysis, we classified TP, TC, TA, and TW as Teaching; C1-C5 as Counseling; A1-A6 as Administration; and E2-E5 as Equipment. In the basic statistics in Figure 1, the score of Administration is the highest (mean = 4.761) and the score of Teaching is the lowest (mean = 4.276). This showed a high level of inner self-efficacy and outer support. Besides, the reliability and validity of the questionnaire were explored in Table 3. The Cronbach's alpha value of the scale was 0.886 for Teaching, 0.942 for Counseling, 0.912 for Administration, and 0.881 for Equipment. The Cronbach's alphas of all four functions are greater than 0.8, indicating good reliability. For convergent validity, the average variance extracted (AVE) was greater than 0.5. In addition, all the correlation coefficients were better than 0.3. For discriminant validity, the coefficient was better than 0.7. Furthermore, the four weights were higher than the others. This indicated good validity. 
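The reliability figures quoted above can be reproduced from the raw item responses with a short calculation. A minimal sketch follows in Python/NumPy rather than the SPSS/SmartPLS tooling used in the study; the toy rating matrix is hypothetical, only the formula for Cronbach's alpha is standard.

```python
# Cronbach's alpha for one scale from an items-by-respondents matrix.
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items) with the item ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return float(k / (k - 1) * (1 - item_variances.sum() / total_variance))

teaching_items = np.array([  # hypothetical five-point ratings for four items
    [5, 4, 5, 4], [4, 4, 4, 5], [5, 5, 4, 4],
    [3, 4, 4, 3], [5, 5, 5, 4], [4, 3, 4, 4],
])
print(f"alpha = {cronbach_alpha(teaching_items):.3f}")
```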
ANOVA and Post Hoc Analysis The demographic features of the participants were gender, year level, college, teaching subjects, experience, and teacher education program background. The freshmen and sophomores were coded as 1, the juniors and seniors were coded as 2, and the graduate students were coded as 3. Through ANOVA and post hoc analysis (Scheffé method), year level, experience, and teacher education program background were significantly different in this study, as demonstrated in Table 4. The scores of the third- and fourth-year participants were significantly higher than those of the first- and second-year undergraduates. Regarding e-learning partner program experience, the teaching self-efficacy score of the experienced participants was higher (4.47) than that of the novices (4.21). The participants who were concurrently enrolled in the teacher education program scored significantly higher in teaching (4.48 vs. 4.17) and counseling (4.81 vs. 4.60) than those who were not enrolled in the teacher education program at the case university. PLS Analysis In this study, SmartPLS was used to process the original data from SPSS into a CSV file. SmartPLS is advantageous for samples smaller than 100; there were a total of 94 participants in the survey, which is an appropriate sample size. Reasons for analyzing the data with this tool include the small sample size, non-normal data, formative measures, focus on prediction, model complexity, exploratory research, and theory development. There are reflective, formative, and redundancy models in PLS. Based on previous studies, a model that shows the effect level of the latent variables of teaching, counseling, administration, equipment, and retention on each other and their ratios to each other was put forward. In Figure 2, all the outer loadings are higher than 0.7. In the collinearity statistics, the inner variance inflation factor was smaller than 10; the four factors were not collinear in this research. First, from Table 6, all the composite reliability values are around 0.8-0.9, supporting the internal consistency and reliability of the measures. Second, the values of convergent validity (AVE) were all higher than 0.5, and the values of composite reliability (CR) and Cronbach's alpha (CA) were all higher than 0.7. In discriminant validity, the coefficient was smaller than 0.7; although the teaching coefficient was 0.873, it is smaller than 0.918 and therefore still in the reasonable range. Considering redundancy, a higher value indicates a better model fit (goodness of fit = sqrt(redundancy)); similarly, a blindfolding redundancy > 0 indicates that the variable has predictive relevance in the model. The construct cross-validated redundancy Q² (= 1 − SSE/SSO) for retention was 0.63, indicating a good fit in this model. Third, the R² (0.767) was higher than 0.67, indicating that the model is valuable in real application. Fourth, the effect size f² (0.79) was higher than 0.35. This indicated that the outer variable (Teaching) significantly influenced the inner variable (Retention) and increased the R².
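The quality criteria cited above (AVE, composite reliability, and the f² effect size) follow simple closed-form expressions once the outer loadings and R² values are known. The sketch below is illustrative only: the loadings and the R² of the model with the construct removed are invented values, not the study's SmartPLS estimates.

```python
# Closed-form PLS quality criteria from (illustrative) outer loadings and R^2 values.
import numpy as np

def ave(loadings):
    """Average variance extracted from standardized outer loadings."""
    l = np.asarray(loadings, dtype=float)
    return float(np.mean(l ** 2))

def composite_reliability(loadings):
    """Composite reliability (rho_c), assuming uncorrelated measurement errors."""
    l = np.asarray(loadings, dtype=float)
    error_var = (1.0 - l ** 2).sum()
    return float(l.sum() ** 2 / (l.sum() ** 2 + error_var))

def f_squared(r2_included, r2_excluded):
    """Cohen's f^2: effect of removing one exogenous construct on the R^2."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

teaching_loadings = [0.82, 0.79, 0.88, 0.75]                     # illustrative loadings
print(f"AVE = {ave(teaching_loadings):.2f}")                      # > 0.5 -> convergent validity
print(f"CR  = {composite_reliability(teaching_loadings):.2f}")    # around 0.8-0.9
print(f"f^2 = {f_squared(0.767, 0.58):.2f}")                      # > 0.35 -> large effect
```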
The path coefficients of the prediction model were positive from teaching, counseling, and administration to the latent variable of retention; however, the path coefficient from equipment was negative. The model also presented innovation (0.915) and passion (0.921), which had positive path coefficients to retention. In the path coefficient analysis, the coefficient of the teaching factor was higher than 0.7, the total effect p-value was smaller than 0.05, and t > 3.29 (Table 5). This indicates that the formative construct of teaching can explain at least 50% of the variance of retention. Self-Efficacy and Retention In the PLS model, teaching self-efficacy significantly influenced the attitude of retention. Several studies have discussed self-efficacy and retention. Gore found that self-efficacious beliefs are an important predictor of the academic performance and achievement of university students [68]. The role of motivation, perceived effectiveness, and self-efficacy truly enhanced learner retention [69]. People with low self-efficacy are also timid about technological innovations and may resist using computers as well [70,71]. In addition, the retention of teachers had a positive effect on the achievement of students [72,73]. Thus, a good model for predicting the willingness of instructors to persist was essential and necessary. In the e-learning partner program, the year level, experience, and teacher education program background had a positive correlation with teaching self-efficacy. Although the aim of this program was geared toward children, Chinese, English, and Mathematics were required in the interactive process. If institutions want to obtain maximum benefit from training, they must offer a supportive environment that can enhance the self-efficacy and retention capabilities of their employees [74]. Without sufficient training in teaching, the undergraduates were not confident facing the students online. That was the reason the willingness to persist in the e-learning program was low during each semester. Consequently, the learners in the e-learning partner program had to face new instructors and felt unstable in the learning process. This result was similar to a European survey, in which the younger generations were less willing to persist in the same organization and had lower organizational commitment [75]. 
Therefore, institutional managers needed to pay more attention to solve the problem of losing talents. Training Design The participants who were enrolled in the teacher education program had higher scores in teaching and counseling self-efficacy than those who were not enrolled. These results are similar to results of some studies that posited that participants in accelerated programs were older with more life and work experience and education, and the senior and experts often viewed them as being better critical thinkers, more inquisitive, and more confident [76][77][78]. The graduates from the accelerated e-learning partner programs had higher levels of confidence because they were older, and more mature. They have often already successfully completed a degree and had more life experience. For the hosts of the e-learning partner program reference, effective instruction in educational psychology, principles and methods of instruction, teaching methods, testing and evaluation, scientific research methods, management experience, and teaching practice may positively affect teaching self-efficacy of the novice undergraduates. Considering the teacher education program in the case university, the core competencies of the childhood teacher education program included educational knowledge and important educational issues [79]. The fundamental training design can include domain knowledge related to teaching knowledge, curriculum and instructional design skills, effective use of teaching strategies for effective teaching, appropriate methods for learning assessment, giving play to class management efficiency and creating a supportive learning environment, offering relevant counseling, educational professional responsibility, commitment to teacher professional growth and demonstration of collaboration and leadership. All curricula were related to teaching and counseling training, and this could improve the interactive confidence. Implications and Recommendations It was necessary for the online e-learning partner program to improve the high talent turnover and decrease the cost of human resource training. The purpose of this study was to analyze factors that could contribute to this. Although the participants in this research were limited to one case university, the model of predicting teachers' retention can be implemented at any university, especially novices in instruction in the world. If the institutional manager offered sufficient resources for teachers and learners, their interaction and communication could be stable and free of worry for a long-period in the e-learning process. The knowledge and subject area during each learning stage were consistent and systematic; therefore, the improvement of inner self-efficacy (teaching and counseling) and outer support could contribute to real or virtual classrooms anywhere Through workshops, professional seminars, instructors' communities, retention rewards, or apprenticeships held by the MOE in Taiwan, the e-learning partner program could be more successful and beneficial to the accompaniment of children in happiness and harmony. Conclusions The e-learning partners program was promoted for four semesters in the case university. However, the report of this program stated that the retention rate was a big problem. This study examined inner self-efficacy and outer support evaluation that would contribute to and predict the success and persistence of the e-learning partners' programs. 
The independent variables, including year level, experience, and teacher education program background, had a significant impact on the self-evaluation survey in this study. Through PLS modeling, this study could deal with smaller sample sizes than SEM for the same effect size and model complexity and could more easily specify formative constructs. After the path analysis, the teaching factor best predicted retention in the network behavior model of this e-learning partner program. Therefore, this research could be of immense help to e-learning partner programs in improving their training strategies, and it could be beneficial to learning with technology. The model also has the potential to contribute to talent retention and to provide an optimal environment to support student learning and growth worldwide.
6,031.2
2020-07-06T00:00:00.000
[ "Education", "Computer Science" ]
KiSSim: Predicting off-targets from structural similarities in the kinome Protein kinases are among the most important drug targets because their dysregulation can cause cancer as well as inflammatory and degenerative diseases. Developing selective inhibitors is challenging due to the highly conserved binding sites across the roughly 500 human kinases. Thus, detecting subtle similarities on a structural level can help to explain and predict off-targets among the kinase family. Here, we present the kinase-focused and subpocket-enhanced KiSSim fingerprint (Kinase Structural Similarity). The fingerprint builds on the KLIFS pocket definition, composed of 85 residues aligned across all available protein kinase structures, which enables residue-by-residue comparison without a computationally expensive alignment. The residues' physicochemical and spatial properties are encoded within their structural context including key subpockets at the hinge region, the DFG motif, and the front pocket. Introduction Protein kinases are involved in most aspects of cell life due to their role in signal transduction. Their dysregulation can cause severe diseases such as cancer, inflammation, and neurodegeneration, 1 which makes them a frequent target of drug discovery campaigns. In 2015, 30% of FDA-approved small molecules targeted kinases. 2 The roughly 500 kinases in the human genome share a highly conserved binding site, which challenges selective drug design for a single kinase or a well-defined set of kinases (polypharmacology) while avoiding binding to undesired off-targets. 3,4 Protein kinases bind adenosine triphosphate (ATP) to catalyze the transfer of its phosphate group to serine, threonine, or tyrosine residues of themselves or other proteins. ATP and most other ligands bind to the front cleft of the kinase pocket that lies between the two lobes of the kinase domain, the C- and N-terminal lobes. These lobes are connected via the hinge region, which forms important hydrogen bonds to ATP as well as to most studied ligands. The gate area contains the conserved DFG (aspartate-phenylalanine-glycine) motif, whose phenylalanine flips in and out of the front pocket, opening and closing a hydrophobic region in the back cleft, i.e., the DFG-in and DFG-out conformations, respectively. The back cleft also comprises the αC-helix with a conserved glutamate residue, which forms a salt bridge with a conserved lysine residue in the gate area; when this salt bridge is formed, the conformation is called αC-in, as opposed to αC-out. 5 Researchers have studied kinase similarity across the full kinome -or parts of it -from many different angles. Manning et al. 6 classified the human kinome into major groups and families based on the sequence similarity of the kinase domains. While sequence comparison -and thus evolutionary similarity -can explain many observations from kinase profiling experiments, other, more distantly related off-targets remain undetected. For example, profiling Erlotinib against 48 kinases revealed high affinity against the on-target EGFR (TK group) but also the non-TK off-targets SLK, LOK, and GAK; 8 or the chemical probe SGC-STK17B-1 binds both DRAK2 and CaMMK, 9 although they are dissimilar when judged solely by their sequence. 6 Focusing on the kinase pocket instead of the whole sequence already helps: the 50 most similar kinases to EGFR are exclusively TK kinases when ranked by full-length sequence, while non-TK kinases appear in the list when considering the pocket sequence only. 10 The KinCore phylogenetic tree produced by a kinome-wide structure-guided MSA 7,11 overall confirms the assignment from Manning et al. 6 but provides higher precision, e.g. 
regarding previously unassigned kinases. Schmidt et al. 12 have recently investigated the similarities between a panel of nine kinases -EGFR, ErbB2, PIK3CA, KDR, BRAF, CDK2, LCK, MET, and p38a -based on different pocket encodings, including the pocket sequence identity, pocket structure similarity, interaction fingerprint similarity, and ligand promiscuity. Individual kinase relationships differed according to these different perspectives, while some trends could be observed such as the atypical kinase PIK3CA being an outlier amongst the otherwise typical kinases in this panel. In an attempt to facilitate computer-aided kinase similarity studies, we here aim to add another perspective. Binding site comparison methods employed so far can be applied to any binding site regardless of the protein class. Kuhn et al. 13 have applied such a method, Cavbase, to the structurally resolved kinome and could detect expected and unexpected kinase relationships. Since kinases are highly conserved and have been aligned and annotated across the full structurally covered kinome, a binding site comparison method tailored to kinases may provide an extended perspective on kinase similarities. We make use of data in the KLIFS 14 database, a rich resource for kinase research that extracts protein kinasefocused information on structures from the PDB, 15 on inhibitors in clinical trials from the PKIDB, 16 on bioactivities from ChEMBL, 17 and much more. All kinase structures from the PDB are split into single chains and models and aligned with respect to sequence and structure across the full structurally covered kinome. The KLIFS authors defined the kinase pocket as a set of 85 residues that interact with co-crystallized ligands in the initial KLIFS dataset of more than 1200 structures. 5 Thanks to this structural alignment, it is possible to look up all 85 residues in any kinase structure, given the residue is structurally resolved and not in a gap position. This pocket alignment is the basis for the here introduced KiSSim fingerprint. The kinase-focused and subpocket-enhanced KiSSim (Kinase Structural Similarity) fingerprint builds on the KLIFS 14 pocket, whose alignment allows a computationally inexpensive residue-by-residue comparison. The residues' physicochemical and spatial properties are encoded within their structural context including important kinase subpockets -the hinge region, DFG region, and front pocket -building on features from previously published methods such as SiteAlign, 18 KinFragLib, 19 and Ultrafast Shape Recognition (USR). 20 We used the fingerprint to calculate all-against-all similarities within the structurally covered kinome and to generate a KiSSim-based kinome tree. Detected similarities can be used to predict off-targets or guide polypharmacology studies and to rationalize profiling observations on a structural level. We distribute the method as an open source Python package at https://github.com/volkamerlab/kissim and as conda package, alongside the data and analyses notebooks at https://github.com/volkamerlab/kissim_app to support FAIR 21 science. Methods & Data In the following, we outline the KiSSim methodology and implementation, the datasets used, and the method's evaluation. All data, fingerprints, and analyses are available at https://github.com/volkamerlab/kissim_app. 
KiSSim methodology The KiSSim methodology consists of three steps: the encoding of a set of kinase binding sites as KiSSim fingerprints (Figure 1), the all-against-all comparison of these structures using their fingerprints, and -since one kinase can be represented by multiple structures -the mapping of multiple structure/fingerprint pairs to one kinase pair. Encoding: From structure to fingerprint The KiSSim fingerprint encodes the 85 KLIFS pocket residues in the form of physicochemical and spatial properties as illustrated in Figure 1. We summarize the encoding procedure in the following; for a detailed description please refer to the Supplementary methods section. Figure 1: KiSSim fingerprint encodes physicochemical and spatial properties of kinase pockets. The fingerprint builds on the KLIFS 14 pocket definition, i.e. 85 residues aligned across all available protein kinase structures, which enables residue-by-residue comparison without a computationally expensive alignment. Each residue is encoded physicochemically and spatially. Physicochemical properties include the following features per residue (example: phenylalanine/PHE): (a) Pharmacophoric features and size categories are taken from the SiteAlign 18 binding site comparison methodology. (b) Side chain orientation is adapted from SiteAlign and defined as inward-facing, intermediate, or outwards-facing depending on the vertex angle between the pocket centroid, the residue's side chain representative (Table S3), and CA atom. (c) Solvent exposure is defined as high, intermediate, or low, depending on the ratio of CA atoms in the upper half of a sphere cut in half by a normal plane spanned by the residue's CA-CB vector. The implementation is based on BioPython's HSExposure. 22,23 Spatial properties are defined as follows: (d) Each residue's distance to the pocket center and important kinase subpockets, i.e., the hinge region, DFG region, and the front pocket. On the right, example locations are shown in the 3D representation of kinase EGFR (PDB ID: 2ITO, KLIFS structure ID: 783). (e) The distance distributions per pocket center and subpocket are furthermore described by their first three moments, i.e. the mean, standard deviation, and skewness. Pharmacophoric and size features are taken from the SiteAlign categories for standard amino acids. 18 They encode the size based on the number of heavy atoms, the number of hydrogen bond donors (HBD) and hydrogen bond acceptors (HBA), the charge (negative, neutral, or positive), and aromatic and aliphatic properties (present or not present) of a residue (Table S1). The side chain orientation (inward-facing, intermediate, or outward-facing) is based on the vertex angle from the residue's CA atom (vertex) to the pocket center and to the residue's outermost side chain atom, the side chain representative (Table S3). The solvent exposure of a residue (high, intermediate, or low) is based on the ratio of CA atoms in the upper half of a sphere that is placed around the residue's CA atom (radius 12Å) and cut in half by a normal plane spanned by the residue's CA-CB vector, as implemented in BioPython's HSExposure module. 22,23 Spatial properties are described by discrete values, i.e., distances and moments. Spatial distances are calculated from each residue's CA atom to the pocket's geometric center and to prominent subpocket centers. The pocket center is the centroid of all pocket CA atoms. 
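To make the spatial encoding concrete, the following is a minimal NumPy sketch of the geometry described above: a pocket center as the centroid of all pocket CA atoms, a subpocket center as the centroid of three anchor residues' CA atoms, per-residue distances to such a center, and USR-style moments of one distance distribution. The array names, the example anchor indices, and the random coordinates are illustrative assumptions; this is not the actual KiSSim implementation or the KLIFS anchor definitions.

```python
import numpy as np

# Illustrative input: CA coordinates of the 85 KLIFS pocket residues (shape: 85 x 3).
# In practice these would come from a KLIFS-annotated structure; here they are random.
rng = np.random.default_rng(seed=7)
ca_coords = rng.normal(scale=10.0, size=(85, 3))

# Pocket center: centroid of all pocket CA atoms.
pocket_center = ca_coords.mean(axis=0)

# Subpocket center: centroid of three anchor residues' CA atoms
# (the anchor indices below are placeholders, not the KLIFS anchors of Table S4).
hinge_anchor_indices = [45, 46, 47]
hinge_center = ca_coords[hinge_anchor_indices].mean(axis=0)

def distances_to(center, coords):
    """Euclidean distance of every residue CA atom to a reference point."""
    return np.linalg.norm(coords - center, axis=1)

dist_to_pocket_center = distances_to(pocket_center, ca_coords)
dist_to_hinge = distances_to(hinge_center, ca_coords)

def usr_moments(distances):
    """Mean, standard deviation, and cube root of the third central moment
    (a USR-style 'skewness' term; the exact KiSSim definition may differ)."""
    mean = distances.mean()
    std = distances.std()
    third_moment = ((distances - mean) ** 3).mean()
    return mean, std, np.cbrt(third_moment)

print(usr_moments(dist_to_pocket_center))
```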
The selected subpocket centers include functionally well-characterized kinase regions such as the hinge region, DFG region, and front pocket. Each subpocket center is calculated based on the centroid of three anchor residues' CA atoms (Table S4), following the idea described in the KinFragLib methodology. 19 We added the code to calculate the subpocket centers to the structural cheminformatics library OpenCADD (module opencadd.structure.pocket) 24 to allow for easy access in other projects. Spatial moments describe each of the four distributions of distances to the pocket center, hinge region, DFG region, and front pocket. In KiSSim, the first three moments are used: the mean, the standard deviation, and the cube root of the skewness. This procedure is inspired by and adapted from the ligand-based Ultrafast Shape Recognition (USR) 20 method. Fingerprint length. The final full-length fingerprint encompasses eight discrete physicochemical features (8 features x 85 residues), four continuous spatial distance features (4 features x 85 residues), and three continuous spatial moment features (3 moments x 4 distributions), resulting in a vector of length 1032. Optionally, a subset of residues can be selected to generate a subset fingerprint emphasizing certain residues; we offer a subset that is based on residues frequently interacting with co-crystallized ligands. 25 Pairwise structure comparison Two kinase pocket structures -encoded as two fingerprints -can be compared in two steps (Figure 2). First, we calculate for each feature the distance between the corresponding two feature vectors across the 85 residue entries, producing a feature distances vector of length 15 (i.e., aggregating over the columns in Figure 2 a). For example, the two fingerprints' 85-entry size feature vectors -representing the size of each of the 85 pocket residues -will be reduced to a single size feature distance. The distance between discrete features is defined as the scaled L1 norm $\|x\|_1 = \frac{1}{n}\sum_{i=1}^{n}|x_i|$ (scaled Manhattan distance), whereas the distance between continuous features is defined as the scaled L2 norm $\|x\|_2 = \sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}$ (scaled Euclidean distance), where x is a vector of length n. 27 Second, the 15 feature distances are combined into a single fingerprint distance per structure pair. Kinome-wide comparison The kinome-wide comparison is based on an all-against-all comparison of all available structures. Note that a kinase can be represented by multiple structures (see KLIFS data section); thus, a kinase pair can be represented by multiple structure pairs with multiple distance values. Our final goal is to assign one distance value to each kinase pair as a measure of the similarity between these two kinases (Figure 2 b). The structural coverage of kinases is highly imbalanced: some kinases are represented by one structure only, others like EGFR or CDK2 by more than 100. We select the structure pair with the lowest distance as representative for the kinase pair, hence always picking the two closest structures in the dataset. For example, if a dataset consists of ten structures representing three kinases, the 10 × 10 all-against-all structure distance matrix will be reduced to a 3 × 3 all-against-all kinase distance matrix, consisting of the lowest distance values only after mapping structure pairs to kinase pairs. Fingerprint and similarity visualization in 3D Fingerprint features can be visualized in 3D using the NGLviewer 28,29 and IPyWidgets 30 for the following applications: (a) Fingerprint features of a structure can be visualized in 3D by coloring the residues by different feature values.
(b) The difference between two structures can be highlighted to spot positions of high or low similarity between two structures. The differences are shown for each feature type individually. (c) The standard deviation of spatial features between all structures available for one kinase can be mapped onto an example structure in 3D to show regions of high or low variability between different kinase conformations. KiSSim tree The kinase distance matrix produced as described in the Kinome-wide comparison section is submitted to a hierarchical clustering as implemented in SciPy 31 using as metric the Euclidean distance and as linkage Ward's criterion. We generate a phylogenetic tree in the Newick format based on this KiSSim kinase clustering. The tree branches are labeled with the mean of all distances belonging to that branch; the tree leaves are annotated with the kinase names and their assigned Manning kinase groups. We visualize the tree in an automatized way using BioPython's Phylo 22,32 module to be used in Jupyter Notebooks, and in a manual way using the freely available FigTree 33 software to produce publication-ready circular trees. KiSSim implementation The kissim library is implemented as an open-source Python package, which is available on GitHub at https://github.com/volkamerlab/kissim and as conda package at condaforge. 34,35 Structures are retrieved via the OpenCADD-KLIFS module 24 and are encoded as fingerprints using the FingerprintGenerator class; fingerprints can be compared using the FingerprintDistanceGenerator class. We also offer quick access encode and compare functionalities as Python API and as command-line interface (CLI), see Figure 3. Lastly, the kissim.encoding.tree module offers an automatized all-against-all clustering and phylogenetic tree generation, while the 3D visualization of fingerprints and pairwise comparisons is implemented in the kissim.viewer module. Structural data is read and processed with BioPython 22 and BioPandas; 36 computation is performed with NumPy, 37 Pandas, 38 SciPy, 39 and Scikit-learn. 40 The code for operations that are of use outside of the KiSSim project has been added to the OpenCADD library: 24 KLIFS queries are implemented in the OpenCADD-KLIFS module and subpocket centers can be defined and visualized with the OpenCADD-pocket module. All code is written in Python 3 41 Figure 3: The kissim library's Python API and CLI. Structures from the KLIFS database can be encoded as fingerprints using the FingerprintGenerator class (details in Figure 1) and compared using the FeatureDistancesGenerator and FingerprintDistanceGenerator class (details in Figure 2). The package offers the wrappers encode and compare for quick and easy access from within a Python script (Python API) or from the command line (CLI). Please also refer to the kissim library's documentation at https://kissim.readthedocs.io. Data We are using the following sources of external data: KLIFS kinase structures 14 and the profiling datasets by Karaman et al. 8 and Davis et al. 53 , filtered and processed as described in the following. All prepared datasets described here are accessible via the src.data module at https://github.com/volkamerlab/kissim_app. KLIFS data We downloaded the human structural kinase dataset from the KLIFS database version 3.2 14 on 2021-09-02. This dataset contained 11806 human monomeric structures, i.e., PDB entries split into monomeric structures if consisting of multiple chains and alternate models. 
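The comparison stages outlined in the Pairwise structure comparison and Kinome-wide comparison sections (per-feature distances via scaled L1/L2 norms, their aggregation into one fingerprint distance, and the mapping of structure pairs to kinase pairs) can be sketched in a few lines of NumPy. The feature names, the equal weighting of the 15 feature distances, and the input format are illustrative assumptions and not the kissim package's implementation.

```python
import numpy as np

def scaled_l1(x, y):
    """Scaled Manhattan distance between two per-residue feature vectors."""
    return np.abs(np.asarray(x, float) - np.asarray(y, float)).mean()

def scaled_l2(x, y):
    """Scaled Euclidean distance between two per-residue feature vectors."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return np.sqrt((d ** 2).mean())

def fingerprint_distance(fp_a, fp_b, weights=None):
    """
    Reduce two fingerprints (dicts of feature name -> per-residue values) to one distance.
    Discrete features use the scaled L1 norm, continuous features the scaled L2 norm;
    the feature distances are then combined as a weighted mean (equal weights here,
    which is an assumption, not necessarily the KiSSim default).
    """
    discrete = {"size", "hbd", "hba", "charge", "aromatic", "aliphatic",
                "sco", "exposure"}  # hypothetical feature names
    feature_distances = {}
    for name in fp_a:
        metric = scaled_l1 if name in discrete else scaled_l2
        feature_distances[name] = metric(fp_a[name], fp_b[name])
    names = sorted(feature_distances)
    w = np.ones(len(names)) / len(names) if weights is None else np.asarray(weights, float)
    return float(np.dot(w, [feature_distances[n] for n in names]))

def kinase_distance_matrix(structure_distances, structure_to_kinase):
    """
    Map an all-against-all structure distance dict {(s1, s2): d} to kinase pairs,
    keeping the minimum distance per kinase pair (i.e., the two closest structures).
    """
    kinase_dist = {}
    for (s1, s2), d in structure_distances.items():
        pair = tuple(sorted((structure_to_kinase[s1], structure_to_kinase[s2])))
        kinase_dist[pair] = min(d, kinase_dist.get(pair, np.inf))
    return kinase_dist
```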
We filtered the dataset for human kinases with a resolution ≤ 3 Å and a KLIFS quality score ≥ 6. The KLIFS quality score ranges from 0 (bad) to 10 (flawless) and describes the quality of the structural alignment and resolution regarding missing residues and atoms. In addition, we excluded structures with more than three pocket mutations or with more than eight missing pocket residues. In order to reduce computational costs, we selected the best structure per kinase in each PDB entry (kinase-PDB pair); the best structure per kinase-PDB pair is defined as the structure with the least missing pocket residues, the least missing pocket atoms, the lowest alternate model identifier, and the lowest chain identifier (in that order). Structures were excluded if they are flagged as problematic structures in KLIFS or if they could not be encoded as KiSSim fingerprint. We produced three final datasets of structures for KiSSim fingerprint generation and all-against-all comparison: structures in any DFG conformation, DFG-in conformation only, and DFG-out conformation only. Table 1 lists the number of structures remaining after each filtering step. Bioactivity profiling data To compare predicted and measured on- and off-targets, we use two kinase bioactivity datasets available through KinMap: 56 the Karaman et al. 8 and Davis et al. 53 datasets, which cover 317 and 442 kinases, respectively. The lower the K d value, the higher the binding affinity, which is used as a proxy for activity. We pooled data from both datasets by taking the union of all kinase-ligand pairs; if kinase-ligand pairs have bioactivity values in both datasets, a single value is retained per pair. Evaluation We evaluate KiSSim against the KLIFS pocket sequence, KLIFS ligand-pocket interaction fingerprints (IFPs), and SiteAlign. 18 All prepared datasets and evaluation strategies described here are accessible via the src.data and src.evaluation modules at https://github.com/volkamerlab/kissim_app. KiSSim evaluation using profiling data To evaluate how well KiSSim detects kinase similarities, we need to define a ground truth of kinase similarities. We use profiling data as a surrogate for this, since it is safe to assume that kinases that are targeted by the same ligand share similar binding sites. To this end, we use the profiling Karaman-Davis dataset, which describes the activity of ligands against a panel of kinases. The evaluation proceeds in three steps: 1. We assign each ligand l_i in the profiling dataset to its reported key target(s) k_j(l_i) in the PKIDB, 16 ranging from one target to multiple targets; e.g., Erlotinib is assigned to EGFR only, while Imatinib is assigned to ABL1, KIT, RET, and TRKA. 2. We rank all kinases by their KiSSim distance to the key target, e.g., to EGFR for Erlotinib; these are our KiSSim-based kinase similarities. 3. We calculate ROC curves to demonstrate how well the profiling data is predicted by our KiSSim-based kinase similarities. Some kinase activities measured in the profiling dataset are rather unexpected from a sequence-based similarity point of view. For the EGFR-Erlotinib example, we use the KinMap server to plot the profiling-based and KiSSim-based ranked kinases onto the kinome tree by Manning et al. 6 For example, we highlight kinases with measured activities against Erlotinib as well as the 50 most similar kinases to EGFR as detected by KiSSim. All kinases that are part of the KiSSim dataset are shown as well to define which data points are available for similarity predictions. KiSSim comparison to other methods We outline here the preparation of all-against-all kinase distance matrices based on different similarity measures to be compared to the KiSSim kinase distance matrix (KiSSim). KLIFS IFP. The Jaccard distance is used to compare the IFPs.
If multiple IFP pairs describe the same kinase pair, we selected the minimum distance as the representative measure for the kinase pair, following the same procedure as described for the KiSSim methodology. SiteAlign. We performed an all-against-all comparison using the pocket comparison method SiteAlign 18 (version 4.0). In this approach, properties of a binding site are projected to a triangulated sphere positioned at the pocket center, stored as a fingerprint to be compared and aligned to another binding site fingerprint iteratively. Since we used the existing KLIFS alignment, a few SiteAlign parameters were adapted to reduce runtime: we decreased the number of alignment steps in SiteAlign from 3 to 1, the translational steps from 5 to 3, and reduced the rotational and translational intensity from 2π to 1 4 π and from 4 to 1, respectively. Comparison of the SiteAlign performance for > 4000 structure pairs with the default and adjusted settings, showed that the adjusted settings resulted in lower distances (average decrease of 6%), while matching a higher number of triangles (average increase of 15%). Pocket residues with modifications (e.g. phosphorylated threonines) were excluded to avoid segmentation faults. Results and Discussion We present here the generated KiSSim dataset and the resulting KiSSim-based kinome tree. Furthermore, we evaluate the KiSSim results in comparison to profiling data (KiSSim evaluation using profiling data section) and other pocket encoding methods (KiSSim comparison to other methods section). KiSSim dataset KLIFS structures are filtered as described in detail in the KLIFS data section (Table 1), then encoded and compared as described in the KiSSim methodology section. When considering structures in DFG-in conformations only, 4112 fingerprints representing 257 kinases result in a 4112 × 4112 structure distance matrix and -after mapping structure to kinase pairs as described in the Kinome-wide comparison section -in a 257 × 257 kinase distance matrix (Table 1). Fingerprint feature value distribution The KiSSim fingerprint encodes the 85 KLIFS pocket residues in the form of physicochemical and spatial properties. Physicochemical properties include pharmacophoric and size features, side chain orientation, and solvent exposure; spatial properties include each residue's distance to the pocket center as well as to three subpockets and the first three moments of the resulting distance distributions (Figure 1). We investigate here the fingerprint feature value distribution across all KiSSim fingerprints. Figure S3). Distances from subpocket centers to regions such as the G-rich loop (residues 4-9), the αC-helix (residues 20-30), and the DFG motif vary more than for example to the hinge region, which agrees with knowledge on more flexible vs. more stable regions in the kinase pocket. The spatial moment features describe the distance distributions between the pocket residues to the subpocket centers. They show lower variability for the mean and the standard deviation but high variability for the skewness (Figure 4a, right). The spatial features are based on the KiSSim subpockets as described in the Encoding: From structure to fingerprint section. These subpockets are calculated for each structure individually, however, show robustness over the structural kinome. 
The subpocket centers occupy the same space across the aligned KLIFS structures, while the front pocket and DFG region centers show higher variability than the hinge region and pocket center (Figure 4b), as is to be expected. Therefore, the subpocket definition procedure seems to be robust enough to span comparable subpocket centers while being fine-grained enough to encode structural differences. In conclusion, the feature space encoded in the KiSSim fingerprint, on the one hand, reflects sequence-related similarities between kinases on a generalized level through the defined physicochemical properties and, on the other hand, incorporates information on flexible and stable regions through the defined spatial properties. Fingerprint distances to compare structures Moving on from the structure encoding (fingerprints) to the structure comparison (fingerprint distances), we aimed to explore if the KiSSim fingerprint can be used to discriminate between kinases and between DFG-in and DFG-out conformations. First, we measured the discriminating power between kinases by comparing KiSSim fingerprint distances between DFG-in structures of the same kinase and of different kinases, i.e., intra-kinase and inter-kinase distances, respectively. With a median of 0.02 compared to 0.11, the (about 200000) intra-kinase distances are significantly lower than the (about 8.2 million) inter-kinase distances, as shown in Figure 5a, indicating that the fingerprint can discriminate between kinases. Note that the distances between structure pairs describing the same kinase pair can nevertheless span a broad range. KiSSim-based kinome tree Structure is known to be more conserved than sequence, 64 and previous studies have shown that including structural information adds orthogonal information to shed light on unexpected similarities between kinases and off-target effects. 7,12 To help detect such relationships between more distantly related kinases, we generated KiSSim kinome trees based on the DFG-in conformations, as described in detail in the KiSSim tree section, to investigate all-against-all relationships between kinases compared to the sequence-based kinome tree by Manning et al. 6 Note that we can base the comparison on structurally resolved kinases only. Kinases from the STE group are assigned mostly to a single cluster that is, however, shared with kinases from many other kinase groups. The STE kinases MAP2K1/4/6/7 and OSR1 are separated from the other STE kinases. Kinases from the CMGC group are clustered in two subgroups: kinases from the CDK, CDKL, and MAPK families form one cluster, while kinases from the DYRK, SRPK, and CLK families form another. The CK2a2 kinase (CK2 family) is an outlier. Kinases from the TKL group are mainly clustered together with kinases from the Other group, but some are separated from the rest (DLK, BRAK, IRAK2, and LIMK1). Kinases from the CK1 group form one group except for TTBK1 and TTBK2. Kinases from the AGC group cluster together as well; MSK1 is the only outlier, which is found closer to the CAMK kinases. Lastly, only three atypical kinases are included in the KiSSim dataset (ADCK3, RIOK1, and RIOK2); they form their own cluster, neighboring the CK1 kinases. Overall, the KiSSim tree recovers large parts of the sequence-based kinome tree by Manning et al., 6 including subbranches as discussed for the kinases assigned to the TK and CMGC groups. This is not surprising because we do encode the sequence in an abstracted manner in the physicochemical KiSSim fingerprint bits.
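The tree discussed here results from the clustering step described in the KiSSim tree section. As a minimal illustration of that step, the following sketch applies Ward linkage (SciPy) to a small kinase distance matrix and converts the result into a Newick string; the kinase names and distance values are invented, and this is a generic sketch rather than the kissim tree module itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.spatial.distance import squareform

# Illustrative 4 x 4 symmetric kinase distance matrix (values are made up).
kinases = ["EGFR", "ErbB2", "CDK2", "p38a"]
D = np.array([
    [0.00, 0.05, 0.20, 0.25],
    [0.05, 0.00, 0.22, 0.24],
    [0.20, 0.22, 0.00, 0.15],
    [0.25, 0.24, 0.15, 0.00],
])

# Ward's criterion on the condensed form of the distance matrix.
Z = linkage(squareform(D), method="ward")

def to_newick(node, labels, parent_dist=None):
    """Recursively convert a SciPy cluster tree to a Newick string."""
    length = "" if parent_dist is None else f":{parent_dist - node.dist:.3f}"
    if node.is_leaf():
        return f"{labels[node.id]}{length}"
    left = to_newick(node.get_left(), labels, node.dist)
    right = to_newick(node.get_right(), labels, node.dist)
    return f"({left},{right}){length}"

print(to_newick(to_tree(Z), kinases) + ";")
```

The resulting Newick string can then be rendered, for example, with Bio.Phylo or FigTree, as described in the KiSSim tree section.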
Some kinases, however, show deviating relationships, some of which can be rationalized, such as the CaMKK2-DRAK2 relationship that is also reflected in profiling data. Thus, the addition of structural information in the KiSSim fingerprint allows us to cluster more distantly related kinases. This aspect of the KiSSim tree is of interest because it predicts novel information on kinase similarities. KiSSim evaluation using profiling data As discussed, the KiSSim tree shows expected and unexpected kinase (dis)similarities. In order to evaluate the specificity and sensitivity of our method, we use profiling data as a surrogate for (real) expected kinase (dis)similarities: if a ligand targets a set of kinases with high activity, these kinases have similar binding sites and are therefore treated as similar kinases. To this end, we pooled the Karaman et al. 8 and Davis et al. 53 datasets and filtered for FDA-approved inhibitors and their targets as listed in the PKIDB. 16 The dataset preparation is described in detail in the Bioactivity profiling data section. We show the KiSSim method's performance in the form of ROC curves for each inhibitor's listed targets. For example, Imatinib has three reported on-targets (assigned in PKIDB) and two off-targets (based on activity data in the Karaman-Davis dataset); across all evaluated inhibitors, KiSSim reaches a mean AUC of 0.75 ± 0.12. Comparison of KiSSim to other methods In the next step, we investigated all-against-all comparisons based on the KiSSim fingerprints, the KLIFS pocket sequence, KLIFS ligand-pocket interaction fingerprints (IFP), and the SiteAlign scores. The data preparation steps are described in detail in the KiSSim comparison to other methods section. The KiSSim fingerprint contains physicochemical bits, which generalize the pocket sequence, and spatial bits, which consider the individual atom/residue positions in the underlying kinase conformations. (Figure: (a) Highlight residues with at least one large difference in their physicochemical bits (∆d normalized = 0.6, blue), spatial bits (∆d normalized = 0.2, yellow), or both (green); (b)-(d) residues colored by their differences in the HBA, aliphatic, and hinge region feature, ranging from no difference (white) to highest difference (blue); see notebook for more details. 72) First, we use the KLIFS pocket sequence (KLIFS seq) to probe if the KiSSim fingerprint's generalized sequence and spatial information improve predictions compared to sequence information only. Second, we use the KLIFS pocket IFP (KLIFS IFP) to probe if the KiSSim fingerprint, which does not contain any information about interactions, improves kinase similarity predictions compared to interaction-based fingerprints. The advantage of IFPs is that they emphasize important residues and interactions as seen based on one or more ligands; the disadvantage is that not all possibly relevant interactions have been seen yet. Note that combining the IFP information with KiSSim -using only interacting residues in the KiSSim fingerprint -can improve the KiSSim performance, as discussed in the KiSSim evaluation using profiling data section. Third, we use kinase similarities calculated with the SiteAlign methodology (SiteAlign), from which we adapted some of the physicochemical KiSSim features, to confirm that the KiSSim fingerprint adds relevant kinase-focused information. Correlation. We compared the pairwise kinase distances between the four different method setups (Figure S9) and observed a rather strong correlation between the KiSSim distances and those of the other setups.
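For readers who want to reproduce the ROC-based evaluation underlying the per-method performance numbers reported next, the following is a minimal sketch: kinases are ranked by their distance to the query (on-target) kinase, binary activity labels are taken from profiling data, and the area under the ROC curve is computed with scikit-learn. The kinase names, distances, and labels below are invented placeholders, not values from the Karaman-Davis dataset.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Invented example: KiSSim-style distances of a few kinases to the query kinase
# and binary activity labels derived from profiling data (1 = measured activity).
kissim_distance = {"EGFR": 0.00, "ErbB2": 0.04, "SLK": 0.09, "LOK": 0.10,
                   "CDK2": 0.18, "p38a": 0.20, "GAK": 0.22}
is_active = {"EGFR": 1, "ErbB2": 1, "SLK": 1, "LOK": 1,
             "CDK2": 0, "p38a": 0, "GAK": 1}

kinases = sorted(kissim_distance)
y_true = np.array([is_active[k] for k in kinases])
# Smaller distance should mean "more likely a target", so the score is the negated distance.
y_score = -np.array([kissim_distance[k] for k in kinases])

print(f"ROC AUC: {roc_auc_score(y_true, y_score):.2f}")
```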
Performance. We performed the same profiling analysis, which we discussed for KiSSim (mean AUC 0.75 ± 0.12) in the KiSSim evaluation using profiling data section, for the KLIFS seq (mean AUC 0.78 ± 0.15), KLIFS IFP (mean AUC 0.63 ± 0.12), and SiteAlign (mean AUC 0.71 ± 0.12) datasets, see Figure 7. The KiSSim approach performs slightly worse than the KLIFS pocket sequence comparison in the case of ligands like Imatinib, whose reported on-targets all belong to the TK group, but shows better performance for Erlotinib, Bosutinib, and Doramapimod, which have known kinase targets belonging to different kinase groups. Hence, while the sequence-based approach picks up kinase group assignments, as is to be expected, KiSSim picks up more distant and less obvious off-targets. The KLIFS pocket IFP comparison performs similarly to the KiSSim comparison in the case of Erlotinib, but worse for the other three ligands. In contrast to the KiSSim approach, pocket similarities can only be detected by the IFP approach if the respective kinases have been co-crystallized with ligands that form similar interaction patterns. Such an IFP-based comparison can probably be more successful for a defined kinase set with high coverage of co-crystallized ligands, in contrast to a kinome-wide comparison as performed here. The SiteAlign methodology projects topological and chemical properties onto a sphere that sits in the center of a protein pocket. The spheres are aligned based on these projections, and a similarity score is calculated between the aligned fingerprints. Finding the right alignment is a time-consuming step; hence, we provided SiteAlign with the already KLIFS-aligned structures as a starting point and reduced the iterations as described in the KiSSim comparison to other methods section. KiSSim outperforms SiteAlign in most cases, although often not by a large margin. Taking all these findings together, the KiSSim methodology compares well with established methods while often improving predictions for kinase pairs without an obvious relationship based on the sequence. The pocket sequence- and IFP-based methods are much faster than the structure-based methods KiSSim and SiteAlign; however, the overall kinase similarity assessment benefits from the added structural pocket information. KiSSim's setup and runtime are more convenient than for the SiteAlign method; however, KiSSim does rely on the KLIFS 85-residue pocket alignment. Conclusion We presented here the KiSSim (Kinase Structural Similarity) fingerprint as a novel structure-enabled pocket encoding tailored to kinase pockets. The fingerprint encodes physicochemical and spatial properties of the 85 KLIFS residues, which are aligned across the structurally covered kinome. On the one hand, the majority of physicochemical bits -size, HBD, HBA, charge, aromatic, and aliphatic, which are adapted from the SiteAlign method -encode the pocket sequence in a generalized, pharmacophoric way. On the other hand, the side chain orientation, solvent exposure, and the spatial bits -the distances to the pocket center and key subpocket centers and the distance distributions' moments -account for the structural conformation. Across all fingerprints, we saw that the fingerprint captures the physicochemical property variability (e.g., most residues are uncharged, whereas HBD/HBA features vary) and the conserved residue positions (e.g., distances to the DFG region are more widely spread than to the hinge region).
We used the fingerprint to calculate all-against-all distances -small distances refer to high similarity, large distances to low similarity -within the structurally covered kinome: the DFG-in and DFG-out datasets consist of 4112 and 406 structures, representing 257 and 71 kinases, respectively. We found that the fingerprint can distinguish between intra- and inter-kinase similarities and between DFG-in and DFG-out structures. Some kinases are represented by multiple structures; hence, some kinase pairs are represented by multiple structure pairs. The distribution of structure distances for one kinase pair can be broad; we selected per kinase pair the closest structure pair that is experimentally observed. We clustered the resulting kinase distance matrix to produce a KiSSim-based kinome tree. While the tree reproduced large parts of the sequence-based Manning tree, some relationships could be observed that are unexpected from a sequence perspective only. For example, we found similarities between CaMKK2 (STE) and DRAK2 (CAMK), which are targeted by the same chemical probe SGC-STK17B-1; 9 we could also confirm the reassignment of AurA, AurC, PLK4, and CaMKK2 from the Other to the CAMK group as proposed by Modi and Dunbrack. 7 Besides the averaged tree view, we also investigated the top-ranked kinases given a query kinase to show that KiSSim can partially explain profiling data. While some ligand profiles are reflected completely in the KiSSim dataset (e.g., Imatinib), other ligand profiles are covered partially (e.g., Erlotinib's off-targets LOK and SLK are detected while GAK is not). In comparison with other similarity measures -focusing on the pocket sequence (KLIFS seq), interaction profiles (KLIFS IFP), or topological and chemical pocket properties (SiteAlign) -KiSSim performs equally well or slightly better in most cases. The sequence- and IFP-based measures are easy and fast to compute thanks to the preprocessed kinase pockets available at KLIFS; we recommend including these datasets in any case when investigating kinase similarities. SiteAlign is a powerful tool to compare pockets across all protein classes; if one is interested only in kinases, KiSSim is a kinase-focused and faster alternative with slightly better results in most of the investigated cases. As for all structure-based methods, the imbalanced dataset of kinase structures is a challenge. Some kinases are structurally well represented (e.g., EGFR or CDK2), while others have only a few structures available, and unfortunately still roughly half of the human kinome has no structural information available at all. The recent breakthrough of AlphaFold2 75 could help here; predicted structures for almost all human kinases are now available in the AlphaFold DB. 76 Modi and Dunbrack 77 have already classified the predicted structures' conformations and found most structures in the DFG-in conformation. An AlphaFold-enhanced KiSSim tree may further increase the usefulness of the KiSSim methodology for kinome-wide similarity studies. Furthermore, the KiSSim fingerprint can be applied in machine learning, e.g., to extract the most important features in the kinase pocket. We believe that the KiSSim fingerprint is a valuable tool for kinase research to explain and predict off-targets and polypharmacology. Since the code is open source and available as a Python package, the KiSSim fingerprint can easily be integrated into other larger-scale workflows. Code and data availability
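As an illustration of the kind of downstream use mentioned in the conclusion, the following is a minimal, self-contained sketch of querying a KiSSim-style kinase distance matrix for the closest (and thus potential off-target) kinases of a query kinase. The kinase names, distance values, and matrix layout are invented placeholders, not output of the actual package; a real workflow would load the distance matrix produced by a kissim run instead.

```python
import numpy as np
import pandas as pd

# Toy stand-in for a square kinase distance matrix (smaller distance = more similar).
kinases = ["EGFR", "ErbB2", "SLK", "LOK", "CDK2", "GAK"]
values = np.array([
    [0.00, 0.04, 0.09, 0.10, 0.18, 0.22],
    [0.04, 0.00, 0.12, 0.13, 0.19, 0.23],
    [0.09, 0.12, 0.00, 0.03, 0.21, 0.20],
    [0.10, 0.13, 0.03, 0.00, 0.22, 0.21],
    [0.18, 0.19, 0.21, 0.22, 0.00, 0.24],
    [0.22, 0.23, 0.20, 0.21, 0.24, 0.00],
])
distance_matrix = pd.DataFrame(values, index=kinases, columns=kinases)

def off_target_candidates(query, matrix, n=3):
    """Return the n kinases closest to the query kinase (smallest distance)."""
    return matrix.loc[query].drop(query).nsmallest(n)

print(off_target_candidates("EGFR", distance_matrix))
```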
Research Article Experimental Study of Hollow RC Beams Strengthened by Steel Fiber under Pure Torsion This paper examines the effectiveness of pure torsional loads on hollow reinforced concrete high-strengthened beams. Engineers need to know how much twist a structural member generates when exposed to torsional loads to design it properly. This is done through an experimental investigation of the torsional behavior of reinforced concrete (RC) beams using twelve hollow rectangular beams with varying parameters, such as the spacing of the stirrups, the influence of the steel fiber fraction, and the main reinforcement amount. Four values of fiber volume fractions (0, 0.5%, 0.75%, and 1%), three spacings of transverse reinforcements (60, 100, and 150 mm), and various longitudinal reinforcements (8 V 12 mm, 6 V 12 mm, and 4 V 12 mm) have been used. The tested beams had the same length (1000 mm), cross-sections, concrete mixture, and quality control. In the hollow beams, the interior dimensions were 180 mm × 180 mm, while the exterior dimensions were 300 mm × 300 mm. Torsional loads were applied to all the beams using custom-built test equipment. This study highlighted that the structural characteristics of hollow RC beams could be improved by increasing the fiber volume, lowering the stirrup spacing, and increasing the longitudinal reinforcement. Torsion moments rose by 132% when the fractional volume of fiber was increased from 0% to 1%, while they rose by 71.27% when the longitudinal reinforcement was increased from 4 to 8 bars for beams with fractional volumes of fiber of 0.5 percent and the same transverse reinforcement ratios. Introduction Hollow cross-section (HCS) models are being used more frequently in structures nowadays, including bridges and buildings. This is primarily because they have advantages over traditional open section members in terms of both structural and aesthetically pleasing design elements. The most well-known use of hollow cross sections is to provide an economic, lightweight, and long-span member. For RC beams, premature torsion failure may occur if a torsional moment is supplied to a reinforced concrete beam without transverse reinforcing before its flexural strength reaches its limit. As this failure occurs suddenly and without prewarning, it is generally catastrophic; therefore, stirrups and steel fibers have traditionally been used to prevent the torsional failure of concrete beams. Since steel fibers' effects on hollow beam torsion behavior with stirrup reinforcement are not well understood, it is difficult to design properly. This research addresses the use of steel fibers in hollow concrete beams under pure torsion. This research is looking for ways to improve the torsional strength of hollow reinforced concrete beams by altering the stirrup spacing, adding reinforcement along the longitudinal axis, and adding steel fiber. This experimental study tests eleven beams with different steel fiber aspect ratios, stirrup spacing, and various numbers of longitudinal reinforcements. Background In most structures, the torsion action occurs more frequently, but it rarely occurs by itself. Torsion, on the other hand, is regarded as one of the crucial structural activities, alongside shear, flexure, and axial tension compression. Torsion causes the failure of the concrete member, which is caused by tensile stress. This failure was caused by a pure shear state. The model's tensile strength was significantly increased with the inclusion of steel fibers.
is property of reinforced concrete with steel fibers led to various examinations of it under various loading techniques. Limited data were provided about the performance of steel fiber reinforced concrete members with hollow sections under pure torsion. Prior tests demonstrated that the use of steel fiber increased the torsional strength of members. Chalioris and Karayannis [1] studied the behavior of reinforced concrete beams with steel fibers under torsion. 35 beams with T-shaped, L-shaped, and rectangular crosssections with steel fibers with an aspect ratio of lf/df � 37.5 are presented and discussed. To assess the efficacy of fibers as a prospective stirrup replacement, steel fibers were used as the only shear torsional reinforcement the results showed that fibrous concrete beams had better torsional performance than the corresponding non-fibrous control beams. Okay and Engin [2] found that adding steel fiber reinforcement to RC beams changed their torque capability. Chalioris and Karayannis [3] reported an experimental study using eleven RC beams with rectangular spiral reinforcement subjected to torsion; according to test results, torsional capacity was enhanced for beams with rectangular spiral reinforcement. Lopes and Bernardo [4] examined sixteen hollow beams with concrete compressive strengths ranging from 46.2 to 96.7 MPa and torsional reinforcement ratios ranging from 0.3 to 2.68%. ey found a novel failure type where the beam corners break off at a specified reinforcement ratio, which prevents the beam from reaching its predicted maximum strength and ductility. Enthuran and Sattainathan [5] reported that crimped steel fibers with 1.5% and 2.0% volume fractions resulted in increased torque and twist angles. erefore, the results suggested that RC beams with a greater volume proportion of steel fibers exhibit superior torsional performance. Sudhir and Keshav [6] investigated the effect of adding 1.5% steel fibers on improving concrete torsional strength. e inclusion of steel fibers enhanced the torsional strength, concrete crack resistance, and the combined torsional shear bending strength while decreasing the deflection. Kandekar and Talikotiþ [7] investigated the torsional behavior of RC beams strengthened using aramid fiber strips. ey constructed twenty-one RC beams: three with normal reinforcement, three with torsional reinforcement, and the remaining fifteen with normal reinforcement and with aramid fiber strips of 150 mm width and varied spacings of 100, 125, 150, 175, and 200 mm. All beams with aramid fiber strips were found to have increased torsional moment bearing capability. With modest changes in the twist angle, torsional moment carrying capacity improves as strip spacing decreases. Hameed and Al-Sherrawi [8] found that under pure torsion tests, adding steel fibers to RC beams improves the ultimate torsion strength for three specimens up to 28.55%, 38.09%, and 49.46% compared to RC beams without fibers. ese enhancements are dependent on the increment in fiber content. Facconi et al. [9] showed that steel fiber reinforced concrete beams exhibit stable torsional behavior after cracking in terms of improved crack control, increased torsional resistance, and cracked stiffness. AlKhuzaie and Atea [10] evaluated the impact of adding steel fiber on the behavior of reactive powder concrete beams with hollow T-sections under pure torsion. 
e researchers determined that a beam with a 2% fiber volume fraction raised the cracking torsional moment by 184% and the final torsional moment by 66%. Nitesh et al. [11] studied the effect of adding 0.5% hook steel fiber to self-compacting concrete beams with recycled coarse aggregate. To test the strength of the concrete using natural and recycled coarse material, 32 beams were constructed. e results showed a large increase in the ultimate torque, torsional stiffness, angle of twist, and torsional toughness in self-compacting concrete compared with vibrated concrete for natural and recycled coarse aggregates with steel fibers. Ibrahim et al. [12] studied the effects of spacing and type of stirrup. e investigation comprised ten reinforced concrete beam specimens: seven hollow sections with various ratios of rectangular spiral stirrups, two solid beams with spiral and closed rectangular stirrups, and one hollow beam with closed rectangular stirrups. Compared with standard closed stirrups, the findings revealed that inclined spiral rectangular stirrups in beam reinforcement enhanced the torsional capacity and strained energy by 16% and 27%, respectively, for solid beams, and 18% and 16%, respectively, for hollow beams. Kim et al. tested eleven RC beams with various torsional reinforcement amounts and different cross-sectional properties. e results indicated that solid and hollow sections have the same levels of torsional strength. Furthermore, regardless of cross-sectional properties, specimens with less arranged torsional reinforcement exhibited ductile behavior compared with the ACI 318 ̶ 19 building code [13]. Kim et al. [13] investigated six steel fiber reinforced concrete (SFRC) beams under torsion. e tested beams were divided into three groups; beams with no stirrups, beams with a minimum transverse reinforcement amount (according to Euro code 2), and beams with hooked steel fibers (25 or 50 kg/m 3 ). e results indicate that the addition of steel fibers increases the maximum resisting torque and maximum angle of twist compared with the same specimen without fibers. Moreover, SFRC has a relatively high postcracking stiffness compared to the RC elements [14]. Hadi and Mohammed [15] studied the behavior of reinforced concrete beams with straight and hooked steel fibers under combined torsional-flexural load. e experimental study involved three fixed supported beams with dimensions of 250 mm * 300 mm * 1800 mm and different types of fibers with a volume percentage of 1.5%. e beam with hooked steel fibers has a 33.37% increase in compressive strength and a 55.08% increase in tensile strength. It was also concluded that the use of hooked fiber had the greatest influence on improving the cracking behavior of beams. Using hooked and straight fibers, beams are able to sustain larger loads at the same rate of deflection/twisting, with a 128.13% and 74.76% increase in ultimate load, respectively. Despite the fact that the applied load was torsional-flexural, all tested beams failed due to excessive twisting. Abdulkadir et al. [16] experimented with RC members with 0, 30, and 60 kg/ m 3 steel fibers under shear, torsion, and axial load. e results indicate that increasing the ratio of steel fibers increases the torsional moment capacity and decreases the shear strength capacity. Moreover, increasing the steel fiber content increases the moment capacity and axial load of RC columns. Hussain et al. 
[17] experimented with the structural performance of ten flat slabs with and without a square opening using four types of fibers to gain a better understanding of how the variance of fiber type and shape affects the flexural behavior of two-way slabs. Results revealed that the existing fiber in concrete improved the mechanical properties of the hardened concrete mix, the compressive strength, the flexural behavior of the reinforced concrete slab, and the flexural strength capacity. Most of the experimental and theoretical studies that were mentioned above corresponded to concrete beams with solid sections under pure torsion. A few studies on hollow reinforced concrete beams under pure torsion are available in the literature. The present study attempts to investigate the behavior and load-carrying capacity in torsion for hollow RC beams to show the effects of adding different steel fiber ratios, different longitudinal reinforcements, and stirrups that influence the torsional strength capacity and the behavior of the beam. The angle of twist, cracking torsional moment, and ultimate torsional moment were measured. Materials and Methods The beam specimens in this study were cast using plywood molds constituted from a single part (external parts). The fallen were used to make the tested beams' hollow shape, as shown in Figure 1. A 1 cm square stock was placed inside the molds to maintain the proper concrete cover to hold the reinforcement throughout the construction process. A typical poker vibrator was employed during the concrete casting to facilitate consolidation and precise concrete placement within and around the reinforcement. Portland cement, natural sand, and aggregate were used in the concrete mixture to meet the IQS (5/1984) [18,19] and ASTM C33-03 [20] specifications. Tables 1 and 2 represent the cement's chemical and physical characteristics, whereas Tables 3 and 4 provide the properties of sand and aggregate, respectively. The maximum size of the used aggregate was 10 mm, and the proportions used in the concrete mix design were 1 : 1.31 : 2.8 (cement : sand : gravel, by weight) with a water/cement ratio of 0.32. Test beams' compressive strength was determined using three 150-millimeter concrete cylinders, each 300 mm high. The 28-day compressive strength of the cylinders was designed to be 65 MPa according to ACI 318 [21]. Steel with a yield strength of 547 MPa was used. Steel fiber ratios of 0.5%, 0.75%, and 1% of the concrete weight were used. The design of the RC beams was done using ACI 318 [21]. Specimen Details. The factors evaluated in this work include the fiber volume percentage, the main reinforcement quantity, and the spacing of the stirrups, all of which impact the torsional capacity of the beam. Twelve hollow reinforced concrete beam specimens with an overall length of 1000 mm are shown in Figure 2, with exterior and interior dimensions of 300 × 300 mm and 180 × 180 mm, respectively. As shown in Table 5, this work consists of three groups: Group A is used to study the influence of stirrup spacing and fiber volume fraction (Vf) on the torsional strength of beams. The impact of adjusting the stirrup spacing on the torsional strength of these beams under pure torsion was investigated using six reinforced hollow concrete beams. The stirrup spacing for these beams is 60, 100, and 150 mm, and the Vf range is 0.5% to 0.75%. Group B deals with the effect of fiber volume fraction on the behavior of hollow reinforced concrete beams.
Four RC beams were presented to find the impact of the fiber ratio (Vf) on the beam's torsional strength. These beams have a varied volume fraction of fibers (0, 0.5%, 0.75%, and 1%). Group C deals with the calculation of torsional displacement and studies the effect of changing the amount of main reinforcement and the fiber volume fraction (Vf) for the same reinforcement. Six reinforced concrete beams were used to find the effect of the longitudinal reinforcement amount on the beam torsional strength. The main reinforcement includes 8V12 mm, 6V12 mm, and 4V12 mm, and Vf ranged between 0.5 and 0.75%. Figure 3 shows the torsional testing machine used to test the hollow RC beams. This machine has been enhanced by adding an arm to apply pure torsion. A heavy steel plate of 45 mm in thickness in a wedge shape was used to make a torsion arm with a net length equal to 0.65 m. Four bolts were utilized to fasten two steel plates on the top and bottom sides of the tested beam to secure the test arm. The RC beams were built to be simply supported at two bearings, with the roller support located underneath the bearing to facilitate the movement of the beam specimens, allowing them to be readily rotated under the supplied torque, as shown in Figure 4. Figure 5 shows a schematic of the applied loading. Figure 6 shows the addition of linear variable differential transformers (LVDTs) at the beam ends to determine the twist angle. By averaging the deflections from the LVDTs on both sides of the tested beam, the twist angle was computed. Results and Discussion The load was applied at the ends of a 650 mm torsion lever arm from the beam center to achieve a pure torsional moment in the present investigation. The torsion moment was applied to the beam using a hydraulic testing machine in 5 kN increments, and the test continued until the beams failed. The torque produced after the appearance of the first crack is indicated as the cracking torsional moment (Tcr), whereas the torque that causes beam failure is known as the ultimate torsional moment (Tu). Two LVDTs are positioned at the maximum torsional moment sites to measure the twist angle. Table 6 shows the twist angle, cracking, and ultimate torsional moment. Effect of Spacing of Stirrups. Tcr is the torque at which the applied stresses exceed the section's tensile strength and the cracks begin to appear. After the appearance of these cracks, rapid deformations, and a drop in the reading of the testing machine, the load-carrying capacity of the beam will decrease; the torque in this case is referred to as the ultimate torsional moment (Tu). Two beams, H11 and H12, were selected to represent the control beams to study the effect of changing the spacing of the stirrups. Figure 7 indicates that the values of Tcr and Tu are improved by minimizing the spacing of stirrups and by increasing the fiber ratio for the same stirrup spacing. The increments in the cracking moment of beams H2 and H9, which have a steel fiber ratio of 0.5% and stirrup spacings of 100 mm and 60 mm, were equal to 85.53% and 116.7%, respectively, compared to the control beam (H11). The increases in the ultimate torsional moment for the two beams (H2 and H9) are 29.26% and 43.25%, respectively. It was also discovered that increasing the steel fiber ratio from 0.5 percent in beam H11 to 0.75 percent in beam H12 enhanced the cracking torsional moment and ultimate torsional moment by 14.12% and 6.16%, respectively.
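To make the loading arithmetic explicit, the following is a small sketch of how a load applied at the end of the 650 mm lever arm translates into a torsional moment (T = P x L) and how percentage increases relative to a control beam, as quoted above, are computed. The load values and the control-beam moment below are illustrative placeholders, not the measured data from Table 6.

```python
# Torsional moment from a load applied at the tip of the torsion lever arm: T = P * L.
LEVER_ARM_M = 0.65  # 650 mm torsion lever arm, as described above

def torsional_moment(load_kn, lever_arm_m=LEVER_ARM_M):
    """Torsional moment in kN*m for a load (kN) applied at the lever arm tip."""
    return load_kn * lever_arm_m

def percent_increase(value, reference):
    """Percentage increase of a beam's moment relative to a control beam."""
    return 100.0 * (value - reference) / reference

# Illustrative (not measured) values: control beam vs. a fiber-reinforced beam.
t_control = torsional_moment(10.0)   # e.g., cracking load of 10 kN
t_fibrous = torsional_moment(18.5)   # e.g., cracking load of 18.5 kN
print(f"Tcr control = {t_control:.2f} kN*m")
print(f"Tcr fibrous = {t_fibrous:.2f} kN*m")
print(f"Increase    = {percent_increase(t_fibrous, t_control):.1f} %")
```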
The effect of the fiber fraction on both Tcr and Tu decreases with the reduction of stirrup spacing. Therefore, to improve the behavior of the beams, the number of stirrups should be increased. The angle of twist (Ø) is measured from the deflection rate on both sides of the beam, which was measured using the previously described LVDTs. Increasing the number of stirrups with the corresponding fiber fraction and main reinforcement improves the beam stiffness and the twist angle. Figure 8 depicts the relationship between the difference in the twist angle and the Tu for each beam compared to the control beams. It can be shown that increasing the stirrups improves Tcr, Tu, and Ø. The rate of increment in the torsional resistance of the tested beams relative to the corresponding spacing of stirrups was not proportional, as shown in Tables 7 and 8. Figures 9(a)-9(c) demonstrate that the cracking torsional moment increased by 116.7%, the twist angle improved by 48.82%, and the ultimate torsional moment improved by 48.31%. Effect of Fiber Volume Fraction. As shown in Table 9 and Figure 10, increasing the fiber percentage increases the ultimate and cracking torque. The cracking torsional moment of the beam increases by 90%, 112%, and 132%, respectively, as the fiber fraction increases from zero to 0.5, 0.75, and 1%, while the ultimate torque increases by 24.4, 45.6, and 62.7%, respectively, in comparison to the beam without steel fiber (H1). The addition of steel fibers enhances the concrete tensile strength and improves the ductility of the beams. The beam fails when the tensile stresses in the concrete rise and exceed the concrete tensile strength; the beam cracks, but the fibers continue to resist the rising tensile stresses until the steel fibers completely pull out at a critical crack. All of the tested beams failed owing to excessive torsional shear stress, resulting in a large diagonal torsional fracture. Figure 11 depicts the relationship between the twist angle difference and the torsional moment of the tested beams. Increasing the fiber ratio while maintaining the same stirrup spacing and the number of main reinforcements increases the beam stiffness and twist angle. These results reveal that Tcr, Tu, and Ø are improved by increasing the fiber fraction. As evident in Tables 9 and 10, the rate of increment in the torsional properties relative to the steel fiber ratio was not the same for all tested beams. Figures 12(a)-12(c) demonstrate that the cracking torsional moment increased by 132%, the twist angle improved by 69.04%, and the ultimate torsional moment improved by 62.1%. Effect of Main Reinforcement. The impact of modifying the number of longitudinal reinforcements with the two distinct fiber ratios was investigated in this section of the experimental study by comparing two beams (H5 and H7) with the control beam (H2). The cracking and ultimate torsional moments rise by 3.68% and 69.09%, respectively, for beam H5, and by 8.94% and 71.27%, respectively, for beam H7, as the number of longitudinal reinforcements increases (see Table 11 and Figure 13). The increase in the fiber ratio from 0.5% to 0.75% resulted in an 11.5% increase in the cracking load and a 12.46% rise in the ultimate torsional moment for beams with the same longitudinal reinforcement (4 bars). Beam H8, which contains the largest number of main reinforcement bars (8 bars), indicates that the ultimate torsional moment will decrease when the number of main reinforcements is increased from 6 to 8 bars (Figure 10: Torsional moment of the tested beams). Figure 14 depicts the relationship between the torsional moment and the average twist angles for the examined beams.
The stiffness and twist angle of the beams could well be improved by increasing the main reinforcements while maintaining the same stirrup spacing and fiber ratio (see Table 12). According to the current observations of the tested beams, increasing the number of longitudinal reinforcements improves Tcr, Tu, and Ø. As evident in Tables 11 and 12, the increment rate in the torsional properties relative to the number of longitudinal reinforcements was not the same for all tested beams. Figures 15(a)-15(c) show that the cracking torsional moment increased by 81.71%, the twist angle improved by 31.98%, and the ultimate torsional moment improved by 8.98%. Figure 16 shows the failure modes of the tested beams. The control beam (H1) demonstrates typical torsion failure behavior, with spiral diagonal fractures seen throughout the beam's cross section at an angle of around 45 degrees. By increasing the applied load, larger cracks appear until the concrete fails by crushing in the center of the beam. This mode of failure is the typical mode of beams without steel fiber. On the other hand, beams with steel fibers fail differently, especially those with a high steel fiber fraction, where the presence of fibers provides crack control and pseudo-ductility in the post-cracking stage. To be more precise, the primary cracks started to show up on the beam's surface before the post-peak falling branch of the torque vs. twist response started to develop (Figures 8, 11, and 14). As the torque decreased after the peak, the damage gradually localized in a single crack, whose width was larger than that of the others and increased until the end of the test. From Figure 16, it can be seen that the hollow beam H10, with the largest fiber volume fraction and the smallest spacing of stirrups, is best at resisting cracks. Summary and Conclusions The experimental results concerning the torsion behavior of full-scale hollow reinforced concrete beams under pure torsional loading are presented and discussed. The main variables were the longitudinal reinforcement, the stirrups, and the steel fiber ratio. The angle of twist, cracking torsional moment, and ultimate torsional moment for HCS beams were measured. It is possible to draw the following conclusions based on the test results: (1) The deformations and torsional stresses of hollow RC beams were improved by increasing the number of stirrups. (2) The deformations and torsional stresses for hollow RC beams were improved using a larger number of longitudinal reinforcements. (3) The insertion of steel fibers into hollow RC beams subjected to a pure torsion load enhanced the principal tensile stress resistance following crack formation until total fiber pullout occurred at the critical fracture. (4) The percentage of improvement for Tcr, Tu, and Ø for beams H1 to H12 is as follows: (i) Tcr has a range of 3.68%-132%, (ii) Tu ranges from 29.26% to 71.27%, (iii) Ø ranges from 2.87% to 69.1%. (5) It is preferable to increase the torsional strength (Tcr and Tu) of the hollow beam rather than its stiffness: the significant increase in Ø before failure makes the addition of main reinforcement and stirrups beneficial for saving lives, as it draws attention to the impending failure.
(6) For beams H1 through H12, the relationship between the twist angles and torsional moments is the same. The amount of strain energy in the beam could be increased by including additional stirrups and main reinforcement. (7) With increasing main reinforcement, the rise in Tu is greater than the increase in Tcr in hollow RC beams, whereas the rise in Tcr is greater than the increase in Tu with increasing Vf and number of stirrups. (8) The effect of steel fiber on Tcr and Tu decreases with increasing main reinforcement and with decreasing stirrup spacing. Disclosure The experimental work was carried out at the University of Basrah Laboratory. Data Availability The data results of this study are based on our experimental work. Conflicts of Interest The authors declare that they have no conflicts of interest.
The Role of Endocrine-Disrupting Chemicals in Male Fertility Decline Endocrine-disrupting chemicals (EDCs) are exogenous compounds with natural or anthropogenic origin omnipresent in the environment. These compounds disrupt endocrine function through interaction with hormone receptor or alteration of hormone synthesis. Humans are environmentally exposed to EDCs through the air, water, food and occupation. During the last decades, there has been a concern that exposure to EDCs may contribute to an impairment of human reproductive function. EDCs affect male fertility at multiple levels, from sperm production and quality to the morphology and histology of the male reproductive system. It has been proposed that exposure to EDCs may contribute to an impairment of sperm motility, concentration, volume and morphology and an increase in the sperm DNA damage. Moreover, EDCs exert reproductive toxicity inducing structural damage on the testis vasculature and blood-testis barrier and cytotoxicity on Sertoli and Leydig cells. This chapter will explore the effects of EDCs in male reproductive system and in the decline of male fertility. Introduction Endocrine-disrupting chemicals (EDCs) are exogenous substances or mixtures of chemicals that can disrupt male and female endocrine function through the interaction with hormone receptors. They lead to alterations in hormone action, synthesis, transport and metabolic processes [1]. Several compounds such as dioxins, plastic contaminants (e.g., bisphenols (BP)), triclosan (TCS), pesticides and herbicides (e.g., diphenyl-dichloro-trichloroethane (DDT)), metals and others are known EDCs [2]. Humans may be exposed to EDCs due to contamination of water and food chain, inhalation of contaminated house dust and through occupational exposure [2]. Although, in some westernized countries the use of certain EDCs has been banned, there are cases that human exposure to these chemicals is inevitable. Thus, during the past decades, human exposure to EDCs has received increased attention, and particular focus has been given to the harmful effects of EDCs to the male reproductive system. Evidences suggest that EDCs may have significant adverse effects on human health and are contributing to the trends in occurrence of male reproductive health problems and the decline in male fertility [3]. According to the literature, male reproductive decline may result from a combination of morphological, functional and molecular alterations in the reproductive organs, often due to exposure to EDCs. Most studies are focused either on the evaluation of basic seminal parameters or reproductive outcomes, but there are evidences that EDCs may impact at the level of the reproductive and endocrine systems. For example, there are evidences that TCS has a tendency to bioaccumulate in the epididymis [4]. Bisphenol A (BPA) has been reported to have both estrogenic and antiandrogenic effects [5][6][7]. It has been also negatively associated with sperm quality [8][9][10]. Toxicological studies showed that BPA caused adverse reproductive outcomes, namely, decreased epididymal weight, daily sperm production and testosterone (T) levels in rodents [11][12][13]. Recently, our group performed a systematic review regarding the effect of exposure to mercury (Hg) on human fertility [14]. Results revealed that higher levels of Hg in blood and hair were associated with male subfertility or infertility status. This chapter summarizes the effects of male exposure to EDCs on markers of male fertility. 
The agents discussed here, which include TCS, BPA, metals (such as cadmium (Cd) and Hg), polychlorinated biphenyls (PCBs) and others, were chosen based on their human exposure prevalence and adverse effects on human reproductive health. EDCs induce reproductive system toxicity: ultrastructural, cellular and molecular changes The male reproductive system is composed of two testes, a system of genital ducts, the accessory glands (seminal vesicles, prostate, Cowper and Littre glands) and the penis [15]. Testes, the male sexual glands, are ovoid organs localized outside the abdominal cavity within the scrotum. This localization maintains the temperature at 2-4°C lower than the body temperature, which is optimal for testicular function. Testes are surrounded by two different layers of protective tissue, the tunica albuginea and the tunica vaginalis. The testicular parenchyma is organized into lobules, each containing one to three seminiferous tubules, the functional unit of the testis, and interstitial tissue surrounding the tubules, which contains the Leydig cells (LC), responsible for the production of T in the presence of luteinizing hormone (LH) (Figure 1) [16]. The seminiferous tubules are composed of male germ cells (spermatogonia, spermatocytes and spermatids) and Sertoli cells (SC). SC are involved in the mechanical support and nutrition of germ cells, regulation of male germ cell proliferation and differentiation, phagocytosis, steroid hormone synthesis and metabolism, and maintenance of the integrity of the seminiferous epithelium. The male reproductive system is responsible for the production of spermatozoa, for the synthesis and secretion of male sex hormones and for the delivery of male gametes into the female reproductive tract. The process of spermatogenesis is highly regulated by the hypothalamic-pituitary-gonadal (HPG) axis. Evidence suggests that the normal morphology and function of the male reproductive system are affected by several factors, including environmental pollutants such as EDCs (Figure 1). In addition to altered testicular morphology and dysfunction, exposure to EDCs also increases the incidence of testicular pathologies. For instance, exposure to phthalates was associated with the development of testicular cancer, cryptorchidism and hypospadias [17]. This section discusses the current knowledge on reproductive system EDC toxicity in humans and other animals. Changes in volume/weight of reproductive organs The volume/weight of the male reproductive organs is an important indicator of the integrity of this system. Several animal studies showed a significant decrease in the weight of the testes and sex accessory tissues in animals exposed to EDCs [4, 18-23]. For instance, male rats treated with 10 and 20 mg/(kg day) of TCS revealed a significant decrease in the weight of the testes, epididymis, ventral prostate, vas deferens and seminal vesicles [18]. However, administration of 5 mg/(kg day) of TCS did not cause significant changes in the testes and sex accessory tissues [18]. Recently, Lan et al. [4] showed that the absolute weights of testes and epididymis of rats treated with 10, 50 or 200 mg/kg of TCS were not significantly affected. Rodents were exposed to BPA by the oral route or subcutaneous injections [24,25]. A dose of 2 ng/g body weight induced a decrease in epididymal weight and an increase in prostate weight. Bisphenol S (BPS), considered a safe substitute for BPA, has chemical similarities with BPA and may act as an EDC. 
Thus, a recent work compared the effects of BPA and BPS on the morphology and physiology of the ventral prostate of adult gerbils [26]. Animals treated with BPA and BPS showed no alterations in prostate weight. Regarding histopathology, BPS-treated animals showed intense prostatic hyperplasia; an increased relative frequency of epithelium, muscular stroma and non-muscular stroma; and a decreased luminal compartment, whereas BPA-treated animals showed an increased occurrence of hyperplastic growth. In general, however, the authors found that BPS promoted more structural and histopathological changes than BPA. Exposure to metals also induced effects on testes size. A dose of 5 mg/kg body weight of cadmium chloride (CdCl2) administered to rats by oral gavage caused a significant decrease in testes and epididymis weight [19]. Moreover, Hg and zinc (Zn) significantly decreased the absolute and relative testicular weights in murine models, with Hg producing the greatest reduction in weight [27]. Similar results were obtained by Narayana et al. [22] and Geng et al. [23], who showed a decrease in the weights of reproductive organs of rats exposed to pesticides. Rats exposed to phthalates demonstrated reduced testicular weights and histologic changes in the seminiferous tubules [20,21]. Moreover, rats exposed to phthalates during the prenatal period developed reproductive anomalies, namely, smaller testis and penis size [28]. Human studies related to the effects of exposure to EDCs on testicular volume/weight are limited but in accordance with animal studies. For instance, in a study of Croatian men without occupational exposure to metals, blood Cd was negatively correlated with testes size, suggesting that this metal exerts toxicity on human testes [29]. Alterations in testicular morphology Experimental studies showed that exposure to EDCs had adverse effects on the testes, resulting in testicular damage at the structural and, consequently, functional level. Male rats treated with 20 mg/(kg day) of TCS exhibited several histopathological malformations in the testes and sex accessory tissues [18]. The lumen of the vas deferens from treated rats exhibited stereocilia detached from the epithelium and the presence of eosinophilic bodies. Moreover, the stereocilia were found to be thin, few or absent in the epithelium of TCS-treated rats. Rats treated with a high dose of TCS (200 mg/kg) showed changes in the cauda epididymis and in the testis compared with the control group [4]. In the cauda epididymis, the alterations included vacuolated and exfoliated epithelial cells. Moreover, these authors identified the absence of sperm tails in the seminiferous tubules in the TCS-treated groups. Mice exposed to BPA showed the formation of morphologically multinucleated giant cells in testicular seminiferous tubules [30], disruption of the blood-testis barrier (BTB) and impaired spermatogenesis [31,32]. Similar results were obtained in another study, in which pesticides induced severe degenerative changes in seminiferous tubules [23]. Metals such as Cd and Hg also induced structural alterations in the testis, including damage to the vascular endothelium and to BTB integrity, as well as necrosis and disintegration of spermatocytes [27,33]. In general, these animal studies showed that EDCs induced changes in testicular morphology, which may be a reason for the decline of male fertility. 
For instance, damage in epididymis compromise the transport of testicular sperm out of the testis, the acquisition of progressive spermatozoa motility and the sperm storage. Moreover, damage at SC and LC levels compromise the structure of the BTB and seminiferous tubules. Testicular dysfunction due to EDC exposure The two main functions of the testes are spermatogenesis (exocrine function) and steroidogenesis (endocrine function). In normal conditions the gonadotrophinreleasing hormone (GnRH) is secreted by the hypothalamus, stimulating the synthesis of LH and the follicle-stimulating hormone (FSH) [34]. LH is recognized by LH receptors in LC stimulating T biosynthesis (steroidogenesis). FSH is recognized by FSH receptors in SC having an important role in spermatozoa production (spermatogenesis). Several studies showed that these functions are affected by exposure to EDCs (Figure 1) [10, 18, [35][36][37][38][39]. Prenatal exposure to EDCs was associated with testicular anomalies later in life, which includes reduced semen volume and quality, increased incidence of cryptorchidism and hypospadias and increased incidence of testicular cancer [40]. EDCs reduced SC number and impaired LC development, inducing testicular anomalies at morphological and functional level [39]. This section presents the studies that assessed the relationship between animal and human exposure to EDCs and testicular dysfunction, including alterations in reproductive hormone levels. Evidences from animal studies suggest that TCS reduces the production of T in LC and disturbs the function of major steroidogenic enzymes [41,42]. Male rats treated with TCS or pesticides showed a significant decrease in the levels of serum LH, FSH, cholesterol, pregnenolone and T compared to control [18,23]. Regarding human studies, a case-control study showed that urinary levels of phthalates and TCS were negatively associated with inhibin B and positively with LH [39]. Additionally, an inverse association was found between urinary levels of phthalates or BPA and testosterone and estradiol (E 2 ) [38,39]. Similar results were obtained by Meeker et al. [35] that showed an inverse association between BPA concentrations in urine and serum levels of inhibin B and E 2 :T ratio in men recruited through an infertility clinic. Moreover, a positive association between BPA concentrations in urine and FSH and FSH:inhibin B ratio was found. Hanoaka et al. [36] did not found an association between exposure to BPA and free T and LH concentrations in men. However, a significant decrease in FSH concentrations was found in the BPA exposed men. Urinary levels of BPA were not associated with sperm quality in fertile men but were associated with markers of androgenic action [37]. A significant inverse association was found between urinary levels of BPA and free androgen index (FAI) levels and the FAI:LH ratio. Further, a significant positive association between BPA and sex hormone-binding globulin (SHBG) was found in fertile men. Recently, Lassen et al. [10] examined associations between urinary BPA concentration and reproductive hormones in young men from the general population. The authors found positive associations between urinary BPA concentrations and T, E 2 , LH and free T levels. BPA and BPS induced significant changes in T and estradiol [26]. Meeker et al. [38] demonstrated that exposure to phthalates may be associated with altered male endocrine function. Urinary concentrations of some phthalates were inversely associated with T, E 2 and FAI. 
Metals, namely, Cd, also affect the development of the male reproductive system and testis function. Mice prenatal exposed to Cd showed defects on the development of gonads, depletion of germ cells and impairment of spermatozoa maturation [43]. Cd also induces testicular dysfunction, which results of the functional impairment of SC and LC. Regarding human studies, the effect of Cd exposure to male endocrine function was assessed by several authors (as reviewed by de Angelis et al. [33]). The results obtained are controversial; some authors found that Cd concentrations were positively correlated with FSH, T, E 2 , LH and inhibin B and negatively correlated with prolactin [29,44]. However, other authors did not find significant correlations between Cd concentrations and serum hormone levels [45,46]. In general, these results suggest that exposure to EDCs may be associated with alterations in circulating hormone levels in men. Additionally, Yang et al. [47] showed that levels of GnRH and LH were significantly higher in occupationally manganese (Mn)exposed group compared with the non-exposed men. The levels of T were lower in the exposed group. However, this study demonstrated that there was no association between exposure to Mn and E 2 and FSH and prolactin levels. Molecular effects of EDCs The effects of EDCs on the morphology and function of the male reproductive system may be attributed to the interactions of these chemicals with several molecules. Male rats treated with 20 mg/(kg day) of TCS showed a significant reduction in the testicular levels of mRNA for cholesterol side-chain cleavage enzyme (Cyp11a1), 25-hydroxyvitamin D-1 alpha hydroxylase (Cyp27b1), 3β-hydroxysteroid dehydrogenase (Hsd3b1), 17β-hydroxysteroid dehydrogenase (Hsd17b6), steroidogenic acute regulatory protein (Star) and androgen receptor (Ar) as compared to control [18]. Moreover, the authors found that there was a decreased localization of StAR protein in testicular LC as determined by immunolocalization indicating a reduced expression of this protein in animals treated with TCS as compared to control. These results could be correlated to the reduction in LC number. In vitro studies investigated the effect of BPA on steroidogenesis [48,49]. The authors found that BPA inhibited the production of testosterone in a concentrationdependent manner over the course of the 24 h incubation [48]. Moreover, the concentrations of E 2 were greater in the presence of BPA. The decrease in the concentrations of T is related with the inhibition of activities of some enzymes, such as 3β-hydroxysteroid dehydrogenase (HSD3B1) and 17α-hydroxylase (CYP17A). However, the activity of aromatase was not altered by BPA treatment. More recently, additional results in MA-10 Leydig cell line showed that BPA affects steroidogenic genes, for instance, induces the upregulation of CYP11A1 and CYP19 genes [49]. Moreover, the authors found that BPA treatment induced the phosphorylation levels of c-Jun and the levels of protein expression of SF-1, suggesting that the JNK/c-Jun pathway may be involved in BPA toxicity. Similar results were observed in an animal study [49]. The testes from male Sprague-Dawley rats treated with CdCl 2 showed a significant increase in the activities of aspartate aminotransferase (AST) and alanine aminotransferase (ALT) [19]. Geng et al. [23] found that pesticides altered the testicular protein expression of B-cell lymphoma 2 (Bcl-2) and Bcl-2-associated X protein (Bax). 
Moreover, these authors showed that the activities of testicular enzymes including acyl carrier protein (ACP), lactate dehydrogenase (LDH) and gamma-glutamyltransferase (γ-GT) were significantly altered by exposure to pesticides. Spermatozoa Sperm motility, together with concentration and morphology, is considered as one of the important predictors of male fertility in vivo. Declining human sperm quality has been demonstrated in several recent studies. Age, lifestyle, environmental pollutants and nutritional factors can affect semen quality [14, [50][51][52]. The present section focuses on studies of environmental exposure to EDCs and male reproductive function, as measured by declines in semen quality parameters or increased sperm DNA damage/fragmentation. Effects of EDCs on sperm production, morphology, motility and velocity Several studies have been published regarding the association of exposure to phenols and human semen quality [53][54][55]. A case-control study was conducted to evaluate the association between exposure to phenols and idiopathic male infertility [55]. For that, the authors recruited idiopathic infertile men and fertile controls and measured urinary levels of BPA, benzophenone-3, pentachlorophenol, TCS, 4-tertoctylphenol (4-t-OP), 4-n-octylphenol (4-n-OP) and 4-n-nonylphenol (4-n-NP) and semen parameters. The authors found that exposure to 4-t-OP, 4-n-OP and 4-n-NP was associated with idiopathic male infertility, and exposure to 4-t-OP and 4-n-NP was also associated with abnormal semen quality parameters. However, in this study the authors did not find more relationships between exposure to other phenols and idiopathic male infertility. In another study, urinary BPA concentrations were associated with declines in sperm concentration, motility and morphology [53]. An increasing urine BPA level was associated with lower semen concentration, lower total sperm count, lower sperm vitality and lower sperm motility [54]. Moreover, the authors demonstrated a dose-response relationship between increasing urine BPA level and reduction in semen quality. Lassen et al. [10] also found an inverse association between BPA concentrations and progressive motility, but in this study, BPA excretion was not associated with semen volume, sperm concentration, total sperm count or percentage morphologically normal forms. However, some authors did not find any association between urinary BPA concentrations and some semen parameters, such as semen volume or sperm morphology [8,54]. TCS has been shown to decrease sperm density probably due to reduced testicular spermatogenesis [18]. A reduced sperm density was observed in the lumina of epididymal tubule from the treated rats. Rats treated with high doses of TCS (50 and 200 mg/kg) showed a significant decrease in the daily sperm production and an increase in the percentage of sperm abnormalities, which included elevated ratios of abnormal sperm head and tails [4]. Zhu et al. [56] performed a cross-sectional study to evaluate the association between exposure to TCS measured by urinary TCS concentration and semen quality in humans. The authors found an association between urinary TCS concentrations and poor semen quality parameters; namely, the authors found an inverse association between urinary TCS concentrations and percentage of sperm motility, sperm count, sperm concentration and percentage of normal morphology, suggesting that environmental exposure to TCS may have impact on semen quality. 
Regarding exposure to PCBs, several studies showed an inverse association between exposure to PCB 153 and sperm motility, while relationships with sperm concentration or total sperm count were inconsistent [57][58][59]. Additionally, Hauser et al. [60] found an inverse dose-response relationship between PCB 138 and sperm concentration, motility and morphology. The correlation between exposure to metals and adverse consequences for human and animal fertility is not completely established. Several studies determined the effects of exposure to metals on male gametes. In vitro studies, using bovine sperm, determined the effect of direct exposure to Hg on male gametes [61,62]. Arabi et al. [61] showed that exposure to Hg (50,100,200, and 300 μmol/l) induced LPO (lipid peroxidation), decreased the glutathione (GSH) content and decreased the percentage of viable spermatozoa. Additionally, a more recent study showed that bovine sperm exposed to Hg at 8 nM and 8 μM have less motility and have impaired sperm membrane integrity, increasing levels of reactive oxygen species (ROS) and LPO and decreasing the antioxidant activity and diminished fertility ability [62]. Regarding human fertility, in a cross-sectional study, participants with high blood Hg level had lower sperm with a normal morphology [63]. Cd is another male reproductive toxicant that exerts effects even at low levels of exposure by several mechanisms [64]. In vitro studies on human spermatozoa obtained through ejaculation allow to evaluate the effect of Cd treatment in semen parameters [65,66]. Cd decreased sperm motility and sperm viability and induced detrimental effects on spermatozoa metabolism by inhibition of the activity of glycogen phosphorylase, glucose-6-phosphatase, fructose-1,6-diphosphatase, glucose-6-phosphate isomerase, amylase, Mg 2+ − dependent ATPase and lactic and succinic acid dehydrogenases. As reviewed by de Angelis et al. [33], significant negative correlations were found between Cd levels and semen parameters, including total sperm count, concentration, motility and morphology. Results from a meta-analysis indicate that men with low fertility had higher semen Pb and Cd levels and lower semen Zn levels [67]. Sperm motility was significantly decreased in men occupationally exposed to Mn [47]. Occupational exposure to pesticides increased the risk of morphological abnormalities in sperm in addition with a decline in sperm count and a decreased percentage of viable spermatozoa. For instance, the exposure to pesticides reduced the seminal volume, sperm motility and concentration and increased the seminal pH and the abnormal sperm head morphology [68][69][70]. A study showed that young Swedish men exposed to phthalates presented a decrease in progressive sperm motility [71]. Additionally, levels of urinary phthalates and insecticides were also associated with lower sperm concentration, lower motility and increased percentage of sperm with abnormal morphology [72][73][74][75]. These results confirmed the results obtained by in vitro and in vivo studies [76,77]. Sperm DNA damage Sperm DNA integrity is essential for the correct transmission of genetic information [78]. Damage at sperm DNA level may result in male infertility. Sperm DNA damage is caused by oxidative stress that causes impairment in the sperm membrane [79]. It is well-known that some EDCs may induce oxidative stress and decrease the cellular levels of GSH and protein-sulfhydryl groups. 
Preclinical studies with male rats showed that exposure to BPA was associated with a significant increase in sperm DNA damage [80]. A statistically significant positive association between urinary concentrations of parabens and BPA and sperm DNA damage was found in male partners of subfertile couples [53,81]. Contrary results were obtained by Goldstone et al. [8] that found a negative relationship between BPA and DNA fragmentations. Additionally, other EDCs such as heavy metals (e.g., Hg), PCBs and insecticides induce sperm DNA damage [59,61,73,75,[82][83][84]. Urinary levels of Hg and nickel in infertile men were associated with increasing trends for tail length, and the levels of Mn were associated with increasing trend for tail distributed moment [82]. The adverse effects of phthalates on sperm DNA were assessed by several studies among infertile men [75,84]. Urinary concentrations of phthalate metabolites were associated with sperm DNA damage. These studies suggest that environmental and occupational exposure to EDCs may be associated with increased sperm DNA damage. Conclusions The results yielded in this chapter showed that both environmental and occupational exposures to EDCs affect male reproductive function at multiple levels. In human populations, the majority of studies point toward an association between exposure to EDCs and male reproduction system disorders, such as infertility, testicular cancer, poor sperm quality and/or function. Exposure to EDCs was associated with declined semen quality, increased sperm DNA damage, alterations in testis morphology and endocrine function. However, there are studies exploring the effect of EDCs on male reproductive health including semen quality, reproductive hormones and male fertility that produced inconsistent results probably due to small-sized study populations and lack of control for potential confounding variables. These contrary results highlight the need to discuss and investigate the effect of environmental pollutants in the male reproductive health. Moreover, the identification of the sequence of events and mechanisms might be important to better understand the effect of exposure to EDCs on male reproductive system and their contribution to male fertility decline. Conflict of interest The authors declare no conflicts of interest.
5,504.8
2019-12-13T00:00:00.000
[ "Biology" ]
PARP inhibition by olaparib or gene knockout blocks asthma-like manifestation in mice by modulating CD4+ T cell function An important portion of asthmatics do not respond to current therapies. Thus, the need for new therapeutic drugs is urgent. We have demonstrated a critical role for PARP in experimental asthma. Olaparib, a PARP inhibitor, was recently introduced in clinical trials against cancer. The objective of the present study was to examine the efficacy of olaparib in blocking established allergic airway inflammation and hyperresponsiveness similar to those observed in human asthma in animal models of the disease. We used ovalbumin (OVA)-based mouse models of asthma and primary CD4+ T cells. C57BL/6J WT or PARP-1−/− mice were subjected to OVA sensitization followed by a single or multiple challenges to aerosolized OVA or left unchallenged. WT mice were administered, i.p., 1 mg/kg, 5 or 10 mg/kg of olaparib or saline 30 min after each OVA challenge. Administration of olaparib in mice 30 min post-challenge promoted a robust reduction in airway eosinophilia, mucus production and hyperresponsiveness even after repeated challenges with ovalbumin. The protective effects of olaparib were linked to a suppression of Th2 cytokines eotaxin, IL-4, IL-5, IL-6, IL-13, and M-CSF, and ovalbumin-specific IgE with an increase in the Th1 cytokine IFN-γ. These traits were associated with a decrease in splenic CD4+ T cells and concomitant increase in T-regulatory cells. The aforementioned traits conferred by olaparib administration were consistent with those observed in OVA-challenged PARP-1−/− mice. Adoptive transfer of Th2-skewed OT-II-WT CD4+ T cells reversed the Th2 cytokines IL-4, IL-5, and IL-10, the chemokine GM-CSF, the Th1 cytokines IL-2 and IFN-γ, and ovalbumin-specific IgE production in ovalbumin-challenged PARP-1−/−mice suggesting a role for PARP-1 in CD4+ T but not B cells. In ex vivo studies, PARP inhibition by olaparib or PARP-1 gene knockout markedly reduced CD3/CD28-stimulated gata-3 and il4 expression in Th2-skewed CD4+ T cells while causing a moderate elevation in t-bet and ifn-γ expression in Th1-skewed CD4+ T cells. Our findings show the potential of PARP inhibition as a viable therapeutic strategy and olaparib as a likely candidate to be tested in human asthma clinical trials. Background Contrary to a number of chronic diseases, asthma incidence is on the rise [1]. In the United States alone, more than 20 million individuals suffer from the disease. A sizable portion of these asthmatics do not respond to the existing drugs [2]. Accordingly, the need for new drugs as mono or adjuvant therapies is immediate. The pathogenesis of asthma involves several cellular and non-cellular factors including Th2 and Th17 CD4 + T cells as well as B cells in addition to circulating factors such as IL-4, IL-5, IL-13 and many others [3]. Targeting the function of these cells and the ensuing production of Th2 cytokines and IgE has been a critical objective both in the clinic and in the laboratory. Our laboratory pioneered the studies demonstrating the involvement of poly(ADP-ribose)polymerase (PARP)-1 in asthma [4][5][6][7][8]. Our studies as well as those of others [9][10][11][12][13] suggest that the protein may constitute a viable target for the treatment of the disease. 
PARP-1, a member of a large family of proteins, is a DNA repair-associated enzyme that participates in the recruitment and trafficking processes of DNA repair proteins and histones to the DNA lesions primarily through base excision repair [14]. However, our laboratory and many others have suggested a role for the enzyme in a number of inflammatory conditions and regulation of transcription. We have shown that it controls NF-κB nuclear trafficking and thus transcription of NF-κB-dependent genes including those critical for asthma manifestation [15][16][17]. We have also shown that PARP-1 controls the fate of STAT-6 upon IL-4 or allergen exposure both in vitro and in an animal model of the disease through a calpain-dependent mechanism [8]. An ultimate goal of our studies is to explore the possibility that PARP can be targeted for therapy to treat asthma in human subjects. A great deal of effort has been made to generate potent inhibitors of the enzyme targeting cancer and inflammatory diseases [18]. Recently, olaparib (AZD2281), a small molecule inhibitor of PARP-1 and PARP-2 showed great potential for the treatment of BRCA-negative breast and ovarian cancer [19]. These neoplastic conditions were specifically targeted because the cancer cells accumulate fatal dsDNA breaks when exposed to DNA damaging agents in the absence of PARP activity leading a synthetic lethality phenotype [20]. Because this process occurs only in BRCA-mutant cancer cells, PARP inhibition is not expected to affect normal cells. In several clinical trials, the drug showed a remarkable therapeutic efficacy with an acceptable safety index in cancer patients [21]. It is noteworthy that other PARP inhibitors have also been developed and are currently tested in more than 20 clinical trials. In the current study, we aimed to test the efficacy of olaparib in experimental asthma. We specifically examined whether olaparib administration at doses that can be translated to human therapy blocks some or all asthma-like traits. We also examined whether the drug blocks already established disease to mimic what actually occurs in human asthmatics. Animals C57Bl/6J wild type (WT) and OT-II mice (6-8 weeks old) were purchased from Jackson Laboratories (Bar Harbor, ME, USA). C57BL/6 PARP-1 −/− mice were generated through a backcrossing with C57BL/6 WT mice for eleven generations. The last generation was interbred to generate the C57BL/6 PARP-1 −/− mice. WT mice generated through the PARP-1 +/− mice breeding were also included in the experiment. Mice were bred in a specific-pathogen free facility at LSUHSC, New Orleans, LA, and allowed unlimited access to sterilized chow and water. Maintenance, experimental protocols, and procedures were approved by the LSUHSC Animal Care & Use Committee. CD4 + T cell purification, Th1/Th2 skewing, TCR stimulation, Adoptive transfer, and RT-PCR OT-II or WT mice were sacrificed and splenic CD4 + T cells were isolated by negative selection (Stem Cell Technologies, Vancouver, Canada). Purified CD4 + T cells were stimulated on coated plates with antibodies to CD3 (1 μg/ml) and CD28 (0.5 μg/ml) (e-bioscience, San Diego, CA, USA) then skewed toward a Th1 or Th2 phenotype as described [23]. WT CD4 + T cells were skewed in the absence or presence of 5 μM olaparib. RNA was extracted using Qiagen RNA extraction kit according to the manufacturer instructions. 
The extracted total RNA was used for the generation of cDNA using reverse transcriptase III (Invitrogen) and quantitative PCR was conducted using primer sets (IDT, San Jose, CA, USA) specific for mouse gata-3, il-4, t-bet, ifn-γ, or β-actin as described [23,24]. Quantitative determination of gene expression levels using a 2-step cycling protocol was conducted on a MyIQ Cycler (Bio-Rad, Hercules, CA, USA). Relative expression levels were calculated using the 2[− Delta Delta C(T)] method [25]. Quantities of all targets were normalized to the mouse β-actin gene. Th2-like cells from OT-II mice were administered i.v. into the tail vein of recipient mice (1 × 10 6 cells/mouse). All mice were subjected to OVA challenge daily for 4 days. Mice were sacrificed 48 h after the last challenge. Data analysis All data are expressed as means ± SEM of values from at least five mice per group unless stated otherwise. PRISM software (GraphPad, San Diego, CA, USA) was used to analyze the differences between experimental groups by one way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. Results Olaparib blocks airway eosinophilia, mucus and IgE production, and AHR upon a single or repeated challenge with OVA in a mouse model of asthma Figure 1a shows that a single administration of olaparib at the 1 mg/kg dose almost completely prevented the elevation of OVA-specific IgE production in BAL fluids (BALF) but not sera collected from OVA-sensitized and challenged mice. A slightly higher dose of 5 mg/ kg was sufficient to cause a significant reduction in the sera levels of OVA-specific IgE. As expected, PARP-1 gene deletion provided similar protection. The blockade in IgE production coincided with a significant reduction in the total number of inflammatory cells recruited to the lung of treated animals with a prominent effect on eosinophils, neutrophils, and lymphocytes ( Figure 1b). Figure 1c shows an example of the inflammatory cell infiltration into the lungs of OVA-challenged mouse and the effective protection against such infiltration by treatment with 5 mg/kg olaparib as assessed by H&E staining. Treatment with olaparib also reduced mucus production as assessed by Periodic acid-Schiff (PAS) staining ( Figure 1d). Figure 1e shows that administration of 5 mg/ kg olaparib almost completely prevented AHR manifestation to increasing doses of methacholine. The effects of olaparib administration were similar to those observed in OVA-challenged PARP-1 −/− mice. The protective effect of olaparib against a single OVA challenge does not necessarily mean that the drug would maintain its anti-inflammatory efficacy upon multiple challenges. Accordingly, mice were challenged daily for three consecutive days and received increasing doses of olaparib 30 min after every challenge. Figure 2a shows that olaparib maintained a remarkable efficacy in reducing OVA-specific IgE production with a maximal protection conferred by the 5 mg/kg dose of the drug. At this dose, the drug exerted a pronounced protection against the inflammatory burden induced by repeated OVA challenges including eosinophilia ( Figure 3a shows that both single and multiple OVA challenge induced considerable levels of several Th2 cytokines including eotaxin, IL-4, IL-5, IL-6, IL-13, and M-CSF, and that olaparib administration suppressed production of these cytokines. 
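To make the relative-expression calculation described in the Methods concrete, the following minimal Python sketch illustrates the 2^(−ΔΔCT) method with the target normalized to β-actin. The function name and the Ct values are illustrative placeholders, not data from this study.

```python
# Minimal sketch of the 2^(-delta delta Ct) relative-expression calculation described
# in the Methods, with the target gene normalized to beta-actin. Ct values are invented
# placeholders for illustration only.

def relative_expression(ct_target, ct_reference, ct_target_control, ct_reference_control):
    """Return the fold change of a target gene in a sample relative to a control condition."""
    delta_ct_sample = ct_target - ct_reference                    # normalize sample to beta-actin
    delta_ct_control = ct_target_control - ct_reference_control   # normalize control to beta-actin
    delta_delta_ct = delta_ct_sample - delta_ct_control           # compare sample with control
    return 2 ** (-delta_delta_ct)

# Example: a hypothetical gata-3 measurement in treated Th2-skewed cells versus untreated cells
fold_change = relative_expression(ct_target=26.0, ct_reference=17.0,
                                  ct_target_control=24.0, ct_reference_control=17.0)
print(f"gata-3 fold change vs control: {fold_change:.2f}")  # prints 0.25, i.e. a 4-fold reduction
```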
It is important to note that in the single OVA challenge model, olaparib at 1 mg/kg provided a remarkable reduction in the production of the aforementioned cytokines, most notably eotaxin, IL-4, and M-CSF. Upon repeated OVA challenges, the lowest dose of olaparib only reduced the levels of IL-5 and IL-6. However, the 5 mg/kg dose was sufficient to almost completely block the production of all measured cytokines. It is worth mentioning that the effect of PARP inhibition, either pharmacologically or by gene knockout, on IL-2 production was marginal in both the single and repeated OVA challenge models (Figure 3b). Figure 3c shows that the levels of IFN-γ were reduced upon a single or repeated challenge with OVA. Such decrease was prevented by administration of the PARP inhibitor. Interestingly, the levels of IFN-γ were markedly lower in control PARP-1 −/− mice and, unlike in olaparib-treated animals, OVA challenge did not cause an elevation of the cytokine in the knockout animals. [Figure 1 legend: C57BL/6J WT or PARP-1 −/− mice were subjected to OVA sensitization followed by a single challenge to aerosolized OVA or left unchallenged. WT mice were administered, i.p., 1 mg/kg, 5 mg/kg or 10 mg/kg of olaparib or saline thirty minutes after OVA challenge. Mice were sacrificed 48 h later and lungs were subjected to formalin fixation or BAL. (a) Assessment of BALF or sera collected from the different experimental groups 48 h after OVA challenge for OVA-specific IgE using sandwich ELISA. (b) Cells of BALF were differentially stained, and total cells, eosinophils, macrophages, lymphocytes, and neutrophils were counted. Data are expressed as total number of cells per mouse. Data are means ± SD of values from at least six mice per group. (c) Lung sections from OVA-challenged mice that were treated with either saline or olaparib were subjected to H&E or (d) PAS staining. (e) Mice were sensitized and challenged with OVA as described above. A group of WT mice received an injection of 5 mg/kg of olaparib. Penh was recorded 24 h later using a whole body plethysmograph system before and after exposure to increasing doses of methacholine.] PARP inhibition by olaparib or gene knockout prevents OVA challenge-induced elevation in CD4 + T cells but increases T-reg cell population in spleen of treated mice Given the substantial effect of PARP inhibition on Th2 cytokine production, we next examined whether PARP inhibition achieved such an effect by modulating CD4 + T cell populations. Olaparib treatment prevented the OVA challenge-induced elevation in splenic CD4 + T cells, while the splenic T-reg cell population increased upon olaparib administration. The T-reg cell population in naïve untreated PARP-1 −/− mice was higher than that in OVA-challenged WT mice. However, single or repeated OVA challenge did not culminate in an additional increase of such population in the mutant mice. It is unclear whether the PARP inhibition-associated elevation in the T-reg cell population was due to changes in the number of CD4 + T cells. However, these results suggest a potentially important role for PARP-1 in CD4 + T cell function. PARP inhibition by olaparib or gene knockout modulates CD4 + T cell function by differentially affecting expression of gata-3 and t-bet in CD3/CD28-treated CD4 + T cells We next examined whether the effect of olaparib on Th2 and Th1 cytokines was exerted by controlling mRNA expression of key transcription factors that regulate the expression of these cytokines, focusing primarily on gata-3, t-bet, IL-4, and IFN-γ. To this end, CD4 + T cells were skewed toward a Th1 or Th2 phenotype and stimulated with anti-CD3/CD28 antibodies in the presence or absence of 5 μM olaparib. As summarized above, PARP inhibition by olaparib or PARP-1 gene knockout markedly reduced CD3/CD28-stimulated gata-3 and il4 expression in Th2-skewed cells while moderately elevating t-bet and ifn-γ expression in Th1-skewed cells, consistent with a critical role in the function of these cells. 
To test the role of PARP-1 in CD4 + T cell function during an allergic response, we examined whether adoptive transfer of WT CD4 + T cells isolated from OT-II mice that were skewed in vitro toward a Th2 phenotype reverses asthma-like traits in naïve PARP-1 −/− mice upon OVA exposure. Figure 6a shows that, indeed, transfer of Th2-skewed CD4 + T cells was sufficient to reverse lung inflammation. Such effect occurred concomitantly with an elevation in OVAspecific IgE production ( Figure 6b) and production of the Th2 cytokines IL-4, IL-5, IL-10, and GM-CSF (Figure 6c) in addition to the Th1 cytokines IL-2 and IFN-γ (Figure 6d) in PARP-1 −/− mice upon exposure to aerosolized OVA to levels equivalent or close to those observed in the WT counterparts. These results clearly suggest a critical role for PARP-1 in the CD4 + T cell function. Discussion In this study, we show that olaparib administration is highly efficient in blocking established AAI and AHR, which constitute two major components of asthma. We also provide evidence for an important role for PARP-1 in CD4 + T cell function without a prominent effect on B cell function. Moreover, our results support the possibility that PARP inhibition may also influence T-reg cell accumulation as an additional mechanism in dampening allergic response in our experimental models. Lastly, the effect of olaparib on CD4 + T cell function may be strongly linked to the ability of PARP-1 to control expression of the transcription factor GATA-3. Olaparib treatment was very effective in blocking repeated challenges to OVA in mice. Remarkably, a dose as low as 1 mg/kg of the PARP inhibitor was sufficient to confer protection against the manifestation of several asthma-like traits including AHR. As very recently shown by one of us [26], olaparib is also effective in reducing lung inflammation induced by LPS and inhibits expression of several inflammatory factors including VCAM-1 and TNF-α. Our results show that a major role of PARP may be in the function of CD4 + T cells. This is supported by the finding that an adoptive transfer of OT-II CD4 + T cells was sufficient to reverse lung cellularity and production of Th2 cytokines and IgE to levels comparable to those detected in similarly treated WT mice. The Th1 cytokines were also elevated. An increase in IL-2 is expected as it is critical for CD4 + T activation [27]. However, the production in IFN-γ was surprising. Although speculative, it is possible that the increase in IFN-γ was mediated by PARP-1 −/− CD4 + T cells in the presence of IL-2 produced by the adoptively transferred WT CD4 + T cells. It is also possible that the increase in IFN-γ may be mediated by PARP-2. This is based on the observation that PARP-1 gene knockout only slightly increase the expression of the Th1 cytokine while treatment with olaparib, which inhibits both PARP-1 and PARP-2, substantially increased it (Figure 3c). It is important to acknowledge that the study, as conducted, does not cover all the aspects of asthma manifestation and it remains to be determined whether the transfer of WT CD4 + T cells is sufficient to reverse AHR and mucus production in PARP-1 −/− mice. Although more specific experimentation is required, it is tempting to conclude from the adoptive transfer study that PARP-1 may not play a direct role in B cell function. The adoptive transfer of OT-II CD4 + T cells was sufficient to induce substantial levels of OVAspecific IgE. 
Such immunoglobulin production could have only been produced by PARP-1 −/− B cells clearly suggesting that the function of these cells is comparable to that of WT B cells in response to OVA challenge. We speculated in our previous studies that the primary reason for the reduced production of IgE upon PARP inhibition is the effect on IL-4 production [5,6]. We cannot, however, exclude the possibility that PARP plays a role in B cell trafficking especially when considering the effect of PARP inhibition, pharmacologically or by gene knockout, on the overall recruitment of lymphocytes to the lung as shown in Figures 1b and 2b. The role of PARP-1 in GATA-3 expression may be the driving cause for the ability of PARP inhibition to reduce IL-4, IL-5, and IL-13 production. It is noteworthy that GATA-3 is the master regulator for the development of Th2 cells [28] through its ability to control the activation of the Il4/Il5/Il13 cytokine locus. The role of PARP-1 in T-reg cell accumulation has been reported in mice, which was associated with an increase in Foxp3 [29]. We confirm these results in the experimental AAI setting. Although olaparib increased the T-reg (CD4 + /CD25 + /Foxp3 + ) cells upon a single or repeated OVA challenge, T-reg cells were increased in PARP-1 −/− mice regardless of challenge with OVA. This suggests that PARP-1 moderately regulates T-reg cells but not upon an inflammatory response. Whether the slight increase in T-reg cells is a major driving force in the anti-inflammatory effect of PARP inhibition is not clear. Interestingly, a recent study demonstrated that T-reg cells isolated from PARP-1 −/− mice are as functional as those isolated from WT mice [30]. Overall, the present studies provide critical information on the role of PARP-1 upon an acute or established AAI and AHR and provide support to the notion that PARP can be targeted for the treatment of some aspects of human asthma. Almost two dozen clinical trials most of which are in phase II or III are currently examining the possibility of establishing olaparib as a mono or adjuvant therapy for some specific cancers with BRCA mutation [19]. It is noteworthy that there are additional drugs with varying potency in inhibiting PARP under clinical trials most of which focus on the synthetic lethality induced by the drugs in BRCA-mutant cancer cells [19]. This phenomenon, as stated above, spares normal cells while targeting specifically the mutant cancer cells leading to their demise as a result of the accumulation of a fatal level of dsDNA breaks. It is important to note that the overarching assumption of these clinical trials is that these drugs do not have any important negative effects on normal cells and tissues. According to a clinical trial conducted by Fong et al. [21], a total of 200 mg olaparib, daily for more than 24 weeks, did not cause any side effects. This dose represents a 2.3 mg/kg for men with an average weight of 87 and 2.69 mg/kg for women with an average weight of 74.4. These doses fall between the 1 and 5 mg/kg doses used in the current study with which we observed substantial protection against experimental asthma. It is important to note that in the aforementioned clinical study and others [31][32][33] on higher doses of olaparib for patients with breast or ovarian cancer, the most common side effects were nausea, vomiting, fatigue and anemia. Despite these effects, discontinuation of the drug due to these side effects was a rare event. 
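As a quick check on the dose comparison above, the short sketch below reproduces the mg/kg conversion, assuming the stated average body weights of 87 and 74.4 refer to kilograms.

```python
# Minimal sketch of the dose conversion discussed above: a fixed 200 mg daily dose of
# olaparib expressed per kilogram of body weight (average weights assumed to be in kg).

def dose_per_kg(total_dose_mg: float, body_weight_kg: float) -> float:
    return total_dose_mg / body_weight_kg

print(f"Men (87 kg): {dose_per_kg(200, 87):.2f} mg/kg")       # ~2.30 mg/kg
print(f"Women (74.4 kg): {dose_per_kg(200, 74.4):.2f} mg/kg")  # ~2.69 mg/kg
```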
Additionally, patients with advanced cancer may be more prone to adverse events than asthma patients. Nevertheless, this would need to be tested closely in any human clinical study. The likely reduced side effects associated with the use of low doses of olaparib or other PARP inhibitors is very promising for the potential use of these drugs in treatment regimens against human asthma. Furthermore, treatment regimens may be extensive and lengthy in cancer, which may not be the case in asthma predicting that the use of olaparib in asthma may be associated with lesser side effects. Perhaps the therapeutic potential of olaparib may become more relevant to difficult to treat asthma especially those that do not respond to corticosteroids. Although we remain cautious, our study suggests that olaparib and potentially other PARP inhibitors are ready for testing on human asthma. Conclusion Overall, the results of the present study provide more support for the role of PARP-1 in asthma pathogenesis and the potential of PARP inhibition as a viable therapeutic strategy for the treatment of asthma in humans. More importantly, our results propose olaparib as a likely candidate to be tested in human asthma clinical trials. Authors' contributions MAG is the graduate student who led the study, conducted most of the experiments and the statistical analyses. KP conducted some the in vitro experiments and the statistical analyses. SVI conducted the skewing experiments, RT-PCR, and the statistical analyses. AAA assisted with some of the animal studies and FACS analysis. JW assisted with the animal studies. PR conducted some of the in vitro experiments and help in the troubleshooting. HFR assisted with some of the animal studies and FACS analysis. AHE assisted with the animal studies. MRL edited the manuscript and contributed to discussion. MSM contributed to the training of the first author. KAG contributed to the training of the first author. AR provided reagents and assistance with FACS. AO helped in the discussion and provided reagents and access to instruments. ASN contributed to the training of several members of the laboratory, experimental design, and troubleshooting. AHB is the principal investigator; contributed to the design of the experiment and the training of researchers as well as provided the financial support for the conducted work. All authors read and approved the final manuscript.
5,147.8
2015-07-14T00:00:00.000
[ "Medicine", "Biology" ]
Investigating Student Perceptions Based on Gender Differences Using E-Module Mathematics Physics in Multiple Integral Material. Mathematical physics is a difficult subject and is often dreaded by physics education students. Learning mathematical physics can be made much more effective with e-modules, but developing e-modules requires input on students' opinions or perceptions. This study aims to examine student perceptions and to compare these perceptions across classes based on gender. The research conducted is survey-type quantitative research. The sampling technique used in this study was simple random sampling, with 92 physics education students enrolled in the mathematical physics course as research subjects. The data collection instrument consists of 15 questions, each with 4 response choices, that must be filled out by students. The data were analyzed using descriptive analysis and an ANOVA test to determine whether there is a mean difference in students' perceptions. The results indicate that, among female students, perceptions differ between class A and class B, while among male students there is a difference between class A and class C. These results indicate that female students show a fairly large mean difference in perception between classes, whereas perceptions among male students tend to be more uniform. Introduction Global technological developments have brought an era in which information and innovation can be developed very rapidly and without limit (Widyastono, 2015;Setyawan, 2019;Zoebaidha, 2020;Rokhim et al., 2020). Every innovation created is used to provide positive benefits for human life (Jamun, 2018;Rusyda, 2019;Sabri et al., 2021). Unlimited access to technology can also help improve living standards and accelerate progress in all areas of human life (Hasibuan, 2016;Rastati, 2018;Smith et al., 2018). The positive and negative impacts of technological developments can, of course, influence the educational learning process in the future (Nurdin, 2016;Rais et al., 2018). One factor that significantly affects the educational learning process is the use of appropriate and effective teaching materials. Teaching materials are all forms of materials used to assist teachers/instructors in carrying out teaching and learning activities in the classroom (Aditia & Muspiroh, 2013;Nugraha et al., 2013;Zulmaulida & Saputra, 2014;Latifah, 2015). Teaching materials greatly help teachers design learning, while for students, teaching materials greatly support the development of their competence (Sundayana, 2015;Harahap & Aini, 2019;Kimianti & Prasetyo, 2019). If teaching materials have many shortcomings, they will directly affect the effectiveness of classroom learning, especially in universities (Lee, 2011;Effiong & Igiri, 2015;Arsanti, 2018). Therefore, interactive, effective, and flexible teaching materials are needed for classroom use; one type of teaching material with these properties is the e-module. E-modules were chosen because they can include several elements that support learning, such as audio, animation, images and video, and they can be used flexibly (Fonda & Sumargiyani, 2018;Sari & Ariswan, 2021;Ummah, et al., 2020). E-modules can train and assist students in understanding the material and taking responsibility according to their abilities, as well as facilitating educators in measuring student learning outcomes (Lim, et al., 2005;Hidayati, et al., 2019;Haspen & Syafriani, 2020). 
In addition to understanding the material, emodules can train students to learn independently and responsibly according to their abilities (Perdana, et al., 2017;Nurhasnah, et al., 2020). Independent learning is very important to see how effective the use of e-modules is in classroom learning, researchers see that for science learning in higher education, especially Physics education, there is still a shortage of e-module teaching materials that speak Indonesian and discuss mathematics physics material. Physics is one of the foundations for building students' conceptual understanding (Taqwa & Rivaldo, 2019;Hasanah, et al., 2020;Ramadhan, et al., 2020). Physics discusses the symptoms and properties of objects in nature (Martawijaya, 2015;Neldawati, 2020). One of the objectives of learning physics is to guide students to apply their knowledge in problem solving activities (Nurhayati, et al., 2016;Maliki, et al., 2017;. Problem solving ability is usually often used on materials that are difficult for students to learn, one of the materials that is often the scourge of Physics education students is mathematics physics material. Mathematical physics is a subject at several universities that is often considered not easy and difficult to learn (Ellianawati, et al., 2017;Agustina, et al., 2019;Marisda, 2019). The fact that shows that the mathematics physics course is a difficult subject can be seen from the low student exam results and the large number of advanced students who repeat this course (Fadillah, et al., 2017;Bustami, et al., 2020;). Mathematical physics itself has a very close relationship with the completion of mathematics in any given problem or concept (Tanjung, et al., 2018;Kurniawan, et al., 2018;Jufrida, et al., 2019). The problem solving concepts given mostly only use a few teaching materials and tend to use English, this of course makes researchers interested in creating a source of complementary teaching materials in the form of e-modules that are in Indonesian, interactive, flexible, and effectively used in the classroom. The integrated e-module in the mathematics physics course takes a fairly difficult material, namely multiple integral. The double or fold integral is a branching of the integral further material that often appears in the form of a double integral and a triple integral (Rahayu, & Zuhairoh, 2017). According to Apriandi & Krisdiana (2016) the causes of students having difficulty in learning folding integral material are: (1) Difficulty in drawing a function; (2) Difficulty converting variables; (3) Difficulty in determining the limits of integration; (4) Difficulty in determining the form of integration. These difficulties will certainly be overcome if the source of teaching materials is equipped with appropriate complementary sources of teaching materials. One way to see whether the teaching materials made are good or not can be done by looking at student perceptions. Perception is basically a process that is preceded by sensing, organized, and then interpreted so that individuals realize and understand what is felt by the senses (Purwanti, 2013;Hamidah, et al., 2014;Cahyono, 2017). Perceptions examined in this study were reviewed based on gender in each class, namely regular a, b, and c. Gender is one of the differentiating factors that refers to gender identity which is generally divided into men and women (Perry, 2019;Sullivan, 2020). 
Gender differences between men and women significantly affect decision-making; women tend to think through the decisions they will make more carefully and effectively. Given the importance of students' perceptions of the mathematical physics e-module developed for multiple integral material, the researchers formulated the research problem as follows: 1. What are the perceptions of students in regular classes a, b, and c regarding the mathematical physics e-module on multiple integral material, in terms of gender? 2. What is the mean difference in perception for each of the regular classes a, b, and c, in terms of gender? Methods The research method used is survey-type quantitative research. Quantitative research methods are research methods used to examine certain populations or samples with results in the form of numerical data (López, et al., 2018). The sampling technique used in this research is simple random sampling. Simple random sampling is a method of drawing from a population in such a way that each member of the population has an equal chance of being selected (Acharya, et al., 2013;Cahyono, 2017;Etikan, 2017). By using a simple random sampling technique, the researcher obtains data that are in accordance with the objectives and needs of the research. The data collection instrument used in this study was a student perception questionnaire distributed to 92 students in three different classes, namely regular a, regular b, and regular c. A questionnaire is a data collection method in which respondents fill in a set of statements, used here to find out student responses regarding the e-module provided (Wahyudin, et al., 2010). The grid of the data collection instrument used in this study can be seen in Table 1; its indicators cover the display of the e-module (text clarity, multimedia size suitability, the clarity of the color and shape of the images, good multimedia display quality, and the attractiveness of the multimedia presented), the presentation of material in the e-module (the material is easy to understand, the order of the material is clear, the sentences used are simple and easy to understand, the language used is communicative, the suitability of the samples with the material, and the suitability of the multimedia with the material), and the benefits of the e-module (ease of use of the module, the media can help students understand the material, interest in using the module, and increased motivation to learn). The collected data were then converted into scoring categories that state the level of student perception of the developed e-module. The Likert scale used in this study was: 1 (strongly disagree), 2 (disagree), 3 (agree), 4 (strongly agree), applied to the 15 questions given to students. The category levels of student perception of the developed e-module can be seen in Table 2. The data obtained were processed and analyzed using descriptive statistics and inferential statistics. Descriptive statistics are used to analyze data by describing the collected data as they are, without intending to draw generalized conclusions (Dhani & Utama, 2017;Rulandari & Sudrajat, 2017;Zellatifanny & Mudjiyanto, 2018). The descriptive statistics used are presented as mean values, median values, maximum and minimum values, ranges, and standard deviations. Meanwhile, inferential statistics are techniques used to examine differences or relationships between groups or variables (Guetterman, 2019). The inferential statistics used took the form of assumption testing and hypothesis testing. 
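As an illustration of the descriptive statistics listed above (mean, median, maximum and minimum, range, and standard deviation), the following minimal sketch computes them with pandas; the questionnaire totals are invented placeholders, not the study's data.

```python
# Minimal sketch of the descriptive statistics described above, computed with pandas.
# The totals are invented placeholders for 15 Likert items scored 1-4, so each total
# lies between 15 and 60.
import pandas as pd

scores = pd.Series([52, 47, 58, 44, 39, 55, 50, 60, 35, 48])  # hypothetical questionnaire totals

summary = {
    "mean": scores.mean(),
    "median": scores.median(),
    "min": scores.min(),
    "max": scores.max(),
    "range": scores.max() - scores.min(),
    "std": scores.std(),  # sample standard deviation (ddof=1), matching common SPSS output
}
print(summary)
```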
For the assumption tests, the first stage carried out in this research was a prerequisite check of the normality and homogeneity of the data obtained; this examination was carried out using a normality test and a homogeneity test. The normality test is particularly important for data sets that are small in size, where data assumed to be normal may in fact not be normal (Ahad et al., 2011). The homogeneity test aims to assess the level of homogeneity of the data obtained through the research (Jumliadi et al., 2020). If the significance value is above 0.05, the data are said to be normal and homogeneous (Yusuf et al., 2018;Suyana et al., 2019). After testing the assumptions, the researcher tested the hypothesis using the ANOVA test. The ANOVA or F test analyzes the ratio of variances of the data obtained in the research (Kim, 2017). The first step in the research procedure was to provide the object whose perception was to be measured, namely the e-module. After that, the researcher collected data using Google Forms as a data collection tool. This data collection was carried out in 3 classes, with each data collection at a different time. After the data were collected, the researcher analyzed the data using SPSS 22. The analyzed data were then examined and conclusions were drawn about the level of students' perceptions in the three classes and the differences in perceptions between the classes. In simple terms, the research procedure carried out can be seen in Figure 1. Results and Discussion The data obtained from students in three different classes, namely regular a, regular b, and regular c, were analyzed using descriptive statistics based on gender. The descriptive statistical analysis of regular class a can be seen in Table 3. From Table 3, it can be seen that regular class a has a good level of perception of the developed e-module. From the table, it can be seen that 9 girls have a very good perception level, 7 have a good perception level, and 1 has a bad perception level of the mathematics physics e-module. As for boys, 9 students have very good perception levels and the remaining 6 students have good perception levels of the mathematics physics e-module on multiple integral material. Then, the descriptive statistical analysis for regular class b can be seen in Table 4. From that table, it can be seen that regular class b has a good level of perception of the developed e-module. From the table it can be seen that 8 girls (53.33%) have a very good level of perception, 5 (33.33%) have a good perception level, and 2 (13.34%) have a bad perception level of the mathematics physics e-module. Meanwhile, for boys, 5 (33.33%) had a very good perception level, 7 (46.67%) had a good level of perception, and the remaining 3 (20.00%) had a perception level that was not good for the mathematics physics e-module on multiple integral material. Furthermore, the descriptive statistical analysis of the perception questionnaire for regular class c can be seen in Table 5. From that table, it can be seen that regular class c has a good level of perception of the developed e-module. From the table it can also be seen that 6 girls (40%) have a very good level of perception, 5 (33.33%) have a good perception level, and 4 (26.67%) have a bad level of perception of the mathematical physics e-module. 
Meanwhile, among the boys, 7 (46.67%) had a very good perception level, 7 (46.67%) a good level, and the remaining 1 (6.66%) a poor level of perception of the multiple integral material mathematics physics e-module. After the descriptive analysis, the data were subjected to the prerequisite tests, namely the normality test and the homogeneity test. The normality test aims to check the normality of the data; the results for regular classes a, b, and c can be seen in Table 6. For all three classes the significance values are greater than 0.05: for boys, the significance values for regular classes a, b and c were 0.107, 0.078 and 0.067, while for girls they were 0.200, 0.112, and 0.054, respectively. These values meet the requirement of being above 0.05, which means that the data obtained are normally distributed. The homogeneity test, used to check the homogeneity of the data, is presented in Table 7. For all three classes the significance values are again greater than 0.05: for boys, the significance values for regular classes a, b and c were 0.074, 0.103 and 0.058, while for girls they were 0.083, 0.127, and 0.165, respectively. These values meet the requirement of being above 0.05, which means that the data obtained are homogeneous. After completing the prerequisite tests, the researchers tested the hypothesis using the ANOVA test; the results are shown in Table 8. From Table 8, the ANOVA output for the perceptions of students in regular classes a, b, and c gives significance values of less than 0.05, which means the data are significantly different: a significance of 0.036 was obtained for girls and 0.017 for boys. These values, being smaller than 0.05, indicate significant differences in the perception data for both girls and boys. After the ANOVA test had established that the perceptions of students in the three classes differ by gender, the researchers conducted a post hoc test to identify in detail the differences between each class and the others; the results can be seen in Table 9. Based on that table, for female students there is a difference in perception between class A and class B, while class C has the same perception; for the boys, there is a difference in perception between class A and class C, while class B tends to have the same perception. Based on the descriptive results for regular classes a, b, and c, almost all students in regular class a are at a perception level of good or better, with 52.9% of the women having a very good perception, 41.18% a good perception, and 5.9% a poor perception; among the men, 40% have a good perception level and the remaining 60% a very good perception. For regular class b there were 30 respondents, 15 female and 15 male students; the level of perception is not as high as in regular class a, because among the girls 2 (13.34%) have a perception that is not good, 5 (33.33%) a good perception level, and 8 (53.3%) a very good perception level.
Furthermore, regular class c has a perception level that is on average good to very good, although not better than regular class a: among the girls, 4 (26.67%) have a perception level that is not good, 5 (33.33%) a good level, and 6 (40%) a very good level of perception, while among the boys 1 (6.66%) has a poor perception level, 7 (46.67%) a good level, and the remaining 7 (46.67%) a very good level. From the three descriptive tables it can be seen that only 1 of the 32 students in regular class a has a poor perception; this result is better than classes b and c, which each have 5 students in the poor perception category. Having described the data statistically, the researchers then described the data inferentially. Before testing the hypothesis, the researchers first conducted the prerequisite tests, namely the normality test and the homogeneity test. Based on Table 6, the significance of the normality test for boys is 0.107 (class a), 0.078 (class b), and 0.067 (class c), while for girls it is 0.200 (class a), 0.112 (class b), and 0.054 (class c). These values meet the requirement of being above 0.05, so the data obtained can be said to be normally distributed. The next prerequisite test is the homogeneity test, used to see whether the data are homogeneous. From the homogeneity test table, the significance for boys is 0.074 (class a), 0.13 (class b), and 0.058 (class c), and for girls 0.083 (class a), 0.127 (class b), and 0.165 (class c). Since these significance values exceed the 0.05 threshold, it can be concluded that the data are homogeneous. With both conditions met, the researchers then conducted the ANOVA test to compare the perception data of regular classes a, b, and c. ANOVA is a test used to detect differences in group means: if the significance is less than 0.05 there is a difference in means, whereas if it is greater than 0.05 there is no difference in the data (Zakaria & Nordin, 2008; Mailizar, et al., 2020; Yang, et al., 2021). The ANOVA table shows a significance of 0.036 for girls and 0.017 for boys. Both results are smaller than 0.05, indicating that for each gender, male and female, there is a difference in mean perception. Gender differences greatly affect a person's perception of a specific object being assessed (Anggoro, 2016). The perception process itself describes how a stimulus, in the form of an object or event, is received and interpreted so that it acquires meaning for the person who perceives it (Dzakirin, 2013). Men are described as having a firm attitude in judging things, whereas women are more critical in their judgements than men (Duarte, et al., 2017; Klein, et al., 2018). Furthermore, Sulistiyawati & Andriani (2017) argue that female students have a broader and deeper mindset than male students, which in turn causes differences in perception. In addition to the innate influence of gender, students' perceptions, good or bad, can be influenced by their peers (Zizka, 2017; Mucherah, et al., 2018; Conejeros-Solar, et al., 2021).
Students tend to follow or imitate what their friends are doing; this is not ideal, because it can hinder children's creativity and independence in thinking, especially when judging something. A good perception indicates that the e-module product is considered to be of good quality by students (Pathoni, et al., 2017; Syahrial, et al., 2019; Maison, et al., 2021). With good quality, it is hoped that the e-module can help students in their mathematics physics lectures (Nevrita, et al., 2020). In addition, because the e-module is written in Indonesian, students can freely discuss the mathematics physics material without errors arising from how the material is delivered in the e-module. In the long term, this e-module can guide and help students to improve their pedagogic skills as prospective physics teachers, who are required to be capable of critical analysis and calculation. The pedagogic aspect itself is the competence of prospective teachers in mastering and managing the classroom (Yasin, 2011; Gluzman, et al., 2018; Machaba, 2018). According to research by Moh'd, et al. (2021), the level of pedagogic competence of educators plays a very important role in classroom teaching: educators or prospective educators with a high level of pedagogic competence tend to teach effectively in class. Such effective learning can be supported by technology, which in this study is offered in the form of a mathematics physics e-module on multiple integral material. For a prospective physics teacher, perception is also needed for introspection, so that prospective teachers can increase their competence to become professional teachers (Mashuri, 2017; Widyastuti, et al, 2017). On the other hand, by obtaining a good perception from students about the e-module, this research also helps educators to see what kind of learning students prefer; if learning is liked by students, good learning outcomes will follow (Schoepp, 2017; Elken & Tellman, 2019; Rahardjanto, 2019). For mathematics physics learning, e-modules that are perceived well are of good enough quality to serve as a complementary resource for lecturers teaching mathematics physics (Linda, et al., 2018; Gillan, et al., 2018). Because the e-module is not in English, lecturers do not waste time understanding the material discussed. Adding learning resources, such as this Indonesian-language mathematical physics e-module, is therefore very helpful in terms of convenience, attractiveness, and variety for lecturers in their teaching; lecturers do not have to deal with foreign grammar, so class time is not wasted. This research has strengths and weaknesses compared with previous research. In previous studies (Serevina, et al., 2018; Asrial, et al., 2020), student perception variables were used to increase the effectiveness of classroom learning, and the relationship between the perceptions of students in one class and those in other classes was not examined. Previous research has also focused on student perceptions in a single class, whereas the current research uses more data. However, the present research has the weakness that the researchers do not specifically describe the product that is the object of student perception. In addition, the data processed are limited to measuring the relationship between the perceptions of students in the three classes and do not involve other variables.
Therefore, the researchers suggest that further research add more variables, so that not only the relationships but also the influences between variables are measured. Conclusion Based on the results of this research on students' perceptions of the multiple integral material mathematics physics e-module, it can be concluded that gender difference is one of the factors causing differences in perception between female and male students. This can be seen in the significance values of the ANOVA test for female and male students, which were 0.036 and 0.017, respectively; significance values of <0.05 indicate that there are differences in perception between female and male students. These differences can be seen in detail in the LSD follow-up test, in which for the female students there are differences between classes A and B, while for the male students there are differences between classes A and C.
5,678.4
2021-09-12T00:00:00.000
[ "Physics", "Education", "Mathematics" ]
Biofabricated Fatty Acids-Capped Silver Nanoparticles as Potential Antibacterial, Antifungal, Antibiofilm and Anticancer Agents The current study demonstrates the synthesis of fatty acids (FAs) capped silver nanoparticles (AgNPs) using aqueous poly-herbal drug Liv52 extract (PLE) as a reducing, dispersing and stabilizing agent. The NPs were characterized by various techniques and used to investigate their potent antibacterial, antibiofilm, antifungal and anticancer activities. GC-MS analysis of PLE shows a total of 37 peaks for a variety of bio-actives compounds. Amongst them, n-hexadecanoic acid (21.95%), linoleic acid (20.45%), oleic acid (18.01%) and stearic acid (13.99%) were found predominately and most likely acted as reducing, stabilizing and encapsulation FAs in LIV-AgNPs formation. FTIR analysis of LIV-AgNPs shows some other functional bio-actives like proteins, sugars and alkenes in the soft PLE corona. The zone of inhibition was 10.0 ± 2.2–18.5 ± 1.0 mm, 10.5 ± 2.5–22.5 ± 1.5 mm and 13.7 ± 1.0–16.5 ± 1.2 against P. aeruginosa, S. aureus and C. albicans, respectively. LIV-AgNPs inhibit biofilm formation in a dose-dependent manner i.e., 54.4 ± 3.1%—10.12 ± 2.3% (S. aureus), 72.7 ± 2.2%–23.3 ± 5.2% (P. aeruginosa) and 85.4 ± 3.3%–25.6 ± 2.2% (C. albicans), and SEM analysis of treated planktonic cells and their biofilm biomass validated the fitness of LIV-AgNPs in future nanoantibiotics. In addition, as prepared FAs rich PLE capped AgNPs have also exhibited significant (p < 0.05 *) antiproliferative activity against cultured HCT-116 cells. Overall, this is a very first demonstration on employment of FAs rich PLE for the synthesis of highly dispersible, stable and uniform sized AgNPs and their antibacterial, antifungal, antibiofilm and anticancer efficacy. Introduction The growing pursuits in metal-based nanomaterials synthesis are hotly debated in several fields while acknowledging their unique physico-chemical and biomedical properties with specific advocacy for fitness in clinical settings as fascinating treatment modality, worldwide [1]. Considering that there is a wide scope to achieve desired properties in synthesized nanoparticles (NPs) including shape, size and stability by manipulating reaction conditions such as pH, temperature, concentration of metal precursors and concentration and nature of bio-reducing agents [2][3][4][5][6][7][8]. Besides, surface capping or encapsulation material of NPs deserves special importance due to being directly or indirectly concerned with Synthesis and UV-Vis Analysis of LIV-AgNPs Briefly, an apparent color change in the reaction mixture containing the aqueous solutions of PLE and AgNO 3 in 1:3 ratios (v/v), from pale yellow to light brown indicated the PLE bio-actives meditated bio-reduction of Ag + to LIV-AgNPs after 20 min at 25 ± 5 • C. The color of reaction mixture tuned into intense brown after 24 h. The appearance of a sharp UV-Vis band at λ max 428 nm was observed which is likely due to the surface plasmon resonance (SPR) of nascent LIV-AgNPs in colloidal solution (Figure 1a). The UV-Vis absorption peak position (400-500 nm) and formation of characteristic brown color LIV-AgNPs were found concordant with the reports published on plant mediated green synthesis of AgNPs [6]. Besides, UV-Vis absorption (λ max 428 nm) analysis of colloidal LIV-AgNPs up to six months revealed that the NPs were highly stable as the experiments showed no significant change in SPR peak (Figure 1b). 
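The six-month stability check described above amounts to tracking the SPR peak position over time. As a minimal sketch only, the snippet below shows how a peak position such as the λmax ≈ 428 nm quoted above could be read off a digitized UV-Vis spectrum; the spectrum used here is a synthetic Gaussian band, not the measured LIV-AgNPs data.

```python
# Minimal sketch: locate the SPR peak (lambda_max) in a digitized UV-Vis spectrum.
# The absorbance trace below is synthetic (Gaussian band centred at 428 nm).
import numpy as np

wavelength_nm = np.linspace(300, 800, 501)                                   # 1 nm steps
absorbance = 0.9 * np.exp(-((wavelength_nm - 428.0) / 60.0) ** 2) + 0.05     # synthetic band

lambda_max = wavelength_nm[np.argmax(absorbance)]
print(f"SPR peak at ~{lambda_max:.0f} nm")   # ~428 nm
```

Repeating this over successive spectra and checking that λmax (and the band shape) does not drift is one simple way to quantify the colloidal stability reported above.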
Assessment of Bio-Actives in Pristine PLE and LIV-AgNPs by GC-MS and FTIR Before, synthesis of LIV-AgNPs, the pristine PLE was put through to GC-MS analysis [24] in order to presume plausible bio-active compounds that may acted as, (i) reducing agent for free metal cations (Ag + → Ag 0 ), (ii) stabilizing agent while growth on nascent NPs in progress during nucleation phase and (iii) capping of fully grown or stabilized NPs as described in our previous study [24] was illustrated in schematic mechanism of LIV-AgNPs formation ( Figure 2). The GC-MS spectrum of pristine PLE ( Figure 3) reflected a total of 37 peaks (P) for a variety of bio-actives were described in our previous study [24]. Based on their peak area, four major bio-actives in PLE were found to be long/short chained hydrocarbon fatty acids containing terminal -OH and -COOH groups, viz. nhexadecanoic acid (P15-21.95%), linoleic acid (P19-20.45%), oleic acid (P20-18.01%) and stearic acid (P21-13.99%) [24]. Besides, two polyphenolic bio-actives were also detected namely cardanol monoene (P27-11.92%) and piperine (P31-1.83%) [24] likely play axillary role in bio-reduction and capping of NPs (Table S1) [24]. Next, the FTIR-based assessment of as-prepared LIV-AgNPs also demonstrated the presence of PLE bio-actives that can be argued being responsible for bio-reduction of metal cations into nascent NPs, stabilization and capping of AgNPs. The FTIR spectrum in Figure 4a-c, demonstrates a variety of molecular signatures of PLE bio-actives adsorbed on AgNPs, which in fact appeared as sharp, broad, strong and weak signals pertaining to their band behavior such as stretching, banding and vibrations. In Figure 4a, a dense area of FTIR spectrum ranged between 3500 cm −1 and 3700 cm −1 indicated the presence a majority of PLE bio-actives associated to AgNPs surface and hence we analyzed this area at a high resolution. The observations of this section suggest the presence of medium and sharp stretching were assigned to the free -OH groups of alcohols [25]. Whereas, strong and broad stretching around 3236 cm −1 confirmed the presence of intermolecular bonded -OH and -NH groups of carbohydrates/lipids and primary amines, respectively, as depicted in Figure 4b, signify the reduction of Ag + to Ag 0 and capping of AgNPs [26]. The weak vibrations between 2926 and 2850 cm −1 , and 2135 cm −1 were assigned to stretching of C-H and C≡C groups of lipids and alkyne, respectively (Figure 4b). The peak at 1737 cm −1 is likely due to the presence of carbonyl (C-O) group of FAs, whereas peak at 1645 cm −1 represent carboxylic groups (C=O) of FAs and amine group (N-H) of protein ( Figure 4c) [24,26]. Indeed, the appearance of C-O and C=O signals strongly advocate the involvement of FAs and proteins in bio-reduction and PLE bio-actives corona likely physisorbed on the surface of LIV-AgNPs. Besides, the peak at 1456 cm −1 can be ascribed to CH 2 deformation or due to C-O-H bending, 1373 cm −1 represents O-H groups of phenolic compounds, signal at 1153 cm −1 was taken as C-O-C stretching which signified the presence of carbohydrates, peak around 1026 cm −1 was assigned to O-H stretching of polyphenols ( Figure 4c) [26]). Overall, our GC-MS and FTIR results strongly suggest an active role of PLE attributed FAs and polyphenolic in the synthesis of LIV-AgNPs. In same line, Rao and Trivedi [27] have also demonstrated formation of FAs encapsulated AgNPs using stearic, palmitic and lauric acids as bio-reducing and stabilizing agents. 
Recently, the study of Gnanakani et al. [26] exhibited the FAs namely octadecanoic, hexadecanoic and octadecanoic acids in microalgae Nannochloropsis extract as potential bio-reducing and stabilizing agents in synthesis of AgNPs. Beyond the abundance of FAs, auxiliary phenolics, proteins, carbohydrates and enzymes bio-moieties in the benign milieu of PLE can be argued to play both key roles in plant extract mediated bio-fabrication of nanomaterials [25]. Electron Microscopic Properties of LIV-AgNPs The SEM micrographs in Figure 5a demonstrated a significant level of agglomerations in LIV-AgNPs when allowed to dry to solid powder. Besides, the elemental composition of PLE-AgNPs obtained by using EDS showed prominent peaks for carbon (30.6%), oxygen (44.85%) and silicon (9.43%) along with the characteristic peak of Ag (11.39%) at approximate 3 keV (Figure 5b). Contrarily to powdered LIV-AgNPs (Figure 5a), the TEM analysis of colloidal LIV-AgNPs solutions witnessed a great level of dispersity in aqueous environment, which was likely attributed by repulsion forces existed between two O-H groups hanging out from the soft PLE corona of AgNPs (Figure 5c). At the same time, the ImageJ software-based size determination on TEM micrographs revealed the sized of LIV-AgNPs was ranged between 1-10 nm with an average diameter of 5.37 ± 1.09 nm (Figure 5d). SEM Based Analysis of LIV-AgNPs Interaction and Cellular Damage To validate the antibacterial and antifungal activities of LIV-AgNPs, the treated and untreated cells of test strains were compared under SEM visualization. The results in Figure 8b-c exhibited significantly ruptured cell wall with deep pits and cavities formation in MDR-PA cells treated with 100 µg/mL of LIV-AgNPs, which were likely due to internalization and surface contact killing or on-site augmented cations mediated toxicity, as described elsewhere [25]. Under identical conditions, Gram-positive MRSA cells were observed with significant structural damage along with tremendous bulging and deep cuts in cell membrane (Figure 8e,f), which indicated increased cytoplasmic granularity likely due to prompted interaction and internalization of LIV-AgNPs as compared to untreated cells (Figure 8d) [5]. Similarly, in the case of fungi, the LIV-AgNPs exposed C. albicans cells showed significant changes in native morphology such as deep pits in cells compared to untreated control (Figure 8h,i) as reported elsewhere [32]. Besides, Anuj et al. [33] have demonstrated a steady release of Ag + from AgNPs and thus accumulated cations can destabilize cell membrane to combat with efflux-mediated drug resistance in Gram-negative bacteria. Recent study of Al-Kadmy [34] has also suggested that coating of AgNPs had enhanced penetrative ability through the cell wall and kills the E. coli, S. aureus and vancomycin resistant Enteroccci cells on banknote currency effectively under tentative conditions, as compared to AgNO 3 . Antibiofilm Studies of LIV-AgNPs Both, bacterial cells; Gram-negative MDR-PA and Gram-positive MRSA, and C. albicans fungi are well known for their biofilm producing ability and chronic nosocomial infections spread in hospital and associated settings [35,36]. Although, several metallic nanoantibi-otics were found having great potential either to cease or eradicate biofilm adherence [37]. 
However, the propensity of nanoantibiotics to diffuse readily through the biofilm biomass in order to reach the microbial cells appears to be compromised by enzymatic, non-enzymatic and pH-mediated degradation [38]. Interestingly, the evidence suggests that FAs, either free or physisorbed onto the surface of NPs, can (i) suppress the regulation of quorum-sensing (QS) genes, (ii) quench diffusible QS signal factors such as acyl-homoserine lactones and autoinducer-2 (AI-2) and (iii) dysregulate associated non-QS targets such as efflux pumps, oxidative stress and ergosterol synthesis [39][40][41]. Taking together the antimicrobial potential of FAs and AgNPs, we tested LIV-AgNPs for their antibiofilm activities. In fact, our GC-MS results prompted us to consider the LIV-AgNPs as encapsulated by PLE bio-active FAs, viz. n-hexadecanoic acid (P15-21.95%), linoleic acid (P19-20.45%), oleic acid (P20-18.01%) and stearic acid (P21-13.99%) (Figure 3, Table S1) [24], and hence responsible for the significant anti-biofilm activities against MDR-PA, MRSA and C. albicans. The data in Figure 9 show inhibition of biofilm formation by MDR-PA cells of 23.31 ± 5.2%, 31.17 ± 3.2%, 40.16 ± 5.5%, 53.37 ± 4.2% and 72.75 ± 2.2% at 31.25, 62.50, 125, 250 and 500 µg/mL of LIV-AgNPs, respectively, versus the untreated control (100%). Under identical conditions, the inhibition of accumulated biofilm mass by MRSA cells was 10.17 ± 2.3%, 15.06 ± 2.5%, 27.00 ± 2.9%, 49.70 ± 3.9% and 54.40 ± 3.1%, respectively. Besides the bacterial cells, the biofilm formed by C. albicans also declined significantly (p < 0.05 *), by 25.60 ± 2.2%, 35.60 ± 1.3%, 41.65 ± 1.7%, 59.9 ± 3.2% and 85.44 ± 3.3%, respectively. In parallel, SEM-based comparison of untreated controls (Figure 10a,c,e) with LIV-AgNPs (100 µg/mL) treated MDR-PA (Figure 10b), MRSA (Figure 10d) and C. albicans (Figure 10f) cells revealed significant disruption of their biofilm architectures. Overall, the observed trends in biofilm formation suggest that FAs hold great potential to inhibit or disrupt biofilm formation by several microbial pathogens, including S. aureus [42], P. aeruginosa [43] and C. albicans [39,44]. Beyond the proven antibacterial and antibiofilm track record of AgNPs [45][46][47], a variety of FAs have previously been proposed as potential antimicrobial agents. For instance, the study of Santhakumari et al. [48] demonstrated that hexadecanoic acid (100 µg/mL) could interrupt QS by loosening the biofilm architecture (>60%) of Vibrio spp. such as Vibrio harveyi, V. parahaemolyticus, V. vulnificus and V. alginolyticus without affecting their planktonic growth. Furthermore, 12.8 µg/mL of hexadecanoic acid alone could inhibit biofilm formation in P. aeruginosa and E. coli by 64% and 81%, respectively [43]. In the same context, Soni et al. [49] demonstrated that palmitic acid (hexadecanoic acid), stearic acid, oleic acid and linoleic acid present in an extract of ground beef inhibited the auto-inducer signaling activity of the reporter strain (Vibrio harveyi) and reduced E. coli biofilm formation. Antiproliferative Properties of LIV-AgNPs on Human Colon Cancer Cells (HCT-116) Cell Viability Assay by MTT and Microscopic Analysis of HCT-116 Cells In addition to the antimicrobial activities, the PLE-capped AgNPs were also assessed for their anticancer potential.
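Both the biofilm-inhibition percentages reported above and the MTT viabilities reported below are ratio-to-control quantities. The sketch below assumes the standard formulas for these assays (the paper cites but does not reproduce its exact equations), with hypothetical plate readings used purely for illustration.

```python
# Minimal sketch of the assumed, standard ratio-to-control calculations:
# percent biofilm inhibition from crystal violet OD595 and percent cell
# viability from MTT OD570 (not the authors' exact equations).
def biofilm_inhibition_percent(od595_treated: float, od595_control: float) -> float:
    """Biofilm inhibition relative to the untreated control (taken as 100% biofilm)."""
    return (1.0 - od595_treated / od595_control) * 100.0

def mtt_viability_percent(od570_treated: float, od570_control: float) -> float:
    """Cell viability relative to the untreated control (taken as 100% viable)."""
    return od570_treated / od570_control * 100.0

# Hypothetical readings, for illustration only.
print(biofilm_inhibition_percent(od595_treated=0.35, od595_control=1.20))  # ~70.8% inhibition
print(mtt_viability_percent(od570_treated=0.55, od570_control=1.10))       # 50.0% viability
```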
For this assessment, human colon cancer cells were cultured with colloidal LIV-AgNPs (10-100 µg/mL) for 24 h and the nano-toxicity of LIV-AgNPs against HCT-116 cells was measured using the colorimetric MTT assay. Compared with untreated control cells (100 ± 2.5%), there is an apparent declining trend in cell viability, to 86.10 ± 5.9%, 81.5 ± 8.2% and 46.75 ± 7.9% at 10, 50 and 100 µg/mL of LIV-AgNPs, respectively (Figure 11). At about 100 µg/mL, we observed ca. 50% inhibition of cell proliferation after 24 h. In parallel, HCT-116 cells exposed to LIV-AgNPs (10, 50 and 100 µg/mL) were also investigated for NP-induced morphological changes. The representative micrographs of HCT-116 cells clearly demonstrate that treatment with LIV-AgNPs caused significant morphological changes (Figure 12b-d) compared to untreated cells (Figure 12a). Our results are strongly supported by the findings of Kuppusamy et al. [50], who determined the IC50 value of their Commelina nudiflora capped AgNPs as 100 µg/mL against cultured HCT-116 cells after 24 h. Moreover, compared with AgNPs functionalized with a single extract such as Chlorophytum borivilianum, which showed an IC50 value of 254 µg/mL [51], the as-prepared poly-herbal encapsulated LIV-AgNPs can act as a much more effective anticancer nanomedicine against human colon cancer cells. In this context, linolenic acid polymers impregnated onto AgNPs have also been reported to show an 82.3% inhibition rate against the rat pheochromocytoma PC 12 tumor cell line [52]. Similarly, fatty acid rich Argemone mexicana extract encapsulated AgNPs (100 µg/mL) were found to inhibit the proliferation of a human cervical cancer cell line (SiHa) by 80% [53]. AgNPs have also been reported to disrupt the respiratory chain and cell division while releasing Ag+, thereby augmenting bacterial killing. It has been reported that coating with AgNPs can result in improved functionality and corrosion resistance of magnesium structures in biomedical settings [54]. With widespread application and inevitable environmental exposure, AgNPs can accumulate in various organs. More serious concerns are raised about the biological safety and potential toxicity of AgNPs in the central nervous system (CNS), especially in the hippocampus. Chang et al. [54] investigated the biological effects and the role of the PI3K/AKT/mTOR signaling pathway in AgNPs-mediated cytotoxicity using the mouse hippocampal neuronal cell line (HT22 cells). They found that AgNPs reduced cell viability and induced membrane leakage in a dose-dependent manner, and that AgNPs also promoted excessive production of reactive oxygen species (ROS) and caused oxidative stress in HT22 cells [54]. Preparations of Aqueous Extract of Liv52 Drug To prepare the fatty acid rich poly-herbal Liv52 drug extract, Liv52 tablets (Himalaya Global Holdings Ltd., Bangalore, India) were crushed to a fine powder and 5 g was dissolved in 100 mL of ultra-pure water. After 1 h, the PLE solution was centrifuged at 12,000 rpm for 10 min and the collected supernatant was additionally filtered through Whatman paper No. 1 [24]. The aqueous PLE thus obtained was stored at 4 °C for the green synthesis of LIV-AgNPs. GC-MS Based Assessment of Bio-Actives in Poly-Herbal Liv52 Drug Extract (PLE) Considering the fact that Liv52 is a poly-herbal composition of C. spinosa, C. intybus, S. nigrum, T. arjuna and A.
millefolium extracts [23], gas chromatography-mass spectrometry (GC-MS) analysis of the methanolic PLE extract was performed to ascertain the bio-active compounds plausibly involved in the reduction, capping and stabilization of LIV-AgNPs, following the method described elsewhere [24,31]. Nanofabrication of Poly-Herbal Liv52 Drug Extract Capped AgNPs (LIV-AgNPs) For the synthesis of LIV-AgNPs, PLE (25 mL) was mixed into 75 mL of 0.1 mM AgNO3 solution. The reaction mixture was then kept in the dark at room temperature (30 ± 5 °C). The color of the reaction mixture changed from pale yellow to brown after 20 min and became even darker brown within 24 h, indicating the reduction of Ag+ to Ag0 NPs [8]. UV-Vis Spectroscopy and FTIR Analysis The formation of LIV-AgNPs was monitored by UV-Vis spectroscopy in the range of 300-800 nm, as described recently elsewhere [55]. Fourier-transform infrared spectroscopy (FTIR) was performed to ascertain the presence of PLE bio-actives that likely played either a key or an auxiliary role in the reduction of Ag+ to Ag0, the stabilization of nano-silver and the capping of nascent LIV-AgNPs during synthesis [8]. Electron Microscopic and EDS Analysis of LIV-AgNPs The shape, size and elemental composition of the LIV-mediated synthesized AgNPs were determined by scanning electron microscopy (SEM), transmission electron microscopy (TEM) and energy dispersive spectroscopy (EDS), following the methods described in our previous study [56]. XRD Analysis of LIV-AgNPs The crystallinity and size of the bio-synthesized LIV-AgNPs were analyzed by XRD following a recently described protocol [57]. Microbial and Human Carcinoma Cell Cultures In this study, multi-drug resistant Pseudomonas aeruginosa (laboratory strain), methicillin-resistant Staphylococcus aureus (ATCC 33591) and Candida albicans (ATCC 14053) were used to investigate the antibacterial, anticandidal and antibiofilm activities of the synthesized PLE-AgNPs. For the anticancer efficacy assessment, the human colon cancer (ATCC No. CCL-247) cell line was used. Both the microbial and the human carcinoma cell cultures were maintained as described in earlier studies [9,58]. The antibacterial and antifungal activity of the synthesized LIV-AgNPs was determined using the two-fold micro broth dilution method in the range of 62.5 to 2000 µg/mL against Gram-negative MDR-PA, Gram-positive MRSA and the C. albicans fungal strain, following the method described by Ansari et al. [59]. The MIC value is defined as the lowest concentration of LIV-AgNPs at which no visible growth of bacteria or Candida is observed. After MIC determination of the LIV-AgNPs, aliquots of 100 µL from wells in which no visible growth was seen were further spread on MHA and SDA plates for 24 h at 37 °C and 28 °C, respectively, to determine the MBC and MFC values. The lowest concentration of LIV-AgNPs that kills 100% of the tested bacterial or Candida population is taken as the MBC/MFC value [59]. Further, an agar well diffusion assay was performed to determine the zone of inhibition (in millimeters) of LIV-AgNPs against Gram-negative MDR-PA, Gram-positive MRSA and C. albicans, following the method described by Jalal et al. [8]. Ultrastructural Alteration Caused by LIV-AgNPs in Bacterial and Candidal Cells The morphological changes caused by LIV-AgNPs in the bacterial and yeast strain cells were examined by SEM analysis following the protocol described in previous reports [60]. Briefly, ~10^6 CFU/mL of MDR-PA, MRSA, and C.
albicans cells treated with 100 µg/mL of LIV-AgNPs were incubated for 16 h at the recommended temperature. Thereafter, the treated and untreated samples were washed by centrifugation, and the pellets were fixed with glutaraldehyde (4% v/v) followed by osmium tetroxide (1%). After fixation, dehydration, drying and gold coating were performed and, finally, the effects of LIV-AgNPs on the test strains of bacteria and Candida were observed under SEM at an accelerating voltage of 20 kV [61]. Inhibition of Biofilm Forming Abilities of MDR-PA, MRSA and C. albicans The inhibition of biofilm formation after treatment with LIV-AgNPs was quantitated using the microtiter crystal violet assay [61]. Briefly, 20 µL of freshly cultured MDR-PA, MRSA and C. albicans were admixed with 180 µL of varying concentrations (31.25, 62.50, 125, 250 and 500 µg/mL) of as-prepared LIV-AgNPs and the plates were then kept in an incubator for 24 h. Cells without LIV-AgNPs were taken as the control group. After incubation, the contents of the microtiter wells were decanted, gently washed with PBS and left to dry. The adhered biofilm biomass was then stained with crystal violet solution (0.1% w/v) for 30 min. The excess dye was decanted, the wells were washed again with PBS and dried completely, and the stained biofilm was then solubilized with 95% ethyl alcohol and quantitated by optical density at 595 nm [62]. Visualization of Biofilm Architecture by SEM The effect of LIV-AgNPs on MDR-PA, MRSA and C. albicans biofilm architecture was also investigated by SEM [62]. In brief, 100 µL of fresh cultures of the tested bacterial and yeast strains, with and without LIV-AgNPs, were inoculated onto glass coverslips in a 12-well plate overnight. After incubation, the glass coverslips were removed and washed with PBS to remove unadhered cells. After washing, the coverslips were fixed with glutaraldehyde (2.5% v/v) for 24 h at 4 °C, washed again and then subjected to dehydration, drying and gold coating. The effects of LIV-AgNPs on the biofilms of the tested bacteria and yeast were then observed using SEM [61]. MTT Assay The human colorectal carcinoma cell line was used to investigate the anticancer potential of the synthesized LIV-AgNPs at different concentrations (10, 50 and 100 µg/mL) in 96-well cell culture plates by measuring the optical density at 570 nm, and the cell viability (%) was estimated using the formula given in [62]. Statistical Analysis Statistical analysis of the data was performed by one-way analysis of variance (ANOVA) with the Holm-Sidak method for multiple comparisons versus the control group (Sigma Plot 11.0, San Jose, CA, USA). The results indicate mean ± S.D. values determined from three independent experiments performed in triplicate. The level of statistical significance chosen was * p < 0.05 unless otherwise stated. Conclusions This study demonstrates a simple one-pot procedure for the synthesis of LIV-AgNPs stabilized by the fatty acid rich aqueous extract of the poly-herbal drug Liv52. The GC-MS results provided substantial evidence that PLE contributed FAs bearing terminal -OH and -COOH functional groups, namely n-hexadecanoic acid (21.95%), linoleic acid (20.45%), oleic acid (18.01%) and stearic acid (13.99%), which are speculated to reduce Ag+ to Ag0, followed by stabilization through soft corona formation around the nascent NP surface during the synthesis reaction.
Furthermore, the LIV-AgNPs were found to be potential nano-therapeutic agents for controlling bacterial growth and biofilm formation by Gram-negative MDR-PA, Gram-positive MRSA and C. albicans strains in vitro. Significant interaction of PLE-AgNPs with the Gram-negative and Gram-positive bacterial strains and the fungal strain was observed. The propensity of LIV-AgNPs for interaction with and internalization into planktonic cells as well as biofilm biomass appeared clearly in the SEM analysis of the treated experimental sets of MDR-PA, MRSA and C. albicans, owing to the differences in their cell wall composition. The antibacterial and antibiofilm potential of LIV-AgNPs might be due to swift surface contact through the stubborn biofilm matrix formed around the colonized cells; however, further investigations are required to understand their mode of action for nanoantibiotic development. In addition, the dose-dependent cytotoxicity trend of LIV-AgNPs against cultured human colon cancer cells indicates that the FA-rich PLE capped nanomaterials could act as potential anticancer nanodrugs. However, the anticancer data reported here for LIV-AgNPs are only preliminary and will subsequently be investigated in depth, exploring their cytotoxicity towards normal cells as well as the antiproliferative activity of Liv52 extract alone as a control. Data Availability Statement: The data presented in this study are available in this manuscript.
5,404
2021-02-01T00:00:00.000
[ "Biology" ]
Rare Earth Elements: Overview of Mining, Mineralogy, Uses, Sustainability and Environmental Impact Rare earths are used in renewable energy technologies such as wind turbines, batteries, catalysts and electric cars. Current mining, processing and sustainability aspects are described in this paper. Rare earth availability is undergoing a temporary decline due mainly to quotas being imposed by the Chinese government on exports and action taken against illegal mining operations. The reduction in availability coupled with increasing demand has led to increased prices for rare earths. Although prices have come down recently, the situation is likely to remain volatile until material becomes available from new sources or formerly closed mines are reopened. Although the number of identified deposits in the world is close to a thousand, there are only a handful of actual operating mines. Prominent currently operating mines are Bayan Obo in China, Mountain Pass in the US and the recently opened Mount Weld in Australia. The major contributor to the total greenhouse gas (GHG) footprint of rare earth processing is hydrochloric acid (ca. 38%), followed by steam use (32%) and electricity (12%). Life cycle based water and energy consumption is significantly higher compared with other metals. Introduction Rare earth elements (REEs) include the lanthanide series of the periodic table from atomic number 57 to 71, starting with lanthanum (La) and ending with lutetium (Lu), together with scandium (Sc) and yttrium (Y). They are in short supply internationally, with China dominating production and trade. The criticality level of REEs has been ranked as high, with the highest score of 29, based on the analysis of several reports and three criticality factors [1]. This score was calculated by summing the individual scores for each commodity in each of several recent studies of materials criticality in the UK, EU, US, South Korea and Japan. One member of the lanthanide series, promethium (Pm), is radioactive and exceedingly scarce; Pm is usually sourced through nuclear transformations. Rare earths are further divided into the light rare earth elements (LREE) and heavy rare earth elements (HREE), with the divide falling between the unpaired and paired electrons in the 4f shell [2]. This divide is somewhat arbitrary; the common convention in industry is that the LREE run from lanthanum to europium and include scandium, while the HREE run from gadolinium to lutetium and include yttrium. The International Union of Pure and Applied Chemistry (IUPAC) definition places gadolinium within the LREE, on the basis that all of the HREE then have paired electrons. There is some inconsistency in reporting REEs: companies report lanthanum to neodymium as LREE and sometimes include up to samarium, while at other times the HREE are taken to start from samarium. Sometimes a more precise classification groups the rare earth oxides as LREE (La to Nd or Ce), medium REEs (Sm to Gd) and HREEs or yttric (Tb to Lu and Y) [3].
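As a compact way of expressing the industry convention described above, the sketch below groups the seventeen REEs into LREE (La to Eu plus Sc) and HREE (Gd to Lu plus Y). As the text notes, other conventions exist, so this grouping is one choice rather than a standard.

```python
# One possible LREE/HREE grouping, following the industry convention stated above
# (LREE = La-Eu plus Sc; HREE = Gd-Lu plus Y). Other conventions exist.
REE_GROUPS = {
    "LREE": ["Sc", "La", "Ce", "Pr", "Nd", "Pm", "Sm", "Eu"],
    "HREE": ["Y", "Gd", "Tb", "Dy", "Ho", "Er", "Tm", "Yb", "Lu"],
}

def ree_group(symbol: str) -> str:
    """Return 'LREE' or 'HREE' for a rare earth element symbol under this convention."""
    for group, members in REE_GROUPS.items():
        if symbol in members:
            return group
    raise ValueError(f"{symbol} is not classified as a rare earth element here")

print(ree_group("Nd"))  # LREE
print(ree_group("Dy"))  # HREE
```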
Over the last 40 to 50 years there has been extensive speculation on peak oil, i.e., the point at which oil production will start to decline.This "peak" concept is now discussed with respect to many elements in the periodic table where the demand is increasing rapidly but the supply is limited [4,5].Phosphorus (P) and indium (In) fall into this category.In the case of phosphorus, intensive farming practices are required to feed a growing world population but limited phosphorus supply suggests a looming crisis [5].Indium is a key ingredient for displays in mobile phones and newer generation computers.While it is relatively abundant in the earth's crust, there are few rich mineral deposits meaning that the lifetime of known, minable reserves may be as short as 10 years.The increasing demand for rare earth elements and limited reserves for some rare earths cause similar speculation for future availability.The CSIRO Minerals Down Under Flagship Cluster research used a model called "Geological Resources Supply Demand Model (GeRs-DeMo)" for several minerals (e.g., copper, gold, lithium) that predicts supply and demand of a selected commodity.A peak supply year can be estimated for a mineral commodity using the concept of "peak oil" [6,7].This model may be applied in the future to predict the sustainability of mixed REEs rather than individual 17 elements that may be complex. It is likely that wind energy and electric vehicles will be considered as part of the solution for a more sustainable future.Present technologies for electric vehicles and wind turbines rely heavily on dysprosium (Dy) and neodymium (Nd) for rare-earth magnets.Future large scale adoption of these technologies will increase the demand for these two elements [8].It is anticipated that recycling and recovery will assist to satisfy the demand for rare earth element but it is currently very low (i.e., less than 1%) and will pose significant challenge in terms of collection, processing and their recovery due low concentration in the products [9].Environmental impact indicators relevant for rare earths were listed and some literature reported results were published [10].However, the reporting of comprehensive life cycle based assessment results of all relevant environmental impacts for specific rare earth product still non-existence. There is limited comprehensive information in the open literature on the current status of REEs, partially because during the rise of industries that relies on REEs, China was the major source and the remainder of the world has not until recently actively sought to discover new reserves.The recycling aspects have also been described.This paper is considered as a review since a lot of information have been collected, collated, compiled and analysed.However, a small case-study on environmental impact has been used for demonstration rather than reporting the magnitude of results with certainty to indicate that this aspect is important consideration for rare earth industry. The objective of this paper is to describe a review on the overview of REE mining, mineralogy, extraction processes, and selected environmental impacts. Literature Review and Environmental Impact This paper is based on literature review in the area of rare earth minerals.The literature in various databases and websites have been compiled and analysed critically.The key literatures were identified and data were compiled from these openly available materials. 
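The peak-supply modelling referred to above is typically based on curves of the "peak oil" type. The sketch below is not the GeRs-DeMo model (whose details are not reproduced in the text); it is a minimal Hubbert-style logistic-derivative production curve with entirely hypothetical parameters, shown only to illustrate the form of analysis being described.

```python
# Generic Hubbert-style "peak supply" curve: annual production whose cumulative
# total approaches the ultimate recoverable resource. Parameters are hypothetical.
import numpy as np

def hubbert_production(t, ultimate_resource, peak_year, steepness):
    """Annual production P(t); integrates to ultimate_resource over all time."""
    x = np.exp(-steepness * (t - peak_year))
    return ultimate_resource * steepness * x / (1.0 + x) ** 2

years = np.arange(1990, 2101)
production = hubbert_production(years, ultimate_resource=150e6,  # tonnes REO, hypothetical
                                peak_year=2040, steepness=0.08)
print(f"peak year ~ {years[production.argmax()]}, "
      f"peak output ~ {production.max() / 1e6:.2f} Mt/yr")
```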
The environmental impact of rare earth processing, particularly that associated with waste management and the disposal of tailings, will require changes in the processing used to retrieve rare earths [11]. Recently, a review was published on sustainability aspects of rare earth production [12]; it provided some guidelines but did not report any specific results for rare earth production processes. There are few studies that report the impact of rare earth processing in any great detail; one preliminary study was undertaken with a limited scope [3]. [Fragment of Table 1 retained from the source: Thulium (Tm), 0.48, 334,255, 50 (6,700), medical X-ray units and X-ray-sensitive phosphors [17]; Ytterbium (Yb), electrical wiring, heat exchangers, piping and roof construction and, increasingly, consumer electronics [6,18].] According to the United States Geological Survey (USGS), world resources are enough to meet foreseeable demand but world production falls short of meeting current demand [2]. The production tonnage and resources for the individual rare earths are presented in Table 1; copper, a commonly used metal, is shown for comparison in the last row of that table. The resources have been calculated using data on the percentage of rare earths found in various ore deposits [2] and the known resources of rare earth containing ores [15]. The only rare earth element estimated to have less than 1000 years of resource is Eu, at approximately 600 years. However, consumption, particularly of the heavy rare earth elements, has grown by many orders of magnitude in the past two decades, and another order of magnitude increase in demand is not out of the question, since Eu is used (along with Y) in red phosphors for low-energy lighting; this suggests that peak Eu might be as little as 10 to 30 years away, which further underlines the urgent case for recycling [16]. This assessment is admittedly simplistic, since the life of a particular resource will be influenced by the discovery of new deposits, technological efficiency such as using less of a specific material per product, the extraction efficiency of low grade ore, and potentially the stream of recovered metal from recycling. Nevertheless, it is useful for identifying materials on the basis of these potential criticality factors. Australia potentially has a role in world supply, with relatively modest deposits of REEs (2 Mt of REE and yttrium oxide as Economically Demonstrated Resources (EDR), of which 31% is accessible); the world resource is reported as 95 Mt [22]. This Australian resource estimate excludes the tailings heap of Olympic Dam, which can be a significant source of light REEs at a very low grade. The estimated demand for REEs was about 125,000 t in 2010, with 95% of supply from China [23]; the current supply from China is estimated to be 94% [24]. The expected demand for rare earth oxide was about 200,000 t in 2014 [11], although this figure does not discriminate between which rare earths will be in highest demand. The supply was 115,000 t in 2012 and 108,000 t in 2013. Another estimate predicted that in 2016 the forecast supply would be 195,000 tonnes against a demand of 180,000 t; this may not eventuate and carries considerable uncertainty. Clearly, further information is required on the breakdown of consumption patterns for all of these elements.
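The resource-life figures quoted above (for example, roughly 600 years for Eu) are static estimates of the form resource divided by annual production. A minimal worked sketch is shown below; the numbers used are hypothetical placeholders, not the values from Table 1 of the source.

```python
# Static "resource life": years of supply at constant production, ignoring
# demand growth, new discoveries and recycling. Figures are hypothetical.
def static_life_years(resource_tonnes: float, annual_production_tonnes: float) -> float:
    """Resource divided by annual production, in years."""
    return resource_tonnes / annual_production_tonnes

print(static_life_years(resource_tonnes=240_000, annual_production_tonnes=400))  # 600.0 years
```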
The major mineral deposits in Australia principally contain monazite and some xenotime whereas the US and Chinese deposits principally contain bastnasite.China and Malaysia have xenotime deposits.Some of Australia's deposits occur near Alice Springs (Arafura, Perth, Australia), Dubbo (Alkane, Perth, Australia; Jervois Mining, Melbourne, Australia), Laverton, WA (Lynas, Sydney, Australia), Nyngan and Young, NSW (Jervois Mining, Melbourne, Australia, Sc only), Southeast Kimberley (Navigator resources Ltd, Perth, Australia), Capital (near Canberra), Gifford Creek, WA (Artemis Resources Ltd, Sydney, Australia), Mount Gee SA (Marathon Resources Ltd, Adelaide, Australia), Olympic Dam SA (BHP Billiton, Perth, Australia), Brown's Range WA & NT (Hastings Metals, Sydney, Australia & Northern Minerals, Perth, Australia), Mary Kathleen Queensland (BHP Billiton, Perth, Australia) and Greenvale Qld (Metallica Minerals Ltd, Brisbane, Australia).These deposits have a large variation in composition [25].Although the number of identified deposits in the world may be over 850, the actual operating mines are only handful.Prominent currently operating mines are Bayan Obo in China, Mountain Pass in the US and recently opened Mount Weld in Australia.Distribution of rare earth elements in various mines is shown in Figure 1.REEs are also often found associated with uranium/thorium mineralization and uranium ores often contain appreciable REE (and vice versa).This co-deposition with radio-nuclides can pose particular challenges in the processing of REE ores [17,22,26].Current issues surrounding Lynas Corporation's REE plant in Malaysia relate to the perception that the plant will produce a radioactive waste.However, according to International Atomic Energy Agency, the radiation generated from the Malaysian plant would be the effect below harmful levels [27]. Mining of rare earths is generally divided into three historic eras: (i) monazite-placer; (ii) Mountain Pass and (iii) Chinese eras [26].The advent of the Chinese era (mid 1980s) was marked by availability of rare earths at prices that undercut most other mining operations, resulting in closure of many mines outside of China.At this point, the Chinese have around 55% of all known rare earth deposits and control 95% of world supply through integrated mining, refining and supply chains. All of these minerals are found in land based deposits, however, marine deposits have also been considered.It was reported that river run-off was considered the most dominant source of rare earth elements in the ocean [28].However, the work [29] suggests that significant ocean rare earth sediments may arise from hydrothermal plumes from the East Pacific Rise and the Juan de Fuca Ridge.These deposits have similar to or greater levels of REE compared to the Chinese deposits and perhaps greater HREE content. Mining Methods Rare earth mining can be open pit, underground or leached in-situ.For a typical open pit mine, the approach is very similar to other mining operations which involve removal of overburden, mining, milling, crushing and grinding, separation or concentration.The product of the enriched concentrate after separation may contain around 30%-70% of rare earth bearing ore.The process requires higher amount of water and energy usage (e.g., compared with other common metals, i.e., 0.2 to 1 GJ energy/t REO, 0.3 to 1.8 ML water/t REO) as well as production of waste streams (i.e., with other metals due very low grade) such as tailings and wastewater. 
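To give the per-tonne intensity figures just quoted a sense of scale, the sketch below multiplies them by a hypothetical annual REO throughput; the 20,000 t/yr figure is an assumption for illustration and is not taken from the source.

```python
# Scaling the per-tonne intensities quoted above to a whole operation.
# The 20,000 t REO/yr throughput is a hypothetical figure for illustration only.
energy_gj_per_t = (0.2, 1.0)    # GJ per tonne REO, range quoted in the text
water_ml_per_t = (0.3, 1.8)     # ML per tonne REO, range quoted in the text
annual_reo_t = 20_000

energy_gj = tuple(x * annual_reo_t for x in energy_gj_per_t)   # 4,000 to 20,000 GJ/yr
water_ml = tuple(x * annual_reo_t for x in water_ml_per_t)     # 6,000 to 36,000 ML/yr
print(f"energy: {energy_gj[0]:,.0f}-{energy_gj[1]:,.0f} GJ/yr; "
      f"water: {water_ml[0]:,.0f}-{water_ml[1]:,.0f} ML/yr")
```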
If the deposit type is hard rock based then conventional open-cut or underground truck shovel mining system is used.On the other hand, if it is mineral sand based monazite type deposit then wet-dredging or dry mining method is used.If a wet mining operation, a floating dredge cuts the ore under the surface of a pond and pumps the ore slurry to a floating wet concentrator.Dry mining can be similar to conventional truck and shovel system. Extraction and Concentration The following description of rare earth processing is necessarily very general and represents a limited range of options that could be used in the industry.The actual process flowsheets are quite varied and tailored to the gangue and host ores such as monazite based or bastnasite based.There will be implications on impact as a result of these differences; however, this was assumed beyond the scope of this study.The extraction and processing has been described in detail [30]. To extract rare earths, further processing/extraction and refining are required.The extraction may involve using acidic or alkaline routes depending on the mineralogy of the REE-containing phases and reactivity of gangue phases.Typically, the acidic route is the most common, dominating at least 90% of the extraction methods.Depending strongly on mineralogy, the extraction step often involves roasting of the rare earth ore at 400 °C-500 °C in concentrated sulphuric acid to remove fluoride and CO2, and to change the mineral phase to make it more water-soluble.The monazite ore processing would have different routes.The resulting ore paste is washed (usually using water) and filtered or decanted to remove fine solid impurities.The REEs are then further leached (sometimes in multiple steps) using extraction agents (hydrochloric acid) and precipitating agents (ammonium bicarbonate (NH4)HCO3 or NaOH precipitation).Further separation stages are required through, for example successive solvent extraction (e.g., (C16H35O3P) and HCl) and then followed by precipitation steps using ammonium bicarbonate (NH4)HCO3 or oxalic acid (C2H2O4).The precipitate is heated to form rare earth oxides (REO).LREEs may be extracted by molten salt electrolysis based on chlorides or oxides.Metallothermic reduction processes are used to extract the middle and heavy rare metals such as Sm, Eu, Tb and Dy in near vacuum conditions with inert gas at high temperatures (>1000 °C).The difficulty and complications associated with preparing commercial grade rare earth compounds and metals is more akin to fine chemical production than commodity manufacture.Additionally, valuable rare earth ores are often "incidental" and occur in association with valuable elements such as Nb, Ta, In and Ni.The flowsheets for production can therefore be further complicated with extraction steps for other valuable metals. 
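The oxalate precipitation and calcination steps described above can be summarized by generic reactions of the following form, where RE denotes a trivalent rare earth. These equations are a simplified illustration of the chemistry being described, not a flowsheet reproduced from the cited sources (and cerium, for example, would calcine to CeO2 in air rather than the sesquioxide shown).

```latex
% Simplified, generic reactions for oxalate precipitation of a rare earth
% chloride solution and calcination of the oxalate to the oxide (RE = rare earth).
\begin{align}
  2\,\mathrm{RECl_3} + 3\,\mathrm{H_2C_2O_4} + x\,\mathrm{H_2O}
     &\rightarrow \mathrm{RE_2(C_2O_4)_3\cdot xH_2O\downarrow} + 6\,\mathrm{HCl} \\
  \mathrm{RE_2(C_2O_4)_3}
     &\xrightarrow{\;\approx 1000\,^{\circ}\mathrm{C}\;}
       \mathrm{RE_2O_3} + 3\,\mathrm{CO_2\uparrow} + 3\,\mathrm{CO\uparrow}
\end{align}
```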
In an alkaline dissolution based production process, monazite concentrate is decomposed with sodium hydroxide to produce rare earth hydroxides.The rare earth hydroxides are then leached using hydrochloric acid and a mixed rare earth chloride solution is produced.The mixed solution is separated into light, medium and heavy rare earths using solvent extraction processes.Further solvent extraction stages are then required to separate these streams into solutions of individual rare earth chlorides.The separated rare earth chlorides are treated with oxalic acid to produce a rare earth hydrate, and these are then calcined in a furnace at 1000 °C to produce the final rare earth oxide products of greater than 99% purity. In China, the rare earth elements are also recovered as a by-product of iron mining.The world's largest light rare earth deposit is Bayon Obo located in Baotou, China, containing 48 million tonnes of rare earths reserves in the form of bastnasite ore [22,24].Bastnasite ore concentrate is typically calcined to drive off CO2 and fluorine.It is then hydrometallurgically treated by two stage digestion with hydrochloric acid, then treatment with sodium hydroxide, respectively, to produce rare earth hydroxides.The rare earth hydroxides are chlorinated and converted to rare earth chloride products.Separation of mixed rare earth oxides into the individual rare earths can partially be achieved by multiple recrystallisation as double salts, and ultimately by multi-stage solvent extraction.The mining and processing steps for refining of rare earths, therefore, tend to be energy, water and chemical intensive with significant environment risks affecting water discharges (radionuclides, mainly thorium and uranium; heavy metals; acids; fluorides), tailing management and air emissions (radio-nuclides, Th and U, heavy metals, HF, HCl, SO2 and dust). Commercial REE Mining in Australia In Australia, the principal rare earth mineral exploited has been monazite, which typically has associated radioactivity due to thorium content (by substitution up to 30%).Until 1995, rare earth production in Australia was largely as a by-product of processing monazite contained in heavy mineral sands [24].The process involved concentrating monazite using wet processing followed by dry concentration techniques.Wet concentration separates the heavy minerals from gangue minerals.Dry concentration, such as magnetic, electrostatic and gravity separation steps are used to separate monazite from the other heavy minerals.The rare earth elements may be dissolved from monazite by high temperature leaching in concentrated sulphuric acid [31]. In 2007, mining began at the Mount Weld deposit, Laverton, WA in Western Australia, which is the richest known deposit of rare earths in the world.This deposit's mineralogy has been described as a secondary rare earth phosphate, but those phosphates (most likely monazite) are encapsulated in iron oxide minerals.Parts of the deposit have distinct xenotime mineralogy, with heavier rare earth components.The first crushed ore was fed to the concentration plant in 2011.Mount Weld is claimed to have the highest grade known deposit of rare earths in the world.This deposit consists of the Central Lanthanide Deposit ("CLD") and Duncan Deposit.The mine has a conventional open-pit operation.About 773 kt ore has been mined at 15.4% REO grade or 116 kt contained REO.The stockpiled ore is sufficient to sustain steady state production for 6 years for Phase 1 capacity of concentration plant [32]. 
The concentrator commissioned in 2011 is designed to process 121,000 tonnes of ore per year producing 33,000 tonnes of concentrate in the first phase using flotation technology.The second phase is expected to double this production.It is intended that the concentrates will be exported to the Lynas advanced materials plant (LAMP) built in Malaysia.As reported in June 2013, 15,000 t of dry bagged concentrate has been made ready for shipment to LAMP [32]. The LAMP is located in the Gebeng Industrial Estate, in Pahang.Initial capacity in 2012 is designed at 11,000 tonnes of separated REOs per year, with expansion to 22,000 tonnes per year in 2013 [33].While the specific details of the flowsheet for processing concentrate are not available, they are likely to include steps of calcination, caustic conversion, acid leach and solvent extraction.Lynas reported some issues as identified relating to clogging and premature wearing of equipment that are affecting the ability to operate continuously at nameplate production capacity of LAMP.A series of work programs involving equipment changes and materials handling was implemented by Lynas in the late 2013 [32].Their immediate target is to optimise production at the Phase 1 capacity level of 11,000 tpa REO until market prices recover.In the June 2014 quarter, the company produced 1,882 tonnes of rare earth oxides (REO), up 73% over the previous quarter.The full year production to date in September 2014 was 3,965 tonnes [33].Lynas claimed in June 2014 [32] in their corporate website that the production rate is on track to achieve the target. The local residents had concerns over the operation of LAMP rare-earth processing plant particularly on the contamination of the coastal environment and the adverse health impact which could result from the mismanagement of radioactive waste streams.Recently, the Atomic Energy Licensing Board (AELB) of Malaysia agreed to issue a two-year full operating stage license to Lynas after it fulfilled all the conditions [34].The AELB assured residents that it was monitoring the operations of the Lynas plant and found radiation levels onsite and offsite to be within acceptable limits. The reported mine production of rare-earth oxides in Australia, including yttrium oxide, was estimated to be 2,070 tonnes in 2011 and 4,000 tons in 2012 [18].However, the Australian production of REO and Y2O3 is reported as 2,070 kt [1] in 2011 possibly due to a unit error.This should be 2,070 tonne, not in kilo-tonne.Unfortunately, there is a lot of anomaly in the reported production, resources, reserve and life of resources reported by various sources that is extremely difficult to verify particularly for REEs. 
REE Mining in China

Between 2010 and 2012 the Chinese government put strict export quotas on their rare earth minerals and semi-processed rare earth products [2,17,35,36]. The quotas reduced the output by nearly 60% compared to the 2008 total release of 34,156 tonnes [2]. These quotas created a gap between demand and supply and large increases in the prices of the rare earths. In relation to a dispute under the World Trade Organisation, the United States, European Union, Canada and Japan requested consultations with China in 2012 with respect to China's restrictions on the export of various forms of rare earths, tungsten and molybdenum. At the Dispute Settlement Body (DSB) meeting on 26 September 2014, China stated that it intended to implement the DSB's recommendations and rulings in a manner that respects its WTO obligations. China added that it would need a reasonable period of time to do so [37].

Because rare earths are not traded on a commodity market such as the London Metal Exchange, accurate records of prices are difficult to establish. However, from the available price data [38] it is possible to establish that dramatic increases in prices, up to 10-fold, were observed over the decade. The prices of several oxides are presented in Figures 2 and 3, which show the significant volatility of the rare earth market. One example is neodymium oxide, which increased from just under $10/kg in 2001 to nearly $239/kg in 2011 but came down to $80/kg, roughly a threefold decrease from the peak. Other price increases include europium, dysprosium and terbium oxides, of the order of 300%-500% over the last 10 years [39]. Trends in 2012 suggest that rare earth prices have decreased from their maxima in 2011 and are beginning to stabilise, albeit at significantly higher levels than 2010 prices for some oxides with an upward trend (Dy, Pr, Gd, Eu, Nd), no change for Tb, and reductions for other oxides (La, Ce, Sm). Announcements of new rare earth reserves in North Korea and Canada will also put downward pressure on prices.

The Chinese producers also have issues with environmental pollution resulting from poorly regulated mining operations, as well as reported smuggling of rare earths from the southern provinces, which has been estimated to account for 30% of total Chinese production [19,35].

REE in Korea

A major impact on the rare earth market occurred with the announcement of rare earth deposits at Jongju in North Korea, which are being developed by Pacific Century Rare Earth Mineral and the North Korean government. These reserves double the current estimates of rare earth reserves. Moreover, the deposits are relatively rich, with 664.9 Mt at 9% TREO, 634 Mt between 5.7% and 9.0%, and another couple of billion tonnes between 3.97% and 5.7%, along with a further 2.8 billion tonnes of lower-grade ores. The total amount of lower-grade REO is estimated to be around 216.2 Mt. Further exploratory drilling is expected to be undertaken this year and reported when the drilling is finished. However, the veracity of this announcement has been questioned by industry intelligence sources. Furthermore, realising this project is complicated by political implications and international sanctions in the short to medium term.
REE in Canada

Until the discovery of the Jongju deposit(s), the Canadian deposit at Nechalacho in the North-West Territories was one of the most exciting rare earth developments in recent years. This large deposit, rich in heavy rare earth elements, consists of the following minerals: LREE in bastnaesite, synchisite, monazite and allanite, and HREE in zircon, fergusonite and rare xenotime. The mine is fully owned by Avalon Rare Earth Metals, which also has a number of other sites in North America. The mine site treats the ores using hydrometallurgical approaches, with acid leaching as the key processing step [41]. The proportion of HREE to total RE varies from 6% to 30% from the top to the bottom of the deposit. The total deposit is estimated to be 1.71 Mt of rare earth oxide. As with most REE deposits, thorium and uranium are associated with the ore. Production is to begin in late 2016.

Uses

Some of the largest uses of rare earths are in catalysts (20%, largely Ce and La), rare earth magnets (21%, largely Nd, Sm and Dy), alloys (18%), powder production (12%) and phosphors (7%) [42]. Catalyst applications cover both industrial and automotive catalysts. Phosphors are important for a range of applications, particularly for visual displays in screens and low-energy lighting [26]. This is a likely growth area and will put pressure particularly on Eu and Tb reserves. Another area for expansion will be rare earth magnets (Nd, Pr, Sm and Dy), particularly for alternative energies. These will find widespread application in wind turbines, the auto industry (electric and hybrid cars) and the defence industry (e.g., missile guidance systems). Rare earth containing (Er) glasses are important for fibre-optic amplifiers required in high-speed optical communication networks [26,42]. Improving the efficiency of solar energy conversion is another area of probable expansion for rare earths [42][43][44]. The rare earths Er, Y and Ho show promise for up-conversion (converting low-energy photons into higher-energy photons by decreasing the wavelength, i.e., infrared photons to visible) and Yb, Tm and Tb for down-conversion (converting high-energy photons to lower energy) [43,44].

Some novel areas for rare earths include refrigeration using either laser cooling or the magnetocaloric effect [42]. In laser cooling, vibrational energy is removed by emission of photons with a higher average energy than the absorbed photons in various materials that may include Y, Yb, La, Nd and Tm [45,46]. In the magnetocaloric effect, a material has its magnetic domains aligned under a strong magnetic field; when the field is removed, the domains randomise by absorbing phonon energy, causing cooling. Room-temperature magnetocaloric materials (e.g., Gd5(SixGe1-x)4) [46] may offer an alternative to the vapour compression cycles used with current refrigerants [47]. Some more mundane uses include flints for cigarette lighters (Ce), polishing agents (CeO2), rechargeable batteries (La) and carbon arc lamps (La) [26].

Growth in demand for these materials is certainly of concern in many countries, as is best summarised in some US documents [15,26]. Rare earths underpin technologies that are seen as critical for clean energy economies, such as photovoltaic devices, battery technologies for transport and wind energy, and phosphors for lighting. High-speed optical fibre communication is another likely growth area. In fact, a risk analysis undertaken by the U.S.
Department of Energy nominates dysprosium, neodymium, terbium, europium and yttrium as critical to advancement of a clean energy future [48].

Recycling

Unlike other metal recycling industries, the recycling industry for rare earths is still evolving, with many of the activities at development stages. The need for recycling is in part driven by the rapid increase in demand and the scarcity of materials, combined with security-of-supply issues, a problem exacerbated by the additional export restrictions imposed by the Chinese authorities. The recycling option, however, offers several advantages, including lower environmental impact, less dependency on Chinese exports, and feed materials that can be free from radioactive contaminants (e.g., Th and U).

Possible opportunities for recovery of rare earths are typically from used magnets, batteries, lighting and spent catalysts. The materials could be sourced from post-production wastes and post-consumer disposed goods. Clearly, the latter option is expected to be more complex, requiring more extensive physical and chemical treatments and higher energy input.

Typical post-production wastes may, for example, include scraps produced during manufacturing of magnets (with expected wastage of 20%-30%) or sludge produced from shaping and grinding of magnet alloys [49]. Typical recovery methods may include re-melting, reuse and re-blending of scraps, or other selective extraction methods (e.g., using a molten salt agent (MgCl2), the Na2SO4 double-salt precipitation technique or the oxalate secondary precipitation approach for recovery of Nd2O3) [50]. There are already metal recyclers that have identified the value in magnet-manufacturing dross, with pilot programs using standard and simplified hydrometallurgical routes to recover Nd and Dy.

Recycling of rare earths from consumer goods at end-of-life is generally more challenging due to the dispersed nature of the rare earth compounds, which are intricately embedded into the products (e.g., neodymium magnets in hard disks and compressors, electronic displays). Effective recycling requires efficient physical and chemical separation techniques. Physical separation methods may include mechanical dismantling of the components, such as that proposed by Hitachi, in which the neodymium magnets from hard disks and compressor motors/generators are selectively removed by a purpose-designed automated machine, enabling significantly faster (8 times) separation than manual labour. Such a recycling approach is expected to meet 10% of Hitachi's needs by 2013 [51]. Other recovery approaches have been outlined for magnets from MRI (magnetic resonance imaging for medical applications) [49] and fluorescent lamps and tubes [52]. A potential future strategy for industry is to collaborate on the design of RE magnet components to achieve some degree of dimensional standardisation and therefore reusability, and industrial design with easy recycling as an objective.
The chemical recovery methods for rare earth elements generally involve pyrometallurgical and hydrometallurgical approaches [53,54]. Pyrometallurgical routes are commonly used for recovery of metals from electronic scrap. However, the rare earths are easily lost in this method, as they tend to report to the slag phase due to the high affinity of the rare earth elements for oxygen. Hence, hydrometallurgical approaches may be required to leach the rare earth elements from the slag. Rare earth elements (e.g., lanthanum, cerium, neodymium and praseodymium) and other valuable metals (Ni, Co) can be recovered from used Ni-MH batteries by leaching with sulphuric acid [55]. For spent catalysts, however, recycling of the rare earth elements (mainly La and Ce) from FCC catalysts is not yet considered technically feasible or economically acceptable.

The losses of consumer goods through exportation and uncontrolled disposal thus make the recovery of resources difficult. One of the primary constraints is to have an effective collection system and consumer cooperation in recycling end-of-life devices. Although R&D in this domain is gaining momentum, there is still a lack of fully developed commercial-scale plants for rare earth recycling, in part due to drawbacks in yields and cost. In addition, some of the devices containing rare earths are still new on the market, and it will take some time for the products to reach the end of their life (10-20 years for electric motors in vehicles and wind turbines) before they are available for recycling.

Another approach to addressing the rare earth supply issues is substitution with materials made from more common elements, but this approach is still in its early stages. Possible opportunities for substitution under consideration are alternative turbine types for wind turbines, improving the reliability of traditional geared turbines, and alternative motor designs for hybrid and fully electric vehicles. However, it is conceived that substitution opportunities in applications related to energy-efficient lighting systems containing rare earths (compact fluorescent lamps, LED, plasma displays, LCD displays) or catalysts in automotive and petroleum refining are still difficult and unlikely in the short term without further research.

Dispersion and Issues with Recovery

As with all scarce elements, one of the biggest issues is dispersion at end-of-life of the products into which these elements are incorporated. In the past, most discarded product and processing waste has ended up in landfill or been dumped in the ocean. While there is discussion about mining waste dumps, material disposed of at sea is effectively lost, i.e., dispersed into the environment. In the context of rare earths, most products in which the more precious rare earths are used are not inherently dispersive, and programs exist or are beginning in many countries for reclamation, driven by recent increases in the price of rare earths [56][57][58].
Potential dispersive uses of rare earths include catalysts, magnets (Sm, Nd), missiles (Sm, Nd), sunglasses (Er), MRI agents (Gd), lighting (Er, Eu, Tb) and cigarette lighters (Ce). Industrial catalysts generally go to landfill and can potentially be recovered from landfill sites in the future if they are not dispersed. In Europe there is a tendency to dispose of these catalysts into cement products as clinker, which is obviously a loss. While a large proportion of any automotive catalyst can, in principle, be recovered at end-of-life, degradation during the operational vehicle lifetime results in the formation of dust and dispersion of the dust onto roadways during exhaust gas flow. The dispersed material ends up in river systems as a result of stormwater flow and is effectively dispersed back into the environment. For automotive catalysts, the dispersive loss of the active metal catalyst (usually Pt) is more important than that of CeO2, which is also an active ingredient of the catalyst. Products including high-field-strength rare earth magnets are another potential source of dispersion. This is an interesting category, as products with miniaturised electronics such as mobile phones tend not to be discarded but stored in Australian households [59] and are therefore potentially available for rare earth recovery. The potential for mobile phones to be collected has also been documented [60]. This behaviour is not unique to either electronic goods or Australian households but is common across the OECD [61].

On the other hand, rare earth magnets used in missile guidance systems will be dispersed into the environment if the missile is deployed and used. Lighting products have traditionally been sent to landfill at end of life, but programs are now being considered for recovery of the rare earths in these products, as discussed in the "Recycling" section. Cigarette lighters (rare earth flints) are generally dispersed into the environment, either in landfill or storm water drains. Erbium used in fibre-optic amplifiers for high-speed communications might also end up being dispersed, in this case because the Er is used at dopant level [62], making it expensive to recover.

The rare earths at greatest risk are those that have high volume, low reserves and significant dispersion. Essentially, rare earth elements are concentrated from very low grades in the ground and then dispersed into various pieces of equipment in small quantities. Thus, unless recovered, these materials are likely to be lost when the equipment is disposed of.

Life Cycle Based Environmental Impact (Demonstration of One Data Set)

For life cycle based environmental impact assessment, one simple case study has been selected from the Ecoinvent database of the SimaPro LCA software [63,64]. The mining and concentration of a bastnasite ore with a rare earth oxide concentration of 6% has been assumed, located in China [65]. The boundary includes the mining, concentration of REE oxides and separation into products such as cerium concentrate (60% cerium oxide), lanthanum oxide, neodymium oxide, praseodymium oxide, and samarium-europium-gadolinium (mixed medium-heavy REE concentrate, 94% rare earth oxide). The global warming potential impact, or greenhouse gas (GHG) footprint, of rare earth production has been estimated and is reported in kg CO2-equivalent (kg CO2-e) per kg of REO production.
The process included material and energy inputs, emissions and land use for the mining and concentration of a bastnasite ore with a rare earth oxide concentration of 6%. Inputs and outputs were reported for the bastnasite ore composition mined in China. Infrastructure and land use were approximated with iron ore mining, assuming the Chinese Bayan Obo mine, which largely started as an iron ore mine (Fe content about 33%) but now accounts for over 95% of Chinese and 83% of global REE production. Some inputs of auxiliary materials were estimated according to stoichiometry. The estimates for energy consumption, wastes and emissions are indicative only. Production in China was considered for this study. This process may be applicable to other regions if a similar ore type is used.

The assumed subsequent process produced products such as cerium concentrate (60% cerium oxide), lanthanum oxide, neodymium oxide, praseodymium oxide, and samarium, europium, gadolinium (a mixed medium-heavy REE concentrate product with 94% rare earth oxide). This process included roasting and cracking of the REE concentrate with 98% sulphuric acid at 500 °C in a rotary kiln. Solvent extraction (SX) using organic chemicals was applied for the separation of the different rare earth oxides. The obtained rare earth oxide product has a purity of up to 99.9%. The revenue from each product was used to allocate the environmental burden. The break-down of the contributions of various energy and material inputs to the total greenhouse gas (GHG) emission of the REE concentrate production process is shown in Figure 4.

The major contributor to the total GHG footprint of REE processing is hydrochloric acid (ca. 38%), followed by steam use (32%) and electricity (12%). Overall, 51% of the GHG is due to the use of energy in various forms (i.e., diesel, steam, fuel oil and electricity). The remaining minor contributions are from other chemicals and transport. To reduce the GHG impact of REE processing, the focus should be on reducing the acid and energy consumption during processing. The GHG reported in a recently published study [66] is an order of magnitude higher. There are always differences between reported LCA results, due to a variety of reasons that include the assumed boundary and the allocation of impact. Ore grade is one of the main ones; it was assumed to be 4% in that study. There will also be differences in environmental impact if a different ore type, deposit or mining method is assumed. In this paper, although the GHG impact has been selected, other impacts such as radioactivity will also be important; these are considered beyond the scope of the present paper.

The GHG footprint of the separated oxides of RE elements, with the respective material and energy contributions, is shown in Figure 5.

Figure 5. Greenhouse gas footprint of selected rare earth products (note: disposal includes hazardous solvent incineration; 70% of the mixed REE is processed; 90% recovery to products; mine rehabilitation is included under miscellaneous) [67].

Disposal steps (of the solvent mixture with water and tailings, and to hazardous waste incineration) contributed a significant amount of the GHG emission for each separated RE oxide product. The use of organic chemicals (during solvent extraction) had the next largest contribution to the total GHG footprint. Thus, the effort should focus on reducing the use of organic chemicals during rare earth purification and on the disposal issues.
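The text states that the revenue from each product was used to allocate the environmental burden of the shared separation process. The sketch below illustrates that economic-allocation step in Python; the masses and prices are illustrative placeholders, not the Ecoinvent inventory values, and the product list simply mirrors the co-products named above.

```python
# Economic (revenue-based) allocation of a shared process burden across
# co-products, as described for the REE separation step.
# All numbers here are hypothetical placeholders for illustration only.

total_ghg_kg_co2e = 100_000.0  # GHG burden of the shared process (hypothetical)

# product: (mass produced in kg, price in $/kg) -- placeholders
products = {
    "cerium concentrate": (4000.0, 5.0),
    "lanthanum oxide":    (2500.0, 7.0),
    "neodymium oxide":    (1200.0, 80.0),
    "praseodymium oxide": (400.0, 90.0),
    "SEG concentrate":    (300.0, 40.0),
}

revenue = {p: mass * price for p, (mass, price) in products.items()}
total_revenue = sum(revenue.values())

for p, (mass, _) in products.items():
    share = revenue[p] / total_revenue          # revenue share of this product
    ghg_per_kg = total_ghg_kg_co2e * share / mass
    print(f"{p:22s} share={share:5.1%}  GHG={ghg_per_kg:6.2f} kg CO2-e/kg")
```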
Table 2 shows selected life cycle based environmental impacts of the production of selected rare earth elements (the Australian indicator set method was chosen for this impact analysis). Note: *DALY - disability-adjusted life years (a metric for toxicity impacts on human health developed by the World Health Organisation [64]).

The environmental footprints depend on the ore grade and the recovery of the particular REEs. For example, La and Ce are generally available in higher amounts in ore and concentrate and thus can be recovered more readily compared with other REEs. Thus the specific impact of La and Ce oxides is relatively low compared with other elements. Since the energy footprint of the REE oxides is already similar to that of other metals, the footprints of the REE metals (with further processing stages) would be significantly higher, even compared with energy-intensive metals such as aluminium (211 MJ/kg) and titanium (361 MJ/kg) [68]. Their water footprints are much higher than the results for most metals studied in the past (e.g., the titanium water footprint is reported to be 110 kL/t metal [69]).

Environmental impacts such as radioactivity potential, acidification, eutrophication, solid waste generation, water use, gross primary energy footprint, toxicity and any other impact of significance on a regional and global basis should be taken into consideration. However, most of the REEs are expected to be used in energy reduction, energy efficiency and renewable energy technologies. Thus, future LCA studies should include the use phase within the boundary and the impact of REEs on overall emission reduction, which requires further data and analysis beyond the scope of this current study.

Conclusions

The availability of rare earths is in transition, from a temporary decline due mainly to export quotas imposed by the Chinese government, to becoming more available as production increases elsewhere in the world. The reduction in availability coupled with increasing demand led to increased prices for rare earths in 2012 and 2013, but prices have fallen from their peak values. The increasing demand for rare earths in a range of applications means that the rare earth market is likely to be demand-driven for some time to come. This is particularly the case for rare earths used in high-field-strength magnets. Prices are therefore likely to fluctuate until new rare earth deposits supply the market or formerly closed mines are reopened.

The rare earths at greatest risk are those that have high volume, low reserves and significant dispersion. Essentially, rare earth elements are concentrated from very low grades in the ground and then dispersed into various pieces of equipment in small quantities. Thus, unless recovered, these materials are likely to be lost when the equipment is disposed of. Recycling and recovery of rare earths pose challenges in terms of the energy used for collection, reprocessing and reproducing products at specifications that can replace primary metals.

Figure 2. Price history in US dollars of the expensive rare earths [40].
Figure 3. Price history in US dollars of the comparatively cheaper rare earths [40].
Table 1. Abundance, resources, production and uses of the rare earths.
Table 2. Environmental footprint of selected rare earth oxides production.
9,576
2014-10-29T00:00:00.000
[ "Environmental Science", "Geology", "Materials Science" ]
Upregulated Expression of CYBRD1 Predicts Poor Prognosis of Patients with Ovarian Cancer

Cytochrome b reductase 1 (CYBRD1) promotes the development of ovarian serous cystadenocarcinoma (OV). We assessed the function of CYBRD1 in OV using The Cancer Genome Atlas (TCGA) database. The correlation between clinicopathological characteristics and CYBRD1 expression was estimated. The Cox proportional hazards regression model and the Kaplan–Meier method were applied to identify clinical features related to overall survival and disease-specific survival. Gene set enrichment analysis (GSEA) was applied to identify the relationship between CYBRD1 expression and immune infiltration. CYBRD1 expression in OV was significantly associated with poor outcomes of primary therapy and FIGO stage. Patients with high levels of CYBRD1 expression were prone to developing a poorly differentiated tumor and experiencing an unfavorable outcome. CYBRD1 expression had a significant association with shorter OS and acted as an independent predictor of poor outcome. Moreover, enhanced CYBRD1 expression was positively associated with Tem, NK cells, and mast cells but negatively associated with CD56 bright NK cells and Th2 cells. CYBRD1 expression may serve as a diagnostic and prognostic indicator for OV patients. The mechanisms of poor prognosis of CYBRD1-mediated OV may include increased iron uptake, regulation of the immune microenvironment, the ferroptosis-related pathway, and the ERK signaling pathway, among which ferroptosis and the ERK signaling pathway may be important pathways of CYBRD1-mediated OV. Furthermore, we verified that CYBRD1 was upregulated in OV and significantly correlated with lymph node metastasis, advanced stage, poorly differentiated tumors, and poor clinical prognosis in the East Hospital cohort. The results of this study may provide guidance for the development of optimal treatment strategies for OV.

Introduction

Ovarian cancer, the sixth most common genital malignancy among women worldwide, is the most lethal gynecological tumor [1]. Ovarian cancer features extensive peritoneal spreading, and 70% of patients are first diagnosed at a late stage, most frequently with serous carcinoma [2]. Serous ovarian cancer (OV) accounts for over 70% of deaths of patients with ovarian cancer, and overall survival has not changed significantly for fifty years. According to the World Health Organization, 230,000 new cases of OV are diagnosed annually, and 50,000 women die each year [3]. Most patients with advanced OV experience a 29% 5-year survival rate compared with 92% at an early stage [2]. Despite the highly malignant phenotype and complex pathogenesis of OV, the molecular mechanism is not understood. Therefore, it is critically important to identify prognostic indicators of the progression of OV.

The molecular characteristics of OV include genomic instability and clonal diversity [4,5]. Even when treated with an inhibitor of poly(ADP-ribose) polymerase, OV remains incurable and lethal [6]. Extensive studies show that apoptotic Treg cell-mediated immunosuppression correlates with poor prognosis [7]. Further, specific widespread patterns of intraperitoneal dissemination of tumor cells contribute to the heteromorphosis of the immune microenvironment [8]. Numerous studies support the conclusion that tumor-infiltrating lymphocytes (TILs) [6] influence clinical outcomes of patients with OV [8].
Moreover, TILs may contribute to tumor progression [8], the therapeutic efficacy of PD-L1 [9], and the prognostic implications of neoadjuvant chemotherapy [10]. These findings underscore the significance of the immune microenvironments associated with OV. Therefore, we suspected that the expression of CYBRD1 might regulate OV invasion and metastasis through the immune microenvironment.

Cytochrome b reductase 1 (CYBRD1) is an iron-regulated ferric reductase that mediates iron-regulated signaling pathways [11] by catalyzing the conversion of ferric to ferrous ion during iron absorption [12]. Ferrous iron promotes DNA damage and participates in the pathogenesis and progression of cancer by inducing the production of reactive oxygen species [13,14]. The loss of ferrous ion binding leads to the apoptotic death (ferroptosis) of hepatic cancer cells that is mediated by DNA damage induced by procaspase-3-activating compound 1 (PAC-1) [14]. Ferroptosis (iron-regulated cell death) contributes to the maintenance of the stability of the tumor microenvironment [13]. Further, CYBRD1 is expressed at higher levels in tumors of patients with breast cancer than in normal tissues, and high levels of CYBRD1 play a role in prolonging survival by inhibiting FAK activation [15]. These findings support the conclusion that CYBRD1 expression shows promise as a predictor of prognosis. However, insufficient data are available to link CYBRD1 to the absorption of ferrous ions and its association with the immune microenvironment.

To address these unanswered questions, here we aimed to assess the prognostic value of CYBRD1 expression in the immune tumor microenvironment of OV through analysis of gene expression profiles obtained from The Cancer Genome Atlas (TCGA) (https://tcga-data.nci.nih.gov/tcga/) [16]. To further investigate the mechanisms and understand the biological pathways underlying OV, we conducted gene set enrichment analysis (GSEA) to identify pathogenic genes whose products participate in a CYBRD1-associated regulatory network. We further analyzed TCGA data to determine the effect of CYBRD1 on the clinical outcomes of patients with OV and to identify relevant signal transduction pathways associated with CYBRD1 function that contribute to the malignant phenotype of OV. We analyzed the correlation between CYBRD1 expression and immunocytes and ferroptotic markers. Moreover, we demonstrate a correlation between CYBRD1 and clinicopathological variables and draw survival curves to analyze the correlation of CYBRD1 expression with OS in patients with ovarian cancer from Shanghai East Hospital (EH). Our results suggest that CYBRD1 expression is closely correlated with the prognosis of patients. The results provide insights into the mechanism of CYBRD1 function in OV.

Data Acquisition and Bioinformatics Analysis

RNA-seq data (376 patients with OV; workflow type: HTSeq counts) and relevant clinical data were obtained from TCGA. RNA-seq data were obtained using an Illumina next-generation sequencing platform. Clinical data included histological grade, clinical stage, and anatomical locations. We acquired primary outcomes of therapy, overall survival (OS), and disease-specific survival (DSS) to analyze clinical prognosis. The inclusion criteria were (a) clinical stages I-IV, (b) complete follow-up data, and (c) microarray-based expression data. Gene expression values are expressed as log2.
The correlation between CYBRD1 expression and clinicopathological variables was analyzed in 100 patients diagnosed with OV from the EH cohort between 2010 and 2020. Samples with absent or unavailable clinical indicators were treated as missing values. Consent was obtained from the study participants prior to study commencement. All experiments were approved by the Ethics Committee of Tongji University (Shanghai, China).

Gene Set Enrichment Analysis

We used GSEA [17] to investigate the expression of CYBRD1 in OV. CYBRD1 expression data were stratified into low and high groups to annotate biological functions (1000 permutations), and Reactome pathways (reactome.org) were illustrated using clusterProfiler [18] (P < 0.01).

Analysis of Immune Infiltration and Ferroptosis

We used marker genes of 24 types of immune cells described by Bindea et al. [19] to conduct single-sample gene set enrichment analysis (ssGSEA) to evaluate 24 types of tumor-infiltrating immune cells (TIICs) [17]. We used MaxStat (R package) [20] to stratify TIICs into low- and high-abundance groups. Furthermore, we analyzed the correlation of CYBRD1 expression with ferroptotic biomarkers (BECN1, FLT3, VDAC2, ALOX12, ACSL4, and GPX4). Gene expression data were normalized and analyzed using GSVA (R package) [21]. ssGSEA classifies gene sets associated with biological function, chromosomal localization, and physiological regulation [18]. The significance of the correlation between CYBRD1 and TIICs and ferroptotic biomarkers in OV was evaluated using Spearman's rank correlation analysis. An FDR < 0.25 and an adjusted P value < 0.05 were set as the threshold values.

Immunohistochemistry

All samples were fixed in 4% paraformaldehyde at 4 °C overnight. Five-micrometer-thick histological sections were processed by ethanol dehydration, xylene clearing, and paraffin embedding. Each section was stained with hematoxylin and eosin. Sections were incubated with primary antibodies (anti-CYBRD1; 1:500, Bioss, China) at 4 °C overnight. The staining procedure was performed according to the instructions of the commercial kit (ZsBio, China). IHC analysis was performed by two independent pathology investigators at 400× magnification in five randomly selected representative fields separately. A quantitative scoring system was applied to the assessment [22]. The staining intensity criteria were as follows: no positive staining scores 0 points, light yellow (weak positive) scores 1 point, brown-yellow (positive) scores 2 points, and brown (strong positive) scores 3 points. Expression intensity = staining intensity × percentage of positive cells [23]. ImageJ software was used to measure the grayscale value of the exposed slices to calculate the protein expression (semi-quantitatively), and the expression of CYBRD1 was divided into a high-expression group and a low-expression group.

Statistical Analysis

Survival analysis was performed to estimate the association of the OS and DSS of OV patients in the CYBRD1-low and CYBRD1-high groups using the Kaplan-Meier method and Cox regression. We then estimated the predictive performance of CYBRD1 for clinical prognosis (including OS and DSS), as well as other clinicopathological features, using univariate and multivariate Cox regression analysis. All experimental errors are shown as two standard errors of the mean (representing 95% confidence intervals). Patients' survival rates were estimated using the Kaplan-Meier method. Survival curves were assessed using the log-rank test.
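Because the survival workflow described here and continued in the next paragraph (median-expression cutoff, Kaplan-Meier curves, log-rank test, Cox regression) is stated but not shown, a minimal sketch follows. It uses the Python lifelines package as a stand-in for the R/SPSS tools actually cited in the text, and the data file and column names (ov_cohort.csv, cybrd1, os_months, os_event, stage) are hypothetical.

```python
# Minimal sketch of the survival workflow: median split of CYBRD1 expression,
# Kaplan-Meier estimates, log-rank comparison, and a Cox model.
# Python 'lifelines' stands in for the R/SPSS tools cited; column names are hypothetical.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ov_cohort.csv")  # hypothetical columns: cybrd1, os_months, os_event, stage

# Stratify patients by the median expression value (the cutoff used in the text)
df["cybrd1_high"] = (df["cybrd1"] > df["cybrd1"].median()).astype(int)
high, low = df[df["cybrd1_high"] == 1], df[df["cybrd1_high"] == 0]

# Kaplan-Meier estimate for the high-expression group and log-rank test vs. low
km = KaplanMeierFitter()
km.fit(high["os_months"], high["os_event"], label="CYBRD1 high")
res = logrank_test(high["os_months"], low["os_months"],
                   high["os_event"], low["os_event"])
print("log-rank p =", res.p_value)

# Multivariate Cox model adjusting for a clinicopathological covariate
cph = CoxPHFitter()
cph.fit(df[["os_months", "os_event", "cybrd1_high", "stage"]],
        duration_col="os_months", event_col="os_event")
cph.print_summary()
```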
We used the Mann-Whitney U test to evaluate the correlation between CYBRD1 expression and clinicopathological variables. A set of 376 OV samples was divided into CYBRD1-low and CYBRD1-high groups to determine the potential relevance of OS to clinical features. Clinicopathological variables of the CYBRD1-low and CYBRD1-high groups were subjected to logistic regression analysis. Multivariate analyses using the Cox proportional hazards model were conducted to estimate DSS and OS while adjusting for potential confounders. The hazard ratio (HR) and 95% confidence interval (CI) were calculated for each variable. Comparisons between categorical variables were made using χ2 analysis. Statistical analyses were conducted using SPSS software (version 22.0), and P < 0.05 indicates a significant difference. The median value of CYBRD1 expression was defined as the cutoff value. R (version 3.6.1) was used to conduct these analyses. The significance of the association between TIICs, ferroptotic biomarkers, and CYBRD1 expression in OV was evaluated using Spearman rank correlation analysis. To provide reliable evidence of the predictive value of CYBRD1 for patients with OV in the EH cohort, a nomogram and calibration that integrated CYBRD1 and the independent risk factors were constructed to predict the 1-year, 3-year, and 5-year OS of OV patients in the East Hospital cohort.

Clinical Pathological Variables

High levels of CYBRD1 were significantly associated with the outcomes of primary therapy (SD-PD-PR versus CR, P < 0.05) and FIGO stage (I and II versus III and IV; P < 0.05) (Figures 1(a)-1(f)). Moreover, univariate logistic regression analysis revealed that high levels of CYBRD1 were significantly associated with poor outcomes of primary therapy (odds ratio [OR] = 0.719, CR versus PR-SD-PD) and FIGO stage (OR = 1.471; I and II versus III and IV) (Table 2). These findings demonstrate that patients with OV with upregulated CYBRD1 expression were more likely to develop a poorly differentiated tumor and a worse response to primary therapy. This confirmed the good prognostic accuracy of CYBRD1.

Univariate analysis revealed that high levels of CYBRD1 served as an independent factor that predicted shorter OS (HR, 1.438; CI, 1.107-1.868; P = 0.007). The expression of CYBRD1, the primary therapy outcome, and tumor residual were significantly associated with shorter survival (Table 3). Multivariate analysis revealed that CYBRD1 was significantly associated with OS (HR, 1.416; CI, 1.024-1.958; P = 0.036), as was the outcome of primary therapy (HR, 3.304; CI, 2.320-4.706; P < 0.001) (Table 3).

CYBRD1-Related Signaling Pathways and Functional Analysis

To identify CYBRD1-related biological pathways involved in OV, we used GSEA (GSEA v2.0, http://www.broad.mit.edu/gsea/) to analyze pathways that significantly changed in OV samples (Figure 3 and Table 4). CYBRD1 levels (Figure 3 and Table 4) were significantly associated with mucopolysaccharidoses (NES = 1.749, NOM P = 0.025; FDR P = 0.097) (Figure 3). These findings indicate that CYBRD1 was significantly associated with cell proliferation, energy metabolism, and apoptotic signaling pathways. To identify the involvement of CYBRD1 expression in ferroptosis, we analyzed the correlation between CYBRD1 expression and ferroptotic biomarkers. We found that BECN1 and the other ferroptotic driver genes (Figures 4(a)-4(e)) were positively correlated, whereas GPX4 (Figure 4(f)) was negatively correlated, with the expression of CYBRD1 (P < 0.001). These findings indicate that CYBRD1 expression was significantly correlated with ferroptosis.

Immune Infiltration in OV
The numbers of infiltrating T effector memory cells (Tem), natural killer (NK) cells, mast cells, macrophages, gamma delta T cells (γδ T cells), T central memory cells (Tcm), immature DCs (iDCs), neutrophils, T helper 17 (Th17) cells, eosinophils, T helper cells, T helper 1 (Th1) cells, CD8+ T cells, cytotoxic cells, NK CD56dim cells, dendritic cells (DCs), B cells, follicular helper T cells (TFH), regulatory T (Treg) cells, and activated DCs (aDCs) were significantly and positively associated with high levels of CYBRD1 expression (Figure 5). The most highly positive correlations of CYBRD1 levels were with Tem, NK cells, and mast cells, and the most negative correlations were with CD56 bright NK cells and Th2 cells. The results showed that CYBRD1 was associated with immune infiltration in ovarian cancer. We speculated whether the upregulation of CYBRD1 expression could promote tumor progression through immune-related pathways. However, no other studies have shown that CYBRD1 can directly affect the prognosis of OV through an immune mechanism. We only hypothesized and speculated on the basis of the GSEA analysis, and the mechanism still needs to be explored further.

CYBRD1 Expression and Localization in Ovarian Tumor Tissues

IHC was applied to measure the expression of CYBRD1 in OV tissues and verified that CYBRD1 is located within tumor cells and enriched predominantly in the cytoplasm of tumor cells (Figure 6). The 100 patients diagnosed with OV were divided into CYBRD1 low-expression and high-expression groups.

Survival Outcomes and Multivariate Analysis

Kaplan-Meier analysis and the log-rank test revealed that patients in the CYBRD1-high group experienced significantly shorter OS than patients in the CYBRD1-low group (HR = 5.43 (2.31-12.8), P < 0.001) (Figure 7(a)). We performed univariate and multivariate analysis to identify predictors of OS using the Cox regression model in the EH cohort (Table 7). Univariate analysis (HR, 5.43; CI: 2.31-12.80; P < 0.001) and multivariate analysis (HR, 8.42; CI: 3.24-21.89; P < 0.001) revealed that CYBRD1 was significantly associated with shorter OS (Table 7). We constructed a forest map of the risk score and clinicopathological parameters to identify the indicators that were significantly associated with OS. These parameters were included in the multivariate Cox regression model, revealing that CYBRD1 expression was an independent risk factor associated with OS (Figure 7(b)). Subsequently, we estimated the efficiency of the predictive model to develop a quantitative approach for predicting the prognosis of OV patients. A nomogram that integrated CYBRD1 and pathological variables was constructed, with a C-index of 0.7016 (Figure 7(c)). The bias-corrected line in the calibration plot was observed to be close to the ideal curve, showing better consistency between prediction and observation of the probability of 3-year and 5-year OS than of 1-year OS in patients with OV (Figure 7(d)). This may be related to the small number of 1-year OS events among OV patients in the EH cohort. All these findings suggest that the nomogram had a certain accuracy in predicting clinical outcome in OV patients.

Discussion

OV is the sixth most common genital malignancy of females worldwide and accounts for the highest mortality rate among gynecologic cancers. High-grade serous ovarian cancer is the most common histological subtype, accounting for 90% of cases [3]. Despite advances in basic research, chemotherapy, and surgery during the past 50 years, the morbidity and mortality rates of OV continue to increase [3].
Studies of the expression and functional activation of CYBRD1 have been proposed [15,24] because CYBRD1 mediates the transport of ferric ion in lung cancer cells and is involved in mitochondrial metabolism [24]. However, the relationship between CYBRD1 expression and immunocytes in OV is unknown. Here, we investigated the relationships between the expression of CYBRD1, patients' clinical variables, and the immune microenvironment of OV. For this purpose, we conducted bioinformatics analyses of TCGA RNA-seq data. We found that high levels of CYBRD1 expression in OV were associated with worse outcomes of primary therapy, high histological grade, and poor prognosis. GSEA demonstrated that mucopolysaccharidoses, the butyrophilin (BTN) family, the EGFR/SMRTE pathway, IRF3-mediated induction of type I IFN, FOXO-mediated transcription of cell cycle genes, and the ERK pathway were differentially enriched in association with high levels of CYBRD1 expression. These findings indicate that CYBRD1 may serve as a potential indicator of prognosis and a therapeutic target. Further, CYBRD1 expression was positively associated with Tem, NK cells, and mast cells and was negatively associated with the numbers of CD56 bright NK and Th2 cells.

CYBRD1 is a ferrous ion-regulated reductase that activates multiple intracellular signaling pathways involved in transmembrane ferric ion transport [12]. CYBRD1 comprises 286 amino acid residues and six membrane-spanning domains [12]. The amino acid sequence of CYBRD1 is 45%-50% similar to that of cytochrome b561, which facilitates electron transport across the membrane [25]. CYBRD1 primarily acts as an iron- and hypoxia-regulated reductase, which is modulated by HIF-2α and inhibits the metabolism and absorption of iron [25]. Ferric iron is required for tumorigenesis and cancer progression [26]. Iron activates the generation of oxygen radicals, which contribute to cell death, ferroptosis, or carcinogenesis by directly damaging DNA [27]. CYBRD1 mediates direct electron transfer, instead of transport and diffusion across the membrane, and may therefore facilitate energy reprogramming in lung cancer epithelial cells [24]. Abnormal expression of CYBRD1 correlates with iron metabolism of TILs and may be regulated by activated HIF in malignant breast cells [28]. Moreover, CYBRD1 may serve as a prognostic marker for various cancers [28,29]. For example, increased expression of plasma membrane-localized CYBRD1 is associated with favorable prognosis and is implicated in cancer cell proliferation and apoptosis in patients with breast cancer [15]. Further, a meta-analysis of TCGA data revealed that CYBRD1 expression is increased and serves as a prognostic indicator in patients with OV [29]. Similarly, our present study shows that high levels of CYBRD1 expression in OV were significantly associated with poor outcomes of primary therapy and FIGO stage.

Our present bioinformatics analyses revealed that CYBRD1 expression was associated with mucopolysaccharidoses, the butyrophilin (BTN) family, the EGFR/SMRTE pathway, IRF3-mediated induction of type I IFN, FOXO-mediated transcription of cell cycle genes, and the MAPK/ERK signaling pathway, which are related to the proliferation and metastasis of OV cells. Further, activation of the FAK/ERK pathway contributes to tumor cell adhesion and the induction of ovarian cancer [30].
Others found that the IL-33/ST2 axis increases the growth of cancer cells via the MAPK/ERK/JNK signaling pathway and may serve as a prognostic indicator in patients with EOC [31]. Several studies illuminate the effects of signaling through the MAPK/ERK pathway associated with CYBRD1-mediated ion transport. For example, iron reduces the viability of OV cells when ERK signaling is altered [32]. Further, endometriosis-associated ovarian cancers exhibit a disequilibrium of iron homeostasis that is essential for the modulation of cell survival in a MAPK/ERK-dependent manner [33]. Moreover, secretory fimbrial epithelial cells exposed to iron enhance the proliferation of cancer cells, which is accompanied by changes in MAPK/ERK proteins [32]. A study of immune infiltration in patients with myelodysplastic syndrome with advanced clinicopathological features found that CYBRD1 expression regulates the cell cycle and DNA repair, whereas CD34 is downregulated and triggers an immune response [34]. Further studies [4] found that OV is significantly affected by iron metabolism. Further research on the correlation between CYBRD1 expression, the ERK pathway, and immune infiltration is necessary.

Ferroptosis is a newly defined form of regulated cell death characterized by iron overload and lipid reactive oxygen species (ROS) accumulation, and it activates the MAPK signaling pathway to induce carcinogenesis, promote progression, and suppress the immune system [3,[35][36][37]. Hu et al. [38] found that the depletion of PIR initiates HMGB1-dependent autophagy by binding to BECN1 and subsequently promotes ferroptosis by activating ACSL4 in human pancreatic cancer cells. Yang et al. [39] identified that GPX4 modulates ferroptotic cancer cell death and that upregulation of PTGS2 expression is a marker of lipid peroxidation in GPX4-regulated ferroptosis in 17 types of cancers. Another report showed that the receptor tyrosine kinase Flt3 modulated glutamate oxidative stress-induced cell death, ROS production, and lipid peroxidation in multiple neuronal cell lines and primary cerebrocortical neurons [40]. Our findings showed that CYBRD1 expression was significantly positively correlated with ACSL4, BECN1, PTGS2, ALOX12, and Flt3, which are ferroptosis "drivers," and negatively correlated with GPX4, which is a ferroptosis "suppressor." Therefore, our findings suggest that ferroptosis may be one of the mechanisms of CYBRD1-mediated occurrence and development of OV.

Finally, we validated the correlation between CYBRD1 expression and prognostic factors in the EH cohort. The results showed that CYBRD1 expression was significantly enhanced in advanced stage (P = 0.014), lymphatic invasion (P = 0.017), and poorly differentiated tumors (P < 0.001). Moreover, CYBRD1 was an independent indicator of prognosis, and the ROC curves, nomogram and calibration showed that CYBRD1 had a certain accuracy in clinical prognostic prediction. Therefore, we can argue that CYBRD1 expression is significantly associated with shorter OS and acts as an independent predictor of adverse outcomes. To our knowledge, the prognostic association of CYBRD1 with ovarian cancer has not been previously reported, and this finding will be helpful in clinical practice. There are some limitations in this study, including the lack of in-depth research on ferroptotic mechanisms; the single validation cohort may also affect the accuracy and reliability of our results. Nevertheless, we believe that our findings are persuasive enough to warrant future studies with further clinical validation.
At present, our work remains at the level of phenomenological observation rather than in-depth mechanistic study, and we hope that the problems identified here will be helpful for future mechanistic research.

Conclusions

Our findings indicate that CYBRD1 expression may serve as a novel prognostic indicator of poor outcomes of primary therapy and poor prognosis in patients with OV. Further, the ferroptosis and ERK pathways may be closely associated with CYBRD1 in OV. Moreover, our findings show that CYBRD1 expression is differentially correlated with the abundances of TILs and with the immune microenvironment. These results provide a platform for the development of novel inhibitors of the pathogenesis and progression of OV.

Data Availability

To analyze the roles of CYBRD1, RNA-seq data and relevant clinical data were downloaded from TCGA. All of these data are publicly available.
5,143.6
2021-09-21T00:00:00.000
[ "Biology", "Medicine" ]
Gas discharge combustion with a liquid tetrachloride electrode

Titanium powders are widely used in selective laser melting and electron beam melting technologies to create medical implants. However, only powders of a certain fractional composition are suitable for these purposes. A gradual increase in the volume of manufactured products requires the creation of new, inexpensive methods for obtaining metal powders. Plasma-chemical synthesis in a titanium tetrachloride solution can be one such new method for producing powders. A feature of this method is the use of a gas discharge with liquid electrodes, where the liquid electrode is a titanium tetrachloride solution. The aim of the work was to study the possibility of obtaining a metal powder and rapidly cooling it in a liquid. The question of the influence of the discharge parameters on the formation of metal particles remains open. In this work, a gas discharge with liquid electrodes is studied under both anode and cathode mode conditions. A gas discharge between a liquid electrolyte and a metal electrode is experimentally investigated in the voltage range of 300-1000 V. The conditions under which a stable discharge column is formed and a metal powder is produced are established. The regularities of the change in the I–V characteristic of the discharge are determined as a function of the interelectrode distance.

Introduction

The high demand of modern industry for titanium alloys [1,2,3] poses topical issues for manufacturers: reducing the cost of titanium products by developing new production processes and by introducing resource-saving technologies. The development of Additive Manufacturing (AM) made it possible to manufacture metal products by melting a powder and subsequently obtaining a continuous solid-phase structure [4]. Additive manufacturing makes it possible to manufacture parts of complex geometric shapes, which are impossible to obtain using traditional methods. In addition, additive technologies can reduce the time required to obtain a finished product [5]. AM allows the use of a wide range of metallic materials: nickel, titanium and aluminum alloys, cobalt-chromium, various steels, etc. With the help of AM, ready-made functional products with high mechanical properties can be obtained. The quality of the products obtained depends on many technological parameters of the process, the correct choice of which is a fundamental factor in obtaining the required properties. Despite such clear advantages over standard technologies, there are many issues to be addressed. With selective laser melting, the formation of various defects is observed: cracking, porosity, distortion of geometry, and overheating of individual sections of the product, which leads to separation of the product from the platform. Eliminating all of these is a complex task that includes the feedstock problem. The quality of the raw material is very important for titanium alloys; this is due to the strong deformation of the additive part after the end of the printing process and the high price of titanium powder. Therefore, the development of new, simple methods for obtaining titanium powder of a given dispersion is urgent. Plasma electrolyte sputtering is one of the new methods for producing metal powders [6], which consists in using a gas discharge with liquid electrodes. In this case, a sputtered metal electrode is installed above a liquid electrolyte that serves as the cathode [7].
When the metal anode contacts the liquid cathode, an electric arc is initiated, and the metal electrode is then raised vertically to a distance of 5 mm from the electrolyte surface. When the arc burns, the metal anode melts and the metal is sprayed out under the action of the plasma [8]. The formation of liquid metal droplets is observed, which quickly crystallize in the electrolyte. In this work, the goal is to develop a similar alternative method for producing a powder, based on the use of a titanium tetrachloride solution.

Main part

The study is based on the assumption of the dissociative decomposition of titanium tetrachloride into atoms and radicals, followed by quenching of the decomposition products by their rapid cooling. At temperatures of 5000 K and above, more than 99% of titanium and chlorine are in the atomic state. Rapid cooling to room temperature can suppress the reverse oxidation reactions of titanium with chlorine. Investigations of the parameters of the gas discharge and of the possibility of obtaining powders of metallic titanium from titanium tetrachloride were carried out on a setup whose functional diagram is shown in Figure 1. It consists of a power supply system (1), an electrolytic bath (2), an electrode system (3), an oscilloscope (4), an additional resistance (5), a voltmeter (6), an ammeter (7), and a thermocouple (8). The electrode system was used to control the distance between the anode and the electrolyte solution. The oscilloscope (4) was used to monitor the shape of the applied voltage and current, and the voltage and discharge current were measured with the voltmeter and ammeter. The gas discharge burns between a metal anode/cathode (depending on the mode) made of graphite. The graphite electrode is a cylinder 10 mm in diameter immersed to a depth of 1 to 5 mm in the titanium tetrachloride solution. To understand the processes occurring at the plasma-liquid cathode interface, it is necessary to know the dependence of the cathode potential drop on the acidity of the electrolyte and on the pressure. Spectral studies showing how the line intensities of elements entering the solution depend on the magnitude of the cathode drop and on the acidity of the electrolyte can give an idea of the mechanism of charge transfer and the properties of the plasma. It was found that the magnitude of the cathode drop does not depend on the gas pressure. Measurements of the intensity of the spectral lines of metals dissolved in the electrolyte as a function of the acidity of the electrolyte show that for more acidic electrolytes the intensity can be tens of times higher. The most efficient preparation of powder of metallic titanium or its hydride was observed using an atmosphere consisting of a mixture of argon, hydrogen, and TiCl4 vapor. It was possible to achieve a maximum titanium yield of 35% (the ratio of the mass of the obtained powder to the mass of titanium introduced into the reactor as part of the TiCl4). The resulting powder was pyrophoric and could be removed from the receiver only in an argon atmosphere. In the course of the research, it was possible to obtain an ultradispersed titanium powder with a particle size of 10-2000 microns, including the supply of titanium tetrachloride from a liquid into the hydrogen plasma zone and the cooling and condensation of the powder in a liquid medium. Particle size is controlled by changing the direct current of the discharge in the range of 100-500 A.
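The 35% yield quoted above is defined as the mass of recovered powder divided by the mass of titanium fed in as part of the TiCl4; a short sketch of that calculation is given below using standard atomic masses, with the feed and powder masses chosen as purely hypothetical example values.

```python
# Titanium yield as defined in the text: mass of recovered powder divided by
# the mass of titanium introduced into the reactor as part of TiCl4.
# Feed and powder masses below are hypothetical example values.

M_TI = 47.87                 # g/mol, titanium
M_CL = 35.45                 # g/mol, chlorine
M_TICL4 = M_TI + 4 * M_CL    # ~189.7 g/mol

ti_mass_fraction = M_TI / M_TICL4   # ~0.25 of the TiCl4 mass is titanium

ticl4_fed_g = 500.0   # hypothetical mass of TiCl4 fed into the reactor
powder_g = 44.0       # hypothetical mass of powder collected

ti_fed_g = ticl4_fed_g * ti_mass_fraction
yield_percent = 100.0 * powder_g / ti_fed_g
print(f"Ti fed: {ti_fed_g:.1f} g, yield: {yield_percent:.1f} %")  # ~35 % for these values
```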
The main factors affecting the dispersion of the products obtained are the rate of the chemical reaction, the temperature and its rate of change, the presence of nucleation centers in the system, and the chemical nature of the crystallizing substance. At the temperatures of a hydrogen thermal plasma, which significantly exceed the sublimation temperature of titanium tetrachloride, dissociation results in a rapid increase in the concentration of atoms of the starting substance. The partial pressure of the low-volatility substance exceeds its equilibrium pressure, as a result of which the formation of nuclei of the condensed phase begins. Vapor supersaturation is reduced by the formation and growth of nuclei. With sufficiently rapid cooling of the system, supersaturation increases and, as a consequence, spontaneous bulk condensation occurs. The limiting stage of the crystallization process is the diffusion of the substance to the centers of crystallization; therefore, the growth of crystals is difficult and the predominant formation of crystallization centers is observed without their further growth. In a significant volume of the vapor-gas phase, two processes occur simultaneously: the formation of new crystallization centers and the growth of those formed previously, which determines the polydisperse structure of the obtained products. The degree of supersaturation and the rate of crystallization can be increased by lowering the temperature. When a system with a high concentration of a substance is removed from the high-temperature region into the quenching zone, a sharp drop in temperature occurs and the system becomes highly supersaturated with respect to the new temperature. At high quenching rates, small particles are formed, and at low rates, large ones. As the current that generates the plasma arc increases, the plasma temperature increases, which contributes to an increase in the titanium concentration in the gas phase.
1,967
2021-01-01T00:00:00.000
[ "Materials Science" ]
Intracellular Energy-Transfer Networks and High-Resolution Respirometry: A Convenient Approach for Studying Their Function Compartmentalization of high-energy phosphate carriers between intracellular micro-compartments is a phenomenon that ensures efficient energy use. To connect these sites, creatine kinase (CK) and adenylate kinase (AK) energy-transfer networks, which are functionally coupled to oxidative phosphorylation (OXPHOS), could serve as important regulators of cellular energy fluxes. Here, we introduce how selective permeabilization of cellular outer membrane and high-resolution respirometry can be used to study functional coupling between CK or AK pathways and OXPHOS in different cells and tissues. Using the protocols presented here the ability of creatine or adenosine monophosphate to stimulate OXPHOS through CK and AK reactions, respectively, is easily observable and quantifiable. Additionally, functional coupling between hexokinase and mitochondria can be investigated by monitoring the effect of glucose on respiration. Taken together, high-resolution respirometry in combination with permeabilization is a convenient approach for investigating energy-transfer networks in small quantities of cells and tissues in health and in pathology. Introduction The alterations in cell bioenergetics have become a hallmark of heart diseases and cancer, two of the leading causes of death worldwide. Thus, better knowledge of cellular bioenergetic processes may provide several options for treatment of these diseases. As a part of bioenergetic studies, real-time analysis of oxidative phosphorylation (OXPHOS) with high-resolution respirometry has been extensively applied to investigate mechanisms of this key element of cellular bioenergetics. However, in addition to the ATP synthesis inside mitochondrion, the second and just as important part of the energy provision is the transport of the energy-carrying phosphoryl group from sites of regeneration to ATPases across the cytosol. Phosphotransfer circuits composed of creatine kinase (CK), adenylate kinase (AK), and glycolytic/glucogenolytic enzymes along with substrate shuttles, such as glycerol-3-phosphate, are essential parts of the cardiac bioenergetic infrastructure and integral to maintaining energy homeostasis [1][2][3]. These phosphotransfer networks are especially necessary for any cell or tissue with high and intermittent energy fluctuations, such as skeletal and smooth muscle, kidney, brain and neuronal cells, retina photoreceptor cells, spermatozoa, and gastric mucosa [4][5][6][7]. The 18 O phosphoryl oxygen exchange measurements have demonstrated that under basal conditions in intact noncontracting rat diaphragm muscle cells, almost every newly generated ATP molecule appears to be processed by CK (88%) or the AK phosphotransferases prior to its use [8]. In a normal heart, corresponding parameters are 80-88% for CK, about 15% via AK reaction, and the remaining 5-7% via glycolysis [9][10][11]. It is relatively common in the field of bioenergetics to use isolated mitochondria in respirometric studies. This approach enables the assessment of the metabolism inside the mitochondrion e.g., the usage of different respiratory system complexes or detailed assessment of the overall rates of energy production. However, isolation of mitochondria disrupts their normal morphology and interactions with other cellular structures. 
Furthermore, there is evidence that isolated mitochondria possess several functional characteristics that differ considerably from those of intact mitochondria in permeabilized myofibers and cells [12]. Clearly, the isolated mitochondria do not provide information regarding their function in the intracellular environment under physiological settings, because functional connections between mitochondria and other cellular structures (e.g., ATPases, cytoskeleton), essential for normal function in vivo, are destroyed by the isolation procedure. Therefore, better understanding of bioenergetic processes can be derived from experimental models where the mitochondrial function is directly assessable, but also as undisrupted as possible. In these terms, selective permeabilization of the cellular outer membrane offers several advantages. First, this model preserves the mitochondrial interactions with cellular components existing in vivo as discussed. Secondly, this model enables the assessment of energy-transfer networks connecting mitochondria with ATPases. Thirdly, it allows reduction of necessary sample sizes as compared to that needed for the isolation of mitochondria [13,14]. Therefore, permeabilization offers direct and controllable access to mitochondrial processes in in vivo samples to expand our knowledge beyond data gained from inherently limited in vitro models. Implementation of the permeabilization technique to study OXPHOS in the framework of a molecular system bioenergetics has helped to explain the complex network of cellular bioenergetics in heart muscle cells. Results from these works have demonstrated intracellular diffusion restrictions for metabolites, metabolic compartmentalization, metabolite channeling and functional coupling between energy transport networks and OXPHOS. Characterization of metabolic fluxes, including feedback loops regarding distant energy use back to mitochondria, and complex structure-function relationships between included complexes, resulted in formulation of concept of intracellular energetic units [2,3,[15][16][17]. However, for most tissues the intracellular diffusion restrictions for energy metabolites and accompanying micro-compartmentalization together with energy transport circuits is a relatively unexplored and undervalued area in cellular bioenergetics. In this article, we introduce the possibilities to use high-resolution respirometry to investigate the organization of mitochondrial-cytosolic networks and phosphotransfer networks by using cell permeabilization technique and oxygraphy. Diffusion Restrictions and Micro-Compartmentalization Effective communication between energy production in mitochondria and energy consumption across the cytosol is vitally important for all cells. Therefore, the diffusion of ADP toward mitochondria is of regulatory importance. Also, ADP is a limiting factor of the ATPase activity and thus it is critical to maintain a high ATP/ADP ratio near the ATPases [4]. Therefore, efficiency of these processes is dependent on the ability of cellular apparatus to remove ADP from the microenvironment of ATPases. In addition, the produced ADP amount should be a signal for sufficient ATP synthesis in mitochondria. In oxidative slow-twitch muscles, such as heart and skeletal muscle, the cross-talk between mitochondrion and ATPases is especially important to avoid disjunction in energy supply and to regulate mitochondrial work at very different levels of energy need. 
Paradoxically, in cells, especially in an oxidative muscle cells, the diffusion of molecules, including energy carriers, is much slower than in water due to the diffusion restrictions by organelles and cytoskeleton of the cell. The arrangement of mitochondria in heart cells and skeletal muscle is highly regular which is important for efficient energy transfer to ATPases; in many other tissues the mitochondria are more dynamic [18,19]. In recent decades, several groups have demonstrated that kinetic parameters of energy metabolism measured in isolated mitochondria versus mitochondria in vivo give strikingly different results [20][21][22]. For example, the affinity of mitochondria to exogenous ADP, expressed as apparent Michaelis-Menten constant (Km(ADP)), measured in isolated mitochondria from cardiac tissue is approximately 20 µM; the same parameter for mitochondria in vivo, in permeabilized cardiac fibers or cardiomyocytes is close to 400 µM [23]. The Km(ADP) value is dependent on muscle type: in glycolytic muscles the mitochondrial affinity for ADP is high, close to this value for isolated mitochondria, while in slow-twitch oxidative muscles it is similar to the heart tissue (Km(ADP) = 300-400 µM) as mentioned above [20,21]. Over the last decade studies have demonstrated that the voltage-dependent anion channel (VDAC) regulates the flux of metabolites through the outer mitochondrial membrane (OMM), and is selective for ATP, ADP, AMP, NADH, and NADPH. In the closed state VDAC is virtually impermeable to ATP and ADP [24,25]. Possible explanation for this is that VDAC permeability in muscles is regulated by some cytoskeletal proteins in the level of OMM [26][27][28]. Therefore, the intracellular movement of adenine nucleotides is not as easy as can be expected given their essential biological role. By using cell permeabilization technique Km(ADP) values can be measured in small samples of different tissues or cells which get quick information about the bioenergetics regulation type in this tissue. In biopsies from human gastric mucosa the Km(ADP) is about 100 µM [6]. In postoperative samples from colon tumor the affinity of mitochondria to ADP was at the same range (Km(ADP) = 126 ± 17 µM) [29,30], whereas human normal colon tissue displayed significantly lower affinity (Km(ADP) = 260 ± 55 µM). These data demonstrate that the remodeling of intracellular diffusion barriers is involved in carcinogenesis. It has been shown that the diffusion restrictions lead to metabolic micro-compartmentalization i.e., unequal concentration distribution of metabolites, including energy metabolites, in different areas of a cell. Spatial micro-compartments are formed where the concentrations of compounds are significantly higher or lower than of the cell in general [31][32][33]. This in turn could lead to situations where local low concentration could hinder cell function as described in cardiac muscle [33,34]. Also, spatial fluctuations in ATP concentrations caused by restricted diffusion raises the need for more efficient energy transport with the opportunity to precise regulation of energy fluxes. This suggests that free diffusion of energy metabolites is ineffective for muscle work and most of the energy flux should be transported by direct transfer through special pathways. Energy Transport Systems as a Regulator of Oxidative Phosphorylation Energy channeling through CK and AK transport circuits is more profoundly studied in muscle cells. 
It was shown twenty years ago that oxidative muscle cells possess strong diffusion restrictions for the ATP and ADP at the level of OMM [20,35]. Unlike ADP and ATP there is no restriction for movement of phosphocreatine (PCr) and creatine through OMM via VDAC [36]. In complete CK energy-transfer systems isoenzymes of mitochondrial CK (MtCK) are functionally coupled to adenine nucleotide translocase (ANT) in the mitochondrial inner membrane compartment. On the other hand, the cytosolic CK isoforms situated near the ATPases create micro-compartments for privileged exchange of substrates and products, and thus assure effective metabolite channeling. ATP input or removal in these micro-compartments will drive the CK reaction predominantly in a given direction. In a particular cell type, at least one dimeric cytosolic isoform is always co-expressed with a MtCK, generally cytosolic muscle type CK (MCK) with sarcomeric MtCK (sMtCK), or cytosolic brain-type CK (BCK) with ubiquitous MtCK (uMtCK) [4,37]. In oxidative muscle cells ATP-syntasome [38,39], MtCK and ANT complexes together with respiratory system form a large protein supercomplex called Mitochondrial Interactosome (MI), which regulates not only ATP formation but also its movement out of the mitochondrion [15,36]. The PCr production by MtCK is regulated by ANT, ATP/ADP antiporter, which provides conditions, where PCr formation is kinetically favored. ADP, formed by MtCK, is directly channeled to mitochondrial matrix by ANT where it has an instant influence on the respiration rate ( Figure 1). It has also been shown that as the affinity of mitochondrion to ADP is low in oxidative muscle cells due to the restriction of ATP/ADP transport through OMM, the oscillations in the concentration of ADP and creatine in the mitochondrial intermembrane space (IMS) are sufficient to regulate OXPHOS rate [40,41]. Respirometric study of MI, using the method of metabolic control analysis, revealed that the MtCK and ANT complexes are the key points of regulation of respiration rate in cardiac cells under the normal physiological conditions [42]. In permeabilized cells cytosolic water-soluble liquid components and freely floating proteins are washed out but most of the ATPases and enzymes attached to cytoskeleton or other structures remain active. The movement of the adenine nucleotides through mitochondrial outer membrane (OMM) voltage-dependent anion channel (VDAC) may be restricted by specific protein complex (X). However, the cytosolic creatine kinase (CK) isoforms coupled with ATPase and mitochondrial CK and adenylate kinase (AK) cytosolic isoform AK1 and mitochondrial AK2 create an opportunity for energy transport without ADP and ATP free diffusion in the cytoplasm. In these energy-transfer networks ADP generated in ATPase reactions and ATP produced in mitochondria are quickly directed to CK and AK reactions. Therefore, adding CK and AK activating compounds (creatine (Cr) or AMP, respectively) is also reflected in the rate of oxidative phosphorylation (OXPHOS). In the case of the AK shuttle the AMP derived from AK (AK1) reactions in the cytoplasm enters to the IMS where AK2 converts AMP and ATP to ADP. In addition to speed up the movement of the energy-rich phosphoryl group in cytoplasm, these energy transport systems provide better feedback between ATP consumption and synthesis. Addition of pyruvate kinase (PK) and phosphoenol pyruvate (PEP) to medium traps ADP that is not attached to energy transport systems. 
Therefore, in the presence of PK-PEP system and without activation of the CK or AK pathway the rate of OXPHOS decreases. Hexokinase (HK) bound to VDAC directs mitochondrial ATP to glycolysis pathway and remained ADP can stimulate OXPHOS. Adenine nucleotide translocase (ANT) is situated in the inner mitochondrial membrane (IMM). IMS, mitochondrial intermembrane space. Regarding to the AK pathway, the isoenzyme AK2 is in IMS [43] and it has been found that ADP generated there from ATP by AK2 can be channeled into mitochondrial matrix by ANT where it stimulates OXPHOS ( Figure 1) [44]. This suggests that AK2 plays a role in energy metabolism and energy transfer by regulating the ATP/ADP rate between the cytoplasm, IMS and the mitochondrial matrix. AK2 is strongly expressed in liver, heart, skeletal muscle, and pancreas, but also in kidney, placenta, brain, testis, pancreas, lung, and human gastrointestinal wall [6,29,43,[45][46][47]. Moreover, AK2 plays an important role in differentiation of cardiac, neural, and hematopoietic stem cells [47][48][49][50]. It has been shown in Jurkat cell line that induction of apoptosis increased the amount of cytochrome c as well as AK2 in the cytosol [51], therefore evaluation of AK coupling to OXPHOS could be useful for evaluation of cell damage. Also, the possible role for AK2 in the apoptotic process could not be related to the 'normal function' of AK in cells but it has been suggested that AK2 is involved in a novel apoptotic pathway by forming a complex with FADD (Fas-Associated protein with Death Domain) and Caspase-10 [52]. The possible cytosolic partner of AK2 in full energy-transfer circuit AK1 is the most abundant AK isoform located in cytosol. It is present in most mammalian tissues, and its expression is especially high in tissues with high energy need, such as brain, skeletal and heart muscles and in erythrocytes [43,46]. The AK1-knockout muscle in mice adapts to the lack of AK1-catalyzed phosphotransfer through up-regulation of glycolytic, CK and guanine nucleotide phosphotransfer systems, but the energetic efficiency of AK1-knockout muscle was lower than that of wild type [53]. The activity of AK in cardiomyocytes develops close to the adult value at the end of the first month already, meanwhile increase in the MtCK isoform content starts in the end of the second postnatal week and the CK pathway is fully developed by the end of third month [54,55]. During aging, decline in the CK pathway is the first detectable sign of the alterations in bioenergetics metabolism in 1-year-old (middle-aged model) rat cardiomyocytes while the alterations in the AK pathway are not significant [55]. Also, Nemutlu et al., using determination of 18 O labeling ratios in metabolic oligophosphates, detected decrease in CK as well as AK pathway activity in aged rat myocardium. Interestingly, this decrease was found to be smaller in stress conditions (initiated with isoproterenol) [56]. Glycolytic enzymes can also contribute to intracellular high-phosphoryl transfer. Energy-rich phosphoryl groups from ATP can be used to phosphorylate glucose and fructose-6-phosphate near mitochondria and from the other side, in the cytosol pyruvate kinase (PK) can phosphorylate ADP and thereby provide ATP for use (reviewed in [7]. Activity and subcellular localization of HK isoenzymes determines the further metabolic fate (anabolic or catabolic) of glucose-6-phosphate and modulates other intracellular roles of glucose. 
The tight regulation of HK binding to the OMM depends on cellular energetic needs in skeletal muscle [57]. Also, a decrease in the HK2-mitochondrial interaction indicates a negative outcome of ischemia-reperfusion injury of the heart [58,59]. Moreover, evidence is mounting that binding of HK2 to VDAC plays a pivotal role in highly malignant cancer cells by promoting cell growth and survival [60,61]. Therefore, studies are needed that include not only ATP formation inside the mitochondrion but also the interactions of the mitochondrion with the other components of the cell that assure transport of the phosphoryl group to the energy consumption sites. Besides, energy-transfer pathways work in both directions: they transport energy to the ATPases and information back to the mitochondrion. It is important to understand the regulatory pathways in bioenergetics and the order of pathological changes in order to support maintenance of normal cell function and to start prevention at the first signs of alterations. Materials and Sample Preparation Chemicals: All chemicals used in this study were purchased from Roche, Fluka, and Sigma-Aldrich (Saint Louis, MO, USA); only ultra-pure chemicals suitable for molecular biology and work with cell cultures were used. To make the Mitomed solution, dissolve EGTA, MgCl2, taurine, KH2PO4, HEPES, and sucrose, and add K-lactobionate stock solution (0.5 M, stored in 12 mL aliquots at −20 °C); adjust the pH to 7.1 with KOH and store in 25-50 mL aliquots at −20 °C. On the day of the experiment, weigh and add the respiratory substrates glutamate/pyruvate and malate and adjust the pH to 7.1 with KOH, or add 1 M neutralized stock solutions directly into the oxygraph chamber. With added substrates, the Mitomed solution can be stored for two days at 4 °C. Before the experiments, add BSA and 0.5 M DTT stock solution (prepared freshly each day); the ready-to-use respiration solution can be stored for a few hours at room temperature. The instructions for making ADP, ATP, AMP, and other stock solutions can be found in Table 1. Keep the stock solutions on ice during the experiment. Sample permeabilization: The tissue permeabilization procedure should be carried out as described in [14]. For cells, the permeabilization procedure is carried out directly in the oxygraph chamber with saponin for 5 min before starting the measurements. The appropriate saponin concentration should be tested for each sample type; it varies from 25 µg/mL for rat cardiomyocytes [55] to 65 µg/mL for the human gastric cancer cell line MKN45 [62]. Permeabilized samples can be stored in Mitomed solution (without respiratory substrates), under gentle shaking at 4 °C, for a few hours. Quantitative Assessment of Intracellular Diffusion Restrictions Intracellular diffusion restrictions for a given substrate can be measured indirectly by comparing the apparent Km value for that substrate in permeabilized cells with the corresponding value for the isolated enzyme or isolated organelle. Protocol 1 can be used to determine Km(ADP) in permeabilized cells using oxygraphy. Timing: ~1 h The Km(ADP), which characterizes intracellular diffusion restrictions for ADP, can be determined only in permeabilized tissue samples or cells and not in preparations of isolated mitochondria. The Km(ADP) varies significantly between cell types, and these differences likely stem from the specific structural and functional organization of their energy metabolism. To determine the apparent affinity of mitochondria for exogenous ADP, the dependence of the respiration rate on exogenous ADP can be measured.
From these data, by using the Michaelis-Menten equation, the Km for ADP (herein the apparent Km) and Vmax can be calculated. 1. Add cells/fiber into the oxygraphic chamber. 4. Start cumulative addition of ADP until the respiration rate saturates. The ADP concentration range depends on the sample. For preparations with low Km(ADP) (e.g., isolated mitochondria and most cell cultures) the concentration range is 0-500 µM ADP. For permeabilized tissue samples the saturating ADP concentration may reach up to 5 mM ADP (usually 2 mM). 5. Calculate the Km(ADP) and Vmax values from the [ADP] versus respiration rate relationship (with the basal rate of respiration, V0, subtracted) on the basis of the Michaelis-Menten equation, e.g. with a non-linear least-squares fit as sketched after this passage. Critical steps: The injectable ADP stock solution should be divided into up to eight doses to cover the required concentration range/Km curve. Too many additions extend the duration of the experiment, and thereby the constant stirring and the reduced oxygen concentration in the chamber could cause mechanical disruption of the cell/fiber structure and inactivation of respiration, resulting in lower oxygen consumption rates and an inaccurate Km(ADP) value. Representative traces can be found in [19,26,36,40,63]. Additionally, plotting the data obtained using Protocol 1 in a double-reciprocal (Lineweaver-Burk) plot gives information about the presence of different mitochondrial populations with differently regulated OMM. If the data give a straight line, the mitochondrial population in the sample is homogeneous. Creatine Kinase Pathway The stimulatory effect of creatine on mitochondrial respiration allows efficient recycling of ADP inside mitochondria, directed by MtCK in the IMS, and leads to tight coupling of mitochondrial respiration with ATP synthesis (Figure 1). These processes have been known and studied in permeabilized cardiac cells for over thirty years [64,65]. Further studies have shown that there is a large variability in the distribution and role of the CK network between different muscle types and animal species. The increase in the respiration rate in response to creatine addition is 20% in chicken ventricular muscle, but for another bird, the pigeon, the corresponding number is 60% [66]. The effect of creatine on OXPHOS is well established in rat ventricular cardiomyocytes but not in rat atrial fibers, despite the presence of active MtCK [67]. Interestingly, in human atria MtCK is functionally coupled to OXPHOS, as in ventricular muscle [68]. As was discussed before, one indicator of the level of regulation of the respiration kinetics of cells is the Km(ADP). In fast-twitch muscles (m. extensor digitorum longus, white m. gastrocnemius) the Km(ADP) value is approximately 20 µM, in the same range as for isolated mitochondria, and it does not change in the presence of creatine [20,21]. In slow-twitch muscles (soleus, heart) with a high Km(ADP) value, it decreases to 80-100 µM in the presence of creatine [15,21]. In the latter case the phosphotransfer is directed by the CK pathway, and therefore the diffusion restrictions for ATP/ADP at the OMM level have a lower influence on the mitochondrial oxygen consumption rate, because in the presence of creatine the functional coupling between MtCK and ANT ensures that ATP synthase uses the ADP circulating in the IMS, and therefore the respiration rate increases faster than without creatine. This is reflected in an increase of the mitochondrial apparent affinity for ADP (Km(ADP) decreases) [36,42,63,69]. The described dissimilarities in OXPHOS regulation by creatine suggest different roles of CK in these muscles.
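The fit referred to in step 5 of Protocol 1 is a standard Michaelis-Menten regression, and the Lineweaver-Burk check mentioned above uses the same data. A minimal sketch in Python/SciPy follows; the titration values and variable names are illustrative rather than measured data, and the same fit applies when the titration is repeated in the presence of creatine (Protocol 2).

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(adp_uM, vmax, km):
    """Respiration rate as a hyperbolic function of [ADP] (basal rate V0 already subtracted)."""
    return vmax * adp_uM / (km + adp_uM)

# Illustrative titration of a permeabilized fiber: cumulative [ADP] (µM) and
# respiration rates with the basal rate subtracted (arbitrary units).
adp = np.array([25, 50, 100, 250, 500, 1000, 2000], dtype=float)
rate = np.array([5.2, 9.6, 16.1, 27.9, 35.4, 41.2, 44.0])

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, adp, rate, p0=[rate.max(), 100.0])
print(f"Vmax ≈ {vmax_fit:.1f}, apparent Km(ADP) ≈ {km_fit:.0f} µM")

# Lineweaver-Burk check: a single mitochondrial population should give a straight line
# in the double-reciprocal plot (1/rate vs 1/[ADP]); r^2 far from 1 hints at
# heterogeneous populations with differently regulated OMM.
r2 = np.corrcoef(1.0 / adp, 1.0 / rate)[0, 1] ** 2
print(f"Lineweaver-Burk linearity: r^2 = {r2:.3f}")
```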
In fast-twitch glycolytic muscles, the main role of CK is the energy buffering, while in slow-twitch muscles, the CK pathway is responsible for compartmentalized energy transfer. In these cells CK system ensures stable energy supply for myofibrils and ion channel ATPases [4]. In connection with these two tasks, CK network ensures local low ADP level to prevent ATPase inhibition, and proton buffering. In malignant tumor tissues decrease in the CK activity and creatine content is detected in colon and stomach adenocarcinoma, colon melanoma, as well as skeletal muscle fibrosarcoma cells [29,70]. Interestingly, unlike other CK isoforms, the expression of uMtCK increases in malignant tumor cells. The authors proposed that this phenomenon relates to the uMtCK role as inhibitor of mitochondrial permeability transition pore and therefore apoptosis [70]. Also, in colorectal colon tissue the creatine activation was up to 60% from the exogenous ADP activated maximal respiration, while in corresponding tumor tissue no activation after creatine addition was detected [29]. Here we introduce three protocols developed to characterize the role of CK pathway in energy metabolism using oxygraphic method (Protocols 2-4). Protocol 2. Determination of Km(ADP) in the presence of creatine. Timing~1 h 1. Intracellular diffusion of adenine nucleotides could be restricted (characterized by high Km(ADP) measured in permeabilized tissue/cells) but creatine/PCr transport through the VDAC could bypass the restrictions when CK pathway is functionally coupled to OXPHOS. 3. Add cells/fiber into the oxygraphic chamber. Start cumulative addition of ADP until to respiration rate saturation. 7. Calculate the Km(ADP) and Vmax values from the [ADP] versus respiration rate value (the basal rate of respiration, V 0 , subtracted) relationships on the basis of the Michaelis-Menten equation. When the calculated Km(ADP) value with creatine is significantly lower than the corresponding value without creatine, it confirms an effective functional coupling of OXPHOS to CK pathway. Critical step: Creatine has low solubility at high concentrations. There is two ways to add ceratine into the oxygraph chamber. The first is to weigh the required amount of substance and add it directly to the chamber. The second is to use creatine stock solution (0.2 M) and keep it at 60 • C. Prepare the injection solution of creatine (0.2 M) just before the experiment. Wash the syringe immediately after every injection to avoid blockage by insoluble residue. Next protocol (Protocol 3) enables the quantitative measurement of the CK pathway contribution to the energy-transfer flux. With this simple protocol we can see what relative proportion of CK is connected pathway from the entire energy transport in a particular tissue type. For example, in adult mammalian heart muscle cells CK has a strong control over energy transport and OXPHOS while in postnatal heart cells activation of respiration with creatine is not detectable [54]. The following test can only be performed with permeabilized tissue/cell samples because they have intact mitochondria in their natural milieu including ATPases and CK near them which enables functional coupling between ATPases and CK. Protocol 3. Test for evaluation of the activity of the CK pathway in energy transport and ATP/ADP flux in general. Timing 0.5 h The test allows assessing functional coupling between the CK pathway and OXPHOS without hampering ATP/ADP diffusion between mitochondria and ATPases. 1. 
Add cells or permeabilized tissue sample into the oxygraph chamber. 3. Add MgATP (2 mM) to induce maximal activity of ATPases (V ATP ). Slight oxygen consumption could be detected in these conditions in resting muscle cells. 4. Add creatine to a final concentration of 20 mM (V Cr ). If there is a marked rise in respiratory rate after the addition of creatine then CK pathway is activated and concomitant increase in respiration rate reflects functional coupling between mitochondrial CK with OXPHOS as well as general ADP transport activity. Optional: 5. Add ADP (2 mM) to register maximal ADP dependent oxygen consumption rate. The extent of creatine activation (the creatine index) could be calculated as (V Cr − V 0 )/V max(ADP) Critical steps: Prepare the injection solution of creatine (0.2 M) just before the experiment and keep it at 60 • C because of low solubility of creatine at that concentration. Wash the syringe immediately after every injection to avoid blockage by insoluble residue. Representative traces can be found in [42]. To study specifically the CK pathway and determine the role of MtCK in it the Protocol 4 could be used. Protocol 4. Creatine test to determine the coupled state of mitochondrial creatine kinase. Timing 0.5-1 h In this protocol energy flux from the transfer through CK pathway and direct ATP transport could be measured separately. For that purpose, the pyruvate kinase/phosphoenol pyruvate (PK/PEP) system is added to trap extramitochondrial ADP. Therefore, all the ADP produced in ATPase reactions and not engaged in the CK pathway is regenerated by PK/PEP system; and only ADP/ATP circulating inside the mitochondrion activates respiration. Add MgATP (2 mM) to activate ATPases. The increase in respiration rate is observable because ADP generated by ATPases is diffused to mitochondria. 3. Add PK (10 U/mL) to activate PK/PEP system which is included to rephosphorylate ADP produced by cytosolic ATPases. While the CK pathway is not activated, energy transport between mitochondrion and ATPases is prevailing and taking place through direct ATP/ADP transfer. Therefore, addition of PK/PEP decreases oxygen consumption rate. In this situation ADP, formed by the ATPases, is regenerated by PK/PEP and backflow of the ADP to mitochondrion is smaller, and oxygen consumption rate, used for rephosphorylation inside mitochondrion, decreases ( Figure 1). 4. Start stepwise addition of creatine until saturation is reached (when no additional increase in the respiration is detected). If mitochondrial CK is coupled to OXPHOS, then the respiration in the presence of PK/PEP system is initiated only by ADP generated in mitochondrial intermembrane space by mitochondrial CK. Critical steps: Prepare the injection solution of creatine (0.2 M) just before the experiment and keep it at 60 • C because of low solubility of creatine at those concentrations. Wash syringe immediately after every injection to avoid blockage by insoluble residue. Representative traces can be found in [19,36,40]. Activation of CK pathway by creatine is very sensitive to the cell/fiber permeabilization quality. Therefore, the appropriate quality tests for outer and inner mitochondrial membrane should be performed in parallel with CK coupling experiments to exclude changes due to poor sample preparation. The activation rate of CK pathway is decreased in several pathologic conditions in comparison to respective healthy tissues. 
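A one-line helper makes the creatine index of Protocol 3 explicit; the rates in the example are illustrative values in arbitrary units, not measurements.

```python
def creatine_index(v_cr: float, v0: float, v_max_adp: float) -> float:
    """Extent of creatine activation (Protocol 3): (V_Cr - V_0) / V_max(ADP)."""
    return (v_cr - v0) / v_max_adp

# Illustrative values: basal rate 6, rate after 20 mM creatine 28, and maximal
# ADP-stimulated rate 50 (same arbitrary units) give a creatine index of 0.44.
print(creatine_index(v_cr=28.0, v0=6.0, v_max_adp=50.0))
```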
In human gastric mucosal tissue, an active inflammation weakens coupling between CK and OXPHOS as compared with cells with non-active inflammation [6]. However, there are different tendencies in malignant cells-while decreased levels of creatine and CK is reported, in some cases elevated activities are found [71]. Alterations in the CK pathway activity often appear as a first step before more profound changes of energy metabolism. Also, increase in mitochondrial density is observed in response to the CK/PCr circuit inhibition [72]. In oxidative muscles the need for functional energy transport and buffering at the moments of high energy need is especially important. Therefore, the decrease in CK system has a great impact on muscle performance and a decrease in the function for which this system could be used, to detect outset and progression of pathology. The activation of respiration as a response to creatine addition is dependent of the complexes functionally connected with the VDAC in the OMM. If the activation of creatine decreases, it is a first sign that complexes, connected with the OMM and regulating VDAC permeability, are partly detached and more molecules of ATP are diffusing to the cytosol. The alterations of CK system are very sensitive and can be detected already before the changes of other kinetic parameters of the OXPHOS. Adenylate Kinase Pathway AK catalyzes adenine nucleotide interconversion (2ADP ↔ AMP + ATP) and thereby regulates nucleotide ratios in various cellular compartments, the activity of AMP-sensitive metabolic enzymes, participates in the purine nucleotide synthesis pathway and in regeneration of other nucleoside diphosphates from NTP using AMP as a preferred phosphate substrate [49,73]. Besides, through its unique property of transferring and providing for use both βand γ-phosphoryl groups of ATP, AK doubles the energetic potential of the ATP molecule. To date, in vertebrates nine AK isoforms (marked as AK1-AK9) have been identified with sub-cellular locations of AK1, AK5, AK7, AK8, AK9 in the cytoplasm, AK2, AK3, AK4 in mitochondria and AK6, AK5, AK9 in the nucleus (reviewed in [73]). Such intracellular placement of different AK isoforms over the entire cell could form an intracellular network for transport of energy-rich phosphoryls between cellular compartments for ensuring efficient feedback between energy consumption and production [7,46,49]. The coupling of OXPHOS with AK system is known to supply energy for a nuclear transport [74]. Proteomics studies have revealed up-regulation of AK2 in human prostate and pancreatic cancer cells [75,76]. It was shown that AK2 can promote cell proliferation under normal circumstances and high expression of AK2 can be associated with poorly differentiated cells with high proliferative index, and that strong differences exist between highly differentiated and tumor cells in the affinity of their mitochondrial respiration for exogenous AMP [49,77,78]. The signaling function of AK realizes through the amplifying a small change in the ATP/ADP ratio into a much higher increase in the AMP/ATP ratio that in turn activates several cellular AMP-sensitive components, including those in the glycolytic and glycogenolytic pathways, and metabolic sensors and effectors such as ATP-sensitive potassium channels and AMP-activated protein kinase, which adjust energy state in the given tissue [1,49,79]. The Protocol 5 enables the determination of the potential of AK system to activate respiration in general. 
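The signal amplification by AK described above can be made quantitative with the AK equilibrium. A minimal sketch, assuming the reaction 2 ADP ↔ AMP + ATP stays near equilibrium with an apparent equilibrium constant of about one (an illustrative assumption; the true constant depends on conditions), shows how a modest fall in ATP/ADP translates into a much larger rise in AMP/ATP.

```python
def amp_from_ak_equilibrium(atp_mM, adp_mM, k_eq=1.0):
    """[AMP] from the adenylate kinase equilibrium 2 ADP <-> AMP + ATP,
    with K_eq = [AMP][ATP]/[ADP]^2 assumed ~1 for illustration."""
    return k_eq * adp_mM**2 / atp_mM

# Resting vs mildly stressed cell (illustrative concentrations in mM):
for atp, adp in [(8.0, 0.05), (7.2, 0.15)]:   # ~10% drop in ATP, ~3x rise in ADP
    amp = amp_from_ak_equilibrium(atp, adp)
    print(f"ATP/ADP = {atp/adp:5.0f}, AMP/ATP = {amp/atp:.5f}")
# The AMP/ATP ratio rises by roughly an order of magnitude for a ~10% change in ATP.
```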
In tissues such as human breast cancer which have low mitochondrial respiration (RCI < 2), only total AK-mediated respiration can be measured by the respirometry method without the addition of PK-PEP system (protocol 5) [29]. Besides, the data obtained by conducting oxygraphic method on human breast cancer, healthy colon, and colorectal cancer (clinical postoperative samples) correlate well with total AK activity in these tissues [29]. This result indicates that the rapid and simple oxygraphic methods can be used to detect changes in AK activity in clinical postoperative tissues. Protocol 5. Analysis of OXPHOS coupling to AK pathway. Timing 0.5 h Cells in which AK is functionally coupled to mitochondrial OXPHOS a small decrease in ATP e.g., in case of cellular stress induces a large increase in AMP which stimulates OXPHOS through AK-catalyzed ADP regeneration in mitochondria. This protocol enables determination of the potential of AK to activate respiration in cell cultures, clinical material and in experimental animal preparations. In oxidative muscle cells AMP significantly stimulates respiration at maximal concentration of ADP generated by the system. This reflects the intracellular metabolic compartmentalization and local production of ADP by mitochondrial AK functionally coupled with ANT ( Figure 1). 1. Add cells/fiber into the oxygraphic chamber. 3. Add MgATP (2 mM or 0.1 mM) to activate ATPases and induce maximal endogenous (intra-systemic) ADP production which should increase the respiration rate. 4. Add AMP (2 mM) to activate the AK reaction and register V AMP . Respiration should increase due to activation of cytosolic and mitochondrial AKs. The extent to which respiration is stimulated by AMP indicates the functional coupling of whole AK pathway. 5. Inhibit AK with diadenosine pentaphosphate (AP5A, 0.2 mM, V AP5A ) in order to measure AK-dependent part of AMP activated respiration. Consequently, in this setup the inhibitory effect of PK on the AMP-mediated O 2 consumption correlates with intracellular AK1/AK2 ratio. 6. Add carboxyatractyloside (CAT, 1 µM) to inhibit ATP/ADP transport through ANT. In intact mitochondria the respiration is controlled by ANT and if inner mitochondrial membrane is disrupted ANT does not control respiration. 7. To express the strength of the AK functional coupling with OXPHOS calculate AK index (IAK) as IAK = (V AMP − V AP5A )/V AP5A . Critical steps: Cells with a low Km(ADP) should be measured at low (0.1 mM) ATP concentrations, while for cells with higher Km(ADP) vales the use of higher (2 mM) ATP concentrations is recommended [77]. Representative traces can be found in [30,77]. In a more targeted approach, a simple new oxygraphic method was used for quantitative estimation of cellular compartmentalization of AK activity in permeabilized mammalian cells and tissues [77]. The protocol distinguishes between the mitochondrial AK2-dependent respiration activity and activation of respiration induced by the cytosolic AK activity, which is mainly dependent on AK1 activity. The main advantage of this method is its capacity to estimate the relative ratio of AK1 and AK2 activities in one sample without extraction of cellular proteins. Protocol 6. Determination of AK1 (cytosolic AK) and AK2 (mitochondrial AK) dependent portion of the AMP stimulated respiration. Timing 45 min-1 h Assessment of AK1/AK2 ratio gives more detailed information about organization of cellular energetic metabolism. 
AK, as the processor of AMP, has an influence on the regulation of intracellular signaling. A shift in the AK1/AK2 ratio may also indicate problems in cell differentiation, because AK1 predominates in well differentiated cells. This is a simple oxygraphic semi-quantitative analysis for the presence of AK1 and AK2 in permeabilized cells. It is based on ATP/AMP-stimulated AK-catalyzed reactions providing ADP to OXPHOS, ADP trapping in the bulk phase of the cytoplasm by the PEP/PK system, and measurements of O2 consumption rates. 4. Add AMP (2 mM) to activate the AK reaction coupled with OXPHOS and mediated by AK2, AK1, and ANT. Register the maximal AMP-stimulated respiration (V_AMP). 5. Injection of 10 IU/mL PK decreases the respiration to the level of the AK2-coupled reaction, because the PEP/PK system is now complete and the resulting V_PK reflects the AK2-specific reaction coupled with ANT inside mitochondria. 6. Add CAT (1 µM) to check inner mitochondrial membrane (IMM) intactness. With an intact IMM, ANT controls the respiration; if this control is lost, the respiration rate with CAT significantly exceeds the basal respiration rate. 8. The functional coupling of AK1 activity with the OXPHOS system can be characterized by the corresponding AK index (I_AK1), calculated from V_AMP, V_PK and V_AP5A, the rates of O2 consumption measured after the additions of AMP, PK and AP5A, respectively. Calculate the index for AK2 functional coupling with OXPHOS as I_AK2 = 100% − I_AK1. Critical steps: The method is limited by poor mitochondrial respiration of the examined bio-material, i.e., by a respiration control index (RCI) below 2 (see also Protocol 5). Representative traces can be found in [77]. In addition, a protocol has been developed to simultaneously determine the coupling of CK and AK to OXPHOS (Protocol 7) [6,68,80]. Protocol 7. Determination of functional coupling of AK and CK to OXPHOS. Timing: 1.5 h Effective activation of respiration by endogenous ADP sources generated by the energy transport pathways is characteristic of cells with high diffusion restrictions for adenine nucleotides at the level of the outer mitochondrial membrane. With this protocol we can detect the functional coupling between the two most common phosphotransfer networks and OXPHOS in one sample. This protocol is especially useful when the amount of test material is limited. 1. Add cells/fiber into the oxygraphic chamber. 4. Add AMP (2 mM) to activate the coupled reaction of mitochondrial AK (AK2) with ANT. In these conditions the rise in respiration rate (V_AMP) is caused by coupling of AK to OXPHOS. 6. Add creatine (20 mM) to activate the coupled reaction between mitochondrial CK and ANT. In these conditions the creatine-stimulated respiration (V_Cr) is activated by local generation of ADP in the vicinity of ANT, and the associated rise in respiration rate indicates the strength of coupling of MtCK. 7. Add cytochrome c (Cyt c, 10 µM) as a quality control for the intactness of the outer mitochondrial membrane. 9. Add CAT (1 µM) to check the quality of the inner mitochondrial membrane. 10. To assess the strength of the functional coupling independently of the mitochondrial content of individual preparations, the activation of respiration by AMP can be normalized to the respiratory rate registered after the addition of AP5A, producing the relative index I_AK = (V_AMP − V_AP5A)/V_AP5A. The coupling of CK to OXPHOS is characterized by the relative index I_CK = V_Cr/V_ADP. A short computational sketch of these indices follows.
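As flagged at the end of Protocol 7, the index arithmetic of Protocols 5-7 can be collected in a few helpers. The I_AK1 formula below is an assumption inferred from the description above (the PK-sensitive part of the AP5A-sensitive respiration is attributed to AK1 and the remainder to AK2, so that I_AK1 + I_AK2 = 100%), since the original equation is not reproduced in this excerpt; all numerical values are illustrative.

```python
def ak_index(v_amp: float, v_ap5a: float) -> float:
    """I_AK = (V_AMP - V_AP5A) / V_AP5A, as in Protocols 5 and 7."""
    return (v_amp - v_ap5a) / v_ap5a

def ak1_ak2_split(v_amp: float, v_pk: float, v_ap5a: float) -> tuple[float, float]:
    """Assumed split of the AMP-stimulated respiration between cytosolic AK1 and
    mitochondrial AK2 (Protocol 6): the PK-sensitive part is attributed to AK1 and
    the remainder to AK2, so that I_AK1 + I_AK2 = 100 %."""
    total = v_amp - v_ap5a                       # AP5A-sensitive (AK-dependent) respiration
    i_ak1 = 100.0 * (v_amp - v_pk) / total
    return i_ak1, 100.0 - i_ak1

def ck_index(v_cr: float, v_adp: float) -> float:
    """I_CK = V_Cr / V_ADP, as in Protocol 7."""
    return v_cr / v_adp

# Illustrative rates (same arbitrary units throughout):
print(ak_index(v_amp=30.0, v_ap5a=12.0))                   # -> 1.5
print(ak1_ak2_split(v_amp=30.0, v_pk=18.0, v_ap5a=12.0))   # -> (66.7, 33.3)
print(ck_index(v_cr=38.0, v_adp=45.0))                     # -> ~0.84
```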
Critical steps: Creatine has low solubility at high concentrations (see also Protocol 1). You can open the oxygraph chamber and add creatine as a pre-weighed substance. Representative traces can be found in [6,80]. Coupling of Hexokinase to Oxidative Phosphorylation Hexokinases (HKs) catalyze the first and the essentially irreversible step of glycolysis, phosphorylating glucose to glucose 6-phosphate (G6P). Hexokinases 1 (HK1) and HK2 can bind to VDAC through its hydrophobic N-terminus [81,82]. It is believed that HK-VDAC interaction facilitates the access of kinase to newly generated ATP and overcomes the restriction that the OMM exerts on the permeability for the adenine nucleotides ( Figure 1) and avoids this interaction product inhibition by G6P. Another consequence of HK-VDAC interaction is that it promotes VDAC closure and blocks the mitochondrial Ca 2+ -dependent opening of the mitochondrial permeability transition pore, in association with protecting the cells from entering apoptosis by preventing binding of pro-apoptotic proteins to VDAC [83]. HK2 is a predominant isoform and it is upregulated in many types of tumors associated with enhanced aerobic glycolysis (the Warburg effect) [60,61,84]. Unlike HK1, the HK2 has retained a catalytic activity of the N-terminal domain and this specific feature enables a doubling of the production of G6P [85]. According to the theory proposed by Pedersen and co-workers the overexpression of the VDAC-bound HK2 is a major player in promoting the growth of aggressive cancers and this enzyme represents good target for cancer therapy [86]. Here we introduce the protocol (Protocol 8) where the coupling between OXPHOS and HK2 can be characterized. Timing 40 min The coupling of mitochondrion-bound hexokinases (HK) with the OXPHOS in permeabilized cells and tissues can be assayed by high-resolution respirometric test. With this we can measure the ability of HK to stimulate OXPHOS by locally-generated ADP in the vicinity of VDAC channel. 1. Add cells/fiber into the oxygraph chamber. 3. MgATP (0.1-2 mM) (V ATP ) is added to achieve maximal stimulation of mitochondria with endogenous ADP e.g., ADP produced by the ATPases. The effect of glucose (glucose index) can be calculated as follows: (V Gluc − V ATP )/(V ADP ). Critical steps: In tissues or cells with low Km(ADP) and/or low capacity to produce endogenous ADP e.g., in cancer tissue, 0.1 M ADP can be added instead of 2 mM ADP, to activate endogenous ADP production. See also Protocol 5. Representative traces can be found in [17,29,80]. Summary Deeper understanding of energy-transfer profiles gives important information about the variability of bioenergetic regulation in different tissues in health and disease. Measurements of KmADP in permeabilized cells and the high value of it is a good indicator of intracellular complexity in terms of energy transport. Regulated metabolite exchange across the OMM through the VDAC is a crucial modulator of energy metabolism in all cells. In rat cardiomyocytes, where only limited amounts of VDAC channels are permeable to ATP/ADP [87] and these cells possess the high Km(ADP) value, is direct transfer by ATP/ADP diffusion predominantly substituted by the CK pathway. Moreover, the loss of complexity (normal structure-function relationships), which is manifested as a decrease in KmADP value, could be the first indicator of pathological changes taking part in a tissue or cells, as is shown in case of colorectal carcinogenesis [29,88]. 
Therefore, if we want to study all the factors influencing energy metabolism, and follow alterations emerging during pathology or aging, intracellular diffusion restrictions for ATP/ADP and energy-transfer pathways should be investigated as well. It is important to mention that the decrease in the activation of the CK system in the presence of creatine is a very sensitive signal that can indicate to the onset of pathological changes or to first signs of energy metabolism alterations due to aging in oxidative muscle cells [55]. Consequently, these protocols presented here could be used for diagnostics purposes to assess the state of health of the working muscle or other tissues. In recent years, significant progress in cancer treatment has taken place, and especially when the malignant tumor has been discovered at an early stage. Therefore, sensitive protocols, enabling detection of the first alterations in the healthy tissue or benign tumor could give information for successful early diagnosis. Besides, if the pathogenic switch mechanism is detected, it is possible to use this knowledge for development of specialized treatments. Using permeabilized cells and tissue fibers, several pathways and functional interactions of mitochondrion with different complexes can be studied simultaneously. High-resolution respirometry protocols presented here provide quick and compendious results. These protocols allow characterization of functional mitochondria in their normal intracellular position and assembly, preserving essential interactions with other organelles. As only a small amount of tissue is required for analysis, the protocols can be used in diagnostic settings in clinical studies. It is not with less importance that the results could be acquired with short period of time; the permeabilization procedure and specific analysis can be completed in 2 h. In conclusion, systemic functional analysis of changes in cellular phosphotransfer networks may help to explain many pathogenic mechanisms in numerous diseases. Acknowledgments: The authors thank A. Koit for correcting the English. Conflicts of Interest: The authors declare no conflict of interest.
9,994.6
2018-09-26T00:00:00.000
[ "Biology" ]
A PRACTICAL WAY TO APPLY A TECHNIQUE THAT ACCELERATES TIME HISTORY ANALYSIS OF STRUCTURES UNDER DIGITISED EXCITATIONS. Time history analysis using direct time integration is a versatile and widely accepted tool for analysing the dynamic behaviour of structures. In 2008, a technique was proposed to accelerate the time history analysis of structural systems subjected to digitised excitations. Recently, this technique has been named the SEB THAAT* (Step-Enlargement-Based Time-History-Analysis-Acceleration-Technique), and the determination of appropriate values for its parameter has been identified as the main challenge. To overcome this challenge, a procedure is proposed in this paper. The basis of the procedure is the comments on accuracy control in structural dynamics and in the numerical analysis of ordinary differential equations that are codified in the New Zealand Seismic Code, NZS 1170.5:2004. As the main achievement, by using the proposed procedure, we can apply the SEB THAAT and carry out the time history analysis clearly and with less parameter setting compared to ordinary time history analysis. The proposed procedure is always applicable and, except when the behaviour is very complex, oscillatory and non-linear, the reductions in analysis run-time are considerable while the changes in accuracy are negligible. The performance can be sensitive to the problem, the integration method, the target response, and the severity of the non-linear behaviour. Compared to the previous tests on the SEB THAAT, the efficiency of applying the SEB THAAT using the proposed procedure is better, the sensitivity of the performance to the problem is lower, and a measure of accuracy is available. Compared to other techniques for accelerating structural dynamic analyses, the use of the SEB THAAT according to the proposed procedure has several positive points, including the simplicity of implementation. Introduction In many structural analyses, the behaviour is dynamic and non-linear. The semi-discretised models can be expressed as [1][2][3][4][5][6]:

M ü(t) + f_int(t) = f(t), 0 ≤ t ≤ t_end,
Initial conditions: u(t = 0) = u_0, u̇(t = 0) = u̇_0, f_int(t = 0) = f_int0,
Additional constraints: Q, (1)

where t is the time, t_end stands for the analysis interval, M is the mass matrix, f_int indicates the vector of internal force, f(t) indicates the vector of excitation (external force), u(t) denotes the vector of displacement, u̇(t) denotes the vector of velocity, ü(t) denotes the vector of acceleration, u_0 implies the initial displacement, u̇_0 implies the initial velocity, f_int0 implies the initial internal force (f_int0 is not needed in linear problems; it may be essential in the presence of material non-linearity), and Q represents the constraints that distinguish non-linear behaviour from linear behaviour, e.g. rigid barriers cannot be passed (see e.g. [7]).
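To make the notation of Equation (1) concrete, the sketch below integrates a single-degree-of-freedom instance of the semi-discretised model with the central-difference method. It is only an illustration of direct time integration under stated assumptions (linear internal force f_int = k·u, no damping, constant step, no constraints Q); it is not the procedure proposed in the paper.

```python
import numpy as np

def central_difference_sdof(m, k, f, dt, u0=0.0, v0=0.0):
    """Direct time integration of m*ü + f_int(u) = f(t) for a linear SDOF system
    (f_int = k*u) with the central-difference method and a constant step dt.
    `f` holds the digitised excitation sampled at dt; illustrative sketch only."""
    n = len(f)
    u = np.zeros(n)
    u[0] = u0
    a0 = (f[0] - k * u0) / m                  # acceleration at t = 0 from Eq. (1)
    prev = u0 - dt * v0 + 0.5 * dt**2 * a0    # fictitious displacement at t = -dt
    for i in range(n - 1):
        nxt = (dt**2 / m) * (f[i] - k * u[i]) + 2.0 * u[i] - prev
        prev = u[i]
        u[i + 1] = nxt
    return u

# Example: an undamped oscillator (m = 1 kg, k = 400 N/m) hit by a short rectangular
# pulse digitised at dt = 0.001 s, well below both the accuracy and stability bounds.
dt = 0.001
t = np.arange(0.0, 2.0, dt)
force = np.where(t < 0.05, 100.0, 0.0)
disp = central_difference_sdof(m=1.0, k=400.0, f=force, dt=dt)
print(f"peak displacement ≈ {disp.max():.3f} m")
```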
Returning to Equation (1) and the time integration computation, when f(t) is available in a digitised format, the generally accepted comment for the integration step is as follows [1,[33][34][35][36]:

∆t = min(T/χ, ∆t_cr, ∆t_CFL, f∆t), (2)

where ∆t is the integration step, T is the smallest oscillatory period with a worthwhile contribution to the response [36], ∆t_cr is the upper bound on the integration step because of the linear theory of numerical stability [2,23,33], ∆t_CFL is the upper bound on the integration step in wave propagation problems, associated with the spatial discretisation [37], f∆t is the step at which the excitation is digitised [1,33,35,38] (and disappears when the excitation is continuous), and χ is defined as follows [1,34,35]: of the order of 10 when the behaviour is linear, 100 when the behaviour is non-linear and there is no impact, and 1000 when the behaviour is non-linear and there are impacts. (A small computational illustration of this step-selection rule is given at the end of this passage.) In many analyses, f∆t is the governing term in Equation (2), leading to ∆t = f∆t. Focusing on this special case, which will become even more common in the future (in view of the improvements in recording instrumentation), there are several major methods that can accelerate the analyses by modifying f(t); see [30][31][32][39][40][41][42]. In view of the main features of one of these methods, i.e. the SEB THAAT (Step-Enlargement-Based Time History Analysis Acceleration Technique) [38,39,[43][44][45][46], listed below:
• significant reduction in the analysis run-time; see Table 1,
• sufficiently accurate response history when the parameters are set properly; see e.g. [45,46],
• simple formulation [1,38,39],
• good versatility; see Table 1,
• contribution of all the data of the original excitation in the new excitation [1,38,39],
• having a mathematical basis [39],
• having a formulation that depends on the excitation and not directly on the structural system [39],
• a considerable number of previous successful tests; see the review presented in Table 1,
• reducing the analysis run-time without increasing the use of in-core memory [1,38,39],
the SEB THAAT has good potential to accelerate analyses of systems subjected to digitised excitations. In a review of the other methods [30][31][32][40][41][42], three do not use the original excitation's total data in defining the new excitation [30][31][32], three take into account features of earthquakes and may be inappropriate for general structural dynamic problems [30,31,40], one produces new excitations digitised in unequal steps [40], the formulation and implementation of one are complicated [41], and, for the method proposed in [42], the implementation is more complicated than the SEB THAAT [38,39] and the method has been tested on only a small number of examples; see also Section 6.
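As flagged above, a small helper makes the step-selection rule of Equation (2) concrete. The χ value used for linear behaviour and the numbers in the example are illustrative assumptions, not values taken from the paper.

```python
def integration_step(T, dt_cr, dt_cfl, f_dt, behaviour="linear"):
    """Integration step per Equation (2): the smallest of the accuracy bound T/chi,
    the stability bound dt_cr, the wave-propagation (CFL) bound dt_cfl, and the
    excitation step f_dt. chi follows the values quoted above; 10 for linear
    behaviour is an assumption for illustration."""
    chi = {"linear": 10.0, "nonlinear": 100.0, "impact": 1000.0}[behaviour]
    candidates = [T / chi, dt_cr, f_dt]
    if dt_cfl is not None:               # only relevant in wave propagation problems
        candidates.append(dt_cfl)
    return min(candidates)

# Example: T = 0.2 s, dt_cr = 0.05 s, no CFL bound, record digitised at 0.005 s.
# The excitation step governs, i.e. dt = f_dt, the special case discussed above.
print(integration_step(T=0.2, dt_cr=0.05, dt_cfl=None, f_dt=0.005))
```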
The SEB THAAT is therefore a good candidate for further study to speed up the analysis of structural systems subjected to digitised excitations. The objective of this paper is to review the SEB THAAT and propose a procedure for its clear application; such a procedure does not exist at present; see [38]. In continuation, after reviewing the SEB THAAT, its major challenge (i.e. proper selection of the SEB THAAT's parameter and clear application of the SEB THAAT [38]) is discussed. Then, to overcome the challenge, a procedure is proposed and its performance is evaluated via several examples, including a realistic example. A number of mainly practical issues are discussed later, and finally, the paper concludes with an overview of the achievements and an outlook for the future. The focal idea, basics, and main formulation Convergence is the most basic requirement of approximate computations [49,50]. The analysis of Equation (1) by direct time integration is an approximate computation [1,2,23]. Therefore, convergence must be established for time integration calculations. Convergence of the responses produced by time integration implies that, for sufficiently small integration steps, by using smaller steps, the difference between the computed and exact responses should asymptotically vanish; see Figure 2 and [2,23,51]. In Figure 2, E is the error in an arbitrary norm (the above-mentioned difference) [52] and q is the rate of convergence, generally equal to the integration method's order of accuracy [23,39]. The analysis run-time is another important feature of direct time integration [1,2,23,53]. When the right-hand side of the first relation in Equation (1) is in digitised format, replacement of the excitation with an excitation digitised in larger steps can reduce the analysis run-time. The replacement is, however, an approximate computation, and the computed responses should continue converging to the exact responses. This idea, in addition to using all the data of the excitation in producing the new excitation, is formulated as the SEB THAAT, based on two mathematical facts, a broadly accepted convention and a realistic assumption [39]. The two facts are: (1.) Consider Equation (1), its analysis by an integration method of order q, and an approximation of f(t), i.e. f_new(t), converging to f(t) with order q′. If q′ ≥ q, the analysis of Equation (1) by the integration method, after replacing f(t) with f_new(t), leads to responses that converge to the responses of the original Equation (1) with order q [39,54]. (2.) For a continuous function of a continuously changing variable x, i.e. H(x), and a sufficiently small change ∆x of x, the change of H(x) can be expressed as a power of ∆x via the big Oh operator O [55]. The convention is the second order of accuracy of the majority of integration methods [2,23]; and the realistic assumption is that, despite being available in digitised format, the f(t) in Equation (1) is a smooth function [55]. The new excitation, which preserves the response convergence and uses all the data of the original excitation, is defined by Equations (6)-(11) for integer values of n [1,38,39] (for real values of n, see [56]), taking into account that t_end/f∆t is not necessarily a positive integer; one plausible realization of this step enlargement is sketched after this passage. Implementation of Equations (5)-(11) is reviewed in Figure 3. Application of the SEB THAAT to the analysis of Equation (1), using a specific value of n, implies: (1.) computation of f_new(t), (2.) replacing f(t) with f_new(t) in Equation (1), (3.) time integration of Equation (1), with the integration step obtained from Equation (5).
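Since Equations (5)-(11) are not reproduced in this excerpt, the sketch below only illustrates the general idea: a record digitised at f∆t is replaced by one digitised at n·f∆t whose samples are combinations of all the original samples (a plain block average is assumed here; the weights actually prescribed by Equations (6)-(11) may differ), so the enlarged-step analysis still uses every datum of the original excitation.

```python
import numpy as np

def enlarge_excitation(f, n):
    """Illustrative step enlargement: replace an excitation digitised at f_dt by one
    digitised at n*f_dt. Each new sample averages the n original samples it replaces
    (an assumed weighting, not necessarily that of Equations (6)-(11)), so every
    datum of the original record contributes to the new record."""
    f = np.asarray(f, dtype=float)
    n_full = len(f) // n                            # complete groups of n samples
    f_new = f[: n_full * n].reshape(n_full, n).mean(axis=1)
    if len(f) % n:                                  # t_end/f_dt need not be a multiple of n
        f_new = np.append(f_new, f[n_full * n:].mean())
    return f_new

# Example: a 1000-sample record at f_dt = 0.005 s becomes a 250-sample record at
# 0.02 s for n = 4; the time integration is then run with the enlarged step n*f_dt.
record = np.random.default_rng(0).standard_normal(1000)
print(len(enlarge_excitation(record, n=4)))         # -> 250
```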
In view of Equations (2) and (5)-(11) and Figure 3, for a clear and effective application of the SEB THAAT, the value of n must be set carefully. Assigning an excessively large value to n can lead to very inaccurate responses, while assigning too small a value to n will prevent the SEB THAAT from reducing the analysis run-time to the extent allowed by the problem, the analysis, and the SEB THAAT. In view of Equation (5), and because of the small run-time needed to implement Equations (6)-(11) (compared to that of direct time integration), the SEB THAAT can reduce the analysis run-time. However, the response of the analysis after applying the SEB THAAT should differ negligibly from the response computed ordinarily. With attention to Equation (2), a recommendation to limit the inaccuracy is to satisfy (see [1,38]):

The literature

Since its launch in 2008, research on the SEB THAAT has progressed in two main directions. In one direction, the performance of the SEB THAAT is under test (see Table 1), considering different structural systems, different non-linear behaviours, different damping mechanisms, different integration methods, different digitised excitations, etc., starting with single-degree-of-freedom systems [39] and recently focusing on large systems such as the Mi-.
(Figure 3: The process of replacing f(t) with fnew(t) using Equations (5)-(11).)
Even more, for some tests, Equation (13) holds for values of n larger than the nmax in Equation (14); e.g. see [66]. Furthermore, for two tests [57,61], the accuracies increase after applying the SEB THAAT, i.e. a simultaneous reduction in the analysis run-time and in the error of the target response. Besides, the results of the few tests carried out on systems with wave propagation behaviour are satisfactory [1,47,58,60]. Also, in one test, the decrease in the run-time is greater when the behaviour is non-linear [57]. The successful performance of the SEB THAAT when the excitation record is related to near- or far-field earthquakes has been briefly demonstrated [60], as well. Impressive application of the SEB THAAT to the analysis of multistorey steel buildings, whether regular or irregular in plan or height, is another notable achievement [44,61,66,68]. It has also been shown that the application of the SEB THAAT in analyses essential for fragility studies, though accompanied by verification computations, can considerably reduce the total run-time [47,48]. Finally, some tests on the application of the SEB THAAT to analyses other than the solution of Equation (1) have been successfully carried out; see [38,69-71].
The other main direction of the research seeks answers to various conceptual questions. First, it has been shown that the excitation can be multi-component [72]. The effect of non-linearity on the performance, and specifically on the accuracy of the responses after application of the SEB THAAT, was studied next. As a result, when the non-linearities are modelled properly and adequate values are assigned to the parameters of the non-linear solution, the SEB THAAT's performance can be conceptually similar in linear and non-linear analyses [46]. Meanwhile, the T in Equation (2) should be related to the target response [73]. Then, the practical preference for considering an upper bound for the enlargement scale, n, was noted (see [1,74]). The SEB THAAT was later compared with direct down-sampling; as a main observation, though down-sampling can lead to good accuracy and a reduction in the analysis run-time in some tests, the performance of the SEB THAAT is never weaker [64]. In the same study [64], it was demonstrated that, for the SEB THAAT to be successful, Equation (1) need not be the result of the finite element method [4,75]; the finite volume method [76] is also acceptable. It was then studied whether the inaccuracy due to the SEB THAAT can cause numerical instability [77]. As the outcome, when Equation (2) holds, the responses obtained from linear analyses are stable, regardless of whether the SEB THAAT is applied. Meanwhile, even when the integration method's order of accuracy differs from two, the SEB THAAT can be successful [78]. The effects of the SEB THAAT on the run-times of linear and non-linear time integration analyses have been compared as well [79]; in addition, it has been shown that, in contrast to non-linear analyses, in linear analyses the reductions can be determined in terms of the enlargement scale (see [1]). The next study [80] was on values of a and b_k different from those introduced in Equation (9) and subjected to the following convergence-based restriction (inherited from [39]):
As a result, though Equation (9) is not always the best selection, it is the best selection when considering different cases of the parameters in Equation (1) and different integration methods. Another question was how to extend Equations (6)-(11) to an arbitrary real value of n larger than one; an appropriate way is presented in [56]. Some initial studies have also been performed on the frequency content of the inaccuracies due to the SEB THAAT (see [1]). Recently, the performance of the SEB THAAT has been studied when the structural system is non-classically damped [43]. As a result, for the majority of time integration methods, the performance is independent of the type of viscous damping. In a very recent study, it has been demonstrated that, for steel buildings with 5-20 floors, the SEB THAAT can be reliably used with n = 2 [44]. Finally, the performance of the SEB THAAT compared to some other analysis acceleration techniques is discussed in [42].

A major challenge

As discussed in [38,44,60], a major challenge for the SEB THAAT is its clear and practical application. For a better explanation, with reference to Figure 4 and Equation (12), the reduction in the analysis run-time and the accuracy of the target response can be very sensitive to the problem under investigation. As a direct consequence, for a clear practical application of the SEB THAAT, assigning appropriate values to the SEB THAAT's parameter, i.e. n, is an important challenge.
In addition, time history analysis of structural systems, while irreplaceable in many cases [81], is generally time consuming [1,23,39,82], especially when the analysis is part of a probabilistic or optimisation computation, the structural system is very large, or the structural behaviour is highly non-linear or very oscillatory; see [82-85]. In applications of the SEB THAAT, the reductions in the analysis run-time can be significant (see Table 1), even compared to other analysis acceleration methods (see [1] and Section 6). Consequently, it is reasonable to use the SEB THAAT to reduce the large computational effort of many real time history analyses. Developing a practical way for the clear and simple application of the SEB THAAT is therefore a necessity, for which the SEB THAAT's enlargement scale, n, should be set carefully.

Moreover, taking into account Equations (5)-(11) and (13), n is the only parameter to be set for application of the SEB THAAT (in addition to the parameters of the ordinary time history analysis). Accordingly, the main challenge for a clear practical application of the SEB THAAT is determining the appropriate value of n. Some ambiguities are stated next. First, currently the only relation for determining n is the following inequality:
Secondly, given its definition, T in Equation (16) cannot be easily determined or estimated, especially prior to the analysis. Next, the definition of T, given immediately after Equation (2), and in particular the notion of "worthwhile" therein, is not clear. Then, the T/χ in Equation (16) is not a precise criterion for accuracy [1,86-90]. Finally, the T/χ in Equations (2) and (16) is the only term in these equations that relates the integration step to the structural system and its behaviour. Consequently, a direct determination of n is complicated and can be costly, contradicting the purpose of the SEB THAAT, i.e. reducing the analysis run-time. An alternative can be to approximate the appropriate value of n by some upper estimate and to correct the estimate in several steps. Meanwhile, for special groups of analyses, simple reliable values can be assigned to n; for a recent achievement, see [44].

Theoretical bases

The current approach to applying the SEB THAAT is to simply assign a value to n based on experience. In this section, the current approach is replaced with an algorithm that, starting from an upper estimate of n, after a number of repeated time integration computations, assigns a value to n appropriate for the accuracy of the target response. The new approach is consistent with the recommendations in: (1.) structural dynamics [33], (2.) the numerical solution of ordinary initial value problems [91], (3.) the New Zealand Seismic Code, NZS 1170.5:2004 [35,92].

Accordingly, it is reasonable to terminate the iterations of the new approach using the criterion in the New Zealand Seismic Code, NZS 1170.5:2004 [35,92], that is, after repeating a time integration computation with half steps, the absolute relative difference of the two peak target responses should not exceed 5 %. It is reasonable to start from n = 20; see Figure 4 and [43]. Therefore, what remains unclear is mainly the details of each successive analysis with respect to the previous analyses. Specifically, it should be determined how to change the n and the ∆t. For non-linear analyses, additional details, e.g.
the non-linear tolerance, need special attention as well. Using a right subscript to denote the sequence of time integration computations, and paying attention to the discussion above and Equation (5), leads to:
With careful attention to Equation (17), the New Zealand Seismic Code, NZS 1170.5:2004 [35,92], and the recommendations in structural dynamics and the numerical analysis of initial value problems [33,91], not only the n but also the f∆t can change from one time integration computation to the next. Even in ordinary analyses, when repeating a time integration computation with half the integration step, the f∆t is halved by linear interpolation of the excitation data [33,35,92,93]. Consequently, Equation (17) should rather be replaced with:
It is also worth noting that the successive computations starting from Equation (17) cannot consider integration steps smaller than f∆t (see the inequality in Equation (5)), whereas there is no such limitation when starting from Equation (18). In view of the details of the SEB THAAT, the fnew(t) corresponding to j = 1 can be obtained by first replacing the digitised f(t) with a record, g(t), digitised in steps equal to f∆t1, using linear interpolation, and then using Equations (6)-(11) and n = n1 to replace g(t) with fnew(t). This approach can be used for all the successive computations, i.e. for an arbitrary value of j (for the first computation, the linear interpolation is trivial). Accordingly, by extending Equation (18) to:
it remains to clarify how nj and f∆tj should change with j.

Comparing with the successive time integration computations in ordinary time history analysis, where ∆tj = f∆tj (see [33,35,92]), f∆tj can be considered as representing the inaccuracy due to the integration method's approximation [1,2,23]. Similarly, with regard to Equation (5), nj represents the inaccuracy due to the SEB THAAT. Therefore, with attention to the theoretical bases of the SEB THAAT [38,39], it is reasonable to preserve the consistency between the changes of the two sources of inaccuracy. Given this, the New Zealand Seismic Code, NZS 1170.5:2004 [35,92], and the fact that, at n = 1 and f∆t → 0, the inaccuracies due to the SEB THAAT and the approximations of the integration method disappear, it is reasonable to preserve the consistency by:
Specifically, the reason for the "1/2" in Equation (20) is the tradition of halving the integration steps to check the accuracy of the target response in engineering and science; see [33,35,91,93]. Together with Equations (18) and (19), Equation (20) leads to the following integration steps:
and to the fact that (see also Table 2):
From Equation (22) and Table 2, the following points can be concluded:
(1.) The ending criterion of the time history analysis in the New Zealand Seismic Code, NZS 1170.5:2004 [35,92], cannot be used when setting the integration step according to Equation (21). The reason is that, different from what is implied in Equation (21) (see the third row in Table 2), in the procedure of the New Zealand Seismic Code, NZS 1170.5:2004 [35,92], the integration step halves with each new time integration. Using Pj to denote the peak target response in the j-th computation, the correct ending criterion is (see Appendix A for a proof attempt):
Note that using nj−1 = 1 in Equation (23) simplifies Equation (23) to the ending criterion in the New Zealand Seismic Code, NZS 1170.5:2004 [35,92].
(2.) A procedure that uses Equation (21) for computing ∆tj can be continued endlessly, until the ending criterion is satisfied, i.e. there is no lower bound on ∆tj. The reason is implied in the relation leading to ∆tj; see Equation (21) and Table 2.
(3.) As it should, the integration step ∆tj converges to zero (see the last row of Table 2).
A procedure to simply and clearly apply the SEB THAAT is presented next.

The procedure

By using the following procedure, assigning values to n, the main parameter of the SEB THAAT, can be automated, eliminating concerns about the determination of n (see Figure 5):
(a) Select the target response and the integration method (preferably unconditionally stable; see [1,2,23,94]), and, for non-linear problems, set the non-linear solution details.
(d) Compute nj, using Equation (24).
(e) Use linear interpolation to change the f(t) on the right-hand side of Equation (1) to g(t), digitised in steps equal to f∆tj, defined by Equation (25) (when j = 1, g(t) = f(t)).
(f) Use Equations (6)-(11) to change g(t) to fnew(t), digitised in steps equal to ∆tj, given by Equation (26).
(g) Time integrate Equation (1), considering fnew(t) as f(t) and ∆tj as the integration step; for non-linear problems, use the non-linear tolerances, δj, stated in Table 3 (see the comments in [95] and the difference between the integration steps in the last row of Table 2), and do not stop the computation when the iteration of the non-linear solution fails (see [96]).
(h) Compute the peak target response, as Pj.
(?) If only one time integration computation has been carried out, return to Step (c).
(?) If the last two peak target responses computed in Step (h) do not satisfy Equation (23), return to Step (c).
(i) Accept the last time integration computation and response as final.
Obviously, for application of the above procedure, no parameter regarding the SEB THAAT needs to be set in advance, a measure of inaccuracy is computed (see Appendix A), and there is no limitation on the application. These are remarkable achievements in terms of simplicity, availability of a measure of inaccuracy, and versatility. Nevertheless, it is also essential to study the accuracy and computational effort of applying the SEB THAAT according to the proposed procedure and to compare the results with those summarised in Table 1.

Complementary points

The procedure proposed in the previous section, besides eliminating the need to select a value for n, has removed the T, χ, ∆tcr, and ∆tCFL from the analysis. This is an additional significant achievement, simplifying the analysis even more. The removal of these four parameters can be explained by considering their role in time history analysis. In short, these parameters are only necessary to maintain the accuracy (including numerical stability) of the target response [1,2,23,33,34]. Meanwhile, the accuracy is checked in the last decision-making step of the proposed procedure (just before Step (i)). Therefore, it is reasonable to consider the accuracy control according to Equation (2) redundant and discard it.
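To make the flow of the procedure of the previous subsection concrete, the following is a minimal Python sketch for a linear SDOF system. It is a sketch only, under several assumptions made here: the update rules n_j = (n_{j-1} + 1)/2 and f∆t_j = f∆t/2^(j-1) are adopted so as to be consistent with the values n_1 = 20 and n_2 = 10.5 reported for the first example below, the integration step is taken as ∆t_j = n_j · f∆t_j, the coarsening of the excitation is the same illustrative stand-in as in the earlier sketch rather than Equations (6)-(11), and the plain 5 % peak check stands in for the exact ending criterion of Equation (23).

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of m*u'' + c*u' + k*u = f(t)."""
    u, v, a = np.zeros(len(f)), np.zeros(len(f)), np.zeros(len(f))
    a[0] = (f[0] - c * v[0] - k * u[0]) / m
    keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    for i in range(len(f) - 1):
        dp = (f[i + 1] - f[i]
              + m * (v[i] / (beta * dt) + a[i] / (2 * beta))
              + c * (gamma / beta * v[i] + dt * (gamma / (2 * beta) - 1) * a[i]))
        du = dp / keff
        u[i + 1] = u[i] + du
        v[i + 1] = v[i] + gamma / (beta * dt) * du - gamma / beta * v[i] \
                   + dt * (1 - gamma / (2 * beta)) * a[i]
        a[i + 1] = a[i] + du / (beta * dt ** 2) - v[i] / (beta * dt) - a[i] / (2 * beta)
    return u

def proposed_procedure(m, c, k, f, f_dt, n_start=20.0, tol=0.05):
    """Sketch of the parameter-free, iterative application of the SEB THAAT."""
    peaks, n_j, j = [], n_start, 0
    while True:
        j += 1
        f_dt_j = f_dt / 2 ** (j - 1)                       # assumed halving of the digitisation step
        t = np.arange(len(f)) * f_dt
        g = np.interp(np.arange(0.0, t[-1] + 1e-12, f_dt_j), t, f)   # Step (e): linear interpolation
        n_int = max(int(round(n_j)), 1)
        pad = (-len(g)) % n_int                            # Step (f): illustrative coarsening,
        g = np.concatenate([g, np.zeros(pad)])             # standing in for Equations (6)-(11)
        f_new = g.reshape(-1, n_int).mean(axis=1)
        dt_j = n_int * f_dt_j                              # assumed Delta t_j = n_j * f_dt_j
        u = newmark_sdof(m, c, k, f_new, dt_j)             # Step (g): time integration
        peaks.append(np.max(np.abs(u)))                    # Step (h): peak target response
        if j >= 2 and abs(peaks[-1] - peaks[-2]) <= tol * abs(peaks[-1]):
            return u, dt_j                                 # simplified stand-in for Equation (23)
        n_j = (n_j + 1.0) / 2.0                            # assumed update, consistent with 20 -> 10.5

# Example use: a 1-DOF system and a synthetic force record digitised at 0.01 s.
f_dt = 0.01
force = 1.0e3 * np.sin(2 * np.pi * 0.5 * np.arange(0.0, 20.0, f_dt))
u_final, dt_final = proposed_procedure(m=1.0e4, c=2.0e3, k=4.0e5, f=force, f_dt=f_dt)
```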
A main difference between the previous applications of the SEB THAAT and applying the SEB THAAT according to the proposed procedure is that, while in the previous applications there was only one time integration computation, several time integration computations are essential when using the proposed procedure. In addition, a measure of the accuracy is available in terms of the peak target response when using the proposed procedure. Accordingly, when comparing the proposed procedure with the ordinary analysis, it is reasonable to carry out the ordinary analysis sequentially as well, each time with half steps, and to end the analysis iterations with the 5 % criterion of NZS 1170.5:2004. (This is in agreement with the existing recommendations; see [33,91,93].) Considering this and Figure 5, we can expect the proposed procedure to be neither complicated nor computationally expensive.

Finally, it is worth noting that adding "preferably unconditionally stable" to Step (a) of the proposed procedure implies that using an unconditionally stable method for the analysis is preferable, but not obligatory [2,23,94]. (Only unconditionally unstable methods should not be used.) Several explanations can be given for this statement in Step (a) of the procedure. Firstly, the procedure involves repeated time integration computations, and hence, even when the response is inaccurate in one time integration computation, it may be sufficiently accurate in the subsequent computations. The run-time needed for the inaccurate computation is negligible compared to the total analysis run-time as well; see the last row of Table 2 and [1]. The second explanation is that time history analysis and time integration are mostly used for non-linear analyses [1,2,23], for which satisfying the requirements of linear stability may be insufficient; see [1,86-90,97]. And as the final explanation, using Equations (24)-(26) in the same way for all problems, regardless of the integration method, makes the analysis simpler, more attractive, and perhaps even more efficient for large problems.

Preliminary notes

Ground motions are generally available in a digitised format [33]. For this reason, and because of the role of the New Zealand Seismic Code, NZS 1170.5:2004 [35,92], in the presented discussions, the structural systems in this section are considered to be subjected to ground acceleration. Accordingly, in Equation (1), the excitation takes the form f(t) = −MΓüg(t), where üg(t) is the ground acceleration and Γ is a vector with the size of the number of degrees of freedom, needed for the matrix multiplication and for considering spatial variations of üg [33].
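A minimal sketch of assembling this right-hand side follows; the mass values and the ground-acceleration record below are placeholders, not data from any of the examples in this paper.

```python
import numpy as np

# Lumped masses of a hypothetical 3-DOF shear model (placeholder values, in kg).
M = np.diag([2.0e4, 2.0e4, 1.5e4])

# Influence vector: all entries equal to one when every degree of freedom
# moves with the ground in the direction of the excitation.
Gamma = np.ones(3)

# Digitised ground acceleration (placeholder record, in m/s^2, step 0.01 s).
ug_ddot = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * np.arange(0.0, 10.0, 0.01))

# Effective force history f(t) = -M @ Gamma * ug_ddot(t),
# one column per time station (shape: 3 x number of time stations).
f = -np.outer(M @ Gamma, ug_ddot)
```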
In addition, similarly to the majority of the past studies on the SEB THAAT (see Table 1), the examples here have a structural dynamic nature, where the members of Γ are all equal to one. Furthermore, as explained in Subsection 3.3, for comparing the ordinary analysis with the analysis according to the proposed procedure, the former consists of sequential time integration computations. The procedure is similar to the proposed procedure, with roots in conventional analyses (see [33,35,92]); only slight changes (including the replacement of Table 3 with Table 4 and the use of nj = 1) are implemented. For a better explanation: for ordinary analyses, first, a time integration computation is carried out with f∆t as the integration step and the non-linear tolerance recommended in [95]; see Table 4. The computation is then repeated with updated parameters, including half integration steps, new non-linear tolerances (see Table 4), and new excitations obtained using linear interpolation. If the absolute relative difference of the two peak target responses is not more than 5 %, the last response is final. Otherwise, the computation is repeated until convergence of the peak target response is reached. Meanwhile, as demonstrated in [96], similarly to the analyses using the proposed procedure for application of the SEB THAAT, the time integrations do not stop when the non-linear solutions fail. A question that may arise here is why, in the first time integration computation of the ordinary time history analysis, the selection of the integration step is not according to Equation (2) or its slightly modified version in the New Zealand Seismic Code [35,92]. A main reason is that, given the objective of the paper and the ending criterion of the proposed procedure, it is sufficient to show that the proposed procedure is clear in application and can notably reduce the analysis run-time (compared to the ordinary analyses) for many cases. Considering this, for the sake of simplicity and consistency with the analyses according to the proposed procedure, it is reasonable to consider one of the terms on the right-hand side of Equation (2) and use f∆t as the integration step of the first time integration computation of the ordinary time history analysis. For the majority of cases, this approach (using an integration step in the first analysis larger than the result of Equation (2)) will reduce the run-time of the ordinary time history analysis. The reduction in analysis run-time due to the use of the proposed procedure will then be a lower bound of the true reduction. When needed, Equation (2) can be considered for the determination of the integration step in the first time integration computation of the ordinary time history analysis.

The accuracy of the responses after applying the SEB THAAT according to the proposed procedure is determined by comparing them with the responses obtained from the ordinary analysis. The run-times essential for the analyses are compared in view of the total numbers of integration steps. Accordingly, in this section, fractional time stepping, with the maximum number of non-linear iterations equal to five (as is conventional) [98,99], is used for the non-linear solution. Other choices for measuring the analysis run-time and for the non-linear solution are used in the study of a realistic example in Section 5. All values are given in the International System of Units (SI).
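As a small illustration of the interpolation step used when an ordinary analysis is repeated with half steps, the following sketch re-digitises a record at half its original step by standard linear interpolation; the record shown is a placeholder.

```python
import numpy as np

def halve_excitation_step(f, dt):
    """Return the record re-digitised at dt/2 by linear interpolation."""
    t_old = np.arange(len(f)) * dt
    t_new = np.arange(2 * len(f) - 1) * (dt / 2.0)
    return np.interp(t_new, t_old, f), dt / 2.0

# Example: a 5-sample placeholder record at 0.01 s becomes 9 samples at 0.005 s.
ug = np.array([0.0, 0.12, -0.08, 0.30, -0.22])
ug_half, dt_half = halve_excitation_step(ug, 0.01)
```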
Example one: A simple non-linear problem

Figure 6 shows a preliminary model of a tall building's structural system, subjected to a ground acceleration. g stands for the acceleration of gravity, equal to 9.81 m s−2, and Table 5 reviews the model's main properties. Specifically, the stiffness is linear-elastic/perfectly-plastic with unloading (uyi is the yield displacement of the i-th spring), and hence the behaviour may be non-linear.

In Step (a) of the proposed procedure, the base shear is set as the target response, and the C-H method [100] (ρ∞ = 0.7) is set for the time integration. Steps (b)-(d) lead to n1 = 20. As the result of Steps (e) and (f), Figure 7a shows the excitation in the first time integration computation ("new" as a subscript implies that the argument is related to the application of the SEB THAAT using the proposed procedure). In Step (g), the non-linear tolerance is set to 10−2, which, when used together with the integration step set in Step (f), results in the time history reported in Figure 8a for the target response, the peak of which is stated at the top of the figure. In view of the number of analyses carried out, the computation proceeds from Step (h) to Step (c). Steps (c) and (d) lead to n2 = 10.5, using which Steps (e) and (f) result in Figure 7b as the excitation record. In Steps (g) and (h), using the tolerance in Table 3 corresponding to j = 2 and the integration step equal to the step of the excitation in Figure 7b leads to a time integration computation with the results shown in Figure 8b. According to the number of time integration computations performed, it is then checked whether the peak target responses reported in Figures 8a and 8b satisfy Equation (23). The answer is positive, and Step (i) introduces Figure 8b as the final response; the procedure stops in Step (j).

In the ordinary analysis, a time integration computation is first performed with steps equal to f∆t (= 0.01 s) and a non-linear tolerance equal to 10−2 (see Table 4). The resulting time history of the target response and the peak value are shown in Figure 9a. The second computation is carried out with half steps and a non-linear tolerance equal to 10−4 (see Table 4). The consequence is shown in Figure 9b. Given the peak values noted at the top of Figures 9a and 9b, the difference in the peak target responses is much less than 5 % and hence the response displayed in Figure 9b is final. The run-times of the two analysis approaches are compared in Table 6 (where the red oval shapes refer to the run-time details of the final computations and the red numbers are used to compare the run-times), and the good accuracy is evident when comparing Figures 8b and 9b. The result is an 87.26 % reduction in the analysis run-time, at the cost of a visually unrecognisable change in the accuracy of the target response. Therefore, by using the proposed procedure, the SEB THAAT may be easily applicable (without worrying about the value of n) and may significantly speed up non-linear time history analyses. Finally, it should be noted that the behaviour of the structural system is indeed non-linear, albeit slightly, as shown in Figure 10.
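For readers who wish to trace the numbers of this example, the short computation below reproduces the reported enlargement scales under the same two assumptions stated for the earlier sketch: the integration step follows ∆t_j = n_j · f∆t_j with f∆t_j = f∆t/2^(j-1), and the update from n_1 = 20 to n_2 = 10.5 corresponds to n_j = (n_{j-1} + 1)/2. The resulting step values are therefore inferred, not quoted from the paper.

```python
f_dt = 0.01            # digitisation step of the excitation (s), as stated above
n = [20.0]             # n_1 = 20 from Steps (b)-(d)
n.append((n[-1] + 1.0) / 2.0)          # n_2 = 10.5, matching the value reported above

for j, n_j in enumerate(n, start=1):
    f_dt_j = f_dt / 2 ** (j - 1)       # assumed halving of the digitisation step
    dt_j = n_j * f_dt_j                # assumed integration step of the j-th computation
    print(f"j={j}: n_j={n_j}, f_dt_j={f_dt_j:.4g} s, dt_j={dt_j:.4g} s")
# Prints dt_1 = 0.2 s and dt_2 = 0.0525 s under these assumptions.
```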
Example two: An interesting linear problem

The model in Figure 11a is subjected to the ground acceleration in Figure 11b. The target response, S, is the sum of the kinetic energy EK and the potential energy EP (equal to the input energy of the earthquake minus the energy damped in the structure), i.e.,
Given Equations (1) and (28), by denoting the displacements of the three masses by u1, u2, and u3, and the corresponding velocities by u̇1, u̇2, and u̇3, the target response S can be expressed as [93]:
Starting the study with the proposed procedure, in Step (a), S is the target response and the integration
The first time integration computation of the ordinary time history analysis is carried out with steps equal to that of the excitation record, i.e. f∆t = 0.005 s. The obtained target response is shown in Figure 14a. The time integration is then repeated with half steps, resulting in Figure 14b. The peaks of the two responses are within a 0.02 % relative difference. Accordingly, Figure 14b represents the final target response. The accuracy is determined by comparing Figures 14b and 13b, and the analysis run-times are compared by the numbers of steps, which is a clear measure because of the linear behaviour. As a result, by using the SEB THAAT according to the proposed procedure, assigning a value to n is auto-
([33], considering 2 % damping for the 1st and 3rd natural modes.)
Table 7. Main properties of the structural system in the third example [96].

4.4. Examples three and four: Tests on analyses non-linear due to elastic impact

A brief overview

Pounding and collision are among the major causes of destruction during earthquakes [102-105]. Besides, time history analysis is a powerful tool for seismic analysis (see [1,33,35]), and, in the first and second examples, the damping was non-zero and non-classical. Considering these points, two structural models involving elastic impact, one damped classically and one undamped, are studied in the next two subsections.

A classically damped system involved in elastic impact

Consider the structural system introduced in Figure 15 and Table 7. The average acceleration method [106] is selected for the time integration, and the velocity of the third floor is set as the target response. Given Figure 16, the elastic impacts actually occur and cause the behaviour to be non-linear. For both the ordinary and proposed analyses, the details of the non-linear solution are set similarly to those in the first example. After the sequential time integration computations, the results shown in Table 8 and Figure 17 are obtained.
Table 8. Summary of the analysis run-time study for the system introduced in Figure 15 and Table 7.
Figure 17. Final responses obtained for the system introduced in Figure 15 and Table 7, using the average acceleration time integration method.
Figure 18. The ground acceleration in the repetition of the study of the system introduced in Figure 15a and Table 7.
Accordingly, the reduction in the analysis run-time is 81.45 %, and despite the slight difference between the two graphs in Figure 17, the accuracy is acceptable, considering that: (1.) the main seismic features of the response [33] are not changed, (2.) both the ordinary and the proposed analyses do not control the accuracy over the entire history, (3.) neither of the two graphs in Figure 17 is exact, (4.) results of non-linear dynamic analyses are rarely exact [1,89,97,107].
The study is repeated, considering the excitation in Figure 18 (instead of that in Figure 15b). Also, instead of one target response, three target responses, namely the third floor's displacement, velocity, and acceleration, are taken into account separately. The results of the study are summarised in Table 9 and Figure 19, displaying the very good performance of the SEB THAAT when applied according to the proposed procedure. In more detail, despite the difference between Figures 19a and 19b, Table 9 and Figure 19 also display that the performance of the proposed procedure is not necessarily sensitive to the target response. On the contrary, comparing Table 8 with Table 9 and Figure 17 with Figure 19 implies that the performance of the SEB THAAT's application using the proposed procedure can be sensitive to the excitation. Finally, Figure 20 confirms the non-linearity of the dynamic behaviour.

An undamped system involved with elastic impact and material non-linearity

The structural system in this section is the bridge structure introduced in Figure 21 (see [96,108]), where the vertical movements are neglected, the decks are rigid, the piers are massless, the damping is zero, the impacts are elastic, and the following three alternatives are considered for the s in Figure 21a:
s = 0.4657588964, 0.7452142342, 1.117821351, (30)
and the rest of the parameters are set as follows:
Equation (30) introduces three cases, where the structural system has 25 %, 100 %, and 200 % additional excitation, compared to when the system is at the threshold of linear/non-linear behaviour, for which s = 0.3726071171 (note that the non-linearity is of a piece-wise linear type [1,107,108]). These percentages can also be referred to as the severity of the non-linear behaviour, SN; see [96]. Accordingly, by studying the performance of the proposed procedure for different values of s, we can arrive at an idea about the effect of the severity of the non-linear behaviour on the performance. The displacement of Point A in Figure 21 (the central pier's mid-point), uA, and the potential energy of the system, EP, i.e.:
are considered as target responses in two separate studies (the new variable fi stands for the shear force of the i-th pier from the left). In addition to the fact that, because of Equation (30), the behaviour is non-linear, given the sizes of di and uyi in Equation (31), the elastic impact is always involved in the non-linear behaviour. By removing the material non-linearity, the target responses change as shown in Figure 22, in orange. Evidently, the contribution of material non-linearity is negligible when SN = 25 % and is significant when SN = 100 % and SN = 200 %. Meanwhile, by comparing the blue graphs in Figures 22a, 22b, and 22c, we can get an idea of the extent to which the non-linearity can affect the behaviour.
Table 10. The eighteen cases under consideration in the study of the system introduced in Figure 21 and Equation (31).
In addition to changes in the physical parameters, i.e. the target response and the severity of the non-linear
behaviour SN, changes in the integration method (as a computational parameter) are also taken into account. The analyses are carried out three times, using the average acceleration [106], the central difference [109], and the C-H [100] (ρ∞ = 0.85) time integration methods. Consequently, eighteen cases are included in the study; see Table 10, where, in the first and fifth columns, the three digits to the right of each C denote the target response, the integration method, and the severity of the non-linear behaviour, respectively. The final results are reported in Figures 23-25, where the first number in the top boxes gives the percentage of reduction in the analysis run-time and the two numbers in the parentheses are the numbers of repetitions in the ordinary and proposed analyses, respectively. The main observations are as follows:
(1.) The final target responses obtained from the proposed analysis approach and the ordinary analysis do not always match over the entire analysis interval.
(2.) In rare cases, the application of the SEB THAAT using the proposed approach seems to slightly increase the analysis run-time; see Figures 25a and 25e.
(3.) Both the accuracy of the target response and the reduction in the analysis run-time due to applying the SEB THAAT according to the proposed procedure may be sensitive to the target response, the integration method, and the severity of the non-linear behaviour. In particular, the reductions in the run-time are generally greater when examining the displacement of Point A. Meanwhile, the sensitivities are greater when the SN is greater.
The first observation can be explained by the seismic requirements that influence the analysis procedures (see also [33,35,92]). Based on these needs, the ending criterion of both the ordinary and proposed analyses is the convergence of the peak target response. Therefore, it is not reasonable to expect the two responses to necessarily match, or even come close to each other, over the entire time frame of the analyses. Accordingly, the accuracy shown in Figures 23-25 is acceptable. Further explanation is presented in Section 6.
Regarding the second observation: firstly, the number of cases displaying a longer analysis run-time when using the proposed procedure is low, and the observed amount of the increase is small. Then, the observation is in agreement with the literature [1,46,79], according to which the details of the non-linear solution should be set carefully. For instance, for this specific problem, by changing the maximum number of non-linear iterations from five to three, the reduction of the analysis run-time in Figure 25a changes from −9.07 % to approximately +14.25 %. Thirdly, as stated in Section 4.1, the first time integration computation of ordinary analyses is generally carried out with a step obtained from Equation (2) or a slightly different version of Equation (2) (e.g. see [35,92]). For the previous examples in this paper, the result of Equation (2) is not very different from f∆t. In this example, however, the result of Equation (2) is about 30 times smaller than f∆t. Using the correct step for the first time integration computation of the ordinary analysis changes Figure 25 to Figure 26, where the reduction in the analysis run-time is considerable and about that of the previous examples, and the accuracies of the blue graphs are about those of the black graphs, as well as of the blue graphs in Figure 25. Finally, and in completion of the previous explanation, the behaviour of the system corresponding to Figure 25 is very complicated; Figure 27 clearly displays that, when SN = 200 %, the response is not only highly oscillatory [110,111], but also mathematically stiff [112,113].

And to explain the third observation: the time history analysis, whether carried out ordinarily or according to the proposed procedure, is sensitive to the target response (because of the ending criterion of the analysis), sensitive to the integration method (because of the direct effect of the integration method on the response), and sensitive to the SN (because changes in the SN change the excitation). Therefore, it is reasonable to expect that the difference of the final responses obtained from the two analyses also depends on these three parameters, and that the performance of using the SEB THAAT according to the proposed procedure is sensitive to these parameters. It is also worth noting that, because convergence is preserved in the different stages of the discussion and the ending criterion in the proposed procedure is closely related to convergence (see Appendix A), the sensitivity to the integration method is much lower than the sensitivities to the SN and the target response. In more detail, the sensitivity to the integration method is negligible, unless the SN is very high and proper convergence is delayed to smaller steps; see Figures 23-25 and [1,89,97]. The better performance when the target response is the displacement of Point A can be explained by the fact that both the SEB THAAT and the proposed procedure are based on the convergence of the responses; see also [1,39]. Non-linear behaviour potentially conflicts with the convergence of the responses produced by the time integration [97]. Besides, because of the nature of the problem, specifically the similarity of the column characteristics and the structure's geometry, the most crucial source of non-linearity is the collision of the first and last decks with the adjacent supports. Given this, the location of Point A in the structural system, and the fact that the potential energy is affected by the response at different locations in the system, the effect of the non-linearity on
the displacement of Point A is less than the effect on the total potential energy. Consequently, the performance is reasonably better for the displacement of Point A. The lower sensitivity at lower SN values can be explained in a similar way.

A realistic example

Most real structural systems have a few thousand degrees of freedom. Therefore, to show that the proposed procedure can be successful in practice, a three-dimensional steel structure subjected to a two-component earthquake is studied in this section; see Figures 28 and 29 and Table 11. When modelling the structural system, no specific assumption, e.g. the shear-building assumption, is taken into account. One node is added at the mid-point of each beam and column, with the exception of the beams already halved by bracings, as well as the beams at the highest level. Each of the four beams at the highest level is divided into six beam elements. The lengths of the beam elements are hence equal to two metres throughout the model. This results in a model with 4 968 degrees of freedom. The pattern of the bracings is continued to the ground. The lumped masses are placed at the beam-column connections, equal to 5 000, 10 000, 20 000, and 43 200 kg at the corners of each level of the structure (not at the top level), at the periphery of each level (not at the corners and not at the top level), at the connections not at the periphery, and at the connections at the top of the structure (see Figure 28a), respectively. Damping is assumed to be of Rayleigh type [33], equal to five percent in the first and third natural modes of the linear structure. Four target responses are taken into account simultaneously; given Figure 28a, these are the acceleration and displacement of Point A in the x and z directions, respectively, the displacement of Point B in the x direction, and the total base shear. The latter is obtained from:
where VBS represents the total horizontal force transmitted to the foundation (disregarding the damping forces), and Rx and Ry stand for the shear forces at a typical column of the lowest floor in the x and y directions, respectively. Figure 28b, together with the difference between Figures 30a and 30b, confirms that the structural behaviour is non-linear. Simple two-node beam-column elements and two-node truss elements are used for the finite element modelling of the beams (and columns) and bracings, respectively [4,75,94]. The average acceleration method [106] is used for the time integration. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [114,115] is used for the non-linear solution. Finally, the OPEN System for Earthquake Engineering Simulation (OPENSEES) is chosen as the structural analysis software [116].

The results of analysing the system ordinarily and when applying the SEB THAAT using the proposed procedure are reported in Table 12 and Figure 31. Accordingly, the performance of the proposed procedure may be acceptable for realistic structural systems. The increase in computational efficiency is considerable as well. Finally, the presented example differs from the previous examples in the size of the structure, the consideration of a two-component earthquake as the excitation, the non-linear solution method, and the consideration of multiple target responses simultaneously.
The achievements and their significance

In line with the objective of the paper, the main achievement is that the SEB THAAT can now be applied without assigning a value to the enlargement scale n. This is important from a practical point of view, particularly because of the significant variation in the reduction in the analysis run-time reported in Table 1. Meanwhile, we can compare the previous applications of the SEB THAAT, summarised in Table 1, with the 25 observations reported in Sections 4 and 5 (see Table 13). As a result, when using the proposed procedure, the overall reductions in run-time are higher and the sensitivity to the problem is lower. In addition, compared to the previous tests, which were partly on linear and partly on non-linear analyses, the tests reported in this paper (with the exception of the second example) are on non-linear analyses, some with complicated behaviour. In conclusion, the use of the proposed procedure for application of the SEB THAAT can be considered adequate. For a detailed comparison, the first example, which showed an 87 % reduction in the analysis run-time at the cost of a visually unrecognisable change in accuracy (see Table 13), is re-examined using the SEB THAAT without the new procedure. For this example, the application of the SEB THAAT considering different values of the enlargement scale n, the integration method and the target response according to Section 4.2, and a conventional value for the non-linear tolerance, i.e. 1.E−4 (see [117]), leads to Figure 32. It can be clearly seen that, without using the proposed procedure, the 87 % reduction in the analysis run-time observed in Section 4.2 is achieved at the cost of a 33 % change in the accuracy of the target response, when using n = 36. In comparison, the use of the SEB THAAT according to the proposed procedure achieved only a 1.21 % change in accuracy, as can be seen from Figures 8b and 9b, and its performance is therefore superior to the previous applications of the SEB THAAT. Accordingly, using the proposed procedure for application of the SEB THAAT in the first example has considerably increased the computational efficiency of the SEB THAAT. Consequently, the proposed procedure, besides eliminating the ambiguity of assigning correct values to the enlargement scale n, may enhance the computational efficiency of using the SEB THAAT. This implies an improvement in the simplicity and efficiency of the SEB THAAT.
Table 12. Summary of the analysis run-time study for the system in the realistic example (ordinary analysis: 2 repetitions, run-time 12 h 56′ 11″; using the SEB THAAT according to the proposed procedure: 2 repetitions, run-time 56′ 23″; reduction in the run-time: 93 %).
Figure 31. Accuracy of the proposed approach in the realistic example.
Table 13. Brief review of the reduction in analysis run-times in Sections 4 and 5 (e.g. an eighteen-storey building: 87 % reduction, target response visually matching over the entire analysis interval, see Section 4; the realistic example: response visually matching over almost the entire analysis interval, see Section 5). * For the sake of brevity, only six results are presented in Figure 26; however, the presented reduction is based on 18 results.
Figure 32. Changes in the analysis run-time and the accuracy of the peak target response in terms of n, when applying the SEB THAAT to the analysis of the first example without using the proposed procedure.
In addition, unlike the previous applications of the SEB THAAT, the parameters T and χ do not play a role in the application of the SEB THAAT using the new procedure. In other words, whereas in the previous applications the T and χ were considered approximately and ambiguously but directly, when using the proposed procedure the roles of T and χ are implied in the repetitions of the time integration computation, indirectly but clearly and with less approximation. This removes the problem of assigning values to T and χ, and makes the application of the SEB THAAT simpler and clearer.

Due to the capabilities of time integration in the analysis of systems with static instability (zero stiffness) [1,2,23], it is reasonable to expect the SEB THAAT to perform well in these analyses. This has not yet been investigated [38], and the second example in this paper is indeed the first report in this field. Finally, none of the previous applications of the SEB THAAT consider multiple target responses simultaneously; the realistic example presented in this paper is a pioneer in this area.

The weak points

Parameter-less, simple, and clear application of the SEB THAAT, with an acceptable reduction in the analysis run-time and accuracy of the target response, is the main advantage of the proposed procedure compared to the previous applications of the SEB THAAT. However, there are also disadvantages.

A disadvantage of the proposed procedure is that the final target response may be digitised in steps which, if smaller than the step of the excitation f∆t, may not be a fraction of f∆t, and, if larger than f∆t, may not be a multiple of f∆t. This is trivial in theory, but inconvenient in practice, and can be a hindrance in post-processing. For example, it may prevent a simple increase of the accuracy of the responses by Richardson extrapolation [25,26,118].

A second disadvantage is that using the proposed procedure does not guarantee a decrease in the analysis run-time. In other words, although, in view of the examples presented, the analysis run-time decreases when the SEB THAAT is applied according to the proposed procedure, and the probability of an increase in run-time decreases when the details of the non-linear solution are set well, the possibility of an increase has not been theoretically excluded. This disadvantage is likely to be more pronounced if the problem and/or the analysis is unusually complex.

Finally, inherited from the New Zealand Seismic Code, NZS 1170.5:2004 [35,92], when applying the SEB THAAT according to the proposed procedure, the analysis ends with the convergence of the peak target response. This criterion may be inappropriate for important analyses, especially when the target response is to be computed accurately over the entire analysis interval.

More on the complicatedness of the bridge example

As implied in [96], for the structural system introduced in Figure 21 and Equations (30) and (31), both the structural behaviour and the time integration computation are complicated. The main reasons are:
(1.) The non-linearity of the system originates in two different sources, i.e. elastic collision and material non-linearity; see Figure 22 and the fourth point below.
(3.) Also because of the shape of the excitation in Figure 21b, the mathematical stiffness [112,113] of the problem is considerable.
(4.) As the main reason for the complexity of the behaviour, in view of Figure 21a and Equation (31), the collisions and the material loading/unloading at different locations of the structural system and at different time instants are completely interdependent. In other words, independently of the excitation and the SN, the non-linearity starts with the collision of the first or seventh mass with the neighbouring support. Then, when the columns are in the plastic range, the collision of the mass above a column with one of the two neighbouring masses (or the support) will usually result in unloading and a change in the behaviour of the column from plastic to elastic. This means that many collisions and unloadings occur simultaneously, which adds complexity to both the behaviour and the computation. Finally, the complexity due to this reason is increased by the second and third reasons mentioned above, i.e. the high SN and the mathematical stiffness.
(5.) The last example in [96] differs from the bridge example presented in Subsection 4.4.3 in the earthquake excitation and the values of SN. The two excitations in the example in [96] are not particularly more complex than the excitation in Subsection 4.4.3. In addition, while the structural behaviour in the example presented in [96] is very complicated, the maximum values of SN in the examples in [96] and in this paper are 100 % and 200 %, respectively. Therefore, the bridge example presented in this paper is more complicated than that presented in [96].
The complexity of the behaviour and of the computation leads to a weaker performance of the proposed procedure in this special example as compared to the other examples studied in this paper; see Table 13. Nevertheless, this is consistent with the nature of time integration analyses, which may require very small integration steps to achieve sufficient accuracy in complicated non-linear analyses; see [1,86-90,96,97,107,108,119,120]. In other words, the weaker performance in highly complicated non-linear problems, though it would rather be eliminated, is to be expected, taking into account the performance of well-known time integration methods in the analysis of complex structural dynamic behaviour [2,23,86-90,121-126].

Comparison with other analysis acceleration methods

The aim of this paper is only to simplify the application of the SEB THAAT by eliminating the parameters from the application process. The objective has been achieved and, in the cases studied (as examples), efficiency has been improved; see Table 13. In addition, the SEB THAAT has already been compared with some other techniques; see [1,42]. Taking this into account, the comparison presented in [1] is briefly extended in Table 14 to a rough comparison between the following eight techniques:
(1.) the SEB THAAT according to the new procedure proposed in this paper,
(2.) direct down-sampling [28],
(3.) parareal time integration [20-22,127],
(4.) time integration of integrated problems [41],
(5.) combination of truncation and direct down-sampling [30],
(6.) impact-based replacement of the earthquake record [40],
(7.) the SEB THAAT by assigning values to the enlargement scale n [38,39],
(8.) adaptive fast non-linear analysis; see [128-130].
The comparison shows the superior performance of the SEB THAAT using the proposed procedure from various points of view. It is also worth noting that the overall reduction in the analysis run-time due to the eighth technique [128-130] is slightly higher than that of the first technique; see Table 13 and [128]. In Table 14, "a", "b", "c", "d", "e", "f", and "g" respectively imply that the technique is the first, second, third, fourth, fifth, sixth, and seventh best technique from the point of view of the feature stated in the first column. However, even from the point of view of computational efficiency, the first technique is superior, due to its negligible impact on the in-core memory and the larger number of examples presented in this paper compared to those presented in [128,129].

Main challenges and a perspective of the future

By using the proposed procedure, the SEB THAAT can be simply and clearly applied to structural dynamic analyses. Considering this, Tables 1 and 13, and the significance of computation run-times [131], it is reasonable to use the proposed procedure for application of the SEB THAAT in real analyses. More efforts are nevertheless essential (some ongoing), in the following directions:
(1.) With regard to the SEB THAAT and its application:
(a) Enhancement of the application to complex structural dynamic analyses, e.g. highly oscillatory non-linear analyses.
(b) Application to wave propagation problems. The behaviour of many important oscillatory systems is a combination of structural dynamics and wave propagation. As indicated in Table 1 and Sections 4 and 5, the SEB THAAT has mostly been tested in application to structural dynamic problems. In wave propagation problems, the number of degrees of freedom is generally larger, the run-times are longer, and the need to speed up the analysis is greater.
(c) Application to the analysis of systems subjected to various excitations. Many important structural systems can be subjected to several digitised excitations simultaneously, e.g. off-shore platforms that are exposed to earthquake, wind, and sea waves. These problems tend to be large, non-linear, complicated, and time consuming to analyse. The application of the SEB THAAT to such problems can be complicated, especially when the difference between the digitisation steps is considerable. It is therefore essential to improve the SEB THAAT and the proposed procedure to accelerate such analyses.
(d) Increasing public attention to and interest in the SEB THAAT, by:
i. preparing a user-friendly internet webpage to convert f(t) to fnew(t),
ii. preparing a user-friendly internet webpage to apply the proposed procedure,
iii. testing the application of the SEB THAAT according to the proposed procedure in the analysis of large realistic systems with complex behaviour (with a real need to reduce the analysis run-time).
(2.) Regarding the proposed procedure:
(a) Introducing a theoretical guarantee, if necessary together with modifications of the proposed procedure, regarding the reduction of the analysis run-time without a significant change in the accuracy, when applying the SEB THAAT to an arbitrary time history analysis.
(b) Making changes to the procedure so that either the time step at which the final target response is reported is an integer multiple of the excitation digitisation step, or the excitation step is an integer multiple of the target response output step.
(c) Modification of the ending criterion of the proposed procedure, for cases where checking the peak of the target response is not sufficient, e.g. very important structural systems.
The future of the SEB THAAT's application using the proposed procedure is promising in view of the presented discussions, Tables 1, 13, and 14, Equation (12), and the following two facts: (1.) Due to improvements in recording instrumentation [48,132,133], the smallest available value of f∆t is continuously decreasing. (2.) Due to the continuous improvements in structural optimisation [134-136], the ever-increasing variety of material properties [137,138], and the growing importance of financial aspects, the general trend in structural systems is towards lighter and less stiff structures. This leads to oscillations of the target response with larger periods.
Many of the above seven challenges will be overcome in the near future. In addition, the SEB THAAT and the proposed procedure will be integrated into commercial structural analysis software, which will allow many numerical tests to be carried out, leading to further improvements. Various theoretical improvements can be expected as well. Finally, given the mathematical basis of the SEB THAAT [1,38,39], the application of the SEB THAAT using the proposed procedure can be tested on problems other than Equation (1).

Conclusion

The SEB THAAT was proposed in 2008 as a technique to replace digitised excitations with excitations digitised in larger steps, so that time history analyses can be accelerated at the cost of acceptable changes in accuracy. After many successful tests of the SEB THAAT, this paper proposes a procedure that eliminates the need to assign values to the main parameter of the SEB THAAT. Assigning values to some parameters of the analysis is eliminated as well. The proposed procedure has roots in the New Zealand Seismic Code, NZS 1170.5:2004 [35,92], the computational traditions in structural dynamics [33], and the numerical solution of ordinary differential equations [91]. The main achievements are as follows:
(1.) The SEB THAAT can now be applied with no concern about the parameters n, T, and χ; this is even simpler than an ordinary analysis (without application of the SEB THAAT) according to NZS 1170.5:2004 [35,92].
(2.) Using the proposed procedure, the SEB THAAT can be applied regardless of the problem, the excitation, the integration method, and the non-linear solution details, i.e. no limitation exists for the application of the SEB THAAT according to the proposed procedure.
(3.) In view of the presented twenty-five cases, the performance of the SEB THAAT, when applied according to the proposed procedure, is satisfactory. However, it is weaker in the analysis of complicated, highly oscillatory, and highly non-linear structural dynamic systems. This weakness is consistent with the characteristics of time integration analysis of highly oscillatory, highly non-linear systems.
(4.) The previous point is valid for the accuracy of the target response as well as for the analysis run-time.
(5.) Both the reduction in the analysis run-time and the accuracy of the target response are potentially sensitive to the problem, the severity of the non-linear behaviour, the target response, the excitation, and the integration method. Given the convergence, the sensitivity to the integration method is less than the other sensitivities, unless the behaviour is very complicated.
(6.) Compared to the SEB THAAT's previous applications, the performance of the SEB THAAT when applied according to the proposed procedure seems less sensitive to the problem. (7.) The application of the SEB THAAT according to the proposed procedure seems to lead to higher computational efficiency compared with the previous applications of the SEB THAAT. (8.) Owing to features inherited from time integration, the SEB THAAT can reduce the analysis run-time in the analysis of statically unstable systems, with negligible effect on the accuracy of the target response. (9.) The SEB THAAT can perform well when several target responses are under consideration simultaneously. (10.) Compared to several other analysis acceleration methods, the application of the SEB THAAT using the proposed procedure is superior in terms of simplicity of application, negligible effect on the in-core memory, significant reduction in analysis run-time, etc. Based on the above results, the author can recommend using the SEB THAAT according to the proposed procedure for the analysis of arbitrary structural dynamic systems subjected to excitations available in digitised format. Given the significant reduction in the analysis run-time reported in Table 13, and the fact that only twenty-five cases have so far been tested for the proposed procedure, it is recommended that in real applications additional checks for accuracy be performed after the final response is obtained. Repeating the analysis with other integration methods, or using the SEB THAAT without the proposed procedure, are two possible alternatives. These checks will reduce the saving in the analysis run-time, but are essential until the proposed procedure has been tested sufficiently. Some remaining challenges are as follows: (1.) Improving the proposed procedure for cases with complicated, highly oscillatory, highly non-linear behaviour. (2.) Detailed study of the ending criterion of the proposed procedure for applications where controlling the peak response is not sufficient for the response accuracy. (3.) Testing and improving the SEB THAAT and the proposed procedure for the analysis of wave propagation problems. (4.) Testing and improving the SEB THAAT and the proposed procedure for the analysis of structural systems subjected to several excitations, digitised in steps of different sizes.
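For readers who wish to experiment with the procedure, the sketch below illustrates only the re-digitisation idea that underlies it: the Conclusion describes replacing a digitised excitation with one digitised in larger steps, and the nomenclature notes that the record g(t) is obtained from the main excitation record f by linear interpolation. This is a minimal, hypothetical illustration of such re-sampling in Python; it is not an implementation of the SEB THAAT's own conversion of f to f_new, and the sample record, step sizes, and function name are made up for illustration.

```python
import numpy as np

def redigitise(record, dt_old, dt_new):
    """Re-sample a digitised excitation record at a new (larger) step using
    linear interpolation, in the spirit of g(t) in the proposed procedure.
    NOTE: this is NOT the SEB THAAT itself, only the re-digitisation step."""
    t_old = np.arange(len(record)) * dt_old
    t_new = np.arange(0.0, t_old[-1] + 1e-12, dt_new)
    return t_new, np.interp(t_new, t_old, record)

# Made-up ground-acceleration record digitised at 0.005 s
dt = 0.005
t = np.arange(0.0, 10.0, dt)
accel = 0.3 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)

# Re-digitise at an enlarged step (a step enlargement scale of 4 gives 0.02 s here)
t_new, accel_new = redigitise(accel, dt, 4 * dt)
print(len(t), "->", len(t_new), "samples")
```

With these made-up numbers the record length drops from 2000 to 500 samples; the actual accuracy and run-time implications of any such step enlargement are governed by the SEB THAAT and the convergence checks described in the cited references.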
Finally, using the proposed procedure, the SEB THAAT is applicable to the analysis of problems with governing equations different from Equation (1). The performance, i.e. the reduction in analysis run-time and the accuracy of the target response, should, however, be investigated. This can accelerate the analysis of systems in different fields and may also increase the interest in the SEB THAAT.
Nomenclature
a: An auxiliary variable in the determination of the new excitation by the SEB THAAT
b_k: An auxiliary variable in the determination of the new excitation by the SEB THAAT
c_i: Viscous damping of the i-th damper in Figures 6a and 11a
C-H: Chung-Hulbert (time integration method)
C_ijk: An indicator for the cases examined in Subsection 4.4.3, introduced in Table 10
d_i: Distances between the decks and the supports, introduced in Figure 21a
E: Computational error
E_K: Kinetic energy
E_P: Potential energy
Er_j: Error of the peak target response obtained from the j-th time integration computation of the proposed procedure
f_i: Shear force of the i-th column from the left in Figure 21a
f_int: Vector of internal forces of an MDOF structural system
f: Vector of external forces of an MDOF structural system
f_new: New excitation obtained from the SEB THAAT
f_int,0: Vector of internal forces of an MDOF structural system at t = 0
f_int,i: Vector of internal forces of an MDOF structural system at t = t_i
f (auxiliary): An auxiliary variable for determining the new excitation by the SEB THAAT
g: Acceleration of gravity
g(t): A record digitised in step f∆t_j, obtained from the main excitation record f using linear interpolation, in the proposed procedure
HHT: Hilber-Hughes-Taylor (time integration method)
IDA: Incremental Dynamic Analysis
k_i: Stiffness of the i-th spring in Figures 6a and 11a
L_p: Interval of the integration steps at which the results of the analysis converge properly
m_i: i-th mass in Figures 6a and 11a
M: Mass matrix
MDOF: Multi-Degree-of-Freedom
n: Step enlargement scale and the only parameter of the SEB THAAT, eliminated when using the proposed procedure
n': An auxiliary variable for determining the new excitation by the SEB THAAT
n_1: Value assigned to n in the proposed procedure, in the first time integration computation
n_j: Value assigned to n in the proposed procedure, in the j-th time integration computation
n_max: Largest value of n satisfying the accuracy-based comments on the time integration step
P_j: The peak target response in the j-th time integration computation of the proposed procedure
P_exact: The exact peak target response
Q: Constraints in the governing equation that distinguish non-linear behaviour from linear behaviour
q: Rate of convergence, generally equal to the order of accuracy of the time integration method
q': Rate of convergence of the approximation in the excitation
R_x: x-direction component of the shear force in the typical column of the lowest floor of the realistic example, without considering the damping forces
R_y: y-direction component of the shear force in the typical column of the lowest floor of the realistic example, without considering the damping forces
R: The target response obtained from the ordinary time history analysis
R_new: The target response obtained using the SEB THAAT
s: A scaling factor for the excitation in Subsection 4.4.3
S: Target response in the second example, equal to the sum of the kinetic and potential energies
SEB THAAT: The name of the technique whose application is simplified in this paper (abbreviated from Step-Enlargement-Based Time-History-Analysis-Acceleration-Technique)
SI: The International System of Units (abbreviated from the French Le Système International d'Unités)
SN: A measure for the severity of the non-linear structural dynamic behaviour (abbreviation of Severity of Non-linear structural dynamic behaviour)
t: Time
t_end: Duration of the dynamic behaviour and the time history analysis
t'_end: Duration of the new excitation obtained from the SEB THAAT
t_i: i-th time station of the time integration computation
T: Smallest oscillatory period with a worthwhile contribution to the target response
(f∆t)_new: Digitisation step of the SEB THAAT's result
∆t: Step of the time integration computation
∆t_j: Integration step in the j-th time integration computation in the application of the SEB THAAT according to the proposed procedure
χ: A downscaling factor in Equation (2), introduced in Equation (3)
∆t_cr: Upper bound of the integration step due to the linear theory of numerical stability
∆t_CFL: Upper bound of the integration step in wave propagation problems associated with spatial discretisation
f∆t: Digitisation step of the excitation
f∆t_j: The digitisation step of the excitation in the j-th time integration computation in the analysis according to the proposed procedure
u: Displacement vector of an MDOF structural system
u̇: Velocity vector of an MDOF structural system
ü: Acceleration vector of an MDOF structural system
u_0: Displacement vector of an MDOF structural system at t = 0
u̇_0: Velocity vector of an MDOF structural dynamic system at t = 0
ü_0: Acceleration vector of an MDOF structural dynamic system at t = 0
u_i: Displacement vector of an MDOF structural dynamic system at t = t_i
u̇_i: Velocity vector of an MDOF structural dynamic system at t = t_i
ü_i: Acceleration vector of an MDOF structural dynamic system at t = t_i
u_A: Displacement of the central pier's mid-point in Subsection 4.4.3
ü_g: Ground acceleration
(ü_g)_x: x-direction component of the ground acceleration
(ü_g)_y: y-direction component of the ground acceleration
u_y,i: Yield displacement of the i-th spring in Figure 6a
u_1: Displacement of the first mass in the second and fourth examples
u_2: Displacement of the second mass in the second and fourth examples
u_3: Displacement of the third mass in the second and fourth examples
u_4: Displacement of the fourth mass in the fourth example
u_5: Displacement of the fifth mass in the fourth example
u_6: Displacement of the sixth mass in the fourth example
u_7: Displacement of the seventh mass in the fourth example
u̇_1: Velocity of the first mass in the second example
u̇_2: Velocity of the second mass in the second example
u̇_3: Velocity of the third mass in the second example
V_BS: Total horizontal force transferred to the foundation in the realistic example, disregarding the damping forces
Z+: The set of positive integers
α: One of the three parameters of the HHT time integration method
β: One of the three parameters of the HHT time integration method
δ_j: The non-linear tolerance in the j-th time integration computation
ε: Uniaxial strain
σ: Uniaxial stress
σ_y: Uniaxial yield stress
γ: One of the three parameters of the HHT time integration method
Γ: A vector which, if the f in Equation (1) originates in ü_g, is essential in the computation of f
ρ∞: Spectral radius of a time integration method at very large values of ∆t/T
tan θ_1: Young's modulus
tan θ_2: Slope of the second line in the uniaxial stress-strain plot, for materials with bilinear behaviour and kinematic hardening
Figure captions
Figure 2: Typical trend of convergence in direct time integration analysis.
Figure 4: Reduction in analysis run-time as a function of the scale n, for linear direct time integration computations accelerated by the SEB THAAT.
Figure 5: A procedure for simple and clear application of the SEB THAAT.
Figure 6: The structural system in the first example.
Figure 7: Excitation records obtained from Steps (e) and (f) of the proposed procedure for the first example.
Figure 8: History of the target response and the peak value as the result of Steps (g) and (h) of the proposed procedure for the first example.
Figure 9: History of the target response and the peak value obtained from ordinary time history analysis of the first example.
Figure 10: Exact history of the target response in the first example and its linear counterpart.
Figure 11: The structural system in the second example.
Figure 12: Records obtained from Steps (e) and (f) of the proposed procedure for the second example.
Figure 13: History of the target response and the peak value obtained from Steps (d) and (e) of the proposed procedure applied to the second example.
Figure 14: History of the target response together with the peak value obtained from ordinary time history analysis of the second example.
Figure 15: The structural system in the third example [96].
Figure 16: Exact responses of the system introduced in Figure 15 and Table 7.
Caption fragment: "...is negligible, the computation leading to Figure 19b is about 10.8 times faster, i.e. the analysis run-time is 90.71 % shorter. (a) Analysed ordinarily. (b) Analysed according to the proposed procedure."
Figure 19: Final responses for the system introduced in Figures 15a and 18 and Table 7, using the average acceleration time integration method.
Figure 20: Evidence for the non-linear behaviour of the system introduced in Figures 15a and 18 and Table 7.
Figure 21: The structural system in the fourth example.
Figure 25: Final responses obtained for the systems introduced in Figure 21 and Equation (31) and the corresponding reductions in the analysis run-times, when SN = 200 %.
Figure 26: Changes due to the use of the correct value of the integration step in the first time integration of the ordinary analyses.
Figure 27: Complexity of the structural behaviour in the fourth example when SN = 200 %.
Figure 28: Structural system in the realistic example.
Table captions
Table 3: Values of the non-linear tolerance in the application of the proposed procedure.
Table 6: Study of the analysis run-times of the first example.
Table 9: Summary of the analysis run-time study for the system introduced in Figures 15a and 18 and Table 7.
Table 14: A rough comparison between eight techniques for accelerating the time history analysis of Equation (1)*.
17,610.6
2024-05-07T00:00:00.000
[ "Engineering" ]
Review of biosensing with whispering-gallery mode lasers Lasers are the pillars of modern optics and sensing. Microlasers based on whispering-gallery modes (WGMs) are miniature in size and have excellent lasing characteristics suitable for biosensing. WGM lasers have been used for label-free detection of single virus particles, detection of molecular electrostatic changes at biointerfaces, and barcode-type live-cell tagging and tracking. The most recent advances in biosensing with WGM microlasers are described in this review. We cover the basic concepts of WGM resonators, the integration of gain media into various active WGM sensors and devices, and the cutting-edge advances in photonic devices for micro- and nanoprobing of biological samples that can be integrated with WGM lasers. Introduction Lasers have played a crucial role in optics since Theodore Maiman first reported on Stimulated Optical Radiation in Ruby 60 years ago 1 . Experiments with lasers have enabled the development of quantum optics theory 2 , as well as many different applications, for example, in manufacturing, imaging, spectroscopy, metrology and sensing 3 . During the last few years, the application of microlasers, especially whispering-gallery mode (WGM) microlasers, in chemical and biological sensing has increased due to advances made in reducing the gap between laboratory experiments and their real-world application. A number of exciting applications of WGM lasers in biosensing have been reported, such as lasing within living cells 4 , monitoring contractility in cardiac tissue 5 , detection of molecular electrostatic changes at biointerfaces 6 , label-free detection of single virus particles 7 and advancement of in vivo sensing 4,8,9 . WGM microlasers with a liquid-core optical ring resonator (LCORR) can probe the properties of a gain medium introduced as a fluid into the core of a thin-walled glass capillary, thereby providing good sensitivity for the detection of health biomarkers such as DNA and protein molecules 10,11 . Many optical platforms are potentially useful for labelfree sensing in biology and chemistry. Examples include optical sensors that make use of plasmonic nanostructures and nanoparticles, photonic crystals, tapered optical fibres, zero-mode waveguides and passive WGM resonators [12][13][14][15][16][17][18] . These micro-and nanoscale optical platforms have already been used for some of the most demanding biosensing tasks such as detection of single molecules 17,19 and detection of single influenza A virus particles and other nanoparticles 20,21 . The use of WGM microlasers for chemical and biological sensing can offer sensing modalities that are often not easily accessed on other optical sensor platforms. For example, in vivo sensing with WGM microlasers is facilitated by detection of the emission of relatively bright laser light at frequencies that are spectrally well separated from the frequency of the free-space excitation beam. Furthermore, WGM microlasers may offer a potentially very high detection sensitivity for molecules due to the narrow linewidth of the laser lines, which could enable detection of the frequency shifts induced by single molecules. Herein, we aim to provide a comprehensive review of the emerging field of biological and biochemical sensing with WGM microlasers. Our review is structured as follows. In the first section, we review the building blocks of WGM microlaser devices, their biosensing applications and their sensing mechanisms. 
In the second and third sections, we focus on the use of WGM microdroplet resonators in biosensing and the most recent advances made in the integration of WGM microdroplet resonators with gain media. In the fourth section, we review state-of-the-art techniques for micro-and nanoprobing of biological samples that can be combined with WGM microlasers. We close with a discussion of the prospects of using emerging WGM microlasers in biological and chemical sensing applications and as an emerging research tool for single-molecule biosensing. WGM microlasers in biosensing Building blocks Similar to conventional lasers, WGM microlasers consist of three principal building blocks: the gain medium, the pump source, and the optical resonator-here the WGM resonator. The gain medium defines most of the spectral, temporal and power characteristics of the laser light emission. Usually, an optical pump source supplies the energy needed to maintain population inversion of the active particles in the gain medium, i.e., fluorophore molecules, for light amplification by stimulated emission of radiation. The various gain media that have already found use in WGM microlaser-based sensors are reviewed in sections 'Microdroplet resonators as active cavities in biosensing' and 'Review of gain media in WGM microlasers for sensing'. The performance of a WGM microlaser can be further characterised by the quality factor (Q-factor) of the optical resonator, which is a measure of the damping of the resonator modes. The Q-factor is defined as the ratio of the stored energy to the energy dissipated per radian of the oscillation. Various important laser parameters depend on the Q-factor of the cavity such as (i) the laser linewidth, a measure of the spectral coherence of the laser emission and its monochromaticity; (ii) the photon lifetime, the time it takes for the energy in the resonator cavity to decay to 1/e of the original value; and (iii) the lasing threshold, the lowest optical pump power at which stimulated emission is observed. Several optical waves (modes) can typically be excited in a WGM resonator; the separation between two neighbouring modes is called the free spectral range. Another important parameter for resonators is the finesse 22 , which is the free spectral range of the cavity modes divided by the linewidth (FWHM) of the resonances. The finesse corresponds to the number of roundtrips the light takes inside a WGM microcavity before the stored energy decays to 1/e of the original value. The term whispering-gallery wave was first used by Lord Rayleigh to describe the propagation of sound waves in the dome of St Paul's Cathedral 23 ; whispering-gallery wave or mode is now used to describe the effect of any wave travelling around a concave surface. Comparable to this effect, light in a WGM microlaser is confined through near-total internal reflection and circumnavigates a typically spherical cavity, such as a glass microbead 24 ; the interference of the light results in WGM optical resonances. WGMs may be confined in cavity geometries such as disks 25 , toroids 26 , or deformed hexagonal resonators 27 . The unique characteristics of WGM microcavities, such as the long lifetime of the intracavity photons and the small volume of the modes, make them excellent candidates for constructing WGM microlasers with low lasing thresholds that exhibit narrow spectral linewidths. 
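To make these figures of merit concrete, the short calculation below evaluates the linewidth, photon lifetime, free spectral range and finesse for a representative microsphere resonator directly from the definitions given above. It is a minimal sketch: the wavelength, diameter, effective index and Q-factor are assumed illustrative values, not parameters of any specific device cited in this review.

```python
import math

# Illustrative resonator parameters (assumed, not taken from any cited work)
wavelength = 780e-9   # resonance wavelength [m]
diameter = 40e-6      # microsphere diameter [m]
n_eff = 1.45          # effective refractive index of the mode (assumed)
Q = 1e8               # loaded quality factor (assumed)

c = 299_792_458.0     # speed of light [m/s]
nu = c / wavelength   # optical frequency [Hz]

# Linewidth and photon lifetime follow directly from the Q-factor definition
linewidth_hz = nu / Q                       # FWHM of the resonance [Hz]
photon_lifetime = Q / (2 * math.pi * nu)    # 1/e energy decay time [s]

# Free spectral range of a circular path of length pi*D, and the finesse
fsr_hz = c / (n_eff * math.pi * diameter)   # spacing of neighbouring modes [Hz]
finesse = fsr_hz / linewidth_hz             # ~ number of round trips before 1/e decay

print(f"optical frequency  : {nu:.3e} Hz")
print(f"linewidth (FWHM)   : {linewidth_hz:.3e} Hz")
print(f"photon lifetime    : {photon_lifetime:.3e} s")
print(f"free spectral range: {fsr_hz:.3e} Hz")
print(f"finesse            : {finesse:.3e}")
```

With these assumed numbers the linewidth is a few MHz, the photon lifetime a few tens of nanoseconds, and the finesse of order 10^5, which illustrates why high-Q WGM cavities support narrow laser lines at low thresholds.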
To the best of our knowledge, the first WGM laser was made from a highly polished crystalline calcium fluoride (CaF 2 ) sphere of 1-2 mm diameter. The rare-earth ion samarium (Sm 2+ ) was used as the optical gain dopant 28 . Since then, lasing has been demonstrated in many different spherical WGM cavity geometries 8,28,29 and others, such as triangular nanoplatelets 30 and ZnO hexagonal and dodecagonal microrods and nanonails 31 . Achieving a low lasing threshold is especially important in biosensing applications where the photodamage of biological samples must be avoided. The light intensities of WGMs range from MWcm −2 to GWcm −2 and are comparable to those in microscopy 32 . For example 22 , for a WGM resonator of 40 µm diameter and finesse 3 × 10 5 , when the input power is 16 µW, the built-up circulating optical power is as high as 800 mW, and the circulating optical intensity is 20 MWcm −2 . WGM lasing thresholds at µW pump power levels and below have been demonstrated [33][34][35] . Lasing thresholds of nJ have been shown for spatially and temporally incoherent optical pumping 36 . WGM thresholds of µJ cm −2 have been demonstrated for pumping with a pulsed laser 37 . Apart from optical pumping with a pulsed laser, the ability to realize freespace continuous-wave optical pumping 38 is advantageous because it allows for a wider selection of wavelengths, a smaller laser linewidth and thus a potentially higher sensitivity in biosensing applications. Sensing mechanisms In Table 1, we list WGM microlasers that have been used in some of the most exciting and most recent sensing and biosensing applications. In this section, we review some of the basic sensing mechanisms and how they have been used in the respective applications. According to the Shawlow-Townes formula, the linewidth of WGM microlasers with cold-cavity Q-factors of 10 8 pumped at 1550 nm using erbium as a gain medium can reach a few hertz, thus rendering WGM lasers potentially useful in biosensing applications that require a high detection sensitivity [39][40][41] . For example, microsphere or microring resonators doped with a gain material can provide a 10 4 -fold narrower resonance linewidth than a passive microcavity 42 . Changes in the refractive index in the surrounding medium cause spectral shifts of the WGMs, which can be used for detecting biomolecules. A spectral shift of the lasing line on the order of Δω = 210.8 kHz corresponds to refractive index changes on the order of 10 −9 . The detection of a very low concentration of biomolecules becomes possible if these spectral WGM shifts are resolved 42 . Another interesting WGM sensing modality uses the changes in the effective linewidth, where resonance broadening is attributed to the stress-induced mode shift of different polar modes in the emission spectrum of fluorescent dye-doped 6-10 µm diameter polystyrene microspheres. These changes have been used to monitor the forces that deform these microspheres when they are engulfed by a cell during the biological process endocytosis 43 . The interaction of a nanoparticle with the evanescent field of a WGM can lift the degeneracy of the clockwise and counterclockwise propagating modes, resulting in mode splitting 44 . Monitoring the frequency shifts of the laser lines from WGM splitting is a mechanism that has been used for highly sensitive detection of nanoparticles, down to nanoparticles~15 nm in radius 45,46 . 
The WGM frequency shifts due to mode splitting are typically too small to be resolved directly on a spectrometer. Instead, the frequency splitting is measured by recording the beat note that is produced when an ultranarrow emission line of a WGM microlaser is split into two modes by a nanoparticle scatterer. The mode-splitting sensing mechanism with WGM microlasers has been applied in biosensing for detection of~120 nm influenza A virus particles deposited on the WGM sensor from air using an erbium-doped silica microtoroid cavity 7 . Another interesting sensing mechanism for WGM microsphere lasers makes use of the concepts of Förster resonance energy transfer (FRET) and coherent radiative energy transfer (CRET) 47 . Following these mechanisms, the WGM microlaser doped with donor molecules can exhibit changes in its emission intensity and wavelength upon surface binding of acceptor molecules. WGM cavities composed of liquid crystal (LC) droplets and doped with donor/acceptor molecules were used to detect the FRET signals in the emission spectrum of the droplets. WGM microlaser LC droplets were used to detect FRET signals of fluorophores such as rhodamine B isothiocyanate and rhodamine-phycoerythrin as they attached to the LC droplet surface 47 . WGM sensing platforms The most versatile WGM microlasers provide true platforms for the development of various applications in chemical and biological sensing. One of these platforms is the so-called optofluidic ring resonator-based dye laser 48 . This microfluidic dye laser is based on a liquid-core optical ring resonator. The LCORR is made of a fused silica capillary with a wall thickness of a few microns. The circular cross-section of the capillary forms a ring resonator that supports WGMs and provides optical feedback for lasing, for example, by injecting a dye solution. Due to the high Q-factor of the WGM, a low lasing threshold can be achieved for pulsed laser excitations of~1 μJ mm −2 . A large fraction of the mode intensity extends into the capillary, where a gain medium such as rhodamine B can be introduced. LCORR lasers have been employed in a range of biosensing applications. In one example, the indocyanine green (ICG) fluorophore was dissolved in blood plasma and then injected into the LCORR capillary to demonstrate sensing in blood. When injected into blood, ICG binds primarily to plasma proteins and lipoproteins, resulting in enhanced fluorescence and lasing 10 . In a similar optofluidic ring resonator approach, a single layer of DNA molecules was used to provide laser gain in DNA detection 49 . Intercalating DNA dyes were employed so that there was no lasing from nontarget DNA on this digital DNA detection platform 49 . Gong et al. 50 recently introduced the concept of a distributed fibre optofluidic laser. Due to precise fibre geometry control via fibre drawing, a series of identical optical microcavities uniformly distributed along a hollow optical fibre (HOF) was achieved to obtain a one-dimensional distributed fibre optofluidic laser. An enzymatic reaction catalysed by horseradish peroxidase was monitored in the HOF over time, and changes in the product concentration were measured by laser-based arrayed colourimetric detection. The fabricated five-channel detection scheme is shown in Table 1. In general, optofluidic resonator platforms combine the merits of a low-threshold lasing with ultranarrow WGM lasing spectra and microfluidic integration. 
Furthermore, they are suitable for switching between single- and multimode lasing regimes, and they provide optofluidic tuneability of the lasing wavelength 51 . Another interesting sensing platform that solves the problem of fabricating WGM microlasers with different sensing specificities was demonstrated by coating passive 'microgoblet' WGM cavities with multifunctional molecular inks 52 . The one-step modification process uses dip-pen lithography to coat the passive 'goblet' cavity with phospholipid inks that introduce optical gain and provide molecular binding selectivity at the same time. The ink was applied such that it solely coated the light-guiding circumference of a prefabricated polymer 'goblet' microresonator. The authors showed that the highly localised deposition of the ink suffices for low-threshold lasing in air and water. In air, the observed lasing threshold was ∼10 nJ per pulse, which is only approximately three times that demonstrated in similar goblet microlasers where the entire volume was dye-doped. The authors demonstrated biosensing applications, for example, detecting streptavidin binding to biotin that was contained in the ink and provided molecular binding selectivity. Streptavidin binding to the microcavity was detected from a redshift of the WGM laser mode 52 . Another WGM microlaser platform that is versatile and may find more widespread use is based on ultrasound modulation of the laser output intensity of WGM microdroplets; this platform may enable the development of laser-emission-based microscopy for deep tissue imaging 53 . A liquid crystal biosensor platform based on WGM lasing has been reported for real-time and high-sensitivity detection of acetylcholinesterase (AChE) and its inhibitors 54 . The spectral responses provide direct information about molecular adsorption/desorption at the liquid crystal/aqueous solution interface and can be used as an indicator of enzymatic reactions. The limit of detection achieved was as low as 0.1 pg mL −1 for fenobucarb and 1 pg mL −1 for dimethoate, which is considerably lower than the pesticide levels specified in water quality standards. These results indicate that this versatile platform has potential for real-time and highly sensitive monitoring of biochemical reactions. Ouyang et al. 55 presented an optofluidic chip platform that was integrated with directly printed, high-Q polymer WGM microlaser sensors for ultrasensitive enzyme-linked immunosorbent assay (ELISA). It was demonstrated that such an optofluidic biochip can measure horseradish peroxidase (HRP)-streptavidin, a catalytic molecule widely used in ELISA, via a chromogenic reaction at a concentration of 0.3 ng mL −1 . Moreover, it enables on-chip optofluidic ELISA of the disease biomarker vascular endothelial growth factor (VEGF) at the extremely low concentration of 17.8 fg mL −1 , which is more than 2 orders of magnitude better than current commercial ELISA kits. Microdroplet resonators as active cavities in biosensing To achieve high Q-factors, which are important for sensing and lasing applications, WGM resonators require smooth, near-spherical surfaces that limit scattering losses. Recently, water-walled cavities have been explored to provide an ultrasmooth cavity surface and more than 10^6 recirculation cycles of light 60 . This work points to an important class of devices that can be easily and inexpensively made with microdroplets.
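As a rough consistency check, the number of recirculation cycles quoted for water-walled cavities can be translated into an equivalent quality factor using the finesse and free-spectral-range definitions introduced earlier. The droplet diameter, refractive index and wavelength below are assumed values chosen only for illustration.

```python
import math

c = 299_792_458.0      # speed of light [m/s]

# Assumed droplet parameters (illustrative only)
diameter = 100e-6      # droplet diameter [m]
n_water = 1.33         # refractive index of water
wavelength = 600e-9    # probe wavelength [m]

round_trips = 1e6      # recirculation cycles quoted for water-walled cavities

# Finesse ~ number of round trips before the stored energy decays to 1/e,
# and Q = finesse * (optical frequency / free spectral range)
fsr = c / (n_water * math.pi * diameter)
nu = c / wavelength
Q_equivalent = round_trips * nu / fsr

print(f"free spectral range : {fsr:.3e} Hz")
print(f"equivalent Q-factor : {Q_equivalent:.2e}")
```

With these assumed numbers the equivalent Q-factor comes out at a few times 10^8, consistent with the statement below that droplet Q-factors can exceed 10^9 under ideal conditions.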
In the following section, we review the application of WGM microdroplet resonators in biosensing. WGM microdroplets for lasing Droplet resonators in liquid-air or liquid-liquid configurations achieve high quality factors and finesse because they can trap light via near-total internal reflection using the ultrasmooth liquid interface. Liquids with a higher refractive index, such as oils or aqueous glycerol solutions, are desirable for the miniaturisation of devices and for minimising radiation losses. For example, recent sensing approaches use high-refractive-index oil to make droplet resonators with a high Q-factor up to 1.6 × 10 7 61,62 . Air-liquid microdroplets suitable for WGMs are easy to create by using water or other liquids. A high surface tension naturally forms water droplets in air. The surface tension of water in droplet form is 8000 times stronger than gravity. In the case of liquid-liquid droplets, immiscibility of the two components is often required and is an advantage in sensing. The liquids are self-contained, with minimum cross-contamination, which provides good biocompatibility and often longevity for the sensing device 8,63 . The most common method to make droplets is to use a dispenser with a sharp tip such as a syringe and slowly push the liquid out into another liquid or on top of a solid surface. The surface tension of the drop will help it maintain its position, and the amount of liquid pushed out will determine the droplet diameter 8,54,64 . Some reports have also used the natural drying of liquids until tiny droplets are formed because of the surface tension 65,66 . Finally, the cavity resonances of the WGMs can be manipulated by controlling the surface tension around the droplets, either by streaming the background fluid or by stretching the droplet using a dual-beam trap 64,67 . The requirements for the occurrence of stimulated emission with a certain frequency in WGM lasers entail excitation of a gain medium by a pump source, confinement of the resultant light and feedback from the microcavity. The droplet cavity medium can be easily mixed with molecules and submicron particles, such as fluorescent particles, biomolecules or specific binding chemical molecules. These will act as a gain medium and can later be used for light emission from the droplets and for various sensing mechanisms based on WGM droplet lasing 8,64,66,68,69 . Droplet-based WGM microlasers can be advantageous in sensing and biosensing; compared to other WGM resonator structures, they offer very high Q-factors that can exceed 10 9 under ideal conditions because of their naturally smooth surfaces as a result of surface tension 70 . Such high-Q resonant modes allow lasing at very low threshold pump powers. In addition, the position of the droplets inside a given medium can be controlled using optical trapping, magnetic fields, electrodynamic ion traps (Paul traps) and ultrasonic waves 69,[71][72][73][74][75] . Droplet-based resonators with liquid crystals as an integral component within the droplet offer an opportunity for tuning and tailoring droplet properties, such as the orientation of the LCs using electric fields 6,54 . Tuning of the droplet-based sensor can improve its sensitivity limits and makes it capable of sensing biomolecules with negative charges 6 . However, fabrication and tuning of LC droplets is complicated; therefore, their use for in vivo applications has not yet been demonstrated. WGM droplet microlasers in sensing and biosensing Yang et al. 
demonstrated reconfigurable liquid droplets by dispensing a solution of dichloromethane and epoxy resin using a computer-controlled microplotter 64 . Due to adhesion, a tiny drop of the liquid was left hanging on the outside wall of the dispenser after the dispenser was immersed in the solution. The dispenser was then touched to the soap water surface, which pulled the drop of solution down to the soap water surface, and the drop self-assembled into a circular floating microdroplet due to the surface tension of water and the high viscosity and immiscibility of the epoxy. The size of the droplet was controlled by the dispenser size and the immersion depth when the dispenser touched the epoxy solution. The formation of self-assembled droplets was demonstrated by Duong Ta et al. 65 They prepared a polymer solution composed of polystyrene, dichloromethane and epoxy resin and dipped a metal rod with a sharp tip inside it. The tip was then immersed into a PDMS solution and moved parallel at a constant speed until the solution completely left the tip. This created a fibre shape of the solution on the PDMS with decreasing diameter from the point where the metal rod touched the PDMS to where it left. Because of the high surface tension of the epoxy resin, the fibre spontaneously broke into numerous small pieces, forming well-aligned spherically shaped droplets. These liquid droplet microlasers are particularly exciting for biosensing applications because they have demonstrated excellent biocompatibility and miniature sizes 8,63,68,76,77 . Although droplet-based WGM microlasers may offer many advantages over other microcavities, they also face some challenges, such as deformation and evaporation because of their volatile nature, mechanical instabilities because of weak binding forces, low WGM coupling efficiencies and problems related to their positioning. To overcome the problem of toxicity, naturally occurring materials, such as lipids and starch granules, have been explored for producing WGM droplets and microlasers for biosensing 8,63,78,79 . Fluorescent dyes Fluorescent dyes are common gain materials in WGM microlasers for sensing and biosensing. Fluorophores can usually provide better biocompatibility than quantum dots (QDs), which, depending on their composition, are often toxic 80 . Some dye molecules, such as indocyanine green (ICG) and fluorescein, are of special interest in this regard because they have been approved by the US Food and Drug Administration for human use 81 . Other dyes, such as cypate, rhodamine 110, Oregon green and Tokyo green, are also claimed to be noncytotoxic with a wide range of experimental data in support of this 70 . Various dye-doped droplets have been used to demonstrate lasing from WGM-based microlasers. They have found some important applications in the imaging, labelling and tracking of cells because of the ease of implanting them within cells and because of their biocompatible nature 4,8 . WGM microlasers doped with rhodamine, coumarin 6, coumarin 102, ICG, or Nile red have been used to detect temperature, stress, water vapour and various biological molecules, such as bovine serum albumin (BSA) and acetylcholinesterase 4,8,54,63,64,66,76,82 . Most organic fluorescent dyes suffer from photobleaching, which restricts repetitive measurements and the use of high pump powers to enhance the signal unless the fluorescent material is regenerated 83 . One of the possible ways to address this problem is the replacement of dyes with polymers 84 . 
For example, an optofluidic microlaser with an ultralow threshold down to 7.8 µJ cm −2 in an ultrahigh-Q WGM microcavity filled with a biocompatible conjugated polymer has been demonstrated 85 . This conjugated polymer exhibits a significant enhancement in lasing stability compared with Nile red. Polymer microspheres can be used as biomarkers or assay substrates in chemical diagnostics, flow cytometry and biological imaging. Fluorescent biomaterials Fluorescent biomaterials naturally occurring in living organisms, such as flavin mononucleotide 86 , Gaussia luciferase 63 , green fluorescent protein 63,87 , Venus yellow fluorescent protein 68 , firefly luciferin 88 and chlorophyll 89 , have been merged into droplets and other resonator structures as the gain media for WGM microlasers. Researchers 58 demonstrated that natural egg white is an excellent biomaterial for a WGM laser cavity. Using a simple dehydration method, dye-doped goose egg white microspheres were obtained with various sizes from 20 to 160 µm in diameter. These microspheres can act as laser sources under optical excitation with a lasing threshold of 26 µJ mm −2 and a Q-factor up to 3 × 10 3 . Another example of the use of natural materials for lasing is chicken albumen 59 . These microsphere biolasers can operate in aqueous and biological environments such as water and human blood serum, which makes them promising candidates for laser-based biosensing and biological applications. Higher excitation powers are usually required for these biomaterials to initiate lasing compared to organic dyes because of their low quantum yields 86 . Most importantly, due to their nonsynthetic origin, fluorescent biomaterials show promise for in vivo sensing applications (see the section 'WGM microlaser-based sensors in living systems: extra-and intracellular sensing'). Rare-earth elements Rare-earth elements have been used in many examples as the gain medium in WGM microlasers. Stimulated emission has been demonstrated with WGM cavities with samarium 28 , as well as neodymium, erbium, thulium, and holmium 90 . Erbium is an interesting gain dopant that can be applied in a sol-gel process to fabricate WGM microlasers. Yang et al. 91 reported on erbium-doped and Raman microlasers on a silicon chip fabricated by the sol-gel process, where Q-factors as high as 2.5 × 10 7 at 1561 nm were obtained. Ions of Er 3+ can be used to achieve lasing in different spectral bands. Er-doped TiO 2 thin films grown by the sol-gel technique can demonstrate sharp emission peaks at 525 nm, 565 nm, 667 nm, and 1.54 µm 92 . Er:Yb-doped glass WGM microlasers have been demonstrated by using a CO 2 laser to melt Er:Yb glass onto silica microcapillaries or fibres 93 . This proposed WGM structure facilitates thermo-optical tuning of the microlaser modes by passing gas through the capillary and can be used for sensing, such as anemometry. The same group reported anomalous pump-induced lasing suppression in Yb:Er-doped microlasers 94 . Usually, a pump source achieves lasing in a system, and in most cases, a stronger pump leads to higher laser power at the output. However, in this case, the authors observed that this behaviour may be suppressed if two pump beams are used. WGM sensing mechanisms are based on tracking the resonance shift or Q-factor spoiling, monitoring WGM intensity changes 95 , and using photon upconversion 96 . WGM-modulated green and red upconversion with a Qfactor up to 45,000 was achieved in a 9 μm Er:Yb codoped tellurite sphere located in methanol 97 . 
The authors assessed its application in refractometric sensing and its advantages for the detection of nanoparticles with a diameter of <50 nm. Refractometric sensing with a detection sensitivity of 7.7 nm/RIU was demonstrated. Although several sensing applications of active WGM cavities doped with rare-earth ions have been demonstrated, their use in biosensing is limited because a high pump power is often required for lasing, especially for upconversion lasing 98 . Quantum dots Quantum dots are a common gain material in WGM microlasers for sensing 66,99 . Quantum dots are colloidal or epitaxial semiconductor nanocrystals in which the electron-hole pair is confined in all three spatial dimensions. They are characterised by tuneable emission wavelengths, high quantum yields, and resistance to photobleaching 99 . Laser emission into modes of a dielectric microsphere has been observed using different QDs, such as optically pumped HgTe QDs on the surface of a fused silica microsphere 100 or semiconductor ZnO hexagonal nanodisks 101 . The temperature dependence of the resonant wavelengths of a WGM microbottle doped with CdSe QDs has been studied 102 . These WGM resonators exhibit a blueshift with increasing temperature. It has been observed that these shifts are linear with temperature over an~10 nm wavelength range. This system has been found to be highly photostable for temperature sensing applications. Another example of QDs in WGM lasers is core-shell CdSe/ZnS QDs, which can be embedded in polystyrene microspheres 99 . Their potential for targeted biosensing was explored through the addition of a protein that adsorbs to the microsphere surface, thrombin, and one that does not, bovine serum albumin. Such sensors demonstrate an approximately 100 nm/RIU sensitivity and have interesting advantages such as remote excitation and remote sensing 103 . WGM resonators doped with CdSe/ZnS QDs have also been used to demonstrate the concept of automatic label-free WGM sensing of alcohol in water and of bacterial spores in water 104 . An interesting example is silicon QDs, which are especially attractive for fluorescent refractometric sensors because of their low toxicity and ease of handling 105 . The authors 105 showed that silica microspheres with a thin layer of Si QDs immersed in a cuvette with methanol demonstrate WGM resonance shifts as a function of the refractive index of the analyte solution, giving sensitivities ranging from~30 to 100 nm/RIU and a detection limit of 10 −4 RIU. Capillaries with a high-index fluorescent silicon QD coating have also been developed for protein biosensing using biotin-neutravidin as a specific interaction model 106 . Quantum dots are more photostable than their organic dye counterparts; they reach a high quantum yield of fluorescence and can emit light over a wide spectral range. However, they are not widely used for biosensing because of their weak solubility in water and toxic materials in their composition 80 . The latter problem can be solved by using a relatively new type of QD made of carbon, which opens the opportunity to explore so-called 'green photonics'. Carbon QD WGM lasers have been recently demonstrated 107 . Inorganic perovskites Semiconductor perovskites, or ABX3 materials, basically consist of a cubic unit cell with a large monovalent cation (A) in the centre, a divalent cation (B) on the corners, and smaller X-on the faces of the cube. 
The energy bandgap is directly related to the chemical structure of the perovskite, and its manipulation allows the full visible range to be covered in WGM microlasers. WGM lasing has been demonstrated in a number of perovskite structures with different shapes such as formamidinium lead bromide perovskite microdisks 108 , CsPbBr 3 microrods 109 , and patterned lead halide perovskite microplatelets 110 . WGM lasers can also be fabricated using perovskites as quantum dots 34 . Similar to QDs, perovskites allow gradual tuning of the emission wavelength 111 . Controllable fabrication of perovskite microlasers is challenging because it requires template-assisted growth or nanolithography. Zhizhchenko et al. 112 implemented an approach for fabrication of microlasers by direct laser ablation of a thin film on glass with donut-shaped femtosecond laser beams. This method allows fabrication of single-mode perovskite microlasers operating at room temperature in a broad spectral range (550-800 nm) with Q-factors up to 5500. Perovskite materials have a wide number of potential applications, including gas sensors 113 . Currently, the main problems with perovskites in WGM microlasers and sensors are their degradation in aqueous media and low photostability 108 . Some attempts to alleviate the water instability of perovskites, which mainly affects the structural and emission performance, include encapsulation in a SiO 2 shell, with the resulting composite assembled into a tubular whispering-gallery microcavity 114 . Other prospective materials for WGM lasers New materials for WGM microlasers and biosensors for which sensing has not yet been demonstrated have a bright future. Of special interest are 2D materials for WGM microlasers such as graphene, transition metal dichalcogenides (WS 2 , MoS 2 ) and tungsten disulphide sandwiched between hexagonal boron nitride 115,116 . WGM single-mode lasing resonance was realised in submicron-sized ZnO rod-based WGM cavities with graphene 117 . Carbon-based materials are prospective materials for WGM lasers and biosensors due to their biocompatibility. In addition to WGM resonators doped with carbon quantum dots and graphene, a diamond WGM 'cold' resonator with a Q-factor of 2.4 × 10 7 has been demonstrated 118 . Nanodiamonds including nitrogen vacancy centres coupled to disk resonators can be used for single-photon generation 119 , with prospects for quantum sensors. The niche of new materials for biosensors is gradually being expanded. Another example is MXenes, which were recently found to have strong sensitivity enhancement for biosensing, gas sensing and humidity sensing due to their metallic conductivity, hydrophilic surface, large specific surface, and wide-band optical absorption. The experimental evidence supports the mechanism by which the characteristics of 2D MXene Ti 3 C 2 T x can enhance the sensitivities of fibre optic biosensors and can be applied to the detection of most trace biochemical molecules [120][121][122] . Sensing with WGM microlasers in living cells and organisms The application of WGM microlasers for in vivo sensing in cells and organisms is often limited to the use of biocompatible materials, geometries and dimensions that do not significantly affect the integrity of the target system. Ideally, the microsensor should not cause cellular stress. In this section, we review various micro-and nanoprobing approaches that are suitable for in vivo sensing. 
First, we discuss some of the most promising photonic techniques for biological micro-and nanoprobing, with a view to their use in WGM sensing. Then, we review the use of active WGM resonators for intracellular lasing and in vivo sensing applications. Single-cell micro-and nanoprobing Tagging and in vivo real-time sensing of physicochemical properties within single living cells is one of the main goals in biosensing. Despite the many challenges posed by the biocompatibility requirement on the microsensors used, several photonic micro-and nanoprobing techniques have already been successfully used for such applications 123 . An example of a successful approach relies on the modification of optical fibres with different sensing nanostructures, which ideally do not compromise cellular viability. Specifically, the insertion of a SnO 2 nanowire waveguide tagged with fluorescent CdSe@ZnS streptavidin-tailored QDs (maximum emission at 655 nm) into the cell cytoplasm has been shown to enable in vivo endoscopy and controlled cargo delivery (Fig. 1) 124 . Optical pumping through a tapered fibre creates an evanescent field located in the region near the tip, where the nanowire is physically cleaved, which is thus suitable for local endoscopy and spectrometry. Another interesting example of a design that enables single-cell probing consists of the use of an active 'nanobeam' photonic crystal nanocavity constituted by a GaAs semiconductor doped with InAs QDs. These nanocavities have been shown to fulfil the biocompatibility requirement through experiments of internalisation using PC3 cancer cells in culture, in which normal cellular functions, such as migration and division, were maintained. Moreover, upon laser pumping, nanocavity spectra of the internalised probes were obtained, thus constituting the first reported example of active optical resonators in a biological environment, to the best of our knowledge. Shambat et al. 125 showed the feasibility of remote optical readout sensing by performing in vitro protein sensing experiments for streptavidin (SA)-biotin binding, which opens the way for in vivo sensing using the described approach (Fig. 2) 125 . An appealing alternative would be the use of WGM ring resonators as active cavities instead of the crystal cavity geometry. On the same note, focusing on enhancing the biocompatibility, the use of so-called 'living nanoprobes' has recently been proposed 126 . This interesting example made use of in situ optical trapping at the tip of a tapered optical fibre, while the tapered fibre was inserted in a medium containing yeast, L. acidophilus and leukaemia cells (Fig. 3). Yeast cells were trapped on the tip upon external laser pumping, and self-assembly continued with the integration of L. acidophilus cells along the optical axis. Light was guided into the target (leukaemia cells), where localised fluorescence and optical signals were detected. These bionanospear probes demonstrate the value of biomimetic approaches towards single-cell sensing, with devices capable of concentrated illumination of subwavelength spatial regions. It is possible that the nanospear approach could be combined with WGM sensing by trapping a WGM microlaser at the tip of the fibre. WGM microlaser-based sensors in living systems: extraand intracellular sensing Photonic nanoprobes are limited to acting as waveguiding media, and the inclusion of active elements (preferably with high biocompatibility) is needed to meet the requirements of single-cell probing. 
The next logical and necessary step would be to generate stimulated emission in or by biological systems, rather than delivering laser radiation externally. In this regard, WGM microlasers can be used for tagging purposes and may provide valuable information on the functionality of a biological system by monitoring changes in resonator properties upon the application of a given stimulus. Microlasers based on biomaterials can be further classified depending on whether the resonator configuration implies extra-or intracellular positioning. A remarkable example of an extracellular microlaser that uses Fabry-Pérot microcavities was presented by Gather and Yun in 2011 127 . In their work, the authors proposed a design based on living cells as a gain medium using E. coli cells that were previously modified so that they express green fluorescent protein (GFP) 128,129 . They used this device to demonstrate that the lasing properties from bacteria can be inherited by transmitting the capability to synthesise GFP upon cell division; this constitutes a crucial step towards large-scale self-sustained biological lasers. In another example, lasing amplification from live Venus protein-expressing E. coli bacterial cells was demonstrated to be feasible using WGM microdroplets 68 . Aside from using fluorescence proteins in extracellular microlasers, several approaches taking advantage of other biological structures were also recently reported, namely, assembly of feedback lasers using B2 vitamin-doped gelatine as a waveguide core 130 , use of nanostructured DNA films doped with fluorescent dyes 131 , fabrication of lasers based on chlorophyll-doped high Q-factor optofluidic ring resonators 89 , or even use of modified virus particles for lasing and biosensing 132 . Intracellular microlasers can open an entirely new avenue towards single-cell sensing, generating stimulated emission via biocompatible WGM cavities from within cells. One of the earliest examples in this regard was the use of polystyrene WGM microresonators (microspheres of 8-10 μm diameter) that allowed real-time sensing of biomechanical forces of endothelial living cells upon endocytosis 43 . Subsequently, using silica-coated microdisks as multiplexed microimaging probes, it was demonstrated that intracellular narrowband laser emission is feasible and enables tagging by spectral barcoding 133,134 . Furthermore, each studied cell type was able to internalise multiple microdisks, thus opening the possibility of multiplexed tagging of a large number of cells, allowing 3D tracking of individual cancer cells in a tumour spheroid and even motility measurements via long-term tracking over several days in mitotic 3T3 fibroblasts, as shown in Fig. 4. An example of the internalisation of size-dispersed core-shell organic@silica microspheres, which can act as NIR WGM microresonators, was recently presented by Lv et al. 135 . The authors were able to distinguish and perform real-time tracking of 106 individual macrophage cells, even during the foaming process, which provided further insight into the dynamics of atherosclerosis, a major cause of cardiovascular diseases. NIR WGM microresonators are a promising example considering in vivo applications because they use nano/microjoule optical pumping and output light in the near-infrared wavelength range, thus considerably reducing the impact on cell physiology. Humar et al. have shown a simple and elegant way of generating nontoxic polyphenyl ether (PPE) oil droplets inside cells (Fig. 5) 63 . 
They used a microinjector connected to a glass micropipette with a 1 μm outer diameter and injected the oil into cells, which formed tiny droplets because of the immiscibility of the oil in the cell cytoplasm. The size of the injected droplets was controlled by the injection time, while the injection pressure was kept constant. Once the droplets were injected into the cells, a free-space coupling method, including an oil immersion objective, was employed for excitation of WGMs and collection of fluorescence 63 . In a very recent example dealing with intracellular WGM microlasers, it was demonstrated that spectral shifts of the WGMs caused by refractive index changes can be correlated with the contractility of an individual cardiac cell in living organisms 5 . Specifically, WGM microbeads were internalised and then acted as intracellular microlasers; their resonant emission wavelengths showed a redshift associated with cardiomyocyte contraction. By tracking the spectral position of the brightest lasing wavelength, a linearly approximated external refractive index (η ext ) could be calculated, and the average η ext changes showed a characteristic increase during cell contractions (Fig. 6a). Three-dimensional images of the studied cells demonstrate that microbeads are in direct contact with a dense network of myofibrils and thus so is the evanescent field of the laser mode (Fig. 6b). Since such proteins are involved in the contractile process, the origin of refractive index variations can be traced back to the fact that cell contractions significantly increase the protein density of the myofibrils. WGM microbead lasers can be readily internalised by different types of cardiac cells and even by zebrafish, for which cardiac contractility measurements were also performed. Moreover, these quantitative transient signals can be used to assess the effect of a calcium channel blocker drug (nifedipine), providing new insights into the mechanobiology of cardiac cells in general (Fig. 6g). In general, approaches for conferring biocompatibility to microresonators make use of surface chemistry manipulation. For example, lipofectamine treatment applied to soft polystyrene active microresonators has been shown to facilitate endocytosis in four different types of cells. The feasibility of this approach has been demonstrated with the use of WGM microlaser-based cell tracking, which revealed broad compatibility with nervous system cells during division (N7 and SH-SY5Y cells), although such cells are generally believed to be nonphagocytic (Fig. 7) 136 . Several biomaterials have been explored for fabrication of WGM intracellular lasers, such as dye-doped aptamermodified silica microresonators 134 , microresonators doped with fluorescent dyes such as rhodamine B (RhB) 78 or fluorescein 43 fabricated from bovine serum albumin (BSA), and biopolysaccharides, among others. Highly biocompatible microcavities built from adipocytes from animal subcutaneous tissue have been demonstrated to enable laser emission under low-power pumping pulsed excitation. This is very suitable for the measurement of (in vivo) variations in the salt concentration in HeLa cells (Fig. 8) 8 . In addition, the conversion between B-and A-type starch structures has been monitored in soft starch granules doped with an organic dye for lasing emission 79 , which shows the potential of these devices for high-sensitivity sensing. Concluding remarks The progress in biosensing with WGM microlasers is impressive. 
The many different ways that WGM microresonator materials and their multiple possible geometries can be combined and integrated with gain media result in a myriad of possible WGM microlaser devices that, as we have seen from this review, can have very exciting applications in biosensing. WGM microlaser sensors can be further optimised for the specific biosensing tasks at hand. For example, by functionalisation of the sensor with receptor molecules, one can achieve molecule-specific biodetection, and by integrating the sensors with microfluidics, one can achieve more controlled sample delivery and more reproducible data capture. Fig. 7: a Left: differential interference contrast (DIC) images of a WGM laser within a migrating cell before, during, and after three cycles of cellular division. The time stamps indicated in the images are in hours:minutes and represent the period elapsed after the first lasing spectrum. Right: lasing spectra of the WGM resonator recorded during the migratory period, i.e., between cell division events. Arrows mark the free spectral range (FSR) between two neighbouring TE modes. b Left: tagging of both daughter cells (B1 and B2) from a mother cell carrying two intracellular lasers (R1 and R2). Right: lasing spectra of resonators inside the mother cell (centre, recorded separately for each resonator but plotted together) and after cell division (top/bottom). All DIC images show an area of 100 × 100 μm². Reproduced from ref. 136. The microgoblet WGM microlaser platform is an excellent example of this. It demonstrates reproducible and multiplexed detection of several different biomarkers via a single device integrated with microfluidics, where each WGM goblet sensor is functionalised with receptor-containing molecular inks. The WGM LCORR platform and WGM microdroplets and beads are other examples of versatile sensor platforms that can be tailored to meet a variety of different sensing needs, including detection of DNA, sensing of health-related protein markers and intracellular single-cell sensing of pH and forces. This review shows that ongoing innovations in the fabrication and integration of microlasers with gain materials and lab-on-chip devices and the exploration of gain materials that provide more robust sensor operations or new functionalities (such as those based on polymers and MXenes, respectively) spur growing research activities on WGM microlaser sensors for real-world sensing applications. Developing these applications will require not only WGM devices with a robust and reproducible sensor response but also WGM sensors that operate in a highly multiplexed fashion, on chip or in solution, and that can be fabricated at low cost and for single use at the point of need. There are a number of important challenges that need to be addressed before robust and clinically relevant in vitro and in vivo sensing applications of WGM microlaser sensors can become a reality. These challenges are mainly related to the lack of chemical stability of some of the cavity materials in water, the need for miniaturisation of the cavity so as not to perturb the biological cell or organism, the difficulty of biomolecular sensing in complex media where one encounters a host of unwanted background signals, and the difficulty of optical detection in highly scattering and absorbing biological media such as human tissue where WGM lasing at near-infrared wavelengths would be most desired.
Methods are needed to discern the specific response of the WGM microlaser sensor to the binding of molecules from a background of resonance shifts due to temperature and bulk refractive index fluctuations. Referencing the measurements by comparing the frequency shifts of WGMs excited in the same microbead cavity may provide a way forward for achieving WGM microlaser sensing over prolonged time periods and under variable experimental conditions. For example, measuring relative frequency shifts in split-mode optoplasmonic WGM sensors has already been used as a sensing concept for highly sensitive detection of single molecules, and these measurements were mostly unaffected by changes in temperature and the host refractive index 137 . Another of the outstanding challenges in WGM microlaser sensing is achieving high detection sensitivity at the level of single molecules. Passive WGM sensors have already demonstrated this ability; this has established them as an important platform to investigate the fundamentals of light-matter interactions, biomolecular structures and dynamics 138,139 . Fig. 8 (continued): c Calculated single-bead diameter map from confocal hyperspectral images corresponding to WGM output. d, e Images of bead-containing HeLa cells (d), and corresponding bead diameter map (e). f Time evolution of the resonant peak position for a bead inside a HeLa cell upon the addition of sodium chloride at t = 0; such exposure to a hypertonic solution produces cell volume shrinkage, which in turn causes the concentrations in the cytoplasm to vary, affecting the refractive index, which produces a shift in the peak wavelength. Scale bars 10 μm. Reproduced from ref. 8. The WGM microlaser sensors can, in principle, become even more sensitive than their passive WGM counterparts. The difficulty lies in resolving the very small spectral shifts of the WGM laser lines on the order of ~10 MHz in single-molecule detection. A way forward may be the use of two laser lines for self-reference measurements within the same resonator and to reduce common mode noise, a concept that has, in part, been demonstrated for detection of very small 15 nm nanoparticles 7 . Split-mode frequency shift detection with passive optoplasmonic WGM sensors that use plasmonic nanoparticles attached to WGM resonators has already enabled single-molecule detection. The passive 'optoplasmonic' WGM counterparts have demonstrated extremely high detection sensitivities 12 . The optoplasmonic WGM sensing concept has been used to detect even very small molecules, such as cysteamine (~77 Da), at attomolar concentrations, as well as single ions, such as single mercury and zinc ions, in aqueous solutions [140][141][142] . The application of the optoplasmonic split-mode single-molecule sensing concept 137 to WGM microlasers should be explored to achieve single-molecule sensitivity. This approach may not only open up single-molecule sensing with WGM microlasers but also establish active WGM resonators as another important research platform to explore biomolecular interactions, their dynamics and the fundamentals of light-matter interactions in active optoplasmonic microcavities. The detection of single molecules inside a single cell using a WGM microlaser would be an exciting goal to pursue.
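To put the ~10 MHz figure mentioned above into perspective, the short sketch below converts such a laser-line frequency shift into the equivalent wavelength shift via Δλ = λ²Δν/c; the 650 nm emission wavelength is an assumed, illustrative value rather than a number taken from this review.

```python
# Minimal sketch: convert a WGM laser-line frequency shift into the equivalent
# wavelength shift, d_lambda = lambda**2 * d_nu / c.
C = 2.998e8            # speed of light (m/s)
wavelength = 650e-9    # assumed illustrative lasing wavelength (m)
d_nu = 10e6            # frequency shift to be resolved (Hz), the value quoted in the text

d_lambda = wavelength**2 * d_nu / C
print(f"10 MHz at {wavelength*1e9:.0f} nm corresponds to {d_lambda*1e15:.1f} fm "
      f"({d_lambda*1e12:.4f} pm)")   # ~14 fm, far below typical spectrometer resolution
```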
Other important in vivo and in vitro diagnostic applications for ultrasensitive WGM microlasers to aim for include implantable sensors that detect health biomarkers and lab-on-chip devices that analyse biological samples, molecule-by-molecule, in ultrasmall (attolitre) detection volumes. To conclude, WGM microlaser sensing is blossoming. This rapidly growing research area has the potential to address many of the most pressing biosensing challenges we are facing today. The coming decade will be the proving ground for biosensors such as WGM microlasers to deal with a myriad of global health and environmental concerns, including the emergence of new viruses and the detection of toxins in our water supplies. We need versatile sensors such as WGM microlasers to be best equipped to tackle these daunting challenges, i.e., to quickly and accurately detect virus particles, health-related biomarkers and novel and harmful toxins in our drinking water.
11,337.2
2021-02-26T00:00:00.000
[ "Engineering", "Biology", "Physics" ]
Brillouin Light Scattering from Magnetic Excitations Brillouin light scattering (BLS) has been established as a standard technique to study thermally excited sound waves with frequencies up to ~100 GHz in transparent materials. In BLS experiments, one usually uses a Fabry–Pérot interferometer (FPI) as a spectrometer. The drastic improvement of the FPI contrast factor to over 10¹⁰ by the development of the multipass-type and the tandem multipass-type FPIs opened a gateway to investigate low-energy excitations (ħω ≤ 1 meV) in various research fields of condensed matter physics, including surface acoustic waves and spin waves from opaque surfaces. Over the last four decades, the BLS technique has been successfully applied to study collective spin waves (SWs) in various types of magnetic structures including thin films, ultrathin films, multilayers, superlattices, and artificially arranged dots and wires using high-contrast FPIs. Now, the BLS technique has been fully established as a unique and powerful technique not only for determination of the basic magnetic constants, including the gyromagnetic ratio, the magnetic anisotropy constants, the magnetization, the SW stiffness constant, and other features of various magnetic materials and structures, but also for investigations into coupling phenomena and surface and interface phenomena in artificial magnetic structures. BLS investigations on the Fe/Cr multilayers, which exhibit ferromagnetic-antiferromagnetic arrangements of the adjacent Fe layers' magnetizations depending on the Cr layer's thickness, played an important role in opening the new field known as "spintronics" through the discovery of the giant magnetoresistance (GMR) effect. In this review, I briefly surveyed the historical development of SW studies using the BLS technique and the theoretical background, and I concentrated on our BLS SW studies performed at Tohoku University and Ishinomaki Senshu University over the last thirty-five years. In addition to the ferromagnetic SW studies, the BLS technique can also be applied to investigations of high-frequency magnetization dynamics in superparamagnetic (SPM) nanogranular films in the frequency domain above 10 GHz. One can excite dipole-coupled SPM excitations under external magnetic fields and observe them via the BLS technique. The external field strength determines the SPM excitations' frequencies. By performing a numerical analysis of the BLS spectrum as a function of the external magnetic field and temperature, one can investigate the high-frequency magnetization dynamics in the SPM state and determine the magnetization relaxation parameters. Introduction Since the early 1960s, Brillouin light scattering (BLS) has been widely applied to study acoustic properties near ferroelectric and ferroelastic phase transitions [1]. Usually, a Fabry-Pérot interferometer (FPI) has been used as a spectrometer for BLS. For a traditional single-pass FPI, the contrast factor C₁, which is defined by the ratio between the maximum transmission and the minimum transmission, was limited to about 10³ at most. A traditional FPI with a higher contrast factor is a dark FPI with lower transmission efficiency. For successful BLS studies, high-quality transparent samples, which have no inclusions, polished surfaces, and dimensions larger than several mm, are strongly required. Reviews on the early stage of BLS from SWs had already been given by Borovik-Romanov and Kreines [7], Patton [8], Sandercock [9], and Grünberg [10] by the mid-1980s.
Hillebrands gives a list of publications on SW BLS up to 1999 [11]. Since the mid-1970s, the BLS technique has been intensively applied to study spin waves (SWs) from opaque surfaces. The first observation of BLS from SWs was reported by Grünberg and Metawe from the ferromagnetic semiconductor EuO (T_C = 69 K) [12]. Sandercock and Wettling reported SW BLS from Fe and Ni at room temperature [13]. Many SW BLS results have been subsequently reported. Readers can find them in the list of references [11]. Thanks to the developments of high-quality thin film preparation techniques, such as the sputtering technique, the MBE technique, and so on, the BLS technique has been successfully applied to study collective SWs in various types of magnetic structures (thickness of L), including thin films, ultrathin films, multilayers, superlattices, and artificially arranged dots and wires [14][15][16][17][18][19][20][21]. For BLS from metallic surfaces, it is important to recognize that there is an essential difference between BLS phenomena from transparent materials and from opaque surfaces. Visible laser light penetrates at most a few hundreds of angstroms from the illuminated surface due to the skin effect (in other words, the absorption effect) [22]. The skin effect strongly violates the momentum conservation law for light scattering. For transparent materials, the momentum conservation law is fully satisfied during the scattering process. For the description of the optical property of metals (for convenience's sake, this example is of an isotropic metal), one should introduce a complex refractive index (n, κ). Here, n is the real part and κ is the imaginary part of the refractive index. The imaginary part κ is usually larger than the real part n for visible light in metals. Then, one should take into account the large uncertainty of ∆q⊥/q⊥ ~ 2κ/n in the momentum conservation law for the surface normal (perpendicular) component q⊥ of the light momentum. The in-plane (surface parallel) component Q// of the wave vector is defined as Q// = (2π/λ)(sin ϑ_in + sin ϑ_s). Here, λ is the vacuum wavelength of the laser light, and ϑ_in and ϑ_s are the incident and scattering angles measured from the surface normal. Usually, the standard backscattering geometry is employed, in which one sets ϑ_in = ϑ_s = ϑ as shown in Figure 1. In contrast to the momentum conservation law for the perpendicular component q⊥, the momentum conservation law for Q// is always satisfied, just the same as for transparent materials. It is convenient for later discussions to define the surface dispersion parameter, Q//L. The magnetic structures of thickness L are deposited on an appropriate substrate. Within the skin depth of a magnetic structure, both the SWs and surface acoustic waves (SAWs; the Rayleigh and Sezawa waves) coexist and can be simultaneously observed in a BLS spectrum. For a thin magnetic structure which satisfies the condition Q//L < 1 (in this example, this condition is satisfied for L less than a few tens of nanometers for λ = 5320 Å (1 Å = 10⁻⁸ cm = 0.1 nm) laser light and ϑ = 45°), the SAW frequencies are merely controlled by the elastic properties of the substrate and are independent of the external magnetic field [23]. Hence, the SAWs are not the subjects of present interest. By examining the external magnetic field dependence of the observed peaks, one can readily identify the SAW peaks. As discussed later, the selection rule for SW scattering helps eliminate the SAW contributions in a BLS spectrum.
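As a quick numerical companion to the geometry just described, the sketch below evaluates the in-plane wave vector for the standard backscattering condition ϑ_in = ϑ_s = ϑ, together with the corresponding in-plane wavelength and the film thickness below which Q//L < 1. The numbers (λ = 5320 Å, ϑ = 45°) are the ones quoted in the text; the kinematic expression is the standard one and is assumed to correspond to the equation elided above.

```python
import math

# Sketch of the BLS backscattering kinematics (assumed standard form):
#   Q_par = (2*pi/lam) * (sin(theta_in) + sin(theta_s)),  theta_in = theta_s = theta
lam = 5320e-10        # vacuum wavelength of the laser (m), 5320 Angstrom
theta = math.radians(45.0)

Q_par = (2 * math.pi / lam) * (2 * math.sin(theta))   # in-plane wave vector (1/m)
lam_par = 2 * math.pi / Q_par                          # in-plane SW wavelength (m)
L_max = 1.0 / Q_par                                    # thickness for which Q_par * L < 1

print(f"Q_par   = {Q_par:.3e} 1/m")
print(f"lam_par = {lam_par*1e10:.0f} Angstrom")   # ~3760 A, consistent with the ~3500 A quoted later
print(f"Q_par*L < 1 for L < {L_max*1e9:.0f} nm")  # a few tens of nanometers, as stated in the text
```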
Because the SWs interact with laser photons within the skin depth from the laser-illuminated surface, BLS can give us information on both bulk and surface-localized SWs. Note that the tail portion of the bulk SW within the skin depth reflects the surface's pinning conditions for the SWs. The BLS technique can directly determine the pinning states. This is one of the most important reasons for the effectiveness of SW BLS in thin film magnetism studies. As already mentioned, the in-plane momentum conservation law (and also the energy conservation law) is satisfied, and one can expect sharp peaks for the surface-localized SWs in a BLS spectrum. On the other hand, one can expect broad bulk SW peaks due to the large uncertainty of Δq⊥ and possibly the bulk SW dispersion from the exchange coupling. Theory Theoretical developments on SWs and SW BLSs from magnetic films were another motive force. Damon and Eshbach have already discussed magnetostatic SWs in a ferromagnetic slab and discussed the surface-localized SW now known as the Damon-Eshbach (DE) mode by employing standard magnetic boundary conditions [24].
Beyond the magnetostatic framework of the film SW theory, the dipole-exchange framework was developed [25]. In this approach, one must introduce additional boundary conditions. These are known as the Rado-Weertman boundary conditions [26] and the Hoffman boundary conditions [27]. These are related to the SW pinning effects at the interfaces. In the late 1970s, many theoretical efforts were devoted to calculating a SW BLS spectrum from an opaque surface. Cottam developed a BLS theory for a finite-thickness ferromagnetic slab in terms of the response functions within the magnetostatic framework [28]. Another description of the BLS spectrum calculation from opaque semi-infinite ferromagnetic surfaces was published by Camley and Mills (CM) in 1978 [29]. Readers can refer to reviews by Cottam [30] and Mills [31]. Camley, Rahman, and Mills successively developed a quantitative theory for SW BLS from a ferromagnetic thin film, taking into account both the exchange coupling and the surface pinning conditions [32]. In spite of excellent agreement between the observed and calculated standing spin-wave (SSW) BLS spectra, their theory was too complicated. Another effort to calculate the SSW BLS spectrum was proposed by Cochran and Dutcher [33]. With these theoretical efforts, a quantitative comparison between the observed and calculated SW BLS spectra became possible. Other efforts were devoted to calculating SW frequencies in thin magnetic films beyond the magnetostatic approximation and in layered magnetic structures. Rado and Hicken calculated the SW frequencies from an epitaxial Fe thin film on a W substrate taking into account the exchange coupling, the magnetic anisotropy energy (MAE), and the surface pinning energies [34]. Grünberg discussed SWs in a trilayer, in which two magnetic layers sandwich a nonmagnetic spacer layer, by adapting the magnetostatic boundary conditions at the top and bottom surfaces and each interface [35,36]. Grünberg and Mika extended the trilayer approach to more stacked multilayer films [37]. Their approach was quite intuitive, but it requires handling a large boundary condition determinant (BCD) as the number of stacked layers increases. The transfer matrix method was developed by Barnas and found to be effective in treating the SWs in magnetic superlattices [38]. On the other hand, Camley, Rahman, and Mills developed a theory of SWs in a superlattice consisting of ferromagnetic and nonmagnetic layers within the magnetostatic framework [39]. Although the theory by Camley, Rahman, and Mills was clear in the thread of the argument and much easier to handle than the theory by Grünberg and Mika, it seems to be difficult to extend beyond the magnetostatic framework. Vohl, Barnas, and Grünberg developed a SW theory based on the dipole-exchange model in which the interlayer exchange coupling between ferromagnetic layers across the nonmagnetic spacer layer was taken into account [40]. I will give brief outlines of these theories as I discuss each subject. For SWs in magnonic crystals, which consist of artificial periodic structures, micromagnetic calculations have been widely utilized instead of analytical descriptions [41]. Spin Wave Light Scattering as Dynamic Magneto-Optic Effects The most dominant interaction between the laser photons and spin waves is not the Zeeman interaction but the electro-dipole interaction, which is expressed through a magnetization-dependent dielectric constant δε(M) [42].
A phenomenological description of SW scattering with the magnetization-dependent dielectric constant was developed by Wettling, Cottam, and Sandercock [43]. For brevity's sake, we assume there is a transparent magnet which belongs to cubic (O_h) symmetry and has a dielectric constant ε₀. The spontaneous magnetization M_z is directed along the z-axis. Because the spontaneous magnetization appears as a result of breaking the time-reversal symmetry, and not due to symmetry lowering as in the case of ferroelectrics, the dielectric tensor δε(M) should be invariant under the O_h symmetry operations. For example, we can apply the C_4z operation, which is a π/2 rotation around the z axis, to δε(M_z). By comparing the matrix elements before and after the operation, we can readily obtain δε_xx(M_z) = δε_yy(M_z) ≠ δε_zz(M_z) and δε_xy(M_z) = −δε_yx(M_z) (Equations (4) and (5)). We can expand δε(M_z) into a power series of the magnetization M_z with complex coefficients K and G (Equation (6)). The dielectric constant should obey Onsager's reciprocal theorem [43], δε(M_z)_αβ = δε(−M_z)_βα, and we obtain K_αβz = −K_βαz and G_αβzz = G_βαzz (Equation (7)). It is obvious that the diagonal elements of the dielectric matrix should be even functions of M_z, and that the expansion coefficient K should satisfy corresponding symmetry relations. It is known that a second-order tensor can be decomposed into the Hermitian part (ε^H_αβ = ε^H*_βα) and the anti-Hermitian part (ε^A_αβ = −ε^A*_βα); here, the asterisk means the complex conjugate. Combining with Onsager's theorem [44], one can readily obtain Equations (10) and (11). From Equations (6), (7), (10), and (11), one obtains K^H_αβ,γ = K″_αβ,γ, G^H_αβ,γδ = G′_αβ,γδ, K^A_αβ,γ = K′_αβ,γ, and G^A_αβ,γδ = G″_αβ,γδ. Note that the real and the imaginary parts of the expansion coefficients K and G are fully separated from each other. Because we are now considering a transparent magnet, there is no optical absorption; the dielectric matrix should contain only the Hermitian components, and it is given by Equation (13). Here, we have introduced G′_11 = G′_zz,zz, G′_12 = G′_xx,zz = G′_yy,zz, and K″_63 = K″_xy,z in accordance with the conventional tensor index assignment. Although Equation (13) is the fundamental equation to discuss the magneto-optic effects, it depends only on the static magnetization. For discussions on SW scattering from opaque magnets, we should take into account both the contributions from the small-amplitude SW variables m_x and m_y and the anti-Hermitian components of the dielectric matrix, as well as the Hermitian components. We can replace M_z in Equation (6) by the magnetization vector M and expand Equation (6) up to the first-order terms of m_x and m_y. In accordance with the angular momentum operators in quantum mechanics [45], we can introduce the ladder operators m_± = m_x ± im_y. Then, the m_− operator describes the SW creation (Stokes) process, and the m_+ operator describes the SW annihilation (anti-Stokes) process. Of course, one can introduce the magnon creation and annihilation operators through the Holstein-Primakoff representation [46]. Finally, we obtain the dielectric constant matrix that describes SW scattering in terms of the SW operators m_± (Equation (14)), in which the coupling coefficients ζ are defined. Here we used the tensor index assignment of G_44 = G_xz,xz = G_yz,yz. Based on Equation (14), we can summarize the characteristic features of light scattering from SWs as follows: 1.
The polarization of the SW scattered light should be cross-polarized from the polarization of the incident light. For example, we can consider the p-polarized incident beam in Figure 1 with the polarization vector e_p = (sin ϑ, −cos ϑ, 0). Then, the scattered beam should be s-polarized with the polarization vector e_s = (0, 0, 1), and vice versa. Because the SAW scattering is observed in the p-p or s-s scattering geometry, we can eliminate the SAW structure from a SW BLS spectrum by inserting an analyzer in front of the FPI. 2. The Stokes and anti-Stokes scattering intensities are generally different. In the above example, this can be expressed in terms of the ζ^(±) coefficients. Therefore, we can observe an asymmetrical SW spectrum around the elastic Rayleigh peak. This is in contrast with the phonon BLS spectrum, which is symmetrical around the Rayleigh peak. Furthermore, when we reverse the spontaneous magnetization M_z to −M_z by changing the polarity of the magnetic field, the Stokes and anti-Stokes spectra are interexchanged. Figure 2 gives a schematic illustration of the reason why the Stokes and anti-Stokes intensities are different. When the dynamical magnetization m(t) rotates around the static magnetization M_z, the Faraday geometry k ∥ m(t) and the Voigt geometry k ⊥ m(t) coexist during one cycle. Therefore, in SW scattering, two different magneto-optic effects simultaneously contribute to and interfere with each other. For a magnet with the real refractive index n and negligibly weak optical absorption, the real and imaginary parts of the expansion coefficients in Equation (6) can be related to the magneto-optic coefficients [43]. 3. Because the matrix components ζ^(±)_13 and ζ^(±)_23 are not equal to the ζ^(±)_31 and ζ^(±)_32 components, the p→s scattering and the s→p scattering intensities will be different in general. Because of the Brewster angle, the p-polarized incident arrangement is preferable to the s-polarized incident arrangement in Figure 1. A quantum mechanical description of the coupling coefficient K was given by Fleury and Loudon [46]. Hereafter, we use the ħ = 1 unit for convenience's sake. In accordance with their theory, let us consider a very simple example of a 3d electron magnetic crystal placed in the electromagnetic radiation specified by the vector potential A.
We assume that the ferromagnetic ground state is specified by L = 0 and S = 1/2, and we also assume the L = 1 and S = 1/2 intermediate states. We took into account the orbital quenching effect at the ground state. The intermediate states split into a J = 3/2 quartet and a J = 1/2 doublet with an energy separation of 3ζ/2 due to the spin-orbit coupling ζL·S. The Hamiltonian interaction between an electron at r within a magnetic ion and the uniform or long wavelength radiation in second quantized notation is given as follows: where a and a + are the photon annihilation and creation operators, ε λ is the polarization vector of the radiation, and we omitted the coefficient of the vector potential, which is not important for our discussion. We use the |L, L z ⟩|S, S z ⟩ notation for electron wave functions. The wave functions for the ground state and for the spin excited state are given by |G⟩ = |0, 0⟩|1/2, 1/2⟩ and |G * ⟩ = |0, 0⟩|1/2, −1/2⟩, respectively. The intermediate wave functions responsible for the dipole transition are given by [45] as follows: (25) and for the quartet, and (27) and (28) for the doublet, respectively. The photon state changes from the initial state given by |n 1 ⟩|n 2 ⟩ to the final state given by |n 1 − 1⟩|n 2 + 1⟩. Here, subscripts 1 and 2 refer to the incident and scattered field quantities. The transition from the |G⟩ state to the |G * ⟩ state is interpreted as the SW creation process (the Stokes process). We can then perform perturbation calculations on the SW creation process with these wave functions and the Hamiltonian interaction, and obtain the final result given by [22,42,47] as below: where ε CF is the crystal field splitting energy given by ε J=1/2 − ε G . It is important to recognize that the spin-orbit coupling constant ζ is the key parameter to determine the scattering efficiency from a 3d electron magnet. Experimental The experimental setup for SW BLS is a rather standard one for conventional BLS studies with a 3 + 3 pass tandem FPI, except for the backscattering geometry. Figure 1 shows a standard scattering geometry for SW BLS and the coordination system employed in our SW calculations. The magnetic field H is applied along the z-direction within the plane of the sample and perpendicular to the x-z scattering plane defined by the incident and scattered light beams. We always measure the SWs propagating perpendicular to the magnetic field. We can employ three different scattering geometries: (A) magnetic fields applied in the in-pane easy direction (shown in Figure 1), (B) magnetic fields applied in the in-plane hard axis, and (C) a constant magnetic field rotated from the easy direction to the hard direction. Note that the magnetization M and the external magnetic field H are not collinear with each other in the (B) and (C) geometries. The incident angle ϑ is measured from the surface normal along the x direction and is chosen to be the same as the scattered angle. The incident angle ϑ can be arbitrarily changed between 25 • and 65 • . The scattering geometries (B) and (C) can be employed for the in-plane MAE studies. In general, scattering intensity from the surface SW mode increases for larger incident angles, and in contrast, the bulk SW mode intensities increase for smaller angles. At the early stage of our SW BLS studies, we used a laboratory-constructed Sandercocktype 3-pass (or 5-pass) FPI as an interferometer, depending on the surface quality of the prepared films. 
The FPI was assembled at the machine shop of the Research Institute for Scientific Measurements (RISM), Tohoku University [48,49]. For some sputtered films, we observed SW spectra by using the 3-pass FPI [50][51][52]. The spectra were excited by the 5145 Å or 4880 Å line of an argon ion laser in single-mode operation and detected by a thermoelectrically cooled photomultiplier tube (PMT) for a dark count of less than 1 cps. Later, we constructed a Sandercock-type 3 + 3 pass vernier-tandem FPI in 1994 at the machine shop of RISM, Tohoku University [53], and the multipass type FPI was replaced by the tandem FPI. The 5145 Å or 4880 Å line of argon ion laser can be replaced by the 5320 Å or 4730 Å line of diode-pumped solid-state (DPSS) single-mode laser. Various types of DPSS laser are now commercially available. The DPSS laser is much easier to use and also more economical compared with a water-cooled argon ion laser. The PMT can be also replaced by an avalanche photo-diode (APD) detector, which possesses higher quantum efficiency than the PMT detectors. In our system, a laser beam was introduced by a 45 • right-angle prism with 3 × 3 mm 2 input-face into an optical axis of the FPI. A camera lens (50 mm f 1.2) focused the beam on the sample and also collected the backscattered light. We used a spatial filter that consisted of two camera lenses (135 mm f 2.8 and 50 mm f 1.2) and a 200 µm pinhole. To eliminate the scattering from the SAWs and most of the intense Rayleigh peaks and FPI ghost peaks, a cross-polarizing beam splitter (extinction ratio = 1/200) was inserted in front of the FPI. We found that the elastically scattered light was still intense enough in many cases even after the polarization selection. In order to protect the highly sensitive PMT or APD from optical damage due to elastically scattered light and ghosts, we introduced a tandem acousto-optic modulator (AOM), which was activated around the Rayleigh and ghost peaks as an intensity attenuator. We also added a mechanical shutter, which was activated only when the peak intensity was getting higher than a preset level. In some cases, we experienced that a sample exposed to an intense laser beam (even less than 50 mW) in air was easily damaged by the local heating and oxidizing effects. Therefore, we found that we should place the sample inside a vacuum chamber or under an appropriate atmosphere during BLS measurements. In order to make possible the degree of atmosphere control and also the low-temperature studies under magnetic fields, we prepared a liquid He cryostat which could be used as a vacuum chamber. Furthermore, a closed-cycle refrigerator was used to generate low temperatures down to 15 K. In order to perform variable-temperature and magnetic field studies, we assembled a refrigerator tip suitable for BLS study under magnetic fields of up to 4.5 kOe at the RISM machine shop. During a spectral accumulation time over several hours, the lowest temperature of 15 K could be fully stabilized within ±0.5 K. Figure 3 shows typical BLS spectra obtained from nanogranular Co-Al-O films of 1~2 µm thickness prepared by means of radio frequency-reactive magnetron sputtering onto glass substrates at the Research Institute of Electric and Magnetic Materials (RIEMM) [54]. Semi-Infinite Magnet These spectra were excited by the p-polarized 4880 Å line from an Ar + laser operated in a single-cavity mode with the output power below 30 mW to protect films from local heating by the laser beam. 
Typical spectrum accumulation time was about 4 h. An external magnetic field of H = 2.0 kOe was applied parallel to the film plane and perpendicular to the scattering plane (x-z plane). The incident angle was chosen to be the same as the scattered angle (ϑ = 45°) in these measurements. We observed very similar BLS spectra from a sputtered Fe film and from Fe-Al-O nanogranular films deposited on oxidized Si(001) substrates at the Institute of Multidisciplinary Research for Advanced Materials (IMRAM), Tohoku University. These spectra were excited by the p-polarized 5320 Å line from a DPSS laser [55]. The TM-Al-O nanogranular films (TM = Fe, Co), which consist of crystalline TM particles several tens of angstroms in diameter, were surrounded by the Al-O grain boundary. The Al-O grain boundary was of ~10 Å in thickness. The BLS technique observes SWs with an in-plane wavelength λ// defined by 2π/Q//. The in-plane wavelength λ// (typically ~3500 Å) is much longer than the characteristic lengths of granules but much shorter than the in-plane lengths of magnetic structures. For long-wavelength SWs observed with the BLS technique, the real magnetic structure may not be important, and the magnetic properties averaged over in-plane wavelength λ// within the laser-illuminated area determine the SW response in BLS spectra. The peak assignment of these spectra was quite obvious as will be discussed soon. The labels DE and B refer to the Damon-Eshbach (DE) surface wave and the bulk SWs, respectively. Note that the DE peak appears only on the anti-Stokes side in these spectra in contrast to the bulk SW peaks. When we changed the polarity of the external magnetic field, the DE peak appeared on the Stokes side of a spectrum. Another interesting observation is the line shape of the bulk peaks. The bulk peaks are asymmetric with tails to higher-frequency sides. This is due to the relaxation of the momentum conservation law as already mentioned and the SW dispersion (energy as a function of the wave vector) [54].
In order to understand the characteristic features of the SW spectra and the magnetic field dependence of the SW frequencies, for qualitative discussions and determination of the magnetic constants, I will describe a standard magnetostatic theory in some detail for readers not familiar with SW BLS [22,56]. Because we are interested in the magnetization dynamics below 10¹¹ Hz, which is well below the optical frequency of ~6 × 10¹⁴ Hz, we ignore the time-dependent terms in Maxwell's equations. We consider a magnetic film of the magnetization M and thickness L prepared on a nonmagnetic substrate. For the sake of convenience, we can ignore the exchange coupling and the magnetic anisotropy energy (MAE) at this stage, and then set the surfaces as x = 0 and −L. Let us introduce the SW variables m as the small-amplitude precession motion around the static magnetization M and the demagnetization field h. The Landau-Lifshitz (LL) equation of motion on M(t) = M + m(t) is given by Equation (30). The effective magnetic field H_eff consists of the external magnetic field H and the demagnetization field h (Equation (31)). The demagnetization field h should satisfy Maxwell's magnetostatic equations (Equation (32)). The first equation of Equation (32) guarantees the introduction of a magnetic scalar potential φ satisfying h = −∇φ (33). We consider SWs propagating along the (0, cos θ, −sin θ) direction, measured from the y-axis within the film plane, and we assume plane-wave-type space-time dependence for the dynamical variables (Equation (34)). Furthermore, the continuity conditions of the variables φ and b_x = h_x + 4πm_x at x = 0 and −L should be satisfied. Outside of the magnet, the scalar potentials are given by Equation (35); here, we should choose the exp(−Q//x) term for x ≥ 0 and the exp(Q//x) term for x ≤ −L. Inside the magnet, we have Equation (36). We can solve the LL equation in terms of the susceptibilities, m_x(ω) = χ_xx(ω)h_x + χ_xy(ω)h_y and m_y(ω) = χ_yx(ω)h_x + χ_yy(ω)h_y (Equations (37) and (38)). Combining Equations (32) and (36)-(38), we obtain Equation (39). The perpendicular wave vector q⊥ is found from Equation (40). Because q⊥ should be real for the bulk mode, we obtain a SW band given by Equation (41). The SW band became a single level at θ = 0 (along the y-axis) and gradually spread wider as θ approached π/2 (along the z-axis), as shown in Figure 4a. It is important to note that at the upper-bound SW frequency, the perpendicular component q⊥ became much larger than the in-plane component Q//. On the contrary, we have q⊥ = 0 for the lower-bound frequency. There is another type of SW solution which satisfies the magnetic continuity conditions at the boundaries. This solution is the surface-localized SW and is known as the Damon-Eshbach (DE) mode.
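For readers who want to reproduce the qualitative behaviour of the bulk band just described (a single level at θ = 0 that broadens toward θ = π/2, Figure 4a), the sketch below evaluates the standard magnetostatic band edges, with the lower edge taken at q⊥ = 0 and the upper edge at q⊥ ≫ Q//. This explicit form is my reading of Equation (41), not a reproduction of it; the Fe-like constants are the ones quoted further below for Figure 4.

```python
import math

# Sketch: magnetostatic bulk SW band edges versus the in-plane propagation angle theta
# (theta measured from the y-axis; M along z). Assumed standard result consistent with
# the behaviour described for Eq. (41): q_perp = 0 sets the lower edge, q_perp >> Q_par
# the upper edge.
g, H, four_pi_M = 2.09, 4.0, 21.0           # [-], kOe, kG (Fe-like constants from the text)
gamma_over_2pi = 1.3996 * g                  # GHz per kOe

def nu(sin2_theta_q):                        # nu = (gamma/2pi)*sqrt(H*(H + 4piM*sin^2(theta_q)))
    return gamma_over_2pi * math.sqrt(H * (H + four_pi_M * sin2_theta_q))

for theta_deg in (0, 30, 60, 90):
    th = math.radians(theta_deg)
    lower = nu(math.cos(th) ** 2)            # q_perp = 0
    upper = nu(1.0)                          # q_perp >> Q_par
    print(f"theta = {theta_deg:2d} deg: band {lower:5.1f} - {upper:5.1f} GHz")
# theta = 0 gives a single level (~29 GHz); the band widens toward theta = 90 deg.
```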
For discussions of the surface mode, it is convenient to rewrite Equations (34) and (40) as follows: and q ⊥ = 1 + 4πχ xx cos 2 θ 1 + 4πχ xx By eliminating the potential amplitudes φ ∓ outside of the magnet using the boundary conditions, we obtain a set of homogeneous equations on φ(±) to determine the surfacelocalized SW frequency. Per the requirement for nontrivial solutions of the homogeneous equations, we obtain: Here, we define β = exp(−2Q // L). To obtain the DE mode frequency, we must solve numerically Equation (44). Fortunately, we can obtain an analytical expression of the DE mode frequency in following two cases. Case 1: θ = 0 (α = 1) For a film with a thickness larger than L ∼ = λ/2, the exponential term in Equation (45) can be safely neglected. It means that a film thicker than L ∼ = λ/2 can be treated as a semi-infinite magnet. After some calculations, we obtain the below: Equation (47) at θ = 0 gives the frequency which is exactly the same as the frequencies given by Equation (45) because of β = 0. However, we should check whether these modes are truly eligible for the surface mode or not. From the boundary conditions, we obtain Because the semi-infinite magnet occupies the space below x ≤ 0, the perpendicular component q ⊥ should be positive for the eligible surface mode. By substituting the frequencies in Equation (47) into Equation (48), we find that the positive frequency in Equation (47) always gives q ⊥ < 0 and fails to satisfy the localization condition. We should abandon the positive frequency solution. Meanwhile, the negative frequency gives Therefore, the negative frequency mode propagating within the critical angle given by Equation (50) can be the surface-localized DE mode [24]: Figure 4a shows the propagation angle θ development of the bulk SW band given by Equation (41) and the DE mode frequency given by Equation (47). At the critical angle θ C , the DE mode frequency coincided with the upper bound of the bulk SW band. In this calculation, we used a set of the magnetic constants suitable for Fe: g = 2.09, H = 4.0 kOe, and 4πM = 21.0 kG. Figure 4b shows the critical angle θ C as a function of the external magnetic field for the same set of the magnetic constants. The critical angle was gradually squeezed with the increasing magnetic field. The inset displays a schematic illustration of the nonreciprocal propagation characteristics of the DE mode. The DE mode always propagates from the left to the right across the magnetization as indicated by the arrows. At an angle θ beyond the critical angle θ C , the attenuation factor for the DE mode q ⊥ in Equation (49) becomes negative, and the DE mode is no longer allowed above θ C . The existence of the critical angle is the reason why the DE peak appears on only one side of a SW BLS spectrum. The nonreciprocal propagation characteristics of the DE mode are schematically illustrated in Figure 4c. Note that the counterpart of the DE mode propagating along the opposite direction is located on the bottom surface of the magnet. The incident laser photon can never interact with the counterpart DE mode because of the absorption effect of visible light, as we have already mentioned. The DE mode frequency was always above the bulk SW frequency band. At the critical angle, the DE mode frequency was just on the upper bound of the bulk SW band (see Equation (41)). Sandercock and Wettling nicely presented how the DE mode behaves as the propagation angle approaches the critical angle [13]. 
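The role of the parameter β = exp(−2Q//L) can be made concrete with a small numerical sketch. Below, the DE frequency at θ = 0 is evaluated with the commonly used film expression ν_DE = (γ/2π)√((H + 2πM)² − (2πM)² e^(−2Q//L)), which is my assumption for the content of Equations (45)-(47): it reduces to the bulk-band top for Q//L → 0 and to the semi-infinite value (γ/2π)(H + 2πM) for thick films. The constants are the Fe values quoted in the text.

```python
import math

# Sketch of the DE-mode frequency versus the surface dispersion parameter Q_par*L at
# theta = 0, using the commonly quoted film formula (assumed form of Eqs. (45)-(47)):
#   nu_DE = (gamma/2pi) * sqrt((H + 2piM)**2 - (2piM)**2 * exp(-2*Q_par*L))
g, H, four_pi_M = 2.09, 4.0, 21.0            # [-], kOe, kG (Fe constants from the text)
gamma_over_2pi = 1.3996 * g                   # GHz per kOe
two_pi_M = four_pi_M / 2.0

def nu_de(QL):
    beta = math.exp(-2.0 * QL)
    return gamma_over_2pi * math.sqrt((H + two_pi_M) ** 2 - two_pi_M ** 2 * beta)

band_top = gamma_over_2pi * math.sqrt(H * (H + four_pi_M))
print(f"bulk band top            : {band_top:5.1f} GHz")
for QL in (0.01, 0.5, 1.0, 3.0, 10.0):
    print(f"nu_DE at Q_par*L = {QL:5.2f}: {nu_de(QL):5.1f} GHz")
# Q_par*L -> 0 recovers the band top (~29 GHz); thick films approach
# (gamma/2pi)*(H + 2piM) ~ 42 GHz, the semi-infinite DE value.
```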
These results by Sandercock and Wettling clearly show that the DE mode decays into the bulk SW band, and no surface mode is allowed beyond the critical angle. Figure 5 shows the SW frequencies as a function of the magnetic field for Fe₆₄Al₁₉O₁₇ nanogranular film [54]. The inset shows a BLS spectrum observed at H = 0.5 kOe. Because we have not performed the polarization selection for the scattered beam, SAW peaks appear as a pair of small peaks just below the B-peaks. I will soon explain the solid lines, the broken line, and the dots and dashes. In these measurements, both the bulk and DE modes were propagating perpendicular to the magnetization, and their frequencies are given by Equations (51) and (52). Because the frequency shifts ∆ν_B and ∆ν_DE were directly obtained from the BLS spectrum, both frequencies should be reproduced by the same magnetic constants as a function of the magnetic field. However, sometimes we encountered somewhat different 4πM values for the bulk and DE modes. The broken line and the dots and dashes in Figure 5 are the calculated bulk SW frequencies using Equation (51) by changing the 4πM value. It is clear that Equation (51) fails to reproduce the observed magnetic field dependence of the bulk SW frequency. Because our model is oversimplified in this first attempt, we will try to include the exchange coupling term into Equation (51). For long-wavelength SWs, the exchange coupling can be represented by the differential operator H_ex = −D∇². Because the external magnetic field and the magnetization are collinear, we can replace the external magnetic field H with H + DQ². Here, Q is the wave vector of the SW, and D is the SW stiffness constant, related to the exchange stiffness constant A through the relation D = 2A/M. With the exchange field term, Equation (51) is replaced by Equation (53). The solid lines in Figure 5 are calculated bulk SW frequencies with the exchange field term (DQ² = 0.34 kOe) in Equation (53). Although the exchange field value is usually much smaller than the other fitting parameters, typically less than 0.5 kOe, agreements between the observed and calculated bulk SW frequencies are excellent. In spite of the importance of the exchange field term for qualitative fitting of the bulk SW frequencies, as shown in Figure 5, we cannot determine the D constant from the fitting because we have no information on the SW wave vector Q in Equation (53) due to the relaxation of the momentum conservation law. The DE mode frequency is rather insensitive to the exchange term because of the linear dependence of the frequency on the external magnetic field.
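A minimal numerical sketch of the exchange-field correction discussed above is given below. It uses the form ν_B = (γ/2π)√((H + DQ²)(H + DQ² + 4πM)) that the text ascribes to Equation (53), with the quoted exchange field DQ² = 0.34 kOe; the values of g and 4πM are placeholders, since the fitted constants of the Fe-Al-O film are listed in a table that is not reproduced here.

```python
import math

# Sketch of the bulk SW frequency with and without the exchange-field term,
# following the form the text ascribes to Eqs. (51)/(53):
#   nu_B = (gamma/2pi) * sqrt((H + D*Q**2) * (H + D*Q**2 + 4*pi*M))
g = 2.1            # placeholder g-factor (assumed, not from the text)
four_pi_M = 13.0   # placeholder 4piM in kG (assumed, not from the text)
DQ2 = 0.34         # exchange field D*Q^2 in kOe, as quoted for the Fe-Al-O film
gamma_over_2pi = 1.3996 * g   # GHz per kOe

def nu_bulk(H_kOe, dq2=0.0):
    Heff = H_kOe + dq2
    return gamma_over_2pi * math.sqrt(Heff * (Heff + four_pi_M))

for H in (0.5, 1.0, 2.0, 4.0):
    print(f"H = {H:3.1f} kOe: nu_B = {nu_bulk(H):5.2f} GHz (no exchange), "
          f"{nu_bulk(H, DQ2):5.2f} GHz (DQ^2 = {DQ2} kOe)")
# The 0.34 kOe exchange field matters most at low fields, which is why it is needed
# to reproduce the low-field bulk frequencies in Figure 5.
```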
Furthermore, the existence of the DE mode was derived from the boundary conditions, and the negligibly small DQ 2 // term was completely masked by the other quantities in Equation (52). Up to this stage, we considered soft ferromagnetic materials with negligibly small MAE. For such small-MAE cases, it may be an easy-or hard-axis type, and we can readily align the magnetization along the external magnetic field. Let us consider the uniaxial in-plane MAE given by Here, K // is the in-plane MAE constant. When we apply the magnetic field along the easy direction and examine SWs propagating perpendicular to the magnetization (θ = 0), the SW frequencies are given by and Here, we defined the in-plane anisotropy field H K// = 2K // /M. Meanwhile, for the magnetic field along the hard direction, we have and The bulk SW in an isotropic magnet forms the bulk SW band given by Equation (41). The bandwidth depends on the in-plane propagation direction θ. On the other hand, when the external magnetic field is applied along the hard direction, the MAE introduces the SW band for the bulk SW propagating even for the θ = 0 direction. The bandwidth given by 4πMH K// depends on both the strength of the MAE and the perpendicular component of the SW wave vector. The main contribution for the SW BLS is from Q ⊥ /Q < 0.5, and the bandwidth 4πMH K// is usually smaller than the bulk SW's peak width. Hence, it is impractically difficult to determine the MAE parameters from BLS measurement by itself. Next, we consider the out-of-plane type MAE given by We can also define the out-of-plane anisotropy field H K⊥ = 2K ⊥ /M. For a weak anisotropy field, which satisfies the in-plane magnetization condition given by 4πM − H K⊥ ≥ 0, the magnetization is confined within the film plane and aligned colinear to the external magnetic field. In this case, the upper and lower bounds of the SW band are given by and the DE mode frequency is given by In this case, the main contribution for the bandwidth is from Q ⊥ /Q > 0.5. For weak magnetic fields which satisfy H + DQ 2 − H K⊥ ≤ 0, the lower bound of the SW band should be set to zero. On the contrary, the out-of-plane MAE is large enough to overcome the inplane magnetization condition, and the perpendicular magnetization state is the ground state under zero magnetic field. We will discuss this case later. Thin Films As already mentioned, a magnetic film thicker than~λ/2 can be treated as a semiinfinite magnet. Now, what happens for thinner films? For thinner films with thicknesses less than~1000 Å, new aspects of SWs, known as the standing SWs (SSWs), appear in a BLS spectrum. The first BLS observation of the SSWs was reported by Grimsditch and Malozemoff on metallic amorphous Fe 80 B 20 films. They determined the SW stiffness constant of D BLS = (1.4 ± 0.2) × 10 −9 Oe·cm 2 [57]. Successively, BLS from the SSWs has been reported on various ferromagnetic thin films [11]. Neutron scattering is the best technique to investigate SW dynamics in the entire Brillouin zone and can be used to determine the SW stiffness constant D NS . However, neutron scattering requires a reactor and eventually becomes a huge project. In this section, in order to distinguish the SW stiffness constant obtained from BLS and neutron scattering, we add the subscript BLS and NS to the D constant. Otherwise, we simply use the symbol D for the BLS SW stiffness constant. By observing the SSW structure, we can precisely determine the SW stiffness constant D BLS even in a small optical laboratory. 
This is one of the virtues of the SW BLS technique. The research group at Brookhaven National Laboratory (BNL) extensively investigated SWs in Fe, Ni, and Co in the 1960s [58]. Note that the BLS technique gives information near the Brillouin zone center thanks to visible laser light as an excitation source. We can convert between the D_BLS value in the 10⁻⁹ Oe·cm² unit and the D_NS value in the meV·Å² unit used in neutron scattering and magnetization studies through the formula D_BLS = 1.728 × 10⁻² D_NS/g. Neutron scattering and BLS give D_NS ~ 280 meV·Å² for Fe, and these results are in good agreement [58,59]. Figure 6 shows an example of a SSW spectrum observed from a 450 ± 10 Å thick epitaxial 1010 Co film deposited on a 500 Å thick Cr (211) buffer layer prepared on a MgO (110) substrate at IMRAM, Tohoku University [60,61]. This spectrum was excited by the p-polarized 5320 Å line from a DPSS laser. Because we have not performed the polarization selection for the scattered light in this measurement, the SAW structures, indicated by phonons, were also observed. The peaks indicated by labels 1 and 2 are the first and second SSW peaks. The peak indicated by 1 + DE on the anti-Stokes side consists of the first SSW and the DE peaks. Note that the DE mode in thinner films also retained the nonreciprocal propagating character. When we define the critical angle θ_C, at which the DE mode frequency given by Equation (45) is equal to the upper bulk band frequency, we obtain Equation (50) again by using Equation (44). For brevity's sake, we ignore the exchange term for the upper bulk band. For thinner magnetic films, the perpendicular component q⊥ of the SW wave vector was quantized into q⊥(n) = nπ/L (n = 1, 2, . . .). In this case, the perpendicular components q⊥(n) were well-defined, and the momentum conservation law during the scattering process recovered. This is the reason why we can observe sharp SSW peaks in our spectrum. Our BLS results from this epitaxial 1010 Co film will be discussed in detail later.
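The unit conversion quoted above is easy to mis-apply, so a one-line numerical check may be useful. The sketch below applies the stated relation D_BLS = 1.728 × 10⁻² D_NS/g (D_BLS in units of 10⁻⁹ Oe·cm², D_NS in meV·Å²) to the Fe value quoted in the text.

```python
# Sketch: convert the neutron-scattering SW stiffness D_NS (meV*A^2) to the BLS value
# D_BLS (in units of 1e-9 Oe*cm^2), using the relation quoted in the text:
#   D_BLS = 1.728e-2 * D_NS / g
def d_bls_from_d_ns(d_ns_meV_A2, g):
    return 1.728e-2 * d_ns_meV_A2 / g     # result in units of 1e-9 Oe*cm^2

D_NS_Fe = 280.0   # meV*A^2 for Fe, as quoted in the text
g_Fe = 2.09       # g-factor used earlier in the text for Fe

print(f"Fe: D_BLS ~ {d_bls_from_d_ns(D_NS_Fe, g_Fe):.1f} x 1e-9 Oe*cm^2")
# ~2.3e-9 Oe*cm^2, of the same order as the (1.4 +/- 0.2)e-9 Oe*cm^2 quoted in the text
# for amorphous Fe80B20, which has a reduced magnetization and stiffness.
```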
Figure 6. BLS spectrum observed from an epitaxial 1010 Co thin film 45 nm thick at room temperature. Because no polarization selection has been done, scattering from SWs and SAWs was observed. The structure labeled "phonons" within ±30 GHz around the elastic RS peak was due to SAWs. These were assigned to the Rayleigh wave and the first and second order Sezawa waves with increasing frequency. SW peaks up to the second order SSW appeared above ±40 GHz. Note that the phonon peak intensities are symmetrical for the Stokes and anti-Stokes peaks. On the other hand, the SW peaks were asymmetric. The inset shows calculated n = 1 (−) and 2 (−) SSW and DE mode (−) profiles [60]. For clarity's sake, I display the profiles of the DE mode localized on the top surface and the SSW modes for the bottom surface. Figure 7 shows another example of SSWs observed from a 1000 ± 50 Å-thick sputtered Co₈₅Nb₁₂Zr₃ film on a glass substrate prepared at RISM, Tohoku University [62]. We set ϑ = 15° and H = 0.5 kOe. We could observe the SSW peaks up to the fifth order in this spectrum. Note that the peak intensities are highly asymmetric between the Stokes and anti-Stokes sides. This is a characteristic feature of SW BLS, as I have already mentioned, arising from the interference effects. The DE peak appears only on the anti-Stokes side. The DE peak intensity is not high compared to the SSW peaks because of the small incident angle. In fact, when we increase the incident angle ϑ, the DE peak intensity gradually increases. Figure 8 shows the SW frequencies as a function of the magnetic field. The open symbols stand for the SSWs, and the filled circles stand for the DE mode. Above H = 1.0 kOe, we could not fully resolve the DE mode from the second-lowest-order SSW peak.
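To make the standing-spin-wave analysis of the next paragraph concrete, the sketch below evaluates SSW frequencies of the form ν_n = (γ/2π)√((H + Dq_n²)(H + Dq_n² + 4πM)) with q_n = nπ/L, which is my assumption for the structure of Equation (62) discussed below (the small in-plane Q// contribution is neglected). The film parameters 4πM = 10.1 kG, A = 0.98 × 10⁻⁶ erg/cm and L = 1000 Å are the ones quoted for the Co₈₅Nb₁₂Zr₃ film, while the g-factor is an assumed placeholder because Table 1 is not reproduced.

```python
import math

# Sketch of standing-spin-wave (SSW) frequencies for the Co85Nb12Zr3 film, assuming
# Eq. (62) has the conventional form with quantized q_n = n*pi/L:
#   nu_n = (gamma/2pi) * sqrt((H + D*q_n**2) * (H + D*q_n**2 + 4*pi*M))
four_pi_M = 10.1e3           # G (4piM from the VSM value quoted in the text)
A = 0.98e-6                  # erg/cm (exchange stiffness from the text)
L = 1000e-8                  # cm (1000 Angstrom film thickness)
g = 2.1                      # assumed placeholder g-factor (Table 1 not reproduced)

M = four_pi_M / (4 * math.pi)        # emu/cm^3
D = 2 * A / M                        # SW stiffness, Oe*cm^2 (~2.4e-9)
gamma_over_2pi = 1.3996e-3 * g       # GHz per Oe
H = 500.0                            # Oe external field, as in the Figure 7 spectrum

for n in range(1, 6):
    q_n = n * math.pi / L
    Heff = H + D * q_n ** 2
    nu_n = gamma_over_2pi * math.sqrt(Heff * (Heff + four_pi_M))
    print(f"n = {n}: D*q_n^2 = {D*q_n**2:6.0f} Oe, nu_{n} = {nu_n:5.1f} GHz")
# The n-dependence of the splitting is controlled almost entirely by D, which is why
# the SSW ladder pins down the stiffness constant.
```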
In order to determine the magnetic constants of the Co85Nb12Zr3 film while taking into account the quantization effect on the q_⊥ components, we employed the conventional formulas of Equations (62) and (63). It can be readily recognized from Equation (62) that the D constant governs the splitting between the SSW frequencies. The solid lines in Figure 8 are the SSW frequencies calculated from Equation (62), and the broken line is the DE mode frequency calculated from Equation (63) with the magnetic constants listed in Table 1. We obtained 4πM_VSM = 10.1 ± 0.2 kG from a VSM measurement and evaluated the exchange stiffness constant A = (0.98 ± 0.14) × 10⁻⁶ erg/cm using the magnetic constants. With these constants, an excellent agreement between the calculation and observation was obtained. This agreement is not limited to this case; Equations (62) and (63) usually gave good agreement between calculations and observations.
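To make this fitting procedure more concrete, the following Python sketch extracts D and 4πM from a set of SSW frequencies and then evaluates A = DM/2 (the relation A_j = D_jM_j/2 quoted later in this section). The dispersion used, f_n = (γ/2π)[(H + Dq_n²)(H + 4πM + Dq_n²)]^{1/2} with q_n = nπ/L, is a standard quantized-Kittel form assumed here for illustration; it may differ in detail from Equation (62), and the "measured" frequencies below are synthetic.

```python
# A minimal sketch of how the SSW splitting constrains the stiffness constant D.
import numpy as np
from scipy.optimize import curve_fit

L = 1000e-8            # film thickness in cm (1000 Å, as for the CoNbZr film)
gamma_over_2pi = 2.96  # GHz/kOe, illustrative value for g ≈ 2.1

def ssw_freq(X, D, four_pi_M):
    """X = (n, H): SSW order and field in kOe; D in 10^-9 Oe·cm^2, 4piM in kG."""
    n, H = X
    Dq2 = (D * 1e-9) * (n * np.pi / L) ** 2 / 1e3   # exchange field in kOe
    return gamma_over_2pi * np.sqrt((H + Dq2) * (H + four_pi_M + Dq2))

# Synthetic "measured" frequencies for orders n = 1..5 at H = 0.5 kOe,
# generated with D = 2.4 and 4piM = 10.1 plus a little noise (illustrative only).
rng = np.random.default_rng(0)
n = np.arange(1, 6, dtype=float)
H = np.full_like(n, 0.5)
f_obs = ssw_freq((n, H), 2.4, 10.1) + rng.normal(0, 0.05, n.size)

popt, _ = curve_fit(ssw_freq, (n, H), f_obs, p0=(2.0, 10.0))
D_fit, M_fit = popt
print(f"fitted D = {D_fit:.2f} x 10^-9 Oe·cm^2, 4piM = {M_fit:.1f} kG")
# Exchange stiffness from A = D*M/2:
A = (D_fit * 1e-9) * (M_fit * 1e3 / (4 * np.pi)) / 2
print(f"A ≈ {A*1e6:.2f} x 10^-6 erg/cm")
```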
Figure 9 shows the SW stiffness constant D in Co100−xCrx binary alloys of 300~500 Å in thickness as a function of the Cr content in at % [63,64]. These alloys were prepared at RISM, Tohoku University. For the CoCr binary alloy system, phase separation occurs from the Co-rich uniform state below x ~ 10 at % to the phase-separated state, in which the Co-rich ferromagnetic regions are surrounded by nonmagnetic Cr-rich grain boundaries above x ~ 12 at %. Because the exchange coupling is a purely quantum mechanical effect due to electron itinerancy, overlapping of the electron wave functions, or both, we can expect the exchange coupling strength to be very sensitive to the microscopic atomic structure inside a film. Figure 9 clearly shows that a drastic change of the exchange coupling scheme, from direct coupling in the Co-rich uniform state to a weak indirect coupling via the Cr-rich regions, took place around x = 10~15 at %. In this way, the BLS technique can provide quantitative information on these magnetic interactions.

Because we have phenomenologically introduced Equation (62), we must take into account the exchange coupling and the MAE and derive more rigorous descriptions of the SW frequencies. We applied the dipole-exchange model with the continuum approximation to discuss the SWs. In this model, the exchange coupling is described by the second derivative operator H_ex = −D∇². Recent developments in film preparation techniques have allowed us to examine various types of epitaxial structures. For materials with hexagonal or tetragonal structures, the MAE plays a more important role than for cubic structures. For the epitaxial (1010) Co films, we considered the uniaxial in-plane MAE up to the fourth order, as given by Equation (64) [60]. Figure 10 shows the coordinate systems and scattering geometry used in our discussions. The crystallographic coordinates are denoted by (x_c, y_c, z_c).

Figure 10. Schematic illustration of the magnetization rotation by the external magnetic field applied in the y-z plane. The magnetic field was applied along the θ direction measured from the crystal easy axis z_c. The rotation angle φ of the magnetization measured from the easy axis can be determined by Equation (65). BLS observed the SWs propagating along the y direction, which is always perpendicular to the external magnetic field. Here, the x_c axis is along the surface normal direction, and the film surfaces are located at x = ±L/2. The easy axis is along the z_c direction.

We applied a magnetic field H making an angle θ with the z_c axis, and we always measured SWs propagating perpendicular to H, as shown in Figure 10. Then, the magnetization M rotates away from the easy axis. However, M will not be collinear with H because of the MAE. For convenience's sake, we introduced the magnetization coordinates (x, y, z) by rotating the crystallographic coordinates around the x_c axis by an angle φ. The z direction is along the magnetization direction. The rotation angle φ is determined by the competition between the external magnetic field and the MAE, as given by Equation (65). We introduced the SW variable m as the small-amplitude precession motion around the static magnetization M. The LL equation of motion on M(t) = M + m(t) is already given in Equation (30). The effective magnetic field H_eff consists of the external magnetic field H, the uniaxial magnetic anisotropy field H_K = −∇_M E_K, the exchange field H_ex = (D/M)∇²M, and the demagnetization field h, as given in Equation (66). When H and M are not collinear with each other, we must include the longitudinal components m_z and h_z in addition to their transverse components. Because the MAE in Equation (39) is defined in the crystallographic coordinates, we must rewrite it in the magnetization coordinate variables in order to evaluate the anisotropy field. The linearized LL equation can then be cast into a compact form in which we define H_a, H_1, and H_2; Equation (70) confirms that the longitudinal component m_z does not contribute to the SWs.
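The role of Equation (65) can be illustrated with a minimal numerical sketch: the equilibrium rotation angle follows from minimizing the sum of the Zeeman energy and a uniaxial in-plane MAE. The energy expression and the magnetization and anisotropy constants used below are illustrative assumptions (roughly hcp-Co-like); they are not the values of Equation (64) or Table 1.

```python
# Minimal sketch of the static equilibrium angle: balance the Zeeman energy
# against a uniaxial in-plane MAE with a fourth-order term,
#   E(phi) = -M*H*cos(theta - phi) + K1*sin^2(phi) + K2*sin^4(phi).
import numpy as np
from scipy.optimize import minimize_scalar

M = 1400.0                 # emu/cm^3, assumed magnetization
K1, K2 = 5.0e6, 1.0e6      # erg/cm^3, illustrative MAE constants
theta = np.deg2rad(90.0)   # field applied along the hard in-plane axis

def energy(phi, H):
    return -M * H * np.cos(theta - phi) + K1 * np.sin(phi) ** 2 + K2 * np.sin(phi) ** 4

for H in (1.0e3, 3.0e3, 10.0e3):  # field in Oe
    res = minimize_scalar(energy, bounds=(0.0, np.pi / 2), args=(H,), method="bounded")
    print(f"H = {H/1e3:4.1f} kOe -> phi = {np.degrees(res.x):5.1f} deg")
```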
We assume plane-wave-type space-time dependence for the dynamical variables. The first equation of Equation (32) gives only two independent equations; combining these, we obtain the (∇ × h)_x component. The second equation of Equation (32) can be regarded as an additional equation of motion to the LL equation. A set of five homogeneous equations for the five variables m_x, m_y, h_x, h_y, h_z gives an equation for the nontrivial solutions. Here, we define P² = (q_⊥² + Q_//²)/Q_//² and adopt the partial wave technique, in which SWs are constructed as a sum of bulk partial waves. By adjusting the coefficients of the partial waves, we make the constructed SWs satisfy the proper boundary conditions. Equations (75) and (76) give six q_⊥ solutions allowed for the bulk partial waves [34]. Combining the linearized LL equation with the effective magnetic field given by Equation (66) and Maxwell's equations, as seen in Equation (32), we obtain the dynamical variables in terms of the partial waves. Here, we introduced the reduced frequency ω/γ for convenience and used the demagnetization fields instead of the magnetic potential φ. By applying the magnetic boundary conditions at x = ±L/2, we can eliminate the demagnetization fields outside of the magnet and obtain a set of two equations, Equation (81); the upper compound symbols in Equation (81) are for the top surface at x = L/2 and the lower symbols for the bottom surface at x = −L/2, respectively. Because we have six unknown variables h_xj, we need four more boundary conditions. Rado and Weertman derived generalized boundary conditions based on the LL equation, known as the Rado-Weertman surface pinning conditions [26], Equation (82), where E_surf is the surface magnetic anisotropy (SMA) energy; we assume an SMA energy with in-plane and out-of-plane terms, as given by Equation (83). Now, we have a set of six homogeneous equations for six unknown variables. The SW frequencies can be obtained by numerically solving a 6 × 6 boundary condition determinant (BCD) equation. When we have no SMAs (no surface pinning) and in the limiting case of Q_// ≈ 0, we can obtain asymptotic expressions for the SW frequencies, which are exactly the same as Equations (62) and (63). As we have already seen, these expressions can describe well the magnetic field dependence of the SW frequencies.
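The numerical strategy behind such BCD calculations can be sketched generically: scan the frequency axis, bracket the sign changes of det B(ω), and polish each root. In the sketch below, the 6 × 6 matrix builder is only a placeholder (a diagonal toy model); in a real calculation it would be assembled from the six partial-wave solutions and the boundary conditions of Equations (75)-(83).

```python
# Schematic root search for a boundary-condition determinant (BCD) equation.
import numpy as np
from scipy.optimize import brentq

def bcd_matrix(omega, params):
    """Placeholder 6x6 boundary-condition matrix. Here a diagonal toy matrix is
    used whose determinant vanishes at the 'mode' frequencies supplied in
    params; a real calculation would fill this from the partial waves and the
    Maxwell + Rado-Weertman boundary conditions."""
    d = [omega - m for m in params["modes"]] + [1.0] * (6 - len(params["modes"]))
    return np.diag(d)

def det_bcd(omega, params):
    return np.linalg.det(bcd_matrix(omega, params))

def find_sw_frequencies(f_min, f_max, params, n_scan=2000):
    """Scan the frequency axis, bracket sign changes of det B(omega),
    and polish each root with Brent's method."""
    freqs = np.linspace(f_min, f_max, n_scan)
    dets = np.array([det_bcd(f, params) for f in freqs])
    roots = []
    for i in range(n_scan - 1):
        if dets[i] == 0.0:
            roots.append(freqs[i])
        elif dets[i] * dets[i + 1] < 0:
            roots.append(brentq(det_bcd, freqs[i], freqs[i + 1], args=(params,)))
    return roots

if __name__ == "__main__":
    # toy "mode" frequencies in GHz, purely illustrative
    print(find_sw_frequencies(5.0, 60.0, {"modes": [12.3, 27.8, 45.1]}))
```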
Figure 11 shows an example of the SW frequencies of bcc Fe propagating with Q_// = 1.85 × 10⁵ cm⁻¹ along the [110] direction on the (001) surface at H = 3.0 kOe as a function of film thickness down to 10 Å. The SW frequencies were obtained by solving the 6 × 6 BCD equation. We used the set of magnetic parameters listed in Table 1 in this calculation. It is known that the demagnetization factor 4π for thick films should be replaced by the effective demagnetization factor 4πf_⊥. The f_⊥ factor is given by 1 − 0.4245/n for the bcc (001) structure and 1 − 0.2338/n for the fcc (001) structure [67]. Here, n is the number of atomic layers stacked in the film. We have included the 4πD_⊥ term in our calculations.

In Figure 11, the label DE stands for the DE mode frequency, and the labels 1, 2, and 3 stand for the first, second, and third SSW frequencies. For Fe films with thicknesses less than 100 Å, only the DE mode appears in a BLS spectrum, and the SSW peaks appear well above 50 GHz. Another interesting observation is the anticrossing effect between the DE mode and each SSW mode. Therefore, it is clear that Equations (84) and (85) give a good description of the DE and SSW frequencies when these frequencies are well separated. For thinner films with thicknesses less than ~300 Å, the DE mode localized on the opposite surface may appear in a spectrum. The amplitude of the opposite DE mode on the laser-illuminated surface was roughly estimated to be exp(−Q_//L) ~ 0.57. Of course, the spectrum exhibited quite asymmetric peak intensities.
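As a brief aside, this amplitude estimate is easy to reproduce; the snippet assumes the Q_// value used for Figure 11 and a film thickness of 300 Å, i.e., the values mentioned in the text.

```python
# Reproduce the opposite-surface DE amplitude estimate exp(-Q_par * L) ~ 0.57.
import numpy as np

Q_par = 1.85e5   # cm^-1, in-plane wave vector used for Figure 11
L = 300e-8       # cm (300 Å), the thickness scale mentioned in the text
print(f"exp(-Q_par*L) = {np.exp(-Q_par * L):.2f}")   # ≈ 0.57
```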
Various structures of thin Co films, including polycrystalline films [63,68], bcc films [69][70][71], and fcc, bcc, and hcp films [72], have been extensively investigated by BLS since the early stage of BLS SW studies. One of the most pronounced magnetic properties of Co in the hcp structure is the large uniaxial MAE, which is about one order of magnitude larger than the MAE of cubic Fe and Ni. The MAE for hcp Co is given by Equation (64). The K_1 and K_2 constants take positive values of ~10⁶ erg/cm³ at room temperature. Because of the large MAE, hcp Co-based alloys have been widely applied in many industrial applications. As I have already mentioned, it is possible to grow epitaxial (1010) Co films which possess both the easy direction (hcp [001] axis) and the hard direction (hcp [010] axis) within the film plane. A BLS analysis of epitaxial (1010) Co thin films was performed by Grimsditch, Fullerton, and Stamps [73]. The SW stiffness constant of hcp Co has been a subject of controversy: the D_NS values of 490-510 meV·Å² for hcp Co crystals obtained by the BNL group [58] were obviously larger than the more recent BLS values of D_BLS = 430-470 meV·Å² [63,68,71,73]. We performed BLS measurements on epitaxial hcp (1010) Co thin films [73,74]. When we adopt the scattering geometry (A), in which the magnetic field is applied along the easy direction, only the K_1 term in Equation (61) contributes to the SW frequencies because θ = φ = 0° in Equations (72) and (73). An example of a BLS spectrum observed in the scattering geometry (A) is shown in Figure 6. The first and second order SSWs and the DE mode can be seen, and the inset shows the mode profiles of these SSWs. In order to determine the K_2 constant, we must adopt the scattering geometries (B) and (C). The magnetic field dependence of the SW frequencies obtained from the scattering geometry (B) is shown in Figure 12.

Figure 12. Magnetic field dependence of the SW frequencies (•) in the scattering geometry (B). The solid lines were calculated for the SW frequencies (−) and the rotation angle φ of the magnetization (−) using the full magnetic constants given in the text [74].

In this case, the magnetic field was always applied along the y_c-direction in Figure 10. The calculated SW frequencies and the rotation angle are shown by the solid lines. We summarized the parameters used in our calculations in Table 1. These MAE constants, K_1 and K_2, and the SW stiffness constant are in good agreement with the ones reported by Grimsditch et al. [73]. The present SW stiffness constant of D_BLS = 3.39 × 10⁻⁹ Oe·cm² is equivalent to 427 meV·Å² and is in good agreement with the previous values. There are also independent NS results on hcp Co reported using the Kraków neutron spectrometer [75]. Both NS groups employed a two-parameter model for the SW dispersion given by ε_Q = D_NS Q²(1 − βQ²). The Kraków group obtained D_NS = 437 ± 20 meV·Å² and β = 0.345 Å², whereas the BNL group obtained the sets D_NS = 510 meV·Å², β = 1.8 Å² and D_NS = 490 meV·Å², β = 3.3 Å². The calculated dispersion curves with these three parameter sets are very close in the limited range of Q between 0.08 Å⁻¹ and 0.25 Å⁻¹. More extensive studies on bulk Co are strongly recommended.
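The closeness of the three parameter sets can be checked directly with the two-parameter dispersion ε_Q = D_NS Q²(1 − βQ²) quoted above:

```python
# Compare the three two-parameter dispersion sets over the stated Q range.
import numpy as np

param_sets = {
    "Krakow":  (437.0, 0.345),   # D_NS (meV·Å^2), beta (Å^2)
    "BNL (a)": (510.0, 1.8),
    "BNL (b)": (490.0, 3.3),
}
Q = np.linspace(0.08, 0.25, 5)   # Å^-1
for label, (D, beta) in param_sets.items():
    eps = D * Q**2 * (1.0 - beta * Q**2)   # SW energy in meV
    print(label, np.round(eps, 1), "meV")
```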
The mode profile calculation revealed complicated mode conversion schemes as a function of the magnetic field. The lowest mode possessed a uniform amplitude across the film (n = 0 SSW mode) under zero magnetic field, gradually changed into the DE-like mode as the field strength increased, and finally changed into the n = 1 SSW mode well above H = 5.0 kOe. The second-lowest mode retained the n = 1 SSW character up to H ~ 5.0 kOe and finally changed into the DE mode well above H = 5.0 kOe. Figure 13 shows the SW frequencies obtained from the scattering geometry (C) as a function of the field direction at H = 3.0 kOe.

Figure 13. Field angle dependence of the SW frequencies (•) in the scattering geometry (C). The solid lines were calculated for the SW frequencies (−) and the rotation angle φ of the magnetization (−) using the full magnetic constants given in the text [74]. Here, e.a. and h.a. mean the easy axis (y_C-direction) and the hard axis (z_C-direction) in Figure 10.

The solid lines give the calculated SW frequencies and the rotation angle φ of the magnetization. We also performed the mode profile calculation in this geometry. The lowest-frequency mode at θ = 0° was the n = 1 SSW, which gradually changed into the 0th SSW mode at θ = 90°. The second-lowest mode changed its character from the generalized DE mode at θ = 0° into the n = 1 SSW mode at θ = 90°. As shown in Figure 6, the n = 1 SSW mode and the DE mode frequencies were very close. In this interpretation, we took account of the anti-crossing effect shown in Figure 11. The SW stiffness constants of the Heusler compounds and alloys Co2MnSi, Co2MnAlxSi1−x, Co2FeAl, and Co2Cr0.6Fe0.4Al have also been intensively investigated with BLS [76][77][78].

Ultrathin Films

Let us define magnetic films with thicknesses less than the exchange length ℓ_ex = (D/4πM)^{1/2} [67], which is estimated to be ~30 Å for Fe, as ultrathin films. It is possible to prepare epitaxial ultrathin films by means of the molecular beam epitaxy (MBE) technique. The magnetic properties of such ultrathin films are strongly affected by the SMA [67].
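The exchange-length criterion is easy to evaluate numerically; with the Fe constants quoted elsewhere in this review (D = 2.34 × 10⁻⁹ Oe·cm², 4πM = 18.6 kG) one obtains roughly 35 Å, of the same order as the ~30 Å quoted above.

```python
# Numerical estimate of the exchange length l_ex = (D / 4piM)^(1/2) for Fe,
# using the Fe constants quoted elsewhere in this section; modest changes in
# these inputs move the estimate around the ~30 Å value quoted in the text.
import numpy as np

D = 2.34e-9          # Oe·cm^2
four_pi_M = 18.6e3   # G
l_ex = np.sqrt(D / four_pi_M)     # cm
print(f"l_ex ≈ {l_ex * 1e8:.0f} Å")
```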
Here we have a question: what does BLS observe from such ultrathin films? When the film thickness L goes to zero, the DE mode frequency is well separated from the bulk SSW frequencies, as shown in Figure 11. Because the surface dispersion parameter Q_//L is quite small, the DE mode amplitude is almost uniform across the film. It therefore seems preferable to name this the uniform DE (UDE) mode or simply the uniform mode. I will use the term "UDE mode" in this section. The DE mode was originally discussed for an isotropic slab taking account of the magnetic boundary conditions. The UDE mode is quite different from the slab DE mode because of the SMAs. Then, how can we take the SMAs into account in our discussions? If this is possible, we can positively apply the BLS technique to investigate the SMAs of ultrathin films. We consider an ultrathin film with surfaces at x = 0 and −d and the LL equation, in which we ignore the in-plane exchange term. For an ultrathin film, the UDE mode profile is regarded as uniform across the film. Then, we can readily integrate the LL equation across the film. When we notice that ∂/∂n = −d/dx in this case, the last term on the right-hand side can be replaced by the Rado-Weertman pinning boundary condition given by Equation (82); in the resulting equations of motion, H_K,eff is the effective magnetic anisotropy field consisting of the MAE and SMA terms. We assumed the same SMA for both surfaces. When we adopt the bulk MAE given by Equations (54) and (59) and the SMA given by Equation (83), we obtain the UDE frequency, Equation (92). Here, we define the effective saturation magnetization by Equation (93). The 1/d factor in Equation (93) gives a multiplication factor of 10⁸/d in the angstrom unit, and the SMA term dominates the out-of-plane magnetic anisotropy field for the UDE mode. As already discussed, the 4πM_eff variable is the essential parameter that distinguishes the magnetization state under zero magnetic field: for a positive 4πM_eff, the film is in the in-plane magnetization state under zero magnetic field, and it is in the perpendicularly magnetized state for a negative 4πM_eff.

We performed a BLS study on an ultrathin epitaxial Fe wedge with Fe layer thicknesses up to 8.9 Å under magnetic fields of up to 4.5 kOe [79]. The wedge was prepared at the Electrotechnical Laboratory (ETL), Tsukuba, using the MBE technique. Figure 14 shows a schematic illustration of the structure of the MBE-prepared wedge with the crystallographic coordinate systems showing the epitaxial relations. For a cubic symmetry crystal, the MAE has the standard cubic form. When the magnetization is directed along the [1−10] direction, the MAE can be written in terms of the SW variables [35]. Taking account of the possible tetragonal distortion along the surface normal direction, the SMA energy can be reduced to a simple form, Equation (96), in which k_u^(s) is the uniaxial out-of-plane SMA constant due to the tetragonal distortion. Figure 15 shows the thickness development of the BLS spectra of the wedge at H = 3.0 kOe. The thickness is indicated on each spectrum. A 20 Å thick Au (001) cap layer was deposited to protect the wedge from serious surface deterioration. We applied the magnetic field along the crystallographic [1−10] direction, and the SWs propagating along the [110] direction were measured.
These spectra were excited by the p-polarized 4880 Å line of an Ar⁺ ion laser in a single-cavity mode with a power of 80 mW directed at the wedge. The incident angle ϑ was fixed at 45° (Q_// = 1.82 × 10⁵ cm⁻¹). The Fe layer thickness was thin enough to observe scattering from the UDE modes existing on both surfaces. The typical accumulation time for a spectrum was less than 1 h. Because we had not performed polarization selection, both the USW and SAW peaks were observed in a spectrum. The SAW peaks were masked by the Rayleigh peak in Figure 15. The intensity asymmetry between the Stokes side and the anti-Stokes side is one of the characteristic features of SW scattering, as already discussed. The UDE frequency rapidly decreased from about 20 GHz to 12 GHz with decreasing Fe thickness. Figure 16 shows the external field development of the BLS spectra observed at an Fe thickness of d = 2.6 Å.
The applied magnetic field is indicated on each spectrum. The UDE peaks are indicated by the arrows, and the SAW peaks are indicated by the broken lines. The sharp UDE peaks shown in Figures 15 and 16 indicate that a well-defined ferromagnetic order was realized in the wedge at room temperature even at 2.6 Å, which corresponds to ~1.8 atomic layers. Taking into account the extremely short photon-UDE interaction length, although multiple reflections would take place, the large BLS efficiency seems to have been closely related to the enhancement of the magneto-optical Kerr rotation and ellipticity observed around 2.5 eV of incident photon energy due to the plasma-edge effect of the Au layers [80]. This energy is close to our 4880 Å laser photon energy (2.54 eV). Figure 17 shows the intensity ratio between the Stokes peak and the anti-Stokes peak. The intensity ratio was about 0.5 over a wide range of Fe thicknesses for d > 40 Å, whereas it became larger than 1 for d < 40 Å. The intensity ratio for the 45° incident p-polarized geometry can be written as Equation (97) in terms of the matrix components defined in Equations (15)-(18). As shown in Equation (29), the K coefficient and possibly also the G coefficient are sensitive to the spin-orbit coupling constant and the wave functions for the ground state and intermediate state. Because the ultrathin Fe layer was sandwiched by the thicker Au layers, the Fe wave functions were strongly modified by mixing with the Au wave functions at d < 40 Å. This situation can be regarded as a quantum well effect.

Figure 18 shows the thickness development of the SW frequencies of the Au/Fe/Au films, including the wedge and the epitaxial films, with thicknesses between 80 and 1000 Å at H = 3.0 kOe. These thin Au/Fe/Au epitaxial films were also prepared at ETL, Tsukuba, using the MBE technique. The solid circles are the DE and UDE mode frequencies, and the open circles are the first SSW frequencies. The solid lines are the SW frequencies calculated with the same magnetic constants used in Figure 11. It is obvious that the observed UDE frequencies decreased more rapidly than the calculated frequency as the Fe thickness decreased below 100 Å. In contrast to the present Au/Fe/Au films, for an ultrathin Fe (110) film deposited on a W (110) substrate, the UDE frequency increased with decreasing Fe thickness because of the negative k_⊥^(s) constant (this means that the surface normal is the hard direction) [34,81]. Because we have not included the SMA terms in Equations (92) and (93) in our calculations, our calculations gave higher SW frequencies below 100 Å.
Figure 18. The solid lines are the SW frequencies calculated with the same magnetic constants as in Figure 11; although the MAE and the effective demagnetization factor were included in the calculation, the surface perpendicular anisotropy term was not included [61]. The broken lines are guides for the eye.

Figure 19 shows the UDE frequencies as a function of the external magnetic field for four Fe thicknesses: 8.9, 6.5, 3.9, and 2.6 Å.

Figure 19. SW frequencies for four Fe thicknesses, 8.9, 6.5, 3.9, and 2.6 Å, as a function of the magnetic field. The solid lines are the SW frequencies calculated by numerically solving the BCD equation [79].

In order to analyze these results, we solved numerically the 6 × 6 BCD equation with the SMA terms given by Equation (95). The solid lines in Figure 19 are the UDE frequencies calculated by solving the BCD equation as a function of the magnetic field. Because we found that the bulk MAE was negligibly small, we retained only the SMA terms and a bulk spin-wave stiffness constant of D = 2.34 × 10⁻⁹ Oe·cm² throughout the present BCD analyses, which can also be applied to the in-plane magnetized films. For d = 2.6 Å, we can solve the BCD equation above H = 1.57 kOe. Figure 20 shows the in-plane and out-of-plane SMA constants and the 4πM_eff defined in Equation (93). 4πM_eff changes its sign from positive to negative at around d ~ 3 Å, and the stable direction of the magnetization under zero magnetic field turns over from the in-plane direction to the surface normal direction.
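The sign change of 4πM_eff can be illustrated with a minimal sketch. The expression 4πM_eff = 4πM − 2k_⊥^(s)/(Md), assuming identical SMAs on both surfaces, is a commonly used form adopted here only for illustration; the interface constant k_⊥^(s) = 0.41 erg/cm² is an assumed value chosen so that the crossover falls near d ~ 3 Å, and it is not taken from Figure 20.

```python
# Illustration of the sign change of 4piM_eff with decreasing Fe thickness,
# assuming 4piM_eff = 4piM - 2*k_perp_s/(M*d) with the same SMA on both surfaces.
import numpy as np

four_pi_M = 18.6e3            # G, bulk Fe value used in the text
M = four_pi_M / (4 * np.pi)   # emu/cm^3
k_perp_s = 0.41               # erg/cm^2 (illustrative, assumed value)

for d_A in (8.9, 6.5, 3.9, 3.0, 2.6):
    d = d_A * 1e-8            # thickness in cm
    m_eff = four_pi_M - 2 * k_perp_s / (M * d)
    print(f"d = {d_A:4.1f} Å : 4piM_eff = {m_eff/1e3:+6.2f} kG")
```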
The in-plane to out-of-plane transition of the stable magnetization configuration under zero magnetic field has been observed in several ultrathin films [81][82][83][84].

Apart from the ultrathin films, let us consider SWs in magnetic films with the out-of-plane MAE given by Equation (59). Dutcher et al. [82] and Rahman and Mills [85] treated this problem using the magnetostatic framework. Here, the K_⊥ coefficient can be regarded as an effective constant including both the bulk and surface terms, which depend on the crystallographic structure of the magnet and the film thickness. The magnet occupies the x-z plane between x = 0 and −L, and the applied magnetic field is always fixed to the z-axis. The magnetization always lies in the x-z plane with the equilibrium angle φ measured from the x-axis, as shown in Figure 21. We applied the LL equation of motion with the effective fields given by Equation (66). Because the magnetization M is tilted in the x-z plane, we should add the demagnetization field H_d term, and we then obtain a set of linearized LL equations. Here, we define the critical field H_C, and the equilibrium angle φ is determined from the torque-free condition around the y axis, which yields three angles. We introduce a 3 × 3 susceptibility matrix χ by m = χ·h through the LL equation. For the present scattering geometry shown in Figure 21, among the nine components of χ, only four components, χ_xx, χ_xy, χ_yx, and χ_yy, are relevant. Having obtained the relevant susceptibilities, we can follow the calculations already performed for the in-plane thin films; therefore, we will not repeat them here. When we ignore the exchange terms, we can summarize our results on the bulk SWs as follows: the upper and lower bound frequencies are given for each field regime, and for H_C ≥ H > 0 we have two boundary frequencies. Note that these boundaries cross at H = H_C[H_C/(H_C + H_K⊥)]^{1/2} in Equation (114). We also performed numerical fits using the upper bounds in Equations (108) and (111) with our BLS results shown in Figure 19, and we obtained reasonable agreement between the calculated and observed SW frequencies.

The discussion of the DE and UDE modes is a little more complicated. The surface mode frequency is determined by a BCD equation. Here we define q_⊥ = αQ_// with α = [(1 + 4πχ_yy)/(1 + 4πχ_xx)]^{1/2} and β² = exp(−2αQ_//L). We rewrite Equation (117) as Equation (118). Note that the case tanh(αQ_//L) = 0 corresponds to an ultrathin film, and the case tanh(αQ_//L) = 1 corresponds to a semi-infinite magnet. It is not difficult to show that the DE and UDE modes are always allowed for in-plane films (φ = π/2), with the frequency given by Equation (62). On the other hand, because of the negative α, these surface modes are forbidden for the out-of-plane films at H > H_C. Next, let us consider an out-of-plane film under a magnetic field of H ≤ H_C. For a semi-infinite magnet, Equation (115) reduces to a simple pair of equations that can be solved analytically and give solutions with opposite signs. We solved the first one here and obtained a surface mode solution with a frequency given by ω/γ = H_C²/(2H) in a limited range of the external field. Figure 22 shows a simulation of the SW frequencies as a function of the magnetic field for an out-of-plane magnetic semi-infinite slab, calculated using the magnetic constants obtained from the Fe wedge: γ/2π = 2.8 GHz/kOe, H_C = 1.57 kOe, 4πM = 18.6 kG, and H_K = 20.17 kOe.
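The surface-mode branch quoted above can be evaluated directly with the Fe-wedge constants. Note that the field window in which this branch is actually allowed could not be recovered from this excerpt, so the field values below are purely illustrative.

```python
# Surface-mode branch of an out-of-plane semi-infinite magnet,
# omega/gamma = H_C^2/(2H), using the Fe-wedge constants quoted in the text.
import numpy as np

gamma_over_2pi = 2.8   # GHz/kOe
H_C = 1.57             # kOe

for H in np.linspace(0.4, 1.5, 5):   # kOe, illustrative fields below H_C
    f = gamma_over_2pi * H_C**2 / (2 * H)
    print(f"H = {H:.2f} kOe -> f ≈ {f:.2f} GHz")
```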
For an ultrathin film, Equation (118) merely gives the bulk upper and lower bounds, and we have no UDE solution, as already discussed. For a film of finite thickness, we must solve Equation (118) numerically, but we will not discuss this case further. Figure 23 shows an example of a trilayer structure consisting of a nonmagnetic layer sandwiched between magnetic layers [35].

Multilayers and Superlattices

These magnetic layers need not be the same with respect to their thicknesses and materials [36]. The thicknesses of the magnetic layers are usually set to less than 100 Å in order to fully separate the bulk SSWs and the DE mode, as shown in Figure 11. In the following discussion, based on [36], we consider the SWs in the trilayer by combining the DE modes in each magnetic layer. We have two choices for the origin of the x-coordinate; in this example, we use the first setting shown on the left-hand side of Figure 23.
First, we consider the SWs propagating along the y-axis in Figure 23 and solve the LL equation for each magnetic layer in terms of the susceptibilities and the magnetic potentials. Here, the superscript j (= 1, 2) specifies the magnetic layer. The magnetic potentials outside and inside of the trilayer are written accordingly. The magnetic boundary conditions at each surface and interface give us a set of six homogeneous equations for the unknown variables A to F. To obtain the non-trivial solutions of this set of homogeneous equations, the BCD should vanish at the SW frequencies. For isotropic (no MAE) magnetic layers, the SW frequencies are given by the solutions of Equation (127), namely Equations (131) and (132). The composite symbols in Equations (131) and (132) correspond to the SW propagation directions given by exp(±iQ_//y). It is clear that for identical magnetic layers, Equation (132) gives two SW frequencies which are independent of the propagation direction. When the surface dispersion parameter Q_//d is zero, the SW frequencies are given by the DE mode frequency for a film of thickness 2L (set L → 2L in Equation (45)) and the bulk SW frequency given by Equation (51). For large Q_//d values, the SW frequencies approach the DE mode frequencies given by Equation (45) for the two isolated magnetic layers.

Now, we consider parallel and anti-parallel arrangements of two different isotropic magnetic layers [35]. Figure 24a shows the SW frequencies as a function of the Q_//d parameter for the parallel arrangement under zero magnetic field. In the calculations, we set the surface wave vector Q_// to 2.0 × 10⁷ m⁻¹ and used 4πM_1 = 10.6 kG, g = 2.13, and L_1 = 50 Å for layer 1 (CoNbZr) and 4πM_2 = 19.8 kG, g = 2.17, and L_2 = 150 Å for layer 2 (Co). Figure 24b shows the SW frequencies as a function of the Q_//d parameter for the anti-parallel arrangement under zero magnetic field. It is important to note that the anti-parallel arrangement of the magnetizations is unstable even for weak external magnetic fields.
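The large-Q_//d limit can be checked with the layer parameters quoted above. The snippet uses the standard Damon-Eshbach expression for an isolated in-plane magnetized film, f = (γ/2π)[H(H + 4πM) + (4πM)²(1 − e^(−2Q_//L))/4]^{1/2}, which is assumed here to correspond to Equation (45) but cannot be confirmed from this excerpt.

```python
# Isolated-layer DE frequencies reached in the large Q_par*d limit of the trilayer.
import numpy as np

def de_freq(H_kOe, four_pi_M_kG, L_cm, Q_par, g):
    """Damon-Eshbach frequency (GHz) of an isolated in-plane magnetized film
    (standard textbook form, assumed to correspond to Equation (45))."""
    gamma_over_2pi = 1.4 * g                      # GHz/kOe
    dip = (four_pi_M_kG**2 / 4) * (1 - np.exp(-2 * Q_par * L_cm))
    return gamma_over_2pi * np.sqrt(H_kOe * (H_kOe + four_pi_M_kG) + dip)

Q_par = 2.0e5   # cm^-1 (2.0e7 m^-1, the value used for Figure 24)
H = 0.0         # zero external field, as in Figure 24

f1 = de_freq(H, 10.6, 50e-8, Q_par, 2.13)    # layer 1: CoNbZr
f2 = de_freq(H, 19.8, 150e-8, Q_par, 2.17)   # layer 2: Co
print(f"isolated-layer DE limits: {f1:.1f} GHz (CoNbZr), {f2:.1f} GHz (Co)")
```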
We have various spin valve devices [86]. The spin valve structure consists of the pinned layer, in which the magnetization is pinned to prevent free motion in response to the external magnetic field, and the free layer, in which the magnetization can be freely aligned along the external field even under a weak magnetic field. Because the trilayer consists of two different magnets, the BLS spectrum is consequently asymmetric between the Stokes and anti-Stokes sides for Q_//d parameters below ~1. The frequency difference between levels 3 and 4 in the anti-parallel arrangement was 1.9 GHz, whereas the difference between levels 1 and 2 in the parallel arrangement was 0.7 GHz. Note that a frequency difference of 0.7 GHz is rather difficult to detect by means of the BLS technique, although this of course depends on the free spectral range setting; the lower frequency peaks would be masked by the intense Rayleigh peak. When the magnetization arrangement was changed from the parallel to the anti-parallel arrangement, the frequency difference between levels 3 and 1 was 2 GHz, and it was easily detected by the BLS technique.

For thinner spacer layers with thicknesses below the exchange length, we must take into account the interlayer exchange coupling (IEC) between the magnetic layers across the spacer layer [40]. The simplest form of IEC is the Heisenberg-type coupling, in which the IEC constant A_12 depends on the spacer layer thickness d and changes its sign similarly to the Ruderman-Kittel-Kasuya-Yosida (RKKY) coupling [87]. For a positive A_12, the parallel arrangement of M_1 and M_2 is preferable, and the anti-parallel arrangement is preferable for a negative A_12. The equations of motion for M_1 and M_2 follow from the LL equation; the IEC acts as a torque on the magnetization of each layer. We can take this torque into account through the Hoffmann boundary conditions [27] at the magnet-spacer interfaces. Here, A_j (= D_jM_j/2) is the intra-layer exchange stiffness constant in the j-th magnetic layer, and ∂/∂n is always directed toward the inside of the magnets. For brevity's sake, we adopted the same energy form for both the interfacial magnetic anisotropy energy (IME) and the SMA. After some calculations, we obtained the explicit expressions of the linearized Hoffmann boundary conditions for magnetic layers 1 and 2 [40]. Because the Hoffmann boundary conditions were applied at the magnet-spacer interfaces and the Rado-Weertman boundary conditions at the top and bottom magnetic surfaces, we must use the dipole-exchange framework, which has already been discussed for thin films and results in a 6 × 6 BCD equation for the SW frequencies of a single magnetic film. In the present case, we have two mutually coupled magnetic layers and must solve a 12 × 12 BCD equation to obtain the SW frequencies. We have already given the explicit expressions of m_x and m_y in Equations (37) and (38).

Grünberg and his coworkers successfully showed the oscillatory behavior of the A_12 constant as a function of the spacer layer thickness in Fe/Cr/Fe trilayers using the BLS technique [40]. They found the giant magnetoresistance (GMR) effect in the anti-parallel state of the layer magnetizations [88]. In the anti-parallel magnetization state, spin-flop phenomena can be induced by the external magnetic field [89,90]. In magnetic multilayers (MMLs) and superlattices (MSLs), a fundamental structure, for example magnetic layer 1 on magnetic layer 2, is stacked an arbitrary number of times. The SWs of the multilayers and superlattices as a whole are constructed from the DE or UDE mode in each magnetic layer. As an example, we consider a (M1/M2)_3 multilayer.
Here, the (M1/M2)_3 symbol means that the fundamental unit consisting of magnetic layers M1 and M2 is repeated three times. We can successively apply the appropriate boundary conditions at each interface and solve a BCD equation to obtain the MML and MSL SW frequencies [37]. In this example, we employed the second setting for the origin of the x-axis shown on the right-hand side of Figure 23; we set x = 0 at the center of the top M1 layer. When we set the susceptibilities to zero in the M2 layers, the M2 layers can be treated as nonmagnetic spacer layers. In contrast, when a nonmagnetic layer undergoes a magnetic phase transition, as in Fe/Gd multilayers, we can examine the SW dynamics near the phase transition. In order to make our discussion clear and simple, we assumed no MAE and no IEC for the magnetic layers. Because the calculations are rather straightforward but tedious even under these simplifications, I only show the final result: a 12 × 12 BCD equation, Equation (143), in which α = exp(−Q_//L_1), β = exp(−Q_//L_2), γ = exp(−Q_//Λ) (Λ = L_1 + L_2), and Λ_±^(j) (j = 1, 2) have been defined in Equation (131). The rest of the determinant and matrix elements are all zero. It is clear that the elements m_1 to m_16 appear as a set in Equation (143). This set is the algebraic description of the M1/M2 structure. When we insert an additional set into Equation (143), we obtain the SWs of the (M1/M2)_4 multilayer. In this way, we can generate MMLs with an arbitrary number of (M1/M2) stacks. This approach is quite intuitive and, of course, it is possible to take the MAEs and IECs into account within the above framework of SW frequency calculations; to do so, we must solve a huge and complicated BCD equation.

Another approach to obtain the MSL SWs was developed by Camley, Rahman, and Mills [39]. For an infinite stack of periodic structures with the period length Λ, the property of translational invariance gives us Bloch's theorem. In the present case, Bloch's theorem requires the magnetic potential φ(x) to take the Bloch form, where u(x) is a periodic function which satisfies u(x + Λ) = u(x), and Q_⊥ is a wave vector confined to the first Brillouin zone, 0 ≤ Q_⊥ ≤ π/Λ. For the nth period, we can write the magnetic potential accordingly for −L_1 ≤ x + nΛ ≤ −Λ. By virtue of Bloch's theorem, four amplitude variables, A to D, are enough for our discussion. We consider MSL SWs propagating along the y-direction and apply the magnetic boundary conditions at x = −Λ and −Λ − L_1. Finally, we obtain a 4 × 4 BCD equation that determines the bulk MSL SW frequencies. When we replace magnetic layer 2 with a nonmagnetic layer, Λ_±^(2) = 1, and Equation (149) can be reduced accordingly; we write Λ_±^(1) as Λ_± for convenience's sake. The bulk MSL SW frequency is found to be given by Equation (151). Here, the MSL band factor ∆(Q_//, Q_⊥) is defined by Equation (152). For a semi-infinite MSL, we cannot apply Bloch's theorem, because the MSL surface violates the translational invariance. Taking into account the infinite MSL discussion, let us assume the magnetic potentials at the nth period for −L_1 ≤ x + nΛ ≤ −Λ, where ε is a positive attenuation parameter of the MSL surface mode. We can eliminate C and D from the boundary conditions and obtain further homogeneous equations for A and B. From Equations (155) and (156), the attenuation parameter ε is required to satisfy Equation (157). Finally, we must include the boundary conditions at the surface.
By eliminating the magnetic potential outside the MSL, we obtain an additional relation, and the dispersion relation of the MSL surface SWs is given by Equation (159). Here, we defined κ = exp(−εΛ). Now, we replace magnetic layer 2 by a nonmagnetic layer again and write Λ_±^(1) as Λ_± for convenience's sake. Equation (160) then reduces to Equation (161), which yields three solutions: Λ_+ − 1 = 0, Λ_− + 1 = 0, and sinh(Q_//L_1) = 0. However, not all of these solutions can be eligible surface SWs. Readers should note that Λ_± in this review corresponds to Λ_∓ in the original paper. For example, we obtain A ≠ 0 and B = 0 for the Λ_+ − 1 = 0 solution from Equation (158). Equation (155) is automatically satisfied, and Equation (156) gives exp(−Q_//L_1) exp(−εΛ) − exp(Q_//L_2) = 0. Because this equation gives a negative ε, the Λ_+ − 1 = 0 solution cannot be an eligible surface mode. On the other hand, the Λ_− + 1 = 0 solution gives a positive ε for L_1 > L_2 and thus can be an eligible surface mode. In fact, Grimsditch et al. observed BLS from the surface mode only for an MSL with a magnetic layer thicker than the nonmagnetic spacer layer [91]. The surface mode frequency was found to be exactly the same as the DE mode frequency given by Equation (52) for a semi-infinite magnet [39]. We can readily reject the sinh(Q_//L_1) = 0 solution for SWs propagating along the y direction, because Q_//L_1 is a real, nonzero quantity. This mode can exist within the bulk SW band given by Equation (41).
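The sign argument for the attenuation parameter can be verified in one line from the relation quoted above for the Λ_+ − 1 = 0 branch; the analogous positivity condition for the Λ_− + 1 = 0 branch (ε > 0 for L_1 > L_2) is not rederived here.

```python
# Check of the attenuation parameter for the Lambda_+ - 1 = 0 branch.
# The relation quoted in the text, exp(-Q*L1)*exp(-eps*Lam) - exp(Q*L2) = 0,
# fixes eps = -Q*(L1 + L2)/Lam = -Q < 0 for any layer thicknesses, so this
# branch cannot be a decaying (eligible) surface mode.
import numpy as np

Q = 1.7e5                 # cm^-1, illustrative in-plane wave vector
L1, L2 = 30e-8, 13e-8     # cm, the Fe and Cr thicknesses used later in the text
Lam = L1 + L2

eps = -(Q * L1 + Q * L2) / Lam
print(f"eps = {eps:.3e} cm^-1 (negative -> not a surface mode)")
print("eps == -Q:", np.isclose(eps, -Q))
```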
Because a large number of reports on BLS from MMLs and MSLs have already been published, I do not have enough space to mention them all; I recommend that the reader refer to the references in this review and to the most recent publications. We examined the SWs in [Fe (30 Å)/Cr (x Å)] (x = 8-60 Å) MMLs prepared at IMR by rf-sputtering on quartz substrates [92]. The total thickness of each MML was fixed at ~1000 Å. From the band calculation based on Equation (152), it is clear that the SW band frequency becomes independent of the Q_⊥Λ parameter above Q_⊥Λ ~ π/10; this means that BLS observed the lower bound of the bulk SW band. Because the magnetic Fe layer of the tested MML was thicker than the nonmagnetic Cr spacer layer, we expected to observe scattering from the surface SW peak at the frequency given by Equation (162). The surface SW frequency was expected to be ~27 GHz at H = 0.5 kOe and ~35 GHz at H = 3.0 kOe. Because our FPI mirror spacing was set to 5 mm in this study, the surface SW peak was unfortunately masked by the intense ghost peak from the adjacent interference order. Figure 25b shows the SW frequencies as a function of the magnetic field for the Fe (30 Å)/Cr (13 Å) MML. I performed an additional measurement at H = 0.9 kOe and added the result to Figure 25b. The in-plane hysteresis loop of the Fe (30 Å)/Cr (13 Å) MML indicates an antiferromagnetic arrangement of the magnetizations of adjacent Fe layers under zero magnetic field. The loop indicates an in-plane canted structure below 1.2 kOe and a ferromagnetically aligned structure above ~1.2 kOe. We can define the transition magnetic field H_C from the canted state to the ferromagnetic state. We observed an SW doublet on both frequency sides below H = 1.1 kOe; in contrast, we observed an SW singlet above 1.2 kOe. Taking the hysteresis loop result into account, the SW doublet is clearly related to the canted magnetization state.

Generally speaking, the magnetic unit cell in the canted state is essentially the same as in the antiferromagnetic state and is double the unit cell of the ferromagnetic state. Therefore, we can expect a doublet of SW peaks in the canted state and a singlet bulk SW peak in the ferromagnetic state. Nörtemann et al. calculated the SW frequencies of the dipole modes in an exchange-coupled MML with a canted ground state in terms of the effective-medium theory [90]. The most striking feature of their results is the mode crossing between the upper bulk SW band and the surface mode around ~H_C/2. For a semi-infinite canted stack without the MAE and for SWs propagating perpendicular to the net magnetization, the SW frequencies in the canted state, except in the mode crossing region around ~H_C/2, are given by Equations (163) and (164). The equilibrium canting angle α (note that 2α is the true canting angle between the adjacent magnetizations) is given by Equation (165). Here, H_E is the interlayer exchange field and N is the number of atomic layers in the MML. The solid lines in Figure 25b were calculated by Equations (163)-(165) below H_C and by Equation (151) above H_C with the magnetic constants g = 2.06 (γ/2π = 2.88 GHz/kOe), 4πM = 17.5 kG, H_C = 1.15 kOe, L_1 = 30 Å, and L_2 = 13 Å. The agreement in the canted state below H_C was not satisfactory. As a possible reason, we consider that our MML consists of only 11~12 canted units, each given by 2[Fe (30 Å)/Cr (13 Å)], whereas the effective-medium theory assumes a semi-infinite or large number of canted stacks with ideal, sharp interfaces.

So far, our discussion on the IEC has been based on the assumption that the asymptotic limit is applicable. The term "asymptotic limit" means that the coupling is independent of the ferromagnetic layer thickness and that the interlayer thickness is large compared to the Fermi wavelength λ_F = 2π/k_F, which is typically ~5 Å, or a few monolayers (MLs), in many metals [93]. For thicknesses below λ_F, the IEC cannot be described by our previous theoretical framework, and we should apply a more fundamental numerical method, for example ab initio calculations by the self-consistent full-potential linearized augmented-plane-wave (FLAPW) method. Fine-layered [Fe (n ML)/Au (n ML)]_m SLs with n = 1 to 5, for which we use the abbreviation (n)_m, were prepared by means of MBE on MgO (001) substrates at IMR, Tohoku University. The total numbers of Fe and Au atomic planes were kept constant. We use the term "fine-layered" SLs (FLSLs) for SLs with layer thicknesses comparable to or smaller than the Fermi wavelength. The Fe(1 ML)/Au(1 ML) FLSL corresponds to an ordered alloy with the tetragonal L1_0 structure, which exists in the equilibrium phase diagram for the Fe1Pt1 alloy but not for the Fe1Au1 alloy. BLS spectra at 300 K were excited by the p-polarized 5320 Å/150 mW line from a DPSS laser, and a cross-polarized analyzer was inserted in front of a tandem FPI to eliminate scattering from SAWs [94,95]. Magnetic fields of up to 7 kOe were applied along the crystallographic [110] direction, and the SWs propagating along the (110) direction were examined. The typical spectrum accumulation time was around 2 h. Figure 26 displays BLS spectra observed from the (n)_m FLSLs in an external magnetic field of 3.0 kOe. Here, the symbol n = 2 ± δ indicates an average ML number, with δ = 0.25 and 0.5. The total thickness L_SL of the (1.5)_30 FLSL is 145 Å, and it is 269 Å for the (2.5)_30 FLSL.
The SSW structure can be clearly seen, and the labels 1 to 4 on each spectrum stand for the corresponding SSW mode number. Figure 27 shows a comparison of the BLS spectra observed from the integer-type FLSLs. (Figure 27 caption: a comparison of BLS spectra observed from the integer-type FLSLs; the experimental conditions were the same as those given in Figure 25, and the labels 1 to 5 and DE stand for the SSW modes and the DE mode, respectively [95].) The even integer-type FLSLs exhibited clear SSW structures, but the odd integer-type FLSLs did not. Furthermore, the noninteger-type spectra shown in Figure 26 can be smoothly connected between the (2) 50 and (3) 33 spectra. These observations indicate that the interactions leading to the occurrence of SSWs systematically changed as a function of the ML number n. In order to analyze these results, we regarded the FLSLs as diluted magnetic Fe/Au alloy films with uniaxial MAEs perpendicular to the film plane and anisotropic exchange couplings. The magnetization of the alloy was assumed to be given by M = M Fe d Fe /(d Fe + d Au ) = M Fe d Fe /Λ SL . Here, d is the thickness of each layer (d Fe = 3.00 Å and d Au = 4.27 Å) and M Fe = 2.65 ± 0.2 µ B per Fe atom. The nth SSW frequency is given by Equations (166) and (167). Here, 4πM eff has already been given by Equation (93), and L SL is the total thickness of the FLSL, which was found to be 364 Å by XRD measurement. Figure 28 shows the SSW frequencies as a function of the external magnetic field for the (2) 50 FLSL. It can be readily seen that the magnetic moment was not fully saturated for H < 4 kOe.
Because we confirmed that the DE mode was superimposed on the n = 2 SSW peak through the surface dispersion examination, by changing the scattering angle ϑ in Equation (1) from 15° (Q // = 0.61 × 10 5 cm −1 ) to 45° (Q // = 1.67 × 10 5 cm −1 ), and because the DE mode does not contribute to the information on the exchange, the DE mode was not included in the present analysis. We obtained a set of magnetic constants given by γ/2π = 2.8 GHz/kOe (g = 2.0), 4πM eff = 0.75 kG, and D ⊥ = 2.4 × 10 −10 Oe·cm 2 [94]. Figure 29 shows the IEC constant J i as a function of the atomic plane number n. We observed a pair of DE peaks for a (1) 30 FLSL but observed the SSW structure up to the n = 3 SSW for a (1) 100 FLSL. The J i constant was evaluated by using J i = 2MD ⊥ /a Fe , in which a Fe = 2.87 Å is the lattice constant of Fe. The open symbols stand for the ab initio results. Overall agreement between the BLS and ab initio results was fairly good, except for n = 3, for which the ab initio calculation predicted an antiferromagnetic ground state. We consider that the discrepancy between the BLS and ab initio results for n = 3 stemmed from interface roughness. Because the IECs for n = 2 and 4 were ferromagnetic, the antiferromagnetic IEC for the ideal n = 3 FLSL may have been smeared out for the long-wavelength SWs detected with the BLS technique. For pure Fe films in full contact, the J i value was expected to be 140 erg/cm 2 . Hence, the present value of J i ~ 44 erg/cm 2 for the (1) 100 FLSL seems reasonable. According to the ab initio calculations, the d electrons of the Fe atoms were almost isolated even by 1 ML of the Au layer; the IEC was transmitted by itinerant sp electrons via second-order processes. Figure 30 shows the perpendicular anisotropy field and the g-factor as a function of the atomic layer number. The solid line represents the 1/n dependence given by H A (n) = 22/n − 1.8 (kOe). The 1/n dependence was expected from the interface out-of-plane anisotropy, as we have already discussed for the Fe wedge sandwiched by the Au layers. The anisotropy field H A (n) is defined by Equation (168). When we ignore the bulk term in Equation (168), the result is consistent with that already obtained for the Fe wedge (Figure 20) [79].
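The following minimal sketch evaluates the fitted 1/n law for H A (n) and, looking ahead to the sign change of 4πM eff = 4πM SL − H A (n) discussed in the next paragraph, estimates where that sign change occurs. The value of 4πM SL used here is an assumption: it is obtained from the dilution formula quoted earlier with a bulk-like Fe magnetization of 21.5 kG, not a number given in the text.

```python
# Hedged sketch: fitted interface-anisotropy law and the resulting sign of 4*pi*M_eff.
d_Fe, d_Au = 3.00, 4.27                     # layer thicknesses per ML (angstrom)
four_pi_M_Fe = 21.5                         # kG, assumed bulk-like Fe value (not from the text)
four_pi_M_SL = four_pi_M_Fe * d_Fe / (d_Fe + d_Au)   # dilution formula, ~8.9 kG

def H_A(n):
    """Perpendicular anisotropy field (kOe) from the fitted law H_A(n) = 22/n - 1.8."""
    return 22.0 / n - 1.8

for n in (1, 2, 3, 4, 5):
    four_pi_M_eff = four_pi_M_SL - H_A(n)   # kG and kOe treated as numerically interchangeable
    print(f"n = {n}: H_A = {H_A(n):5.1f} kOe, 4piM_eff ~ {four_pi_M_eff:5.1f} kG")
# Under this assumption 4*pi*M_eff is negative for n <= 2 and positive for n >= 3,
# i.e. the sign change falls around n ~ 2, as discussed in the next paragraph.
```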
It is also clear that 4πM eff = 4πM SL − H A (n) changes its sign from positive to negative around n ~ 2. As we have already discussed for the Fe wedge, this means that the in-plane magnetization state changes into the perpendicular magnetization state under zero magnetic field. The present Fe/Au FLSL and the Fe wedge sandwiched by the Au layers gave consistent results on the in-plane and out-of-plane transitions of the zero-field magnetization state. Another interesting observation is the rather rapid change of the g-factor from the bulk value of 2.09 for n ≥ 4 to the free-electron value of 2.00 for n ≤ 2. We also found that the g-factor of the Fe wedge at 2.6 Å was very close to 2.00.

Nanogranular Films

Transition metal (TM = Fe, Co)-based granular films are promising materials for various magnetic applications, for example, high-density longitudinal magnetic recording media (CoPt-SiO 2 ), high-frequency micromagnetic cores (CoFeB)-(SiO 2 ), GMR sensors (Co-Al-O), and so on [55]. Among these granular materials, TM-Al-O granular films are interesting materials both for basic magnetic research and for technological applications. Over the last twenty years, we have performed systematic studies on the magnetic and transport properties of TM-Al-O nanogranular films with the research groups of IMRAM and IMR, Tohoku University, and RIEMM, Sendai [54,55,[96][97][98][99][100][101]]. Readers who are interested in our results on transport and magnetization properties can refer to our references; here, I concentrate on our BLS results. It is well-known that the magnetic properties of TM-Al-O (TM = Fe, Co) granular films strongly depend on the TM composition. For example, for a Co composition x(Co) above ~70 at.%, Co-Al-O films are in a ferromagnetic (FM) ground state with a coercive field of H C > 10 Oe.
On the other hand, for x(Co) = 60–70 at.%, TM-Al-O films exhibit reasonable soft magnetic properties with H C < 10 Oe, whereas Co-Al-O films with x(Co) = 40–60 at.% are in a superparamagnetic (SPM) state. I have already shown a BLS spectrum obtained from FM Co-Al-O films prepared at RIEMM in Figure 3, and the SW frequencies as a function of the magnetic field for a FM Fe 64 -Al 19 -O 17 film prepared at RIEMM in Figure 5 [54]. The TM-Al-O nanogranular films consist of TM crystalline particles of up to several nm in diameter. The TM particles are surrounded by a nonmagnetic Al-O grain boundary of ~1 nm in thickness. In spite of the granular structure, we could observe well-defined SW spectra, as shown in Figure 3, and we found that the SW frequencies, as a function of the magnetic field, are fully described by using Equations (49) and (50) as developed for uniform FM films (see Figure 5). As already shown in Figure 5, the small exchange field term H ex = DQ 2 = 0.32 kOe is important for reproducing the observed bulk SW frequency. As I have already mentioned, we cannot determine the SW stiffness constant D from the exchange field term. We found that the resistivity ρ of the FM TM-Al-O granular films obeys the T 2 law over a wide temperature range below 200 K. Although the T 2 law can be expected from magnon scattering of conduction electrons at low temperatures, it has not been fully confirmed yet for FM metals, probably because of the much larger T 5 term from phonon scattering. Because the magnon resistivity also depends on the exchange stiffness constant D, and the magnon T 2 term can be replaced by (T/D) 2 , we can expect the inverse-square law ρ ∝ (H ex ) −2 . Figure 31 shows a log ρ vs log H ex plot. We obtained ρ Fe = 30.3(H ex ) −2 µΩ·cm and ρ Co = 22.1(H ex ) −2 µΩ·cm, respectively. Here, H ex is in units of kOe [54]. Hereafter, I would like to concentrate on the Co-Al-O granular films prepared at RIEMM. Figure 32 shows a series of cross-polarized BLS spectra observed at room temperature from the FM and SPM Co-Al-O granular films in a magnetic field of H = 2.0 kOe [101]. These spectra were excited by the p-polarized 4730 Å line from a DPSS laser with an output power of 30 mW and accumulated over 5 h to improve the S/N ratio. The peak intensity of each spectrum was normalized by the total spectrum accumulation time. By virtue of the p → s polarization selection, the SAW scattering was completely suppressed. The solid squares in Figure 33 show the integrated intensity of the Stokes peak. The integrated intensity of the bulk SW in the FM state suddenly jumped in the FIM state. In order to determine the FM-SPM boundary, the exchange field H ex proved a good guide. The inverse-square relation between the exchange field H ex (kOe) and the resistivity ρ (µΩ·cm) gives us H ex = (22.1/ρ) 1/2 kOe for FM Co-Al-O films [54]. We calculated the exchange field H ex using our resistivity data ρ, and we show the calculated exchange fields by the open circles in Figure 33.
We also expected a linear relation H ex ∝ x − x C , in which x C is the FM-SPM boundary concentration. The solid line in Figure 33 displays H ex = 23.9 × 10 −3 (x(Co) − 59.3) (kOe), determined using the least-squares method. With this result, we determined that the SPM-FM boundary in the Co-Al-O nanogranular system is located at x C (Co) = 59.3 ± 1.3 at.%. The SPM-FM boundary can be characterized as the Co atomic concentration at which the exchange stiffness constant D vanishes. Therefore, we should take into account both the exchange and dipole couplings for the FM films. On the other hand, we can ignore the exchange coupling in the BLS spectrum calculation for the FIM state of the SPM films.
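The sketch below illustrates, with made-up resistivity values (hypothetical placeholders, not data from the study), how the exchange field follows from the inverse-square relation H ex = (22.1/ρ) 1/2 and how the boundary concentration x C is then read off as the zero crossing of a least-squares straight line, as described above.

```python
import numpy as np

# Hypothetical resistivity data for FM Co-Al-O films (illustration only, NOT from the paper):
# Co composition in at.% and resistivity in micro-ohm*cm.
x_Co = np.array([62.0, 65.0, 68.0, 71.0, 74.0])
rho  = np.array([5200.0, 1300.0, 560.0, 320.0, 200.0])

# Inverse-square relation quoted in the text for FM Co-Al-O: rho = 22.1 * Hex**-2,
# hence Hex = sqrt(22.1 / rho) in kOe.
H_ex = np.sqrt(22.1 / rho)

# Least-squares straight line Hex = slope*x + intercept; x_C is its zero crossing.
slope, intercept = np.polyfit(x_Co, H_ex, 1)
x_C = -intercept / slope
print(f"slope ~ {slope*1e3:.1f} x 10^-3 kOe per at.%, x_C ~ {x_C:.1f} at.% Co")
```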
In Figure 32, there is a distinct difference between the bottom three FM spectra, (a) through (c), and the top two SPM spectra, (d) and (e). The FM spectra exhibited a characteristic dual-peak structure on the positive-frequency anti-Stokes (SW annihilation process) side and a single peak on the negative-frequency Stokes (SW creation process) side under the present experimental conditions. These spectral features are typical of a SW spectrum from a thick FM film; an opaque magnetic film with a thickness of ~λ/2 can be treated as a semi-infinite magnet in the BLS experiment, as I have already mentioned. Here, the labels DE and B refer to the DE and bulk SW peaks, and the subscripts S and AS refer to the Stokes and anti-Stokes processes. Note that the DE peak height is higher than that of the FM bulk peaks in (a) through (c). On the other hand, it is quite interesting to note that only a broad but intense peak appears on both frequency sides in the SPM spectra. This seems to be a general feature of BLS spectra from the SPM state; in fact, broad BLS peaks have also been observed in CoPt-SiO 2 granular films [102] and (SiO 2 ) 100−x Co x granular films [103]. Hereafter, for convenience's sake, let us define the magnetization-induced state of an SPM film under an external magnetic field as the field-induced magnetization (FIM) state. The FIM peak frequencies were quite sensitive to the external magnetic field H; the peak frequencies increased with increasing magnetic field. It is also an interesting observation that the peak frequency on the anti-Stokes side was at most about 1 GHz higher than that on the Stokes side. Damon and Eshbach discussed a dipole-coupled ferromagnetic slab and obtained a nonreciprocal DE mode in addition to the bulk SW band [24]. This means that we can expect a singlet-doublet SW structure for a BLS spectrum from an FIM slab. The peak frequency difference of ~1 GHz between the Stokes and anti-Stokes sides is probably due to the DE mode, which only appears on the anti-Stokes side in our scattering geometry (see Figure 32a-c). In order to separate the bulk and DE peaks and to determine the peak frequency, peak width, and intensity from the observed broad peak, numerical spectrum analysis is required. For quantitative analyses of BLS spectra, beyond the peak-frequency discussions we have performed so far, we employed the CM theory, which has been developed for semi-infinite magnets, taking into account both the dipole and exchange couplings [29]. The CM theory can fully reproduce the singlet-doublet SW structure for a BLS spectrum from a ferromagnetic slab, as shown in Figure 32a-c. According to the CM theory, the SW response function S(Q // , ω) is given by the rather complicated formulae of Equation (169). Here, n(ω, T) is the Bose-Einstein (BE) factor, R zz , R xx , R zx , and R xz are the SW-photon coupling constants, and χ zz , χ xx , χ zx , and χ xz are the dynamical susceptibilities defined in Equations (79) and (80). The observed BLS spectrum should be compared with the convolution of the SW response function S(Q // , ω) with an instrumental function. We employed the intensity-attenuated Rayleigh peak as the instrumental function (see Figure 32). The solid lines in Figures 32 and 34 display a comparison between the observed spectra and the calculated spectra in the FM state and in the FIM state. As shown in Figures 32 and 34, we could fully reproduce the observed BLS spectra by taking into account only the dipole coupling for the SPM spectra, with damping constants of Γ/2π = 2.66 GHz for the H = 2.0 kOe spectrum and 2.26 GHz for the 4.6 kOe spectrum. In order to clarify the singlet-doublet structure, we recalculated the response function S(Q // , ω) with a small damping constant of 0.07 ≤ Γ/2π ≤ 0.15 GHz and adjusted the peak height of the Stokes peak of each spectrum in Figure 34. We found the doublet peaks located at 16.5 GHz and 13.4 GHz at H = 2.0 kOe, and at 25.0 GHz and 22.8 GHz at H = 4.6 kOe. Because the frequency splitting of 2–3 GHz between the doublet peaks and the peak width were comparable, the FIM doublet actually appears as a single peak in a BLS spectrum. The present numerical analysis reasonably explains why the anti-Stokes peak frequency is higher than the Stokes peak frequency.
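As a purely schematic illustration of why the FIM doublet is not resolved (this is not the CM response function of Equation (169); it simply superposes two Lorentzian lines at the doublet frequencies quoted above for H = 2.0 kOe and compares the realistic damping with an artificially small one):

```python
import numpy as np

def lorentzian(f, f0, gamma):
    """Unit-height Lorentzian centred at f0 with half-width gamma (all in GHz)."""
    return gamma**2 / ((f - f0)**2 + gamma**2)

f = np.linspace(5.0, 25.0, 4001)
doublet = (13.4, 16.5)                     # bulk- and DE-type frequencies at H = 2.0 kOe

for gamma in (2.66, 0.1):                  # realistic vs. artificially small damping
    spectrum = lorentzian(f, doublet[0], gamma) + lorentzian(f, doublet[1], gamma)
    centre = 0.5 * (doublet[0] + doublet[1])
    dip = 1.0 - np.interp(centre, f, spectrum) / spectrum.max()
    print(f"Gamma/2pi = {gamma:4.2f} GHz -> central dip = {100*dip:5.1f}% of the peak height")
# With Gamma/2pi = 2.66 GHz the central dip essentially vanishes and the doublet appears
# as one broad peak; with 0.1 GHz the lines at 13.4 and 16.5 GHz are clearly resolved.
```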
Figure 35a shows the magnetic field dependence of the bulk-type and DE-type peak frequencies in the FIM state determined by the numerical spectrum fitting with small damping constants. Because we have confirmed that there is no remanent magnetization at zero magnetic field, the FIM modes should be forbidden at H = 0. This means that the FIM mode frequencies should approach zero as the magnetic field approaches zero. On the other hand, the peak frequencies displayed in Figure 35a seem to stay finite even at zero magnetic field. As an attempt to solve this difficulty, we replaced the gyromagnetic ratio γ with a field-dependent gyromagnetic ratio γ(H), and the magnetization M with a field-induced magnetization M // (H), which is given by a sum of Langevin functions. We thus rewrote Equations (51) accordingly; the corrected results are shown as a function of the magnetic field in Figure 35b. So far, I have shown that BLS is a unique technique for the investigation of the magnetization dynamics of various opaque magnetic structures in the GHz frequency range. However, most BLS studies of opaque magnetic structures have been performed at room temperature, and few BLS studies have been performed at low temperature [12,[104][105][106][107]]. BLS studies of magnetization dynamics as a function of temperature are quite an interesting subject. As I have already pointed out, BLS intensity from opaque surfaces is much weaker than the phonon scattering in transparent materials, even at room temperature. Another inevitable difficulty for scattering experiments stems from the BE factor, which appears in the response function given by Equation (169). For conventional BLS studies performed above 15 K and within a narrow frequency range below 30 GHz, we can reasonably approximate the BE factor as n(ω, T) ≈ k B T/ħω. This approximation is known as the high-temperature approximation (HTA). Now, the BLS intensity is directly proportional to the absolute temperature T. It is obvious that the weak BLS intensity from opaque surfaces, already small at room temperature, gets weaker at lower temperatures. Of course, we can apply an intense laser beam to increase the BLS signals; however, an intense laser beam results in a local-heating effect at the beam spot, and this local heating will be critical for phase transition studies. We must overcome these difficulties to step forward into new frontiers of BLS studies on spin dynamics or magnetization dynamics at low temperatures.
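A quick numerical check of the HTA mentioned above (a sketch only; it simply compares the exact Bose–Einstein factor with k B T/ħω for a representative 20 GHz spin wave):

```python
import numpy as np

h = 6.626e-34     # Planck constant (J*s)
kB = 1.381e-23    # Boltzmann constant (J/K)
f = 20e9          # representative SW frequency (Hz)

for T in (300.0, 100.0, 15.0):
    x = h * f / (kB * T)
    n_exact = 1.0 / np.expm1(x)     # Bose-Einstein factor
    n_HTA = 1.0 / x                 # high-temperature approximation kB*T/(h*f)
    err = abs(n_HTA - n_exact) / n_exact * 100
    print(f"T = {T:5.1f} K: n_exact = {n_exact:7.2f}, n_HTA = {n_HTA:7.2f}, error = {err:.1f}%")
# Even at 15 K and 20 GHz the HTA deviates by only a few per cent, so in this range
# the BLS intensity is very nearly proportional to the absolute temperature T.
```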
Figure 36 shows a comparison of the spectra observed under field-cooling (FC) and zero-field-cooling (ZFC) conditions with an external magnetic field of 4.0 kOe at 20 K [108]. After spectrum accumulation times of over 6 h, we observed FC and ZFC spectra with reasonable signal-to-noise ratios at 20 K. We found no substantial difference between these spectra, as shown in Figure 36. For a more detailed comparison of these spectra, we properly adjusted the peak heights of the singlet Stokes peak. The vertical broken lines indicate a frequency range in which the AOMs were activated to protect the PMT from optical damage by the intense Rayleigh peak. From our ZFC and FC magnetization measurements at IMR, we estimated the blocking temperature T B of our sample to be ~110 K. The FIM changed its temperature dependence from the SPM Langevin type above T B to the FM power-law type below T B . An effective magnetic anisotropy with an easy axis along the applied magnetic field appears in SPM granular systems below T B . For convenience's sake, we assumed a uniaxial-type anisotropy field H K (= 0.27 kOe). The solid lines in Figure 36 show the calculated ZFC spectrum with the anisotropy field, and the broken lines are the calculated ZFC spectrum without the anisotropy field term. It is obvious that the anisotropy field term improves the agreement between the observed and calculated spectra. Although the H K term was much smaller than the other fitting parameters, it plays an essential role, as we will discuss later. The inclusion of the H K term is equivalent to replacing the external magnetic field H with the effective field H + H K in Equations (170) and (171). In order to display the SPM peak frequencies, we included a calculated spectrum for a small peak width of Γ/2π = 0.1 GHz in Figure 36. Figure 37 shows the temperature development of the BLS spectrum at 300, 100, and 15 K at H = 4.5 kOe. I included in Figure 32 an intensity-attenuated (×1/5000) Rayleigh peak for the 300 K spectrum. The solid lines on each spectrum give the calculated BLS spectra. In the calculations, we used a common peak width of Γ/2π for both the bulk and DE-type modes. In spite of this simplification, the agreement between the observed and calculated spectra was reasonable.
The horizontal bar on each spectrum shows the peak width for each calculated spectrum. We found that the peak width at 100 K was wider than the peak widths at 300 and 15 K. Figure 38 shows the SPM excitation frequencies and the peak width for a magnetic field of H = 4.5 kOe as a function of temperature. The labels B and DE refer to the bulk and DE-type peak frequencies, respectively. These peak frequencies were nearly insensitive to temperature but slightly increased below 50 K. The H K term in Equations (170) and (171) gives the increasing frequencies at lower temperatures. In contrast, the peak width Γ/2π exhibited a broad maximum centered at ~200 K, as shown in Figure 39. Figure 39 displays the peak width for external fields of H = 3.0 kOe (Δ), 4.0 kOe (□), and 4.5 kOe (○) as a function of temperature. The peak width clearly depends on both the temperature T and the magnetic field H. We observed a narrower width for a higher magnetic field. We performed an additional BLS measurement under H = 2.0 kOe at 15 K. The inset shows a summary of the magnetic field development of the peak width at 15 K. From the results shown in the inset, we can estimate a limiting value of Γ(0, 15 K)/2π ~ 4 GHz for the peak width at 15 K and zero magnetic field. With these observations, it seems reasonable to decompose the peak width into the following terms: Γ(H, T)/2π = φ/2π + ζ(H)/2π + ξ(T)/2π. Here, the φ/2π term describes the peak width due to the scattering of the FIM excitations by the nonuniformity of granule sizes (or granule moments) within a film, and it is expected to be temperature- and magnetic-field-independent. The ζ(H)/2π term describes the suppression of the incoherent motion of granule moments by the external magnetic field; this term is responsible for our observation of narrower widths for higher magnetic fields. Finally, the ξ(T)/2π term describes the damping due to couplings between the FIM excitations and another degree of freedom of thermally excited magnetization dynamics. We will concentrate on this term in the following discussion.
It is very interesting to note that the attempt time of τ 0 ~ 10 −12 s for the magnetic relaxation is three or more orders of magnitude longer than that for structural relaxation, and that the activation energy (the height of the potential barrier to be jumped) is one or more orders of magnitude lower than the barrier height for structural relaxation. The characteristic features of SPM magnetization relaxation can therefore be summarized as slow motion within a shallow potential minimum. Here, we return to Equation (173). At 15 K, the magnetization relaxation time τ in Equation (174) is calculated as τ = 3.23 × 10 −6 s. At the BLS frequency of f B ~ 20 GHz, the condition 2πf B τ >> 1 is fully satisfied. This condition means that the SPM excitation frequency is too fast to couple with the magnetization relaxation process. Therefore, we can set ξ(15 K)/2π = 0 in Equation (173). Now, assuming that the φ/2π + ζ(H)/2π term in Equation (173) is independent of temperature at H = 4.5 kOe, the relaxation amplitude is given by Equation (180). When we put the available constants ∆Γ R /2π = 0.35 GHz, γ/2π = 3.11 GHz/kOe, H K = 0 kOe, H = 4.5 kOe, and 4πM(∞) = 6.76 kG into Equation (180), we obtain 2π∆M = 0.36 kG and a relaxation amplitude of 0.70 GHz. Finally, we obtain the SPM frequency and the peak width ξ(T)/2π for the bulk-type mode as Equations (181) and (182), respectively. The solid lines in Figure 38 are the calculated SPM excitation frequency f B and the calculated peak width ξ/2π at H = 4.5 kOe as a function of temperature, obtained using Equations (181) and (182). The magnetic relaxation model qualitatively reproduces the observed temperature development of the peak width, except at the higher temperatures above 250 K. The calculated frequency below 100 K was lower than the observed frequencies; because we have not included the temperature dependence of the γ/2π constant and the H K term in Equation (181), it is possible to improve the agreement between the observed and calculated frequencies. We can also qualitatively reproduce the results for H = 3.0 and 4.0 kOe with the above relaxation parameters by changing the relaxation amplitude. As we have already discussed, we can ignore the ξ/2π term at 15 K in Equation (173) because of the condition ω B τ >> 1. Then, we can rewrite Equation (173) as Equation (183).
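The check below is a rough companion to the relaxation-time argument above. It is a sketch only: it assumes a Néel–Arrhenius form τ = τ 0 exp(E/k B T) for the relaxation time of Equation (174) (the form actually used in the paper is not reproduced here), infers the barrier from the quoted τ(15 K) = 3.23 × 10 −6 s and τ 0 ~ 10 −12 s, and then evaluates the product 2πf B τ at several temperatures.

```python
import numpy as np

tau0 = 1e-12            # attempt time (s), as quoted above
tau_15K = 3.23e-6       # relaxation time at 15 K (s), as quoted above
f_B = 20e9              # representative SPM excitation frequency (Hz)

# Assumed Neel-Arrhenius form tau = tau0 * exp(E/(kB*T)); the barrier follows from tau(15 K).
E_over_kB = 15.0 * np.log(tau_15K / tau0)
print(f"E/kB ~ {E_over_kB:.0f} K (under the assumed Arrhenius form)")

for T in (15.0, 100.0, 200.0, 300.0):
    tau = tau0 * np.exp(E_over_kB / T)
    print(f"T = {T:5.1f} K: tau ~ {tau:.2e} s, 2*pi*f_B*tau ~ {2*np.pi*f_B*tau:.2g}")
# At 15 K the product is ~4e5 >> 1, so xi(15 K)/2pi ~ 0 as argued above; under the same
# assumption it falls to order unity around 100-200 K, where the FIM excitations can
# couple to the relaxation and the peak width shows its broad maximum.
```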
The inset in Figure 39 shows the magnetic field dependence of the peak width Γ(H, 15 K)/2π. As the magnetic field increased, the peak width decreased. This behavior can be attributed to the magnetic field dependence of the ζ(H, 15 K)/2π term. Unfortunately, the highest magnetic field available in our BLS system with the closed-cycle refrigerator is not enough to fully separate the φ/2π and ζ/2π terms in Equation (183). However, as a rough estimation, we obtained φ/2π ≈ 2 GHz and ζ(0 kOe, 15 K)/2π ≈ 2 GHz, respectively. So far, I have demonstrated in this granular section that the BLS technique has high potential for the investigation of fast magnetization dynamics. In the Co-Al-O system, we investigated the dynamics in a frequency range of around 20 GHz. However, it is easy to extend the frequency range of the BLS technique below several GHz and above a few hundred GHz by utilizing a tandem FPI. Furthermore, we can adjust the SPM excitation frequencies by applying an appropriate external magnetic field, according to Equations (170) and (171). I would like to emphasize these advantages of the BLS technique for magnetization dynamics studies of SPM materials. Figure 40 shows the peak intensities at H = 4.5 kOe obtained from the fittings and normalized by the total spectrum accumulation time, as a function of temperature. Although all peak intensities decreased monotonically as the temperature decreased, the peak intensities at 15 K kept about 50% of the highest intensities observed at 250 K.
It is important to note that the BE factor in Equation (169) is an inevitable consequence of the quantum-mechanical fluctuation-dissipation theorem [117] and is independent of the details of the real physical system. The solid lines in Figure 40 show the least-squares fits below 150 K to a linear function of temperature given by I(T) = A + BT. The linear function describes the observed temperature dependence of the peak intensities well. Now, we rewrite the linear function in the following form (Equation (184)): I(T)/T = A/T + B. Here, I(T)/T is proportional to Equation (169) divided by the BE factor. Because the optical properties and the dynamical susceptibilities appearing in Equation (169) are essentially independent of temperature, we can explain the B term. However, we cannot explain the existence of the A/T term within the framework of the single-site magneto-optic coupling mechanism already discussed. Nevertheless, the A/T term in Equation (184) is the most crucial aspect of our present results. When we use the least-squares parameters A = 45.0 and B = 0.246 for the Stokes peak (■), it is obvious that the A/T term dominates the BLS scattering intensity below 100 K. At 15 K, the A/T term is more than ten times larger than the B term. This can be the reason why we could observe relatively intense scattering even at 15 K. Non-T-proportional behaviors of SW BLS intensity have already been reported in several conductive FM materials, for example, semiconducting EuS and EuO single crystals [104], a FM/AFM Co(2.8 nm)/CoO(0.7 nm) bilayer thin film [106], and so on. These materials are magnetically quite different from the present SPM Co-Al-O granular system. Because we cannot explain the A/T term in Equation (184) within the phenomenological description of the conventional single-site magneto-optical coupling theory given by Equations (14)-(18), some collective motion of electrons might contribute to the SW light scattering in conductive FM materials. In order to elucidate the light scattering mechanism, more extensive BLS studies of various conductive magnetic materials are strongly recommended.
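A trivial numerical illustration of this dominance (sketch only, using the quoted fit parameters for the Stokes peak):

```python
# Quoted least-squares parameters of the Stokes-peak intensity fit I(T) = A + B*T below 150 K.
A, B = 45.0, 0.246

for T in (15.0, 50.0, 100.0, 150.0):
    ratio = (A / T) / B    # weight of the anomalous A/T term relative to the HTA-like B term
    print(f"T = {T:5.1f} K: (A/T)/B = {ratio:4.1f}")
# At 15 K the A/T term is about twelve times larger than B, consistent with the
# "more than ten times" statement above, while the two terms become comparable
# towards 100-150 K.
```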
SWs in Confined Structures and Devices

Unfortunately, I have had no chance to perform BLS measurements in this research area. This area is closely related to the rapidly growing fields of magnonics and spintronics, and it is a promising area for the next generation of SW BLS studies. For readers interested in this field, I give a few recent references [118][119][120].

Summary

Since my first SW BLS reports on CoZr and FeSi films, published in 1989, I have spent more than 30 years in the field of SW BLS. BLS as an optical technique to determine a set of the basic magnetic constants of a magnetic thin film was fully established during this period, as I have presented in this review. Now, the BLS technique is recognized as one of the best techniques not only to investigate SWs and magnetization dynamics in the GHz frequency range, but also to characterize the magnetic interactions induced in various artificial magnetic structures. There is no doubt that developments in sample preparation techniques and in the BLS technique will go hand in hand to open new frontiers in functional devices as well as in basic materials science. When I started my SW BLS research in the middle of the 1980s, quasi-two-dimensional magnetism was just a textbook subject, and I never imagined that I could measure SWs in a film only a few monolayers in thickness; however, the MBE technique made it possible. Because of the sensitivity and flexibility of the light scattering technique, the importance and usefulness of the BLS technique in materials science and engineering are still growing. The downsizing of magnetic devices will result in magnetic instabilities due to direct or indirect interactions between magnetic elements. The microfocused BLS technique and micromagnetic simulation can be applied to characterize the interactions between such magnetic elements. BLS detection of the spin current and the spin accumulation in spintronics devices will be another interesting challenge. Through my research career in the BLS field, which spans over 45 years and includes structural and ferroelectric phase transitions, low-temperature liquids, glass formers, and SAWs and SWs from opaque surfaces, I have learned much new physics, many new ideas, and the importance of taking on challenges when entering a new research field. The BLS study of opaque surfaces was exactly such a challenge for me. When I was asked to examine BLS from SPM granular samples, to speak frankly, I had no assurance that I could observe BLS signals, because of my preconception that short-range interactions such as the exchange coupling were necessary to generate magnetic excitations. Fortunately, my preconception was completely wrong, as I have presented in this review, and BLS can serve as a new tool for investigating the magnetization dynamics of SPM granular materials.
The Impact of Blood Rheology on Drug Transport in Stented Arteries: Steady Simulations Background and Methods It is important to ensure that blood flow is modelled accurately in numerical studies of arteries featuring drug-eluting stents due to the significant proportion of drug transport from the stent into the arterial wall which is flow-mediated. Modelling blood is complicated, however, by variations in blood rheological behaviour between individuals, blood’s complex near-wall behaviour, and the large number of rheological models which have been proposed. In this study, a series of steady-state computational fluid dynamics analyses were performed in which the traditional Newtonian model was compared against a range of non-Newtonian models. The impact of these rheological models was elucidated through comparisons of haemodynamic flow details and drug transport behaviour at various blood flow rates. Results Recirculation lengths were found to reduce by as much as 24% with the inclusion of a non-Newtonian rheological model. Another model possessing the viscosity and density of blood plasma was also implemented to account for near-wall red blood cell losses and yielded recirculation length increases of up to 59%. However, the deviation from the average drug concentration in the tissue obtained with the Newtonian model was observed to be less than 5% in all cases except one. Despite the small sensitivity to the effects of viscosity variations, the spatial distribution of drug matter in the tissue was found to be significantly affected by rheological model selection. Conclusions/Significance These results may be used to guide blood rheological model selection in future numerical studies. The clinical significance of these results is that they convey that the magnitude of drug uptake in stent-based drug delivery is relatively insensitive to individual variations in blood rheology. Furthermore, the finding that flow separation regions formed downstream of the stent struts diminish drug uptake may be of interest to device designers. Introduction Although blood is a non-Newtonian fluid, flow in stented arteries has often been modelled numerically with a Newtonian blood model [1][2][3]. In order to capture the shear-thinning properties of blood, some computational studies of stented arteries [4][5][6] have attempted to more accurately model blood flow through the implementation of non-Newtonian blood rheological models [7][8][9][10]. Despite the fact that each model has been created by parameter-fitting to experimental measurements of blood viscosity, they vary widely in the prediction of the blood viscosity at the same strain rates. As blood viscosity not only differs significantly between males and females [11], but also between persons of the same sex [11], it is unlikely that any rheological model can be developed which would capture the highly variable rheological properties of blood. One of the primary causes of these variations is the haematocrit [9], usually defined as the ratio of the volume taken up by red blood cells to the total volume of blood. Phillips et al. [12] showed that the haematocrit decreases significantly in the aftermath of the angioplasty procedures used in stent implantation, likely from blood loss and fluid resuscitation. They found that whereas the average haematocrit prior to the operation had been 40% in men and 38% in women, in the 12 hours following the procedure these values dropped to 34% and 33% respectively. 
The Carreau non-Newtonian blood rheological model has been utilised in some past numerical analyses of stented arteries [4,5] but unlike some other rheological models, such as the Walburn-Schneck and Casson models, it cannot simulate the effects of differences in haematocrit. Hence, past analyses of steady-state drug deposition have not taken into account the effect of the reduced haematocrit on the resulting blood flow. It is important to characterise these post-angioplasty reductions in haematocrit to better predict the early 'burst' release of drug from strut coatings, which can take place within the first few days of stent implantation [13]. Following our previous work [5], the same computational model is used to explore the factors governing the fluid dynamic environment within the vasculature and their effects on drug distribution patterns. We also previously used a custom-designed bench-top experiment consisting of a single drug-eluting stent (DES) strut and tissue bed with a Newtonian working fluid to validate the computational findings and showed experimentally that pulsatile flow only has a small effect on drug transport when the strut is well-apposed. This was in qualitative agreement with numerically generated data, thereby justifying the use of steady-state simulations, which are simulations in which all flow parameters remain constant with respect to time. A series of these steady-state computational fluid dynamics (CFD) analyses are performed in the present study with the primary aim of determining the impact of different blood rheological models on the haemodynamic flow details and drug transport behaviour. We anticipate only subtle changes in drug delivery on account of rheology based on earlier studies [4], which may be obscured or masked in in-vitro or animal models. The use of numerical methods represents the ideal platform in which to study the impact of rheology owing to the multiscale resolution of the phenomena in the computational domain and the tight control on boundary conditions. The numerical results obtained with the traditional Newtonian viscosity model are compared with data generated with the Power Law, Walburn-Schneck, Casson, Carreau and Generalised Power Law non-Newtonian viscosity models, chosen because they span the clinical spectrum of apparent viscosity variations. These results are further compared against those obtained with an additional model with the fluid properties of plasma to ensure that the full range of near-wall blood behaviours are modelled. A secondary objective is to determine the effect of changing the haematocrit on the flow details and drug transport, so as to determine whether the post-angioplasty reduction in haematocrit results in significant changes in the flow and drug transport behaviour of the stent. Geometry This two-dimensional numerical study of blood flow in a stented artery encompassed the modelling of the flow field and drug concentration distribution in the artery lumen, as well as the drug concentration distribution in the arterial tissue. The lumen was modelled as a 3mm radius fluid domain, whilst the tissue was modelled as a 1mm thick, homogeneous, porous cylindrical tube. A single 0.1mm square cross-section DES strut was also modelled halfway between the inlet and outlet of the computational domains, as may be seen in Fig 1. 
These dimensions are identical to those used in our prior analysis of stent-based drug therapy in the renal vasculature [5] and represent the application of established numerical techniques [1,4] to implants in non-coronary vasculatures, reflecting current clinical interest [14][15][16][17]. We constructed an idealised geometry that still captured the breadth of flow features present in 3D complex geometries. This included a single stent strut obstructing near-wall flow and generating strut-adjacent recirculation regions where drug can pool [18]. A more realistic model of 3D implants would likely consider multiple struts. When considered in this context, upstream struts act to shield downstream struts from flow, and superposition of drug occurs [1]. Flow-mediated drug uptake was always most significant at the foremost proximal strut, where the flow disruptions were most significant and the contribution from neighbouring struts was quite small [1]. Selecting, then, a geometry that most significantly enhanced the sensitivity of drug uptake to arterial flow changes, we refined the geometry to be a single stent strut isolated in the boundary layer of the flow. A square edge was similarly chosen in place of rounded or chamfered edges to yield the most exaggerated flow field. Mathematical Model In steady incompressible flow, the equations of conservation of mass and momentum are written ∇'·v' l = 0 and ρ'(v' l ·∇')v' l = −∇'P' + ∇'·[μ'(∇'v' l + (∇'v' l ) T )], in which ρ' is the blood density (kg/m 3 ), μ' is the dynamic viscosity of blood (Pa·s), v' l is the velocity vector of blood in the lumen (m/s), P' is the thermodynamic pressure (Pa) and ∇' is the gradient operator. The prime denotes a dimensional variable and the absence of a prime indicates a non-dimensional parameter. The steady-state drug transport was represented in the lumen by the convection-diffusion equation v' l ·∇'c = D' l ∇' 2 c and in the tissue by the diffusion equation D' t ∇' 2 c = 0. Here, c represents the normalised drug concentration, defined as the ratio of the local drug concentration, c' (kg/m 3 ), to the concentration of drug at the strut surfaces, c' 0 (kg/m 3 ), on which it is assumed to be uniform, viz. c = c'/c' 0 , and D' l and D' t represent the diffusivity of the drug in the blood and tissue, respectively. Blood is assumed incompressible with a density of ρ' = 1060 kg/m 3 . The antiproliferative drug Paclitaxel served as a model compound in this analysis, chosen for its use as the active agent in DES and balloon catheters, with a large corpus of data regarding its vascular penetration and transport in blood and within arterial tissue. Its diffusivity coefficients are D' l = 3.89×10 -11 m 2 /s [19] and D' t = 3.65×10 -12 m 2 /s [20], respectively. A constant diffusivity of drug in blood was assumed, independent of local haematocrit or shear rate, since drug transport was modelled in the boundary layer, in which such effects were assumed negligible. Specifically, erythrocytes are not in high concentration in the boundary layer, and hence their impact on modulating the drug transport would be small. The flow velocities in the boundary layer are also small, and the drug diffusivity is therefore assumed to be reasonably represented by a statically measured value [19]. Furthermore, the global diffusivity of drug in the tissue utilised in this study does not take into consideration the effects of tissue anisotropy and heterogeneity. More complex drug transport models have been described elsewhere which implement negative sink terms to account for drug binding effects in the vascular wall [21]. However, our simplified convection-diffusion model will help to isolate the luminal flow patterns which enhanced or diminished drug uptake.
The finite volume solver ANSYS FLUENT 14.5 (ANSYS Inc.) was used to perform the numerical simulations. A semi-implicit (SIMPLEC) algorithm coupled the pressure and velocity, while a second-order central differencing scheme spatially discretised the pressure and momentum variables. A second-order upwind scheme was also used to discretise the scalar drug concentration. At the perivascular boundary of the tissue, two conditions are possible: zero mass flux of drug (∂c/∂y' = 0), or the drug can be completely washed away (c = 0) [22]. In this study, the latter condition was assumed to be more realistic, as the vasa vasorum are continuously replenished with fresh blood [2]. Finally, zero mass flux of drug was specified on the remaining boundaries. In order to see the effects of different flow rates on the resulting drug distribution in the tissue, three flow rates were implemented: Q' mean , representing the mean flow during the cardiac cycle, while Q' high and Q' low were twice and half Q' mean , respectively. These flow rates were then used in conjunction with each rheological model, and a Poiseuille parabolic inlet velocity profile established fully developed flow in each case. A volumetric flow rate of 6.64 mL/s was used for Q' mean , which corresponds to a Reynolds number of 427 under the assumption of a constant dynamic viscosity, μ' = 0.00345 Pa·s. These low Reynolds numbers were consistent with the mean flow conditions of the renal vasculature [5] and enabled blood flow to be modelled as laminar in all cases. All remaining boundary conditions remained the same in each of the simulations performed. A uniform, zero gauge pressure boundary condition was specified at the outlet, whilst no-slip conditions were prescribed on the strut-lumen and lumen-tissue interfaces. A fixed wall assumption was also implemented in light of findings which suggest that stented arteries are considerably stiffer than unstented arteries and that minimal artery motion occurs [23]. Finally, a symmetry boundary condition was specified at the top of the lumen domain.
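A quick cross-check of the Reynolds number quoted above (a sketch only; it assumes Re is based on the lumen diameter and on the mean velocity obtained by dividing the volumetric flow rate by the area of a circular lumen of 3 mm radius, which is a plausible but not explicitly stated convention):

```python
import numpy as np

rho = 1060.0        # blood density (kg/m^3)
mu_N = 0.00345      # Newtonian dynamic viscosity (Pa*s)
R = 3.0e-3          # lumen radius (m)
Q_mean = 6.64e-6    # mean volumetric flow rate (m^3/s), i.e. 6.64 mL/s

U_mean = Q_mean / (np.pi * R**2)       # bulk mean velocity
Re = rho * U_mean * (2.0 * R) / mu_N   # diameter-based Reynolds number
print(f"U_mean ~ {U_mean:.3f} m/s, Re ~ {Re:.0f}")
# ~0.235 m/s and Re ~ 430, consistent with the quoted value of 427 and low enough
# to justify the laminar-flow assumption.
```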
Blood Rheology Models In the Newtonian fluid model, the dynamic viscosity is assumed to remain at a constant value of μ' N = 0.00345 Pa·s [7]. Although this assumption greatly simplifies the modelling of blood, it has been found to be acceptable only in flows in which strain rates above 100 s -1 are encountered [24]. Such high strain rates are found in larger arteries, and consequently many investigators have justified their assumption of blood being a Newtonian fluid by emphasising the large size of the arteries that they were modelling [1,3]. However, the separated flow regions near the stent are characterised by low velocities, meaning that strain rates are modest. As non-Newtonian behaviour may significantly affect velocity distributions in these regions, it may also affect the rate at which drug is removed from the strut surface and convected by the blood. Hence, five non-Newtonian models of blood rheology which incorporate shear- and haematocrit-dependent viscosities were also examined in this study. These models were chosen because they span the spectrum of viscosity variations with respect to strain rate which have been observed in clinical data (Fig 2 [25][26][27][28][29][30][31][32][33][34]). The mathematical formulations of these models are given in Table 1. The rheological behaviour of blood close to a wall is a particularly complex phenomenon, and its viscosity in these regions is not precisely known. In steady, fully-developed flow, red blood cells migrate towards the vessel axis, leaving a plasma-rich region near the walls which is relatively void of red blood cells [35]. We therefore studied the full spectrum of rheology by considering the extreme cases of a boundary layer entirely depleted of red blood cells and a boundary layer rich in red blood cells. The fluid properties of plasma (ρ = 1025 kg/m 3 and μ = 0.00122 Pa·s [36]) were used to approximate the former case, whilst the aforementioned Newtonian and non-Newtonian blood rheological models were used to represent the latter case. As shown in Fig 2, the apparent viscosity predicted by each rheological model is approximately constant for shear strain rates (γ̇') greater than 400 s -1 . As γ̇' → 0 s -1 , however, each of the non-Newtonian models predicts a different increase in the apparent viscosity of blood, thus demonstrating shear-thinning behaviour, so that at low strain rates they differ significantly from each other and from the values obtained from viscometric data, which are also presented in Fig 2; in particular, the apparent viscosities of the Power Law and Walburn-Schneck models each increase towards infinity. Furthermore, the original formulations of the Power Law and Walburn-Schneck models predict zero viscosities as γ̇' → ∞. Although the behaviour of blood as γ̇' → 0 s -1 is still debated, zero viscosity at infinite γ̇' is unphysical, and hence limitations have been placed on the Power Law and Walburn-Schneck models in order to artificially mimic the Newtonian behaviour of blood at high strain rates. The Generalised Power Law and Carreau models at high strain rates are each approximately asymptotic to the usually used Newtonian value without any need for artificial limitations. The original Casson model was developed for a yield-pseudo-plastic [30]. This means that, unlike a fluid, which is defined by the fact that motion is induced if a shear stress is applied, a yield-pseudo-plastic behaves as if it were a solid if a shear stress less than the yield stress is applied. To alleviate these effects, a modified Casson model was proposed by Razavi et al. [37] in which no yield stress is present, and it was this model which was implemented in the current study. This modified Casson model was implemented with three haematocrit levels in order to determine how the haematocrit affects the haemodynamics and drug transport. The first of these models incorporated a haematocrit of 40%, typical of a healthy adult female [11], whilst the second incorporated a lowered, post-angioplasty haematocrit level of 33% [12]. Finally, the third Casson model implemented a haematocrit of 45%, normal for an adult male [11].
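To make the shear-thinning behaviour concrete, the sketch below evaluates the Carreau model, μ(γ̇) = μ∞ + (μ0 − μ∞)[1 + (λγ̇)²]^((n−1)/2), using parameter values commonly quoted for blood in the CFD literature; these particular constants are assumptions made here for illustration, and the constants actually used in this study are those listed in its Table 1.

```python
import numpy as np

# Carreau model: mu(gdot) = mu_inf + (mu_0 - mu_inf) * (1 + (lam*gdot)**2)**((n - 1)/2).
# The constants below are commonly quoted for blood and are assumptions for illustration.
mu_0, mu_inf = 0.056, 0.00345    # zero-shear and infinite-shear viscosities (Pa*s)
lam, n = 3.313, 0.3568           # relaxation time (s) and power-law index

def mu_carreau(gdot):
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (lam * gdot)**2) ** ((n - 1.0) / 2.0)

for gdot in (0.1, 1.0, 10.0, 100.0, 400.0, 1000.0):   # shear strain rates (1/s)
    print(f"strain rate = {gdot:7.1f} 1/s -> mu ~ {1e3*mu_carreau(gdot):6.2f} mPa*s")
# The apparent viscosity falls from tens of mPa*s at low strain rates and approaches the
# Newtonian value of ~3.45 mPa*s above a few hundred 1/s, mirroring the behaviour in Fig 2.
```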
The flow was deemed to be adequately resolved once the grid convergence index (GCI) corresponding to the recirculation lengths proximal and distal to the stent strut fell below 1%. The GCI is defined as [38]:

GCI = |(L′_fine − L′_coarse)/L′_fine| / (r^p − 1) × 100%,

where L′_fine and L′_coarse are the recirculation length proximal or distal to the stent strut for a fine and coarse mesh respectively (mm), r is the refinement factor, and p is the order of accuracy of the solution. In this case, r = √2 and p = 2. The results of this analysis are listed in Table 2. Although the flow was shown to be clearly resolved in each case, the mesh-dependence of the drug-transport behaviour also needed to be evaluated. This was accomplished by comparing the area-weighted average concentration (AWAC) value for each case, which represents the average concentration of drug in a representative area of arterial tissue. This representative area was chosen as that of a rectangle bounded by the upper and lower extents of the tissue domain and the vertical lines x′ = −0.35 mm and x′ = 0.35 mm. This axial extent was chosen on the basis that a typical inter-strut distance is 7 strut widths [1]. Mesh convergence was defined to occur once <2% change was observed between two successive mesh refinements, similar to the drug transport convergence criteria used in prior numerical DES studies [4,5]. These results are listed in Table 3.

Rheological Effects on Blood Flow

Recirculation Length and Normalised Mean Viscosity. In each steady-state analysis performed, recirculating flow regions were observed to form proximal and distal to the stent strut, and were also observed to correspond with regions of high drug concentration, as found in past DES studies [4,5]. This phenomenon is shown in Fig 3 for a case employing the Newtonian blood viscosity model at an inlet flow rate of Q′_low. The lengths of these recirculating regions were affected by both the flow rate and the choice of blood rheological model, as may be seen in Fig 4a and 4b. Specifically, increases in the flow rate were associated with smaller proximal and larger distal recirculating flow regions for each rheological model investigated. The Newtonian model yielded larger proximal and distal recirculating regions than most of the non-Newtonian models at all flow rates, with the exception of two of the Casson models in the proximal region. In contrast, the Power Law model tended to produce the smallest recirculating flow regions, 18% smaller than the Newtonian model in the proximal region and 24% smaller in the distal region at Q′_low. This difference diminished as the flow rate increased, becoming 11% smaller in the proximal region and 6% lower than the Newtonian model in the distal region at Q′_high. The Generalised Power Law, Walburn-Schneck and Carreau models produced similar recirculation length results to one another, smaller than the Newtonian model and generally larger than the Power Law model. Each of these non-Newtonian models also tended to converge towards the haemodynamic behaviour of the Newtonian model as the flow rate increased, although this behaviour was less evident in the three Casson models. It was also less evident in the model implementing plasma as the working fluid, which yielded a 24% smaller proximal recirculation length and a 59% larger distal recirculation length at Q′_high. To help ascertain why the non-Newtonian models tended to produce smaller recirculation regions, a new non-dimensional parameter dubbed the normalised mean viscosity, m̄, was introduced.
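Both mesh-convergence metrics used here are simple to script. The sketch below assumes the standard Roache form of the GCI (consistent with the definition just given, with the optional safety factor left at 1) and a plain area-weighted average for the AWAC; all numerical values are hypothetical and only illustrate the workflow.

```python
import math
import numpy as np

def gci(f_fine, f_coarse, r=math.sqrt(2.0), p=2.0, Fs=1.0):
    """Grid convergence index for a scalar such as a recirculation length.
    Fs is an optional safety factor (assumed 1 here); the result is a fraction."""
    eps = abs((f_coarse - f_fine) / f_fine)
    return Fs * eps / (r ** p - 1.0)

def awac(c_cells, area_cells):
    """Area-weighted average drug concentration over the representative
    tissue window bounded by x' = -0.35 mm and x' = 0.35 mm."""
    c_cells, area_cells = np.asarray(c_cells), np.asarray(area_cells)
    return float(np.sum(c_cells * area_cells) / np.sum(area_cells))

# Hypothetical recirculation lengths (mm) from a coarse/fine mesh pair:
print(f"GCI  = {100.0 * gci(0.512, 0.508):.2f} %")   # < 1 % -> flow considered resolved

# Hypothetical normalised concentrations and cell areas (mm^2) in the window:
print(f"AWAC = {awac([0.05, 0.22, 0.41, 0.18], [0.004, 0.003, 0.002, 0.004]):.3f}")
```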
This parameter, defined as

m̄ = (1/(A′ μ′_N)) ∫_{A′} μ′ dA′,

measures the average value of the apparent blood viscosity, μ′ (Pa·s), in the area, A′ (mm²), of the proximal or distal recirculation zone being investigated, normalised by the dynamic viscosity associated with the Newtonian blood rheological model, μ′_N = 0.00345 Pa·s. As the size of the proximal and distal recirculation zones varied between models and flow rates, the value of A′ was different in each case investigated. Despite the significant differences between the rheological behaviour of the different models, the results of Fig 4b showed that the relationship between the recirculation lengths and m̄ approximated linearity at most flow rates. This linear relationship indicates that the elevated viscosities of the non-Newtonian models in the recirculation zones are directly linked to smaller recirculation lengths, as the higher viscosities mean that higher stresses are needed to generate the same shear rates, so that there is greater resistance to motion. The R² index reveals that these linear trends are most noticeable at Q′_low, in which R² = 0.41 in the proximal region and 0.62 in the distal region. However, these values dropped to 0.00 and 0.48 respectively at Q′_high. The rheological models most responsible for this loss of linearity were the plasma model and two of the Casson models (H = 40% and 45%). These Casson models produced significantly larger proximal recirculation lengths than the Newtonian model at Q′_high, despite having larger m̄ values. Conversely, the plasma model yielded significantly larger distal recirculation lengths than the Newtonian model despite possessing a smaller m̄. The ensuing section outlines how these discrepancies may have transpired.

Non-Newtonian Importance Factor. In order to quantify the significance of non-Newtonian flow behaviour, Ballyk et al. [7] introduced a concept referred to as the non-Newtonian importance factor,

I_L = μ′/μ′_N,

where μ′ is the apparent blood viscosity, while μ′_N = 0.00345 Pa·s is the dynamic viscosity associated with blood at high strain rates. As non-Newtonian behaviour was generally most pronounced at low flow rates, only the results obtained at the lowest flow rate, Q′_low, are shown in Fig 5. The graphs of I_L obtained in this study confirm that the regions of highest dynamic viscosity occurred in both recirculation regions for each non-Newtonian blood rheology model. However, the third Casson model (H = 45%) was also found to yield high I_L values in high strain rate regions, such as near the tissue. The high dynamic viscosities found in these regions caused an increased resistance to blood flow, thereby reducing the size of the distal recirculation zone and increasing the size of the proximal zone. This could account for why this model produced significantly larger proximal recirculation lengths than the Newtonian model at each flow rate despite having larger m̄ values, and why it yielded a significantly smaller distal recirculation length than the other models at Q′_high. These results also demonstrated that the magnitude of non-Newtonian behaviour is affected by the haematocrit level. The low-haematocrit Casson model (H = 33%) yielded smaller I_L values than the other Casson models in high and low strain rate regions alike. The associated decrease in resistance to flow in high strain rate regions and in recirculating flow regions resulted in larger distal recirculation lengths.
It also yielded smaller proximal recirculation lengths, although the reduced resistance to vortex formation in the proximal recirculating flow region antagonises this behaviour. Hence, this low-haematocrit Casson model was found to produce the haemodynamic environment most similar to that of the Newtonian model. The smallest I_L values were associated with the plasma model, which yielded a constant I_L of 0.35. The associated decrease in resistance to flow in high strain rate regions and in recirculating flow regions resulted in the largest distal recirculation lengths of any of the models examined. It also yielded some of the smallest proximal recirculation lengths, although the reduced resistance to vortex formation in the proximal recirculating flow region antagonised this behaviour somewhat. This antagonistic behaviour could explain why the plasma model's proximal recirculation lengths remained comparable to the other models at all flow rates whilst its distal lengths differed more dramatically.

Rheological Effects on Drug Transport

Diffusive mass transport across the lumen-tissue interface. Fick's Law of diffusion states that the diffusive mass transfer across the lumen-tissue interface is proportional to the concentration gradient:

ṁ′ = −D′_t (∂c′/∂n′),

where ṁ′ is the mass flux of the drug species (kg/m²·s), D′_t is the diffusivity of the drug in the tissue, and ∂c′/∂n′ is the dimensional concentration gradient of drug. c′ is the concentration of drug (kg/m³) and n′ is the direction normal to the lumen-tissue interface (m), taken positive in the positive y′ direction. Using the definition of the normalised drug concentration in Eq (5) and non-dimensionalising n′ in Eq (9), so that n = n′/L′_inter-strut, where L′_inter-strut is the same inter-strut distance used in the AWAC calculations (m), a normalised drug concentration gradient parameter, ∂c/∂n, was created which reveals the mass transport behaviour at the aforementioned representative section of the lumen-tissue interface. The normalised drug concentration gradient distribution associated with the Newtonian model in Fig 6a revealed five local peaks and troughs, labelled A, B, C, D and E. Although drug matter was evident in the proximal and distal segments, the concentration gradients at the lumen-tissue interface were only significant in magnitude in the far upstream (points A and B) and downstream (point E) regions. This behaviour could be readily explained by the pooling of drug in the proximal and distal regions driving diffusion processes, and local velocity vectors driving convection processes (Fig 6b). In particular, we see at point A, upstream of the proximal recirculation region, low luminal drug concentrations and a velocity vector driving flow into the lumen, giving rise to highly negative ∂c/∂n values. Closer to the proximal strut surface we see drug concentration gradients of more or less zero, likely due to the low local velocities preventing convection, and a balance in luminal and tissue drug concentration preventing diffusion. We also noted that the contribution to drug uptake by the distal recirculation zone was inferior to that of the proximal zone. Specifically, integrating ∂c/∂n along the sections of the lumen-tissue interface occupied by the proximal and distal recirculation regions, we could reveal that the distal zone detracted from the drug transport into the tissue (∫(∂c/∂n)dx′ = −0.009 mm) whilst the proximal zone enhanced drug transport (∫(∂c/∂n)dx′ = 0.45 mm).
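The diagnostic quantities used in the last two subsections, the normalised mean viscosity m̄, the local importance factor I_L, and the interface integral of ∂c/∂n, can all be evaluated directly from exported field data. The sketch below is a minimal illustration with hypothetical numbers; the I_L = μ′/μ′_N form is consistent with the constant value of 0.35 reported for plasma (0.00122/0.00345).

```python
import numpy as np

MU_N = 0.00345  # Pa*s, Newtonian reference viscosity

def normalised_mean_viscosity(mu_cells, area_cells):
    """m-bar: area-weighted mean apparent viscosity in a recirculation zone,
    normalised by the Newtonian reference value."""
    mu_cells, area_cells = np.asarray(mu_cells), np.asarray(area_cells)
    return float(np.sum(mu_cells * area_cells) / (np.sum(area_cells) * MU_N))

def importance_factor(mu_cells):
    """Local non-Newtonian importance factor, I_L = mu'/mu'_N."""
    return np.asarray(mu_cells) / MU_N

def interface_uptake(x_mm, dcdn):
    """Trapezoidal integral of the normalised gradient dc/dn along a stretch of
    the lumen-tissue interface; positive means net transport into the tissue."""
    x_mm, dcdn = np.asarray(x_mm), np.asarray(dcdn)
    return float(0.5 * np.sum((dcdn[1:] + dcdn[:-1]) * np.diff(x_mm)))

# Hypothetical cell data from one recirculation zone:
mu = [0.0051, 0.0074, 0.0043, 0.0039]      # Pa*s
area = [0.002, 0.001, 0.003, 0.004]        # mm^2
print("m-bar =", round(normalised_mean_viscosity(mu, area), 2))
print("I_L   =", np.round(importance_factor(mu), 2))

# Hypothetical dc/dn samples along the proximal part of the interface:
x = np.linspace(-0.35, -0.05, 7)           # mm
dcdn = [1.8, 1.1, 0.6, 0.3, 0.1, 0.05, 0.0]
print(f"proximal uptake integral = {interface_uptake(x, dcdn):.3f} mm")
```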
For the distal region one can see that a larger recirculation region in which the drug can pool would act to dilute the luminal drug concentration, as well as a normal velocity vector (Fig 6b) that acts to drive drug away from the tissue, minimising any convective transport processes. The significance of these results is that they convey that designing haemodynamic stent struts which mitigate this distal recirculation zone may enhance drug uptake. An additional series of simulations were performed to investigate whether the distal recirculation zone displayed a similar tendency to drive drug out of the tissue with drugs other than Paclitaxel. These simulations were each performed with the Newtonian model (Fig 6c); the distal recirculation zone was again found to remove drug from the tissue for these non-Paclitaxel drugs, although this behaviour diminished as D′_l/D′_t → 100. Similar drug concentration gradient profiles to the Newtonian model were similarly observed in the non-Newtonian and plasma cases in Fig 6d as well, although with magnitudes scaled. Particularly, the magnitudes of the peaks and troughs differed by up to 59% from those of the Newtonian model. These differences could be attributed to the fact that a change in the models would act to 1) vary the local velocity (but not to the extent that it would change the balance of convection and diffusion) and 2) vary the relative size of the recirculation regions. Hence the Newtonian model, with its larger recirculation regions, generally yielded the highest magnitude ∂c/∂n values of the blood models whilst the Power Law model's smaller recirculation regions yielded the smallest values. The plasma model yielded the highest magnitude ∂c/∂n values overall, likely because of the high local velocities facilitated by its lower viscosity. The highest ∂c/∂n values were found at the strut-tissue interface, particularly at the corners at points C and D in each case. Examination of the high magnitude longitudinal concentration gradient (∂c/∂x′) values at these points in Fig 6b revealed that large quantities of drug were removed longitudinally from beneath the strut. It was this removal of drug beneath the strut which facilitated the high ∂c/∂n values between points C and D and especially at the strut corners themselves. A similar phenomenon was observed at point B, which also featured high magnitude ∂c/∂x′ values due to the loss of drug upstream, facilitating a local ∂c/∂n peak. A higher magnitude ∂c/∂n was observed at point D than at point C due to the velocity vector aft of point D which drives drug out of the tissue (and thereby enhances the concentration gradient in the y′ direction). The effects of flow and blood rheological model selection on the drug transport are further outlined in the ensuing sections.

[Fig 6 caption (fragment): Comparison with the red ∂c/∂n line confirmed that the upward flow at points A and E resulted in loss of drug from the tissue due to convection. The purple ∂c/∂x′ line revealed that drug transport in the horizontal direction was significant between points A and B, and at points C and E. (c) The distal recirculation zone was found to remove drug from the tissue for non-Paclitaxel drugs, although this behaviour diminished as D′_l/D′_t → 100. (d) The non-Newtonian blood rheological models produced similar ∂c/∂n patterns to the Newtonian model; however, their local maxima and minima were up to 59% smaller in magnitude. The size of these peaks and troughs was found to be directly related to the recirculation lengths predicted by each model. Hence, the choice of rheological model not only influenced the fluid dynamics, but also the drug transport behaviour. doi:10.1371/journal.pone.0128178.g006]

Area-Weighted Average Concentration. The AWAC of drug in the arterial tissue was dependent on both the flow rate and on the choice of the blood rheological model. An inverse relationship was found to exist between the AWAC and the flow rate, and, more precisely, between the AWAC and the sizes of the proximal and distal recirculating flow regions. The Newtonian model and the post-angioplasty Casson model (H = 33%), with their larger recirculation zones, therefore tended to produce the lowest AWAC of each of the blood rheological models at each flow rate while the Power Law model generally yielded the highest AWAC, as shown in Fig 7.
However, these AWAC values deviated less than 5% from those of the Newtonian model at all of the flow rates tested. In fact, it was only when modelling plasma instead of blood that any considerable deviations from the Newtonian model's AWAC were observed, and even these only occurred at Q′_high. This suggests that the Newtonian model is appropriate to use in place of non-Newtonian blood rheological models in studies seeking to quantify the magnitude of arterial drug uptake. The clinical significance of these results is that they convey that the magnitude of drug uptake in stent-based drug delivery is relatively invariant of individual variations in blood rheology. However, transient simulations implementing pulsatile inlet velocity profiles may be needed to confirm whether or not the plasma model's AWAC deviates significantly from the Newtonian model's over several cardiac cycles. Although these results appear to convey that the Newtonian model is appropriate for use in DES studies, they have neglected to account for the effects of blood rheology on the spatial distribution of drug in the tissue. This investigation and its outcomes are outlined in the ensuing section.

Non-Newtonian Drug Concentration Difference Factor. Another new parameter being introduced in this study is the non-Newtonian drug concentration difference factor, I_D, and it represents the difference between the normalised drug concentration of the concerned non-Newtonian rheological model and the Newtonian model at the same flow rate, divided by the integrated absolute concentration gradient along a representative length of the lumen-tissue interface:

I_D = (c_NN − c_N) / ∫_{x′=−0.35 mm}^{x′=0.35 mm} |∂c_N/∂n′| dx′.  (12)

As earlier, this representative length is chosen to lie between the points x′ = −0.35 mm and x′ = 0.35 mm along the line y′ = 0 mm. The subscripts NN and N indicate non-Newtonian and Newtonian cases respectively. The effect of blood rheology on the distribution of drug in the artery wall was ascertained by a comparison of the I_D plots at the three flow rates investigated. The results of this study are shown in Fig 8. Once again, only the results corresponding with a flow rate of Q′_low are displayed, as this is where non-Newtonian effects were generally most pronounced. The regions highlighted in red (I_D > 0) depict where the non-Newtonian blood model predicts a greater drug concentration, whilst the blue regions (I_D < 0) show where the Newtonian model predicts a higher drug concentration. Regions in which |I_D| > 0.06 were deemed to correlate with regions in which the non-Newtonian model's spatial distribution of drug matter departed significantly from that of the Newtonian model.
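Equation (12) can be evaluated pointwise once the Newtonian gradient profile along the representative interface is known. A minimal sketch follows; the concentration fields and gradient profile are hypothetical placeholders, and the trapezoidal rule stands in for whatever quadrature the original study used.

```python
import numpy as np

def concentration_difference_factor(c_nn, c_n, dcdn_newtonian, x_mm):
    """I_D: difference between non-Newtonian and Newtonian normalised drug
    concentrations, divided by the integral of |dc_N/dn'| along the
    representative interface (x' = -0.35 mm ... 0.35 mm, y' = 0)."""
    c_nn, c_n = np.asarray(c_nn), np.asarray(c_n)
    dcdn, x_mm = np.abs(np.asarray(dcdn_newtonian)), np.asarray(x_mm)
    denom = 0.5 * np.sum((dcdn[1:] + dcdn[:-1]) * np.diff(x_mm))  # trapezoidal rule
    return (c_nn - c_n) / denom

# Hypothetical tissue concentrations at three sample points and a placeholder
# Newtonian gradient profile along the interface:
I_D = concentration_difference_factor(
    c_nn=[0.42, 0.31, 0.18], c_n=[0.35, 0.33, 0.12],
    dcdn_newtonian=np.full(15, 0.8), x_mm=np.linspace(-0.35, 0.35, 15),
)
print(np.round(I_D, 3), "| significant:", np.abs(I_D) > 0.06)
```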
Although the Newtonian model was deemed adequate for predicting the AWAC, the presence of regions with high |I_D| established that non-Newtonian effects were significant when investigating the spatial distribution of drug matter in arteries featuring DES. The drug pools which formed in the smaller recirculating flow regions generally associated with the non-Newtonian models were significantly more concentrated than those of the Newtonian models, as conveyed by the red areas found immediately proximal and distal to the stent struts in Fig 8. Although the smaller recirculation regions of the non-Newtonian models did facilitate higher concentration drug pools, the strain rates associated with the Newtonian model were greater in the regions of the recirculation zones which were close to the lumen-tissue interface. These higher strain rates facilitated an increased convective transport of drug and may account for why I_D became negligibly small near the proximal lumen-tissue interface in each case, despite the higher concentrations of the drug pools in the non-Newtonian cases. I_D did not become negligible, however, at the distal lumen-tissue interface, as the drug matter which reached the interface was already significantly more dilute in the Newtonian case. The result of these effects is that the Newtonian model tended to yield higher tissue drug concentrations upstream of the strut whilst the non-Newtonian models produced higher concentrations in the downstream region. The combination of these effects could account for why the AWAC values of the non-Newtonian models deviated less than 5% from that of the Newtonian model despite the significant differences in drug spatial distribution which were observed. Although a non-Newtonian blood rheological model may more accurately describe the spatial distribution of drug in arterial tissue, it is difficult to convey which model is best suited to this task. Comparison of the results obtained with the three Casson models revealed that patients with higher haematocrits may yield higher drug uptake globally but particularly in regions distal to the stent struts. However, as the Power Law model typically yielded the most significant non-Newtonian behaviour, it is suggested that both Newtonian and Power Law models be implemented in future studies concerned with drug transport details. This method may be used to determine a range of potential drug transport behaviours and thus be of potential use to stent designers.

[Fig 7 caption (fragment): Increases in bulk flow rate were found to correspond with reduced area-weighted average drug concentrations (AWAC) in the tissue. Rheological models which produced larger recirculation lengths were also found to produce lower AWAC values; however, the deviation from the AWAC obtained with the Newtonian model was observed to be less than 5% for each non-Newtonian case. It was only when modelling plasma instead of blood that any considerable deviations from the Newtonian model's AWAC were observed, and even these were only observed at Q′_high.]
A plasma model may also be appropriate to incorporate on the basis of the relatively low tissue drug concentrations that it yields in both upstream and downstream tissue aspects. However, transient simulations implementing a pulsatile inlet velocity may be needed to confirm if these results depart significantly from those achieved with the Power Law and Newtonian models over several cardiac cycles.

[Fig 8 caption (fragment): Red regions (I_D > 0) depict where the non-Newtonian blood model predicts a greater drug concentration, whilst blue regions (I_D < 0) show where the Newtonian model predicts a higher drug concentration. The plasma case (a) is different from the other cases examined in that its larger distal recirculation zone resulted in a less concentrated distal drug pool than that of the Newtonian model. This larger pool allowed a greater region of the lumen-tissue interface to be exposed to recirculating drug; however, no significant positive I_D values were observed in the tissue. In contrast, the larger proximal drug pool of the Newtonian model did facilitate significant negative I_D values in the proximal sections of the tissue. In contrast, cases b-h show that the smaller recirculation lengths of the non-Newtonian models enable the formation of higher concentration drug pools than the Newtonian model. Although significant negative I_D values were again observed in the proximal aspects of the tissue, significant positive I_D values were also observed in the distal tissue aspects in some models. The non-Newtonian blood rheological models therefore typically produced much higher tissue drug concentrations than the Newtonian model in the distal regions and significantly lower concentrations in the proximal regions. doi:10.1371/journal.pone.0128178.g008]

Conclusions

Non-Newtonian effects were generally most pronounced at low flow rates and the choice of blood rheological model was found to influence flow patterns and drug transport. The largest non-Newtonian haemodynamic and drug transport effects were observed in the Power Law model, while these effects were more modest in cases employing the Walburn-Schneck, Casson, Carreau and Generalised Power Law models. These non-Newtonian effects typically manifested through significantly reduced proximal and distal recirculation lengths when compared with the Newtonian model. An additional blood plasma model was also implemented to account for red blood cell depletion in the near-wall regions. This model yielded smaller proximal and larger distal recirculation lengths than the Newtonian model. In each model investigated, the flow separation regions which formed downstream of the stent struts were found to diminish the drug uptake. These results, when considered in conjunction with relevant experimental data, could lead to the design of more haemodynamic DES struts which mitigate this distal recirculation zone and potentially enhance drug uptake. Numerical methods allowed us to appreciate the subtle but still significant differences in drug delivery due to blood rheology. We found that non-Newtonian effects can be significant and the choice of a non-Newtonian rheological model is contextually important. Specifically, a Newtonian model was found to be appropriate to use in studies seeking to quantify the magnitude of arterial drug uptake, although non-Newtonian effects were found to impact the spatial distribution of drug in the tissue. It was therefore suggested that both Newtonian and Power Law rheological models be implemented in future numerical studies concerned with drug transport details, in order to establish a range of potential drug concentration distributions. A plasma model may also be appropriate to incorporate on the basis of its relatively small tissue concentrations in both proximal and distal regions. Clinically, these results conveyed that the magnitude of drug uptake in stent-based drug delivery is relatively invariant of individual variations in blood rheology.
Furthermore, it was also suggested that patients with higher haematocrits may exhibit higher drug concentrations globally, but particularly in regions distal to the stent struts. As our understanding of near-wall viscosities is limited, however, an in-depth discussion of which rheological models to use in specific cases (e.g. females, males, post-surgery) was deemed beyond the scope of the current study.
The Evolution of the WUSCHEL-Related Homeobox Gene Family in Dendrobium Species and Its Role in Sex Organ Development in D. chrysotoxum The WUSCHEL-related homeobox (WOX) transcription factor plays a vital role in stem cell maintenance and organ morphogenesis, which are essential processes for plant growth and development. Dendrobium chrysotoxum, D. huoshanense, and D. nobile are valued for their ornamental and medicinal properties. However, the specific functions of the WOX gene family in Dendrobium species are not well understood. In our study, a total of 30 WOX genes were present in the genomes of the three Dendrobium species (ten DchWOXs, nine DhuWOXs, and 11 DnoWOXs). These 30 WOXs were clustered into ancient clades, intermediate clades, and WUS/modern clades. All 30 WOXs contained a conserved homeodomain, and the conserved motifs and gene structures were similar among WOXs belonging to the same branch. D. chrysotoxum and D. huoshanense each had one pair of segmentally duplicated genes and one pair of tandemly duplicated genes; D. nobile had two pairs of segmentally duplicated genes. The cis-acting regulatory elements (CREs) in the WOX promoter region were mainly enriched in the light response, stress response, and plant growth and development regulation. The expression pattern and RT-qPCR analysis revealed that the WOXs were involved in regulating the floral organ development of D. chrysotoxum. Among them, the high expression of DchWOX3 suggests that it might be involved in controlling lip development, whereas DchWOX5 might be involved in controlling ovary development. In conclusion, this work lays the groundwork for an in-depth investigation into the functions of WOX genes and their regulatory role in Dendrobium species' floral organ development.

Introduction

The homeobox transcription factors (HB TFs) are key regulators of plant and animal cell fates and differentiation, and homeobox genes were first discovered in Drosophila [1,2]. Meanwhile, more homeobox members continue to be found in other eukaryotes. The WUSCHEL (WUS) gene is the prototypic member of the plant-specific WUS homeobox (WOX) protein family, one of several HB TF families [3]. A total of 14 homologs of AtWUS were identified in the Arabidopsis genome, and these genes were named WOXs [4]. The WOX TFs contain a short stretch of amino acids that folds into a DNA-binding domain (called a homeodomain), which forms helix-loop-helix-turn-helix structures in space [5]. In plants, the WOX genes are extensively distributed. According to the evolutionary origins among genes, the members of the WOX gene family in plants can be clustered into three clades: the ancient clade, the intermediate clade, and the WUS/modern clade. All plant species (from algae to angiosperms) contain varying amounts of WOX genes belonging to the ancient clade; the intermediate clade is found in pteridophytes, gymnosperms, and angiosperms; and the WUS/modern clade is exclusively found in angiosperms [6]. In the WOX gene family of Arabidopsis, there are three members (AtWOX10, AtWOX13-14) in the ancient clade, four members (AtWOX8-9, AtWOX11-12) in the intermediate clade, and eight members (AtWUS, AtWOX1-7) in the WUS/modern clade.
The WOX gene family is involved in plant growth and development, as well as in the stress response.WOX family members belonging to different clades fulfill different biological functions in the development of plant flowers, floral meristems, roots, and other organs.The WOX genes of the ancient clade participate in the regulation of plant roots and flower development.AtWOX13 is expressed in floral meristem tissues, inflorescences, and young flower buds and is particularly highly expressed in developing carpels.WOX13 promotes replum development by negatively regulating the JAG/FIL genes [7].AtWOX14 is found only in Brassicaceae, where it is expressed early in lateral root formation and specific to the development of anthers [8].The intermediate clade mainly affects embryo patterning and root organogenesis.WOX8 and WOX9 are homologous genes that play vital roles in embryo and inflorescence development and are species-specific in their functions [2,[9][10][11].The genes WOX11 and WOX12, which are homologous, participate in the process of de novo root organogenesis in Arabidopsis [12].The WUS clade mainly affects the development of the floral meristem and leaf and stem cell maintenance.For example, Arabidopsis WUS genes can maintain stem cell homeostasis at all developmental stages in the shoot apical meristem (SAM) [13,14].Meanwhile, WUS genes are also able to act as activators to regulate the size of the floral meristem tissue [15].WOX1 and WOX3 redundantly regulate abaxial-adaxial growth in the leaf and floral meristem [16,17].AtWOX2 is required to initiate the embryogenic shoot meristem stem cell program in Arabidopsis [18].WOX5 is critical for stem cell maintenance in the root apical meristem (RAM) [19,20].In addition, the WOX gene family plays an important role in the response to environmental stresses, such as salt, cold, and drought.For example, GhWOX4 positively regulates drought tolerance in cotton; PagWOX11/12a positively regulates the salt tolerance of poplar [21,22]. The WOX genes act as transcription factors to activate or repress the expression of other genes on the one hand, as described above for the role played by WOXs in plants.On the other hand, the upstream part of the WOX coding region contains abundant CREs to receive the action of other regulatory factors.For example, maize ZMSP10/14/26 regulates the expression of the ZmWOX3A gene in coat precursor cells by directly binding to its promoter [23].In summary, the combination of cis-and trans-acting factors exerts a regulatory effect on gene expression, while playing an indispensable role in plant growth, development, and evolution [24]. Orchidaceae, one of the largest angiosperm groups, contains over 750 genera and 28,000 species [25,26].It is widely distributed, with the exception of the North and South Poles and extremely arid desert areas, and has the greatest distribution in the tropics.Orchids are highly evolved taxa within angiosperms and are one of the most studied taxa in biological research [27].Dendrobium is the second-largest genus in the orchid family and is a typical epiphyte [28].Most Dendrobium species have valuable medicinal stems, while their flowers and leaves have excellent ornamental value.In recent years, the completion of the whole-genome sequencing of D. catenatum [29], D. chrysotoxum [30], D. huoshanense [31], and D. 
nobile [32], etc., has provided valuable information revealing the genetic and molecular mechanisms of the formation of important traits in Dendrobium. The regulatory function of the WOX genes in model plants such as Arabidopsis has been relatively comprehensively researched. However, there is little knowledge about how the WOX genes affect the growth and development of Dendrobium species. In our study, we identified the WOX gene family in three Dendrobium species (D. chrysotoxum, D. huoshanense, and D. nobile), and systematically analyzed their basic traits, including their chromosomal localization, phylogenetics, motif compositions, gene structures, collinearity, and CREs. Meanwhile, the expression pattern of WOXs in the D. chrysotoxum flower parts was analyzed. This project aimed to preliminarily elucidate the evolutionary and potential biological roles of the WOX gene family in Dendrobium species, and to provide new insights into the study of the molecular regulatory mechanisms of the WOX genes in D. chrysotoxum flower development.

Identification and Physicochemical Properties of the WOX Gene Family

The WOX genes in three Dendrobium species were screened by BLAST and HMMER. The result showed that ten, nine, and 11 WOXs were identified in the genomes of D. chrysotoxum, D. huoshanense, and D. nobile, respectively. According to the order distribution of the chromosomes, these WOXs were named DchWOX1-10, DhuWOX1-9, and DnoWOX1-11. To characterize the WOX genes of the Dendrobium species in more detail, we predicted the physicochemical properties of 30 WOX proteins using ExPASy. The results are as follows (Table 1). The number of amino acids (AA) varied from 110 aa (DchWOX1) to 328 aa (DchWOX4), and the molecular weight (Mw) ranged from 12.84 kDa (DchWOX1) to 35.30 kDa (DnoWOX4). Among the 30 WOXs, 12 were basic proteins with an isoelectric point (pI) higher than 8.00; the remaining 17, with a pI ranging from 5.26 (DhuWOX8) to 7.79 (DnoWOX4), were neutral or weakly acidic proteins. Additionally, the grand average of hydropathicity (GRAVY) values of all WOX proteins were less than zero, suggesting their strong hydrophilicity. The instability indexes (II) of all WOX members exceeded 40, implying that these proteins are unstable [33]. All WOX proteins were found to be in the nucleus according to subcellular location predictions, indicating that they might also function there, like most TFs.

Phylogenetic Analysis of WOXs

We created a phylogenetic tree of the WOX genes to analyze the evolution of the WOX genes in the Dendrobium species (Figure 2). The evolutionary tree included 30 WOXs from three Dendrobium species, 15 AtWOXs from A. thaliana, and 13 OsWOXs from O. sativa. All WOX protein sequences have been collected in Table S1. According to the classification of the WOXs' evolutionary relationships in A. thaliana, the 30 WOXs in the Dendrobium species can be similarly clustered into the ancient clade (six WOX genes), the intermediate clade (ten WOX genes), and the WUS/modern clade (14 WOX genes). The WUS/modern clade has the largest number of WOX genes, while the ancient clade has the fewest.
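The ExPASy-style physicochemical screening summarised above (length, Mw, pI, GRAVY, instability index) can also be reproduced programmatically. The sketch below uses Biopython's ProtParam module as a stand-in for the ExPASy web tool; the sequence shown is a placeholder, not a real DchWOX protein.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence: substitute the real DchWOX/DhuWOX/DnoWOX protein sequences.
seq = "MSTNWLNRSHPYHQQQHQAAAELLRSSSGGGTTTTTTLQLFPLRPAAA"
pa = ProteinAnalysis(seq)

print("length (aa):       ", len(seq))
print("Mw (kDa):          ", round(pa.molecular_weight() / 1000.0, 2))
print("pI:                ", round(pa.isoelectric_point(), 2))
print("GRAVY:             ", round(pa.gravy(), 3))          # < 0 -> hydrophilic
print("instability index: ", round(pa.instability_index(), 2))  # > 40 -> predicted unstable
```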
Gene Structure and Conserved Motifs of WOXs

The conserved motifs of the 30 WOXs in the three Dendrobium species were evaluated through the online prediction website MEME (Figure 3B). The results demonstrated that, whereas the motif structures varied by clade, WOXs within the same clade had comparable motif structures. Ten conserved motifs were detected in the 30 WOXs. Table S2 lists all motif sequences. All WOXs contain motif 1 and motif 3 simultaneously; motif 6, motif 7, and motif 10 are exclusive to the WUS/modern clade; and motif 9 is found only in the intermediate clade. The distinct roles of various WOXs may be conferred by the particular distributions of various structures. We visualized the number and distribution of the WOXs' introns and exons to further reveal the gene structures of the WOXs in the three Dendrobium species (Figure 3C). Most Dendrobium WOXs contain 1-2 introns. Notably, three introns were detected in DnoWOX4 and four introns were detected in DchWOX9, while DchWOX1 and DhuWOX5 had no introns. The gene structures of WOX members belonging to the same clade are similar. In particular, in the ancient clade, the phylogenetic tree divides six genes into two structurally similar subclades. DhuWOX8, DnoWOX5, and DchWOX6 are clustered as a subclade with two introns, while DchWOX10, DhuWOX1, and DnoWOX11 are clustered as a subclade with one intron. Multiple sequence alignment of the 30 WOXs showed that all WOXs contained a helix-turn-helix-loop-helix region unique to the homeodomain (Figure 4A). Twelve WOXs in the WUS/modern clade contain the WUS-box (TL-LFP-) (Figure 4B).
Synteny Analysis and Ka/Ks Value of WOX Gene Family

The D. chrysotoxum genome contains a pair of segmental duplication genes, DchWOX4 and DchWOX8, on Chr06 and Chr15 (Figure 5A). Similarly, the D. huoshanense genome contains one pair of fragment duplication genes, DhuWOX2 and DhuWOX5, on Chr6 and Chr11 (Figure 5B). Two pairs of segmental duplicates were found in the D. nobile genome, DnoWOX2 and DnoWOX9 on CM039718.1 and CM039732.1, and DnoWOX3 and DnoWOX8 on CM039723.1 and CM039732.1, respectively (Figure 5C). Furthermore, the Ka/Ks ratios of these four gene pairs were all less than 0.5, ranging from 0.13 to 0.2 (Table 2).

Cis-Acting Elements Analysis of WOXs

We extracted 2000 bp upstream of the CDS of the 30 WOX genes to identify the CREs to predict the potential regulatory functions of the WOX genes in the Dendrobium species. In total, 569 CREs belonging to 35 types and 17 response functions were found in the three Dendrobium species (Figure 6 and Table S3). We classified the retrieved CREs into four categories: growth and development elements, phytohormone responsiveness, stress repressiveness, and light responsiveness. The growth and development element category includes endosperm expression, circadian control, and meristem expression. Interestingly, among them, the frequency of meristem expression is the largest. Five types of CRE exist within the category of phytohormone responsiveness. This includes abscisic acid (ABA), methyl jasmonate (MeJA), auxin, gibberellin, and salicylic acid responsiveness. The stress repressiveness category had four types of CRE, including defense and stress responsiveness, anaerobic induction, drought, and low-temperature stress, with anaerobic induction being the most frequent. In addition, light responsiveness accounts for almost half of all CREs (269/569), and there is a large frequency of light responsiveness in each WOX gene. As shown in Figure 6C, DchWOX9 in D. chrysotoxum has the largest number (33 CREs) of elements, DhuWOX2 and DhuWOX4 in D. huoshanense have the most (25 CREs), and DnoWOX6 in D. nobile has the most (34 CREs).
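Tallying the PlantCARE hits per functional category is a simple counting exercise once each element name is mapped to one of the four categories above. The mapping below covers only a handful of common PlantCARE element names and is purely illustrative; the full mapping would span all 35 element types reported here.

```python
from collections import Counter

# Illustrative mapping of a few PlantCARE element names to the four categories
# used in the text; extend to cover every element type reported in Table S3.
CATEGORY = {
    "Box 4": "light responsiveness", "G-Box": "light responsiveness",
    "ABRE": "phytohormone responsiveness", "CGTCA-motif": "phytohormone responsiveness",
    "ARE": "stress repressiveness", "LTR": "stress repressiveness",
    "CAT-box": "growth and development",
}

def count_categories(elements):
    """Tally CRE categories for one promoter given its list of element names."""
    return Counter(CATEGORY.get(e, "other") for e in elements)

# Hypothetical elements found in one 2000-bp WOX promoter:
print(count_categories(["Box 4", "G-Box", "G-Box", "ABRE", "ARE", "CAT-box"]))
```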
Expression Pattern Analysis of WOX Gene Family in D. chrysotoxum

We performed expression analyses based on transcriptome data from different flower parts in the three developmental periods of D. chrysotoxum (Figure 7). In the transcriptome heatmap, DchWOX6 and DchWOX10 of the ancient clade were expressed at significantly higher levels in S1. However, DchWOX6 was similarly expressed in all five floral parts, whereas DchWOX10 exhibited high expression only in the gynostemium. Of the three members of the intermediate clade, DchWOX5 had higher expression in the ovary of S1, DchWOX8 in the sepal of S1, and DchWOX4 had lower expression in S1 than S2 and S3. Among the five WOXs of the WUS/modern clade, DchWOX2 and DchWOX3 displayed similar expression levels throughout flower development, and they had higher expression amounts in the lip of S1; DchWOX7 had higher expression in the ovary of S1; DchWOX1 was highly expressed in the ovary of S2; and DchWOX9 gynostemium expression was highest in S1. Table S4 lists the FPKM values for the WOXs in D. chrysotoxum.
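For the heatmap in Figure 7, FPKM values are typically log-transformed and scaled per gene before plotting; the exact scaling applied by the TBtools HeatMap module is not stated here, so the sketch below simply shows one common choice (log2(FPKM + 1) followed by per-gene z-scoring) on hypothetical values.

```python
import numpy as np
import pandas as pd

# Hypothetical FPKM table: rows = DchWOX genes, columns = flower part / stage.
fpkm = pd.DataFrame(
    {"S1_lip": [1.2, 85.0, 3.1], "S2_lip": [0.9, 40.2, 2.8], "S3_lip": [1.1, 12.5, 2.5]},
    index=["DchWOX1", "DchWOX3", "DchWOX5"],
)

# Log-transform, then z-score each gene so that patterns across samples
# (rather than absolute expression levels) are compared in the heatmap.
log_expr = np.log2(fpkm + 1.0)
z = log_expr.sub(log_expr.mean(axis=1), axis=0).div(log_expr.std(axis=1), axis=0)
print(z.round(2))
# Plotting (optional): seaborn.heatmap(z) would reproduce a Figure-7-style panel.
```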
RT-qPCR Analysis of WOX Genes in D. chrysotoxum

We selected DchWOX3, DchWOX5, and DchWOX10 from different clades for RT-qPCR experiments to further elucidate the expression patterns of the WOXs during the development of different flower parts in D. chrysotoxum (Figure 8). As shown, the DchWOX3 RT-qPCR results are in general agreement with the transcriptome data, i.e., DchWOX3 showed very low expression in other parts of the flower, while it was significantly expressed in the S1 lip, and its expression was gradually downregulated during flower development (Figure 8A). DchWOX5 was consistently expressed in the ovary during the three periods, suggesting that DchWOX5 is involved in regulating ovary development (Figure 8B). The transcriptome expression heatmap showed that DchWOX10 was significantly expressed during S1 in the gynostemium. However, the RT-qPCR results indicated a trend of increasing followed by decreasing expression of DchWOX10 (Figure 8C). These differences may have resulted from imperfect correlations between the samples used for transcriptome sequencing and the samples used for RT-qPCR. Table S5 shows the primer sequences of the DchWOXs and reference gene.

All three Dendrobium species in this study had undergone at least two whole-genome duplication (WGD) events [30-32]. According to the chromosome distribution map (Figure 1), both D. chrysotoxum and D. huoshanense harbored a single pair of tandem repeat genes. The synteny analysis showed that both D. chrysotoxum and D. huoshanense had a single pair of genes with segmental duplication, and D. nobile had two pairs of genes with segmental duplication (Figure 5). It is probably because of these duplication events that the WOX genes differed in number and distribution among the three species. In addition, the Ka/Ks ratios of the four WOX gene pairs detected in this study were all less than one, revealing that these WOXs underwent strong purifying selection during evolution (Table 2) [45]. This enables them to remain highly conserved in evolving Dendrobium species, maintaining the specific biological functions of WOX proteins [46].
The phylogenetic analysis of the 30 WOXs from the Dendrobium species compared with those from Arabidopsis and O. sativa showed that the distribution of the WOXs in Dendrobium species is conserved (Figure 2). Like most plants, such as O. sativa, Picea abies, and Eriobotrya japonica, the WUS/modern clade had the highest number of WOX genes and the ancient clade had the lowest number of WOX genes among the three Dendrobium species [36,47,48]. The loss of certain WOX genes occurred in the three Dendrobium species, except for WOX1/6/7/8/14, which is unique to dicotyledons. For example, D. chrysotoxum and D. huoshanense both lost WOX4, and only D. nobile retained the homologous gene for AtWOX4 (DnoWOX6). Strikingly, DchWOX1 and DnoWOX1 were well clustered into a subclade with AtWUS and OsWOX1 (AtWUS homologous gene). AtWUS was shown to be the prototype of the Arabidopsis WOX gene family, so we speculate that DchWOX1 is the prototype of the WOXs of D. chrysotoxum, and DnoWOX1 is the prototype of the WOXs of D. nobile [3]. However, similar to D. catenatum, the prototype gene was absent in D. huoshanense [35]. We hypothesize that, during evolution, there may have been functional redundancy among WOX family members to compensate for the functions performed by the missing genes, or some species-specific WOXs may have arisen [3,39,49].
Supported by the conserved motifs and intron patterns, the highly conserved gene structure guarantees the conserved function of each clade or subclade.WOX genes in the same subfamily tend to have similar numbers of introns and exons, and they also share similarities in gene structure (Figure 3) [50,51].The gene of the ancient clade has a more conserved gene structure than the other two clades' genes, consistent with the WOX genes of Arabidopsis, Poplar, and Sorghum [37].The conserved ancient clade is present in all plants, and we hypothesize that strict conservation ensures that these WOX proteins perform indispensable functions in plant evolution [6].The concatenated motif 3 and motif 1 (Figure 3A) correspond to the homeodomain sequence shown in Figure 4 and are present in all 30 WOX proteins.The homeodomain exhibits a helix-turn-helix-loop-helix structure, which ensures that it can differentiate between sequence-specific targets with precise spatiotemporal organization (Figure 4A) [52].Similar to most plants, such as Arabidopsis, rice, and maize, only WUS/modern clade members contained the WUS-box (TL-LFP-) (Figure 4B) [4,39].In summary, the sequence and structure conservation of the WOX gene family members maintains their functional integrity across species. Transcriptional regulation occurs mainly through the promoter and its associated CREs to activate or repress gene expression [53].The WOX gene family is extensively involved in regulating the development of various plant organs and contributes to abiotic stress and phytohormone signaling.The promoter regions of these 30 WOXs were rich in light-responsive elements, suggesting that WOXs play an essential role in regulating the light response (Figure 6) [54].The meristem expression is the most frequent of the growth and developmental components.DchWOX4, DhuWOX2, and DnoWOX3 had two, three, and three meristem expression elements, respectively (Figure 6B), and these three genes shared a branch with AtWOX11 and AtWOX12.Since AtWOX11 and AtWOX12 participate in new root organ development in Arabidopsis [12], DchWOX4, DhuWOX2, and DnoWOX3 are speculated to regulate the differentiation of roots in the three Dendrobium species, respectively.The stress-repressive CREs in the promoter region of the WOXs mainly include anaerobic induction, drought inducibility, and low-temperature responsiveness, which implies that the WOX genes are essential for plants to respond to abiotic stresses.This has been verified in Arabidopsis and O. sativa; for example, the rab21 promoter drives OsWOX13 overexpression in O. sativa, thereby improving its drought tolerance [55].Elements associated with the plant hormone response in the promoter region of the WOXs are ABA, IAA, SA, GA, and MeJA.Many studies have revealed that the WOX is affected by IAA, ABA, and GA during plant growth and development [22,56].The MeJA response element is the most abundant phytohormone response element.It is involved in plant defense responses and also regulates plant growth and development.[57,58].To summarize, the WOX gene family in Dendrobium species has a vital function in plant growth, development, and the stress response by mediating phytohormone regulation. The transcriptome analysis of Arabidopsis, O. sativa, Fragaria vesca, and Nelumbo nucifera indicated that NnWOX14 was significantly expressed in the carpel of N. nucifera; FvWOX9 and FvWOX9a in F. 
vesca showed significant expression in the process of development; and WOX family members are expressed in the flowers of both Arabidopsis and O. sativa [37,59,60].All of these observations suggest the significance of the WOX gene in the formation of floral organs in plants.Therefore, we combined transcriptomic data from the flowers of D. chrysotoxum with RT-qPCR experiments to identify the important regulatory role of the WOX genes in D. chrysotoxum flower development (Figures 7 and 8).According to the results of DchWOX3 being significantly expressed in the S1 lip (Figure 8A), along with the development process of gradually reducing the amount of its expression, we speculate that DchWOX3 may participate in regulating lip growth.It has been found that PeWOX9A, PeWOX9B, and DcWOX9 are highly expressed in the gynoecium in D. catenatum and P. equestris, respectively, and the overexpression of DcWOX9 in Arabidopsis resulted in staminate and pistil sterility [36].In our study, DchWOX5 was significantly expressed in the ovary during the three periods (Figure 8B).PeWOX9A, PeWOX9B, DcWOX9, and DchWOX5 both share a clade with AtWOX9, indicating their specific roles in regulating gynoecium and ovary development, which need to be further verified. Identification and Physicochemical Properties of WOXs Candidate WOX genes were searched in the genomes of three Dendrobium species in the two-way BLAST tool of the TBtools v2.003 software, using the 15 WOXs of A. thaliana as probes, respectively [61,62].Meanwhile, using the Simple HMM Search tool of TBtools, the Hidden Markov Model (HMM) file of the homeodomain (PF00046) from the Pfam database (http://pfam.xfam.org/search,accessed on 27 September 2023) was utilized to further identify WOX family members in the three Dendrobium species.Candidate WOXs identified by BLAST and HMM were uploaded to NCBI CD-Search (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi, accessed on 27 September 2023) for structural analysis, and only genes with conserved typical homeodomains of WOXs were retained. Chromosomal Localization Based on the annotation data of the three Dendrobium genomes, chromosomal localization maps of the WOX genes were produced using Gene Location Visualize from the GTF/GFF program of TBtools. Phylogenetic Analysis of WOX Gene Family The protein sequences of 15 AtWOXs, 13 OsWOXs, ten DchWOXs, nine DhuWOXs, and 11 DnoWOXs were uploaded into the MEGA11 software, and then these 58 WOX protein sequences were used to achieve sequence alignment using the Clustal W function (default parameters), and the phylogenetic trees of the five species were constructed using the maximum likelihood (1000 bootstrap replication) [64,65].The editing and beautification of the phylogenetic tree was performed by Evolview 3.0.(http://www.evolgenius.info/evolview/#/treeview, accessed on 8 October 2023) [66]. Protein Conservative Domain and Gene Structure Analysis The prediction of conserved structural domains for the 30 WOXs in Dendrobium was accomplished by utilizing the CDD program from NCBI (https://www.ncbi.nlm.nih.gov/cdd, accessed on 10 October 2023).The identification of conserved motifs for the 30 WOXs was performed by the MEME online program (https://meme-suite.org/meme/tools/ meme, accessed on 10 October 2023) [67].Gene Structure View in TBtools was employed to map the phylogenetic trees, conserved motifs, and gene structures in combination.The WOX protein sequence alignment was performed by Clustal W of MEGA 11 and then beautified by jalview (Version: 2.11.3.2). 
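The two-step homology screen described at the start of this Methods passage (an HMMER search with the PF00046 homeodomain profile plus a BLASTP search against the 15 AtWOX probes, followed by CD-Search confirmation) can also be scripted outside TBtools. The sketch below wraps the command-line tools and intersects the hit lists; all file names are placeholders and the E-value cut-off is an assumption.

```python
import subprocess

# Placeholder file names; substitute the real proteome, profile, and probe files.
PROTEOME = "dendrobium_proteome.fasta"   # predicted proteins of one Dendrobium genome
HMM      = "PF00046.hmm"                 # homeodomain profile downloaded from Pfam
ATWOX    = "arabidopsis_wox.fasta"       # the 15 AtWOX protein sequences used as probes

# 1) HMMER screen for the homeodomain (PF00046); E-value cut-off is an assumption.
subprocess.run(["hmmsearch", "--tblout", "hmm_hits.tbl", "-E", "1e-5", HMM, PROTEOME], check=True)

# 2) BLASTP screen against the Arabidopsis WOX probes (tabular output, format 6).
subprocess.run(["blastp", "-query", ATWOX, "-subject", PROTEOME, "-evalue", "1e-5",
                "-outfmt", "6", "-out", "blast_hits.tsv"], check=True)

# 3) Keep proteins recovered by both screens; these candidates would then be checked
#    with NCBI CD-Search for an intact, typical WOX homeodomain.
with open("hmm_hits.tbl") as fh:
    hmm_ids = {line.split()[0] for line in fh if not line.startswith("#")}
with open("blast_hits.tsv") as fh:
    blast_ids = {line.split("\t")[1] for line in fh}

candidates = sorted(hmm_ids & blast_ids)
print(f"{len(candidates)} candidate WOX proteins")
```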
Synteny Analysis of WOX Gene Family The identification of intra-species duplicate genes in the three Dendrobium species was performed using the One Step MCScanx function of TBtools [68].In Advance Circos of TBtools, the duplication patterns of the three Dendrobium species were visualized.Then, the calculation of the Ka, Ks, and Ka/Ks values for the gene pairs was accomplished by the Simple Ka/Ks Calculator in TBtools. Cis-Acting Regulatory Element Analysis First, Gtf/Gff3 Sequence Extract and Fasta Extract of TBtools were used to extract 2000 bp upstream of the 30 WOX genes.Second, to complete the prediction of the CREs, the acquired sequences were submitted to the online website PlantCARE (http: //bioinformatics.psb.ugent.be/webtools/plantcare/html/,accessed on 13 October 2023).Finally, the distribution of the acquired CREs was visualized using the Basic Biosequence View module of TBtools, while the categories and number of CREs were counted and plotted in Excel 2016 [69]. Expression Pattern and RT-qPCR Analysis D. chrysotoxum plant materials were taken from the Forest Orchid Garden of Fujian Agriculture and Forestry University for transcriptome sequencing and RT-qPCR, including five flower parts (sepal, petal, lip, ovary, and gynostemium) in three periods (unpigmented bud stage, pigmented bud stage, and early flowering stage). The transcriptome sequencing and library construction of the five flower parts from the three periods of D. chrysotoxum development were performed by BGI Genomics Co., Ltd.(Shenzhen, China).RESM v1.2.8 was used for transcript quantification and to calculate the FPKM value for each sample.Based on the FPKM value, heatmaps of gene expression are created in the HeatMap program of TBtools. Further validation of the expression patterns of the three WOX genes was achieved by RT-qPCR experiments.The FastPure Plant Total RNA Isolation Kit (for polysaccharide-and polyphenol-rich tissues) (Vazyme Biotech Co., Ltd., Nanjing, China) was used to extract total RNA from D. chrysotoxum samples.The Hifair ® AdvanceFast One-Step RT-gDNA Digestion SuperMix for qPCR (YEASEN, Shanghai, China) was used to generate the cDNA for the quantitative PCR.Based on the transcription data, DchActin (Maker75111) was selected as the reference gene.The WOX gene sequences were submitted to the Primer Premier 5 software to design specific PCR primers (Table S5).The TSINGKE ArtiCanATM SYBR qPCR Mix was used for the RT-qPCR analysis on the Bio-Rad/CFX Connect Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, USA).Three biological replicates were carried out for all experiments.Finally, the relative expression of the three WOX genes was calculated using the 2 −∆∆CT method with S1 Se as the reference.The data were visualized using GraphPad Prism 7.0. Conclusions In this study, 10, 11, and 9 WOX genes were identified in the genomes of D. chrysotoxum, D. huoshanense, and D. nobile, respectively, and chromosomal localization, phylogeny, gene structure, and motif composition analyses were performed.In addition, based on the transcriptome and RT-qPCR experiments, we analyzed the expression patterns of the Dch-WOXs in five floral parts of D. chrysotoxum at three developmental periods.In conclusion, our results provide useful information for the in-depth exploration of the biological roles of the WOX gene family, as well as floral developmental studies in Dendrobium species. Figure 2 . Figure 2. Phylogenetic tree of WOXs in A. thaliana, O. sativa, and three Dendrobium species. Figure 2 . 
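For reference, the 2^-ΔΔCT calculation used above for the RT-qPCR data reduces to a few lines of code. In this sketch DchActin is the reference gene and the S1 Se sample is the calibrator, as described above; the Ct values themselves are placeholders rather than measurements from this study.

```python
# Minimal sketch of the 2^(-ddCt) relative-expression calculation used for the RT-qPCR data.
# Ct values below are illustrative placeholders, not measurements from this study.

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """2^-(ddCt): target gene normalized to the reference gene and to the calibrator sample."""
    d_ct_sample = ct_target - ct_reference               # dCt of the sample of interest
    d_ct_calibrator = ct_target_cal - ct_reference_cal   # dCt of the calibrator (e.g., S1 Se)
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Example: a DchWOX gene in an S1 lip sample versus the S1 Se calibrator (placeholder Ct values)
rq = relative_expression(ct_target=24.1, ct_reference=19.8,
                         ct_target_cal=26.5, ct_reference_cal=20.0)
print(f"relative expression = {rq:.2f}")
```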
Figure 4. Multiple sequence alignment results of the WOX gene family in three Dendrobium species. (A) Homeodomain. (B) WUS-box. The red box indicates the homeodomain and the black box indicates the WUS-box domain.
DchWOX9 in D. chrysotoxum has the largest number of elements (33 CREs), DhuWOX2 and DhuWOX4 in D. huoshanense have the most (25 CREs), and DnoWOX6 in D. nobile has the most (34 CREs).
Figure 6. The CREs in the promoter regions of 30 WOX genes. (A) Distribution of the WOX CREs; (B) the number of CREs; (C) statistics on the number of different categories of CREs. The types and numbers of CREs are listed in Table S3.
Table 1. Characteristics of the WOXs from three Dendrobium species.
Table 2. The Ka/Ks of the WOX gene family in the three Dendrobium species.
8,010.8
2024-05-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Hyocholic acid and glycemic regulation: comments on ‘Hyocholic acid species improve glucose homeostasis through a distinct TGR5 and FXR signaling mechanism’ Hyocholic acid species (HCA, hyodeoxycholic acid, and their glycine and taurine conjugated forms) comprise 80% of the composition of pig bile (Haslewood, 1956). An interesting fact about pigs is that they do not get diabetes even though they eat almost anything and in abundant amounts, a diabetes-promoting diet. The first use of pig bile for treatment of 'xiao-ke', a condition known today as diabetes, was recorded ∼400 years ago by the Chinese medical practitioners in the Compendium of Materia Medica (Li, 1573‒1593). Recently, we found HCA species as novel biomarkers for metabolic diseases (Zheng et al., 2021b) and also identified the role of HCA species in the prevention of diabetes as well as their mechanism of action (Zheng et al., 2021a). Although bile acids (BAs) are mostly associated with their aid in food digestion, they have also been shown to act as signaling molecules by binding to two particular receptors, farnesoid X receptor (FXR) and the G-protein-coupled receptor, TGR5. Experiments were thus directed to the effect of HCA binding to these two BA receptors on glycemic regulation in both in vivo and in vitro models. The first in vivo experiment was done using pigs. Three groups of pigs were fed GW4064, an FXR agonist that caused significant suppression of HCA species production, along with 30% increase in blood glucose levels and 69% decrease in blood glucagon-like peptide-1 (GLP-1) levels. When HCA species were administered, the blood glucose levels decreased and circulating GLP-1 increased, suggesting that glucose homeostasis and GLP-1 secretion were regulated by HCA species. Further in vivo testing was then done in two diabetic mouse models. HCA species administration to the mice caused the most significant lowering of blood glucose and the most improved glucose tolerance results compared to metformin at a dose 2-fold higher than HCA and to tauroursodeoxycholic acid (TUDCA). Circulating GLP-1 levels were also significantly increased in the HCA group. The BA receptors intestinal FXR and TGR5 are expressed in enteroendocrine L cells that are found primarily in the ileum and colon.
Therefore, in vitro studies of the effects of HCA species were performed using the enteroendocrine L cell lines, STC-1 and NCI-H716. Based on previous studies, which showed that BAs could induce GLP-1 release within 1-2 h (Thomas et al., 2009) and induce expression of the proglucagon gene within 24 h of treatment (Trabelsi et al., 2015), the effects of six different HCA species and six different non-HCA BAs on GLP-1 secretion and proglucagon gene expression were measured. At low BA concentration (5 mM), there was no increase in GLP-1 secretion or production, while at higher concentration (25 mM), all of the HCA species and the TGR5 agonists, lithocholic acid (LCA) and deoxycholic acid (DCA), stimulated GLP-1 secretion within 1 h and after 24 h. HCA species, TGR5 agonists LCA and DCA, and FXR antagonists TUDCA and tauro-β-muricholic acid promoted the transcription of proglucagon and GLP-1 secretion. At 50 mM concentration, HCA species surpassed all other BAs in the ability to increase proglucagon transcription and GLP-1 secretion. Similar results were achieved when human colonic explants were treated with the various BAs at 50 mM. The next set of in vitro experiments was designed to measure direct effects of HCA species on the receptors by measuring intracellular cyclic adenosine monophosphate (cAMP) accumulation mediated by activation of TGR5 and also the effects of FXR agonists and HCA species (FXR antagonist) on the expression of the downstream FXR target, small heterodimer partner (SHP), and on GLP-1 secretion. These experiments are further discussed and diagrammatically illustrated in Figure 1. The necessity for TGR5 activation for increased secretion and production of GLP-1 was then confirmed in vivo by comparing the effects of HCA species administered to TGR5−/− and TGR5+/+ mice. Finally, a cohort of 55 participants comprised of 30 healthy, 18 pre-diabetic, and 17 newly diagnosed diabetic individuals were given an oral glucose tolerance test. Results revealed that GLP-1 secretion was much higher in the healthy group and that HCA species were inversely correlated with fasting and post-glucose load levels. Although strong evidence was presented for the mechanism of action of HCA species in preventing or ameliorating diabetes, there are many unanswered questions that remain. The first fundamental question is how pigs developed the capability of producing HCA species in such large quantities. Previous studies have implicated gut microbiota such as Ruminococcus productus together with an unknown gram-positive rod called hyodeoxycholic acid-1 (HDCA-1) in the production of HCA species via bacterial biotransformation of β-muricholic acid (Eyssen et al., 1999). Other routes of HCA species biosynthesis include synthesis from non-12-hydroxylated BAs, LCA, taurolithocholic acid, and chenodeoxycholic acid (CDCA), via CYP3A4-mediated 6α-hydroxylation (Deo and Bandiera, 2008a; Jia et al., 2021) and conversion of LCA to 3α,6β-dihydroxy cholanoic acid, which then becomes further oxidized followed by reduction to become HDCA via gut microbiota (Deo and Bandiera, 2008b).
Due to the connection between production of HCA species and the gut microbiota, one could hypothesize that differences in the composition of gut microbiota between pigs and humans may be part of the reason for their large difference in BA composition. Why does the human body not try to compensate and produce more HCA species in response to a diabetogenic diet, given the strong relationship between diet and gut microbiota composition? If HCA species were administered at high doses for a prolonged period, would there be a change in the composition of the gut microbiota toward that of the pig? As most of the L cells are located in the ileum, would the ileal microbiota composition be the most affected by prolonged administration of HCA? Are HCA species capable of producing harmful side effects in humans/mice after prolonged exposure? Further studies are needed to assess any long-term side effects and length of efficacy for these unique BAs.
Figure 1 The effects of BA-activated TGR5 signaling and BA-inhibited FXR signaling in enteroendocrine L cells. L cells produce and secrete important hormones that affect energy metabolism and preserve pancreatic β-cell function. (A) In L cells, TGR5 is coupled to Gαs G-proteins. HCA species are found to be an agonist for TGR5 and act to promote the secretion of GLP-1, an incretin that has important effects on glucose homeostasis. Gαs protein coupling to BA-activated TGR5 results in the recruitment of adenyl cyclase, which subsequently generates cAMP to increase intracellular Ca2+ via the protein kinase A (PKA) or guanine nucleotide exchange factor (Epac) pathway and ultimately increases the secretion of GLP-1. An assay was performed, which detected increased production of cAMP upon treatment with HCA species, thus indicating that HCA species were the agonist for TGR5 (Zheng et al., 2021a). (B) HCA species are shown to be the L-cell FXR antagonist by their ability to reverse the inhibition of proglucagon transcription that leads to decreased GLP-1 production and secretion and also by being able to downregulate the expression of SHP, a downstream target of FXR. CDCA, an FXR agonist, gave opposite effects (Zheng et al., 2021a). ASBT, apical sodium-dependent bile acid transporter; ATP, adenosine triphosphate; ER, endoplasmic reticulum; FGF15/19, fibroblast growth factor 15/19; RYR, ryanodine receptor.
1,983.8
2021-04-30T00:00:00.000
[ "Biology", "Medicine" ]
ON THE DISCRETE LINEAR ILL-POSED PROBLEMS An inverse problem of photo-acoustic spectroscopy of semiconductors is investigated. The main problem is formulated as an integral equation of the first kind. Two different regularization methods are applied; the algorithms for defining the regularization parameters are given. THE STATEMENT OF THE PROBLEM An inverse problem of photo-acoustic spectroscopy of semiconductors, taking into account carrier diffusion and recombination, consists of the recovering of a real function f(x), x ∈ (−l, 0), l > 0, which is a part of the underlying boundary value problem for the carrier concentration; the measured response is known in the experiment for different values of the frequencies ω_1, ω_2, ..., ω_N. The problem is reduced to an integral equation of the first kind (1.5) with the kernel K(ω, x) of the exponential type. It is well known that such problems are ill-posed. Since the function g(ω) is measured only for a finite discrete set of frequencies ω_1, ω_2, ..., ω_N, the problem (1.5) is discrete ill-posed. Furthermore, any measured data contain random errors δ_j, j = 1, 2, ..., N, bounded by the error level δ: (N^{-1} Σ_{j=1}^{N} δ_j^2)^{1/2} ≤ δ for some positive δ. Therefore, for the numerical solution of the inverse problem it is necessary to calculate the function f(x) on the basis of discrete data g_j, j = 1, 2, ..., N, of the following form: g_j = ḡ_j + δ_j = (φ_j, f)_X + δ_j, j = 1, 2, ..., N, (1.6) where φ_j(x) = K(ω_j, x) are known linearly independent functions, f, φ_j ∈ X, and X is a Hilbert space with the inner product (·, ·)_X. A lot of problems in signal processing and geophysics can be formulated in the form (1.6); a good overview of discrete ill-posed problems is given in [3, 4]. Because of the finite number of data, the solution of the inverse problem is non-unique; therefore, we look for the normal pseudo-solution f⁺(x) of the problem (1.6). It can be shown that f⁺(x) has the form f⁺(x) = φ(x)ᵀ Q⁻¹ g, where φ(x) = (φ_1(x), φ_2(x), ..., φ_N(x))ᵀ, g = (g_1, g_2, ..., g_N)ᵀ, and Q is the N × N Gram matrix with elements q_{jk} = (φ_j, φ_k), j, k = 1, 2, ..., N. See, for example, [5]. Since the inverse problem (1.6) is ill-posed, the matrix Q⁻¹ is ill-conditioned and for the numerical solution it is necessary to use a special regularization method. Tikhonov's regularization method is very popular, and it is convenient to use it in the semi-continuous form [6]. According to this scheme, the approximated solution is the function f_α(x) which minimizes on the space X the functional Σ_{j=1}^{N} [(f, φ_j) − g_j]^2 + α ‖f‖_X^2, f ∈ X. Here α is the regularization parameter which should be chosen. Although during the last thirty years the theory of regularization has been developed quite well, the problem of finding the parameter α is still important. See, for example, some recent papers [7-12]. THE REGULARIZATION PARAMETER PROBLEM All methods for determining the regularization parameter can be divided into several types according to the additional information used. One group of methods uses a priori information concerning the error level δ. Usually one uses the discrepancy principle [13]. It is noted that the discrepancy principle yields an oversmoothed solution. It is shown in [14] that this method provides the smallest error propagation in the approximated solution but it gives the worst resolution.
In reality, the error ‖f_α − f⁺‖ can be reduced, allowing some greater error propagation, at the expense of improving the resolution. Such an approach leads to the majorant principle [13], if an estimate of ‖f_α − f⁺‖ is available [14]. Such an approach was used also, for example, in [15, 16]. It should be noted that such an estimate is not possible for the entire space X, and a further assumption concerning the upper bound of the norm ‖f⁺‖ is necessary. In this case, the optimal choice of α is possible. If the a priori value of ‖f⁺‖ is unknown, then one tries to obtain it from the data, for instance, using the norm ‖f_α‖. Such an approach is realized in [15]. We also use this idea in our paper. Unfortunately, a sharp estimate of δ is desirable, since the accuracy of ‖f_α − f⁺‖ is very sensitive to the change of δ. Therefore, we are interested in methods that do not use the error level δ. We mention the L-curve method. Consider the problem (2.1) with a compact operator T in the Hilbert space X. Really, if for any g̃ → g the convergence f_α = R(α)g̃ → f = T⁻¹g holds and R(α)g̃ = Rg̃, then simply R ≡ T⁻¹ and R is continuous, i.e. the inverse problem (2.1) is not ill-posed. So, for discrete ill-posed problems we should not expect that for n → ∞ and δ → 0 we will get the convergence f_α → f⁺. This means that any heuristic method of choosing the regularization parameter sometimes fails even for finite n. The non-convergence of the L-curve method is proved in [10]. Here (z_1(x), z_2(x), ..., z_N(x))ᵀ = Uφ(x), where U D Uᵀ is the orthogonal decomposition of the matrix Q and D = diag{d_1, d_2, ..., d_N} is the diagonal N × N matrix of the eigenvalues d_j, j = 1, 2, ..., N. THE METHOD OF THE REGULARIZATION FUNCTION If an estimate of ‖q‖ is known, then the optimal choice of the regularization parameter is that which provides the minimal right-hand side of the inequality (3.2) (the majorant principle). We note that such a regularization parameter depends on x; hence we have the regularization function α = α(x), x ∈ (−l, 0). Usually ‖q‖ is unknown; in this situation we estimate ‖q‖ from the data. The simplest way is to substitute the true vector q with the vector q_α = (Q + αE)⁻¹ g. However, a more precise approach can be used. From the definition of q_α, substituting g = ḡ + δ = Qq + δ, we deduce the equality q = q_α + α(Q + αE)⁻¹ q − (Q + αE)⁻¹ δ, (3.3) where δ = (δ_1, δ_2, ..., δ_N)ᵀ. Substituting this formula for q into the right-hand side m times, we have q = Σ_{j=1}^{m} α^{j−1} (Q + αE)^{−j} g − Σ_{j=1}^{m} α^{j−1} (Q + αE)^{−j} δ + α^m (Q + αE)^{−m} q. (3.4) If m is quite large and α < 1, then it is sufficient to use the first term. Substituting it into (3.2), we obtain the method for the choice of the regularization parameter. This method is not heuristic, since it uses the error level δ. The best fitting is to find α providing the minimum to the criterion function ‖df_α/dα‖ (the quasi-optimal value). In a similar way the criterion function is formulated in [13]. THE ANALYSIS OF SOME CLASSICAL METHODS If we use m = 1 and the formula (3.3), we obtain the inequality ‖q‖ ≤ ‖q_α‖ + ‖(Q + αE)⁻¹‖ (δ + α‖q‖). In order to find the best estimate of ‖q‖ we need to minimize the right-hand side of the inequality. We may expect that the minimizer of the criterion function ‖q_α‖ + ‖(Q + αE)⁻¹‖ (δ + α‖q_α‖) will be close to the optimal value. The constant term does not change the position of the minimum and may be omitted. Neglecting the term with ‖q_α‖, we obtain the criterion function of the cross-validation method [6]. It can be seen that this method is not quite precise because it uses m equal only to 1.
The cross-validation method was suggested for the case when the errors δ_j, j = 1, 2, ..., N, are white noise. This assumption is crucial for the application of the cross-validation method. Using our scheme and m > 1, we may use such a criterion function without requiring a priori knowledge of the distribution of the measurement errors, as well as in situations where the cross-validation method fails.
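As a rough illustration of the semi-continuous Tikhonov scheme and a discrepancy-type choice of the regularization parameter discussed above, the following Python sketch builds the Gram matrix Q from a set of exponential-type kernel functions, forms f_α(x) = φ(x)ᵀ(Q + αE)⁻¹g, and selects the largest α whose residual stays at the noise level. The kernel, domain, true profile, and noise level are illustrative assumptions, not the experimental configuration of the paper.

```python
import numpy as np

# Sketch of the semi-continuous Tikhonov scheme: data g_j = (phi_j, f)_X + noise,
# with phi_j(x) = K(omega_j, x) approximated here by an exponential-type kernel.
rng = np.random.default_rng(0)
l, N, M = 1.0, 20, 400                       # domain (-l, 0), number of data, quadrature nodes
x = np.linspace(-l, 0.0, M)
w = np.full(M, l / M)                        # simple quadrature weights for the inner product
omega = np.linspace(1.0, 10.0, N)
phi = np.exp(np.outer(omega, x))             # illustrative kernel phi_j(x) = exp(omega_j * x)

f_true = np.exp(-((x + 0.3) / 0.1) ** 2)     # synthetic "unknown" profile used to build data
g_exact = phi @ (w * f_true)                 # g_j = (phi_j, f)_X
delta = 1e-3
g = g_exact + delta * rng.standard_normal(N)

Q = phi @ (w[:, None] * phi.T)               # Gram matrix q_jk = (phi_j, phi_k)_X

def f_alpha(alpha):
    c = np.linalg.solve(Q + alpha * np.eye(N), g)   # coefficients (Q + alpha E)^{-1} g
    return phi.T @ c                                # f_alpha(x) = phi(x)^T (Q + alpha E)^{-1} g

# Discrepancy-style choice: largest alpha whose rms residual stays near the noise level.
alphas = np.logspace(-10, 0, 60)
for alpha in alphas[::-1]:
    resid = np.linalg.norm(phi @ (w * f_alpha(alpha)) - g) / np.sqrt(N)
    if resid <= delta:
        break

print(f"chosen alpha = {alpha:.2e}, rms residual = {resid:.2e}")
```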
2,153.2
1999-12-15T00:00:00.000
[ "Mathematics" ]
Amyloid Accumulation in the Toxic Nodule of the Thyroid Gland in a Patient with End Stage Renal Failure Amyloidosis is characterized by accumulation of amorphous, proteinaceous material in various organs and tissues of the body. Amyloid may accumulate in the thyroid gland in cases of medullary thyroid carcinoma and systemic amyloidosis. Amyloid accumulates extracellularly in the thyroid parenchyma and disrupts the normal follicular patterns. Most of the cases reported up to now were clinically euthyroid, but many presentation forms and overlaps have been reported. Herein we present a patient with toxic nodular goiter with amyloid deposition in the toxic nodule as well as the remaining thyroid tissue. Introduction Amyloidosis is characterized by accumulation of amorphous, proteinaceous material in various organs and tissues of the body [1]. The mechanism is not clearly defined. Amyloidosis can be primary or secondary according to the etiology. Secondary amyloidosis occurs in chronic inflammatory states such as rheumatoid arthritis, Chrohn's disease, osteomyelitis and tuberculosis. Consequently, almost any disease associated with chronic inflammation of whatever etiology is liable to amyloidosis complications [2]. Serum amyloid protein (SAA) is responsible for secondary amyloidosis [3,4]. Amyloid deposit in the thyroid gland was first reported by von Rokitansky in 1855 [5]. Amyloid may accumulate in the thyroid gland in cases of medullary thyroid carcinoma and systemic amyloidosis. Diffuse, clinically apparent enlargement of the thyroid gland due to widespread amyloid deposit is a rare occurrence. In 1904 von Eisenberg introduced the term "amyloid goiter" into the literature. It is defined by the presence of amyloid within the thyroid gland in such quantities as to produce clinically apparent enlargement of the gland [6]. Most of the cases reported up to now were clinically euthyroid. Patients with hyperthyroxinemia and hypothyroidism have been reported. Some patients resembling subacute thyroiditis have also been reported. Systemic amyloidosis may occur in patients with kidney failure. In patients who have been on a hemodialysis or peritoneal dialysis program for more than five years, β2 microglobulin builds up forming deposits. This is named as dialysis-related amyloidosis and often occurs around joints. This substance is normally cleared by the kidneys but cannot be removed by dialysis membranes [2]. Herein we present a patient with toxic nodular goiter with amyloid deposition in the toxic nodule as well as the remaining thyroid tissue. Case Report A 52-year-old female patient with known chronic renal failure and on a routine hemodialysis program for seven years was hospitalized for prerenal transplant evaluation. According to her medical history, she was completely healthy seven years ago and her kidneys deteriorated after usage of an antibiotic drug. Routine thyroid function tests were compatible with subclinical hyperthyroidism (T 4 17 pmol/L (9-20), T 3 6.2 pmol/L (3.5-8), TSH 0.20 μIU/mL (0.35-5.1)) so a thyroid ultrasonography was performed revealing a 55 × 36 × 25 mm hypoechoic nodule almost completely filling the left lobe. Anti-thyroglobulin and antithyroid peroxidase autoantibodies were negative. She had no 2 Case Reports in Endocrinology compressive symptoms. Increased activity in the nodule and suppression in the remaining thyroid tissue were reported in her thyroid scan by technetium (Figure 1). 
Fine needle aspiration cytology (FNAC) revealed normal thyrocytes and was reported as benign. She was diagnosed as toxic uninodular goiter, and total thyroidectomy was performed. The macroscopic specimen revealed amyloidosis in the right lobe, amyloidosis in the nodule, interstitial area, and perivascular areas in the left lobe of the thyroid gland (Figures 2, 3, 4, and 5). She is now on 100 mcg levothyroxine replacement therapy and is being prepared for renal transplantation. Discussion It is generally accepted that some degree of deposition of amyloid in the thyroid gland can be detected in more than 80% of patients suffering from secondary amyloidosis and approximately 50% of those affected by primary amyloidosis [7]. Usually it is diagnosed in autopsies and macroscopic specimens, but recent studies have focused on FNAC in diagnosis. Amyloid goiter is a rarer entity. Amyloid accumulates extracellularly in the thyroid parenchyme and disrupts the normal follicular patterns [8,9]. Most patients are clinically euthyroid, but many different presentation forms have been reported. The thyroid gland can be soft, hard, diffuse or nodular in character according to the amount of amyloid deposited [3]. Kimura dysfunction. 5 patients had hypothyroidism, 2 patients had low T3 syndrome, 1 patient had subacute thyroiditis-like syndrome, and 1 had coexisting Graves disease. Five out of the nine patients had positive thyroid autoantibodies [1]. The etiology of amyloidosis in our patient was uncertain. Several presentations and case reports have been reported about dialysis-related amyloidosis. One case was even reported as carpal tunnel syndrome being the presenting feature [10]. Thyroid involvement is not common in dialysis-related amyloidosis. Musculoskeletal manifestations are more prominent in this type of amyloidosis which were absent in our patient. She had been on a peritoneal dialysis program for five years prior to switching to hemodialysis. She reported being hospitalized for peritonitis two times. Repeated peritonitis attacks may have contributed to her developing secondary sistemic amyloidosis. Maybe systemic amyloidosis was the etiology for chronic renal failure in the first place, but kidney biopsy had never been performed. Our patient had also toxic nodular goiter. The interesting fact in our case was that the nodule was not an amyloid nodule but a toxic nodule with intranodular amyloid deposition. Amyloid was also deposited in the thyroid parenchyme. The nodule was confirmed active with scintigraphy showing that the amyloid deposit did not interfere with technetium. Since amyloid leads to dysfunctioning of the deposited organ, the expected outcome was thyroidal enlargement and primary hypothyroidism. A previous letter by Tokyol et al. revealed similar findings. They argued that hyperthyroidism could be a secondary response of the thyroid gland to interstitial infiltration by amyloid material [11]. It should be noted that amyloid can deposit in all organs and tissues of the body including the thyroid gland. Amyloid accumulation in the thyroid gland does not usually cause thyroid dysfunction and most patients are euthyroid. The importance of this case is that it proves that amyloid can accumulate in thyroid nodules and even in toxic nodules. Whether amyloid deposition contributes to developing thyroidal diseases other than goiter and hypothyroidism needs further research and experience.
1,433.6
2012-10-23T00:00:00.000
[ "Medicine", "Biology" ]
Gel-Like Human Mimicking Phantoms: Realization Procedure, Dielectric Characterization and Experimental Validations on Microwave Wearable Body Sensors A simple and low-cost procedure for gel-like time-durable biological phantoms is presented in this work. Easily accessible materials are adopted, which are able to provide a flexible and controllable method to rapidly realize different kind of tissues. The proposed technique is applied to fabricate various tissue-mimicking phantoms, namely skin, muscle, blood and fat. Their effectiveness is first tested by performing dielectric characterization on a wide frequency range, from 500 MHz up to 5 GHz, and validating the measured dielectric parameters (dielectric constant and conductivity) by comparison with reference models in the literature. Then, a multi-layer phantom simulating the human arm is realized, and a wearable body sensor is adopted to prove the perfect agreement of the biometric response achieved in the presence of the fabricated phantom and that provided by a real human arm. Introduction We live in an increasingly connected world, where communication devices far outnumber the human inhabitants [1]. Most people in the evolved world own multiple portable or wearable devices at all times, in the form of smartphones, laptops, smartwatches, health and fitness trackers. In order to ensure safe and reliable operation in the biomedical context [2], the interaction of the above devices with the human body should be carefully analyzed and understood. Ideally, new devices should be designed and tested on actual users, to ensure their performances are coherent with the expected design. However, apart from being unsafe and unethical, using an actual human as a test subject also presents a host of logistical problems. The bulk of design and development is actually performed by adopting computer-aided simulations, in order to save time and reduce costs [3]. Prototyping at each design stage for validation with a real human body would dramatically increase costs and slow down the product development cycle. Furthermore, even at the later stages of development, it is not always convenient or possible to use a human subject. Human beings find it hard to stay completely stationary for extended durations, reducing repeatability of sensitive tests. For tests conducted in specialized chambers, accommodating an actual user would require a larger enclosure that would be costlier [4]. Thus, in all but the rarest of cases, designers rely on human body phantoms and surrogates which mimic the human body. As may be expected, the human body is a very complex system, and it is not feasible to build an exact replica that is able to duplicate all of its properties. Therefore, most phantoms are limited to the scope of a specific biomedical application. Various phantoms and dummies have been developed and discussed in the literature that replicate breast tissue [5], dry phantoms for SAR testing [6], optical properties of skin [7], etc. In the case of handheld and wearable telemedicine devices, designers are often interested in the interaction between onboard antennas and the human body [8]. Since the major part of the body is a high permittivity lossy medium [9], it can have significant effects on the radiation pattern and efficiency of antennas. The position of a wearable device on the body impacts the types of underlying body tissue that it will be exposed to. 
For example, a device placed on the arm is exposed to a different set of tissues as compared to a device placed on the chest or head. Fortunately, the high conductivity of the external tissue layers [10] reduces the penetration depth, thus limiting the interaction of internal organs with antennas. Therefore, for the purposes of antenna design and validation, it may be sufficient to model just the external layers, including skin, blood, fat and muscle [11,12]. The electrical properties of these layers have been studied at length, and empirical data as well as numerical models are widely available [13,14]. The known properties of these layers are commonly used to create simplistic multi-layer phantoms in 3D electromagnetic simulations. A planar model offers significant savings in computation time and resources, as compared to more complex models. However, the validity of such non-homogeneous planar models has not been rigorously discussed in the literature. The current work describes the first results (to the best of the authors' knowledge) of the realization and validation of a multi-layer phantom, that reproduces dielectric characteristics widely quoted in the literature. Cheap and easily available ingredients are used for creating each layer of the gelatinous phantom. Different electrical characteristics are achieved by simply changing the ratio of ingredients. A rigorous experimental validation of the complex dielectric permittivity for the realized samples is carried out. The work also compares the ability of the realized multi-layer phantom to accurately mimic the behavior of the human body. To this end, a wearable antenna usable for biometric security purposes is adopted as a test device, by comparing its sensing performances on a voluntary person as well as on the human-mimicking phantom, along with results obtained using 3D EM simulations. Materials and Methods In this section, the procedure adopted to realize a multi-layer phantom, usable for the design and the pre-clinical test of telemedicine sensors, is described. Low-cost and easily available ingredients are adopted, so to have a simple and cheap realization process. Specifically, food-grade gelatine, in particular gelatine leaves, are used as the solidifying agent. The composition originally proposed by [15] is refined to achieve a better agreement with theoretical models. It is observed that higher oil content corresponds to lower permittivity phantoms, while a higher water content results in a higher overall permittivity. The conductivity of the phantom is directly influenced by the salt concentration. Dishwashing soap is used as an emulsifying agent that serves to combine the water and oil into an emulsion. In Table 1, the proportions of the different ingredients adopted in this paper for the phantom's realization are reported. The updated composition leads to a better reproduction of the dielectric characteristics of the human body. Phantom Realization Procedure The gelatine leaves are soaked in water for about 10 min, and they are subsequently squeezed to remove excess water. The rehydrated gelatine is then heated up to about 80 • C, until it is completely liquified. The liquid gelatine is then cooled down to around 45 • C, before the remaining ingredients are added. It is important to continuously stir the mixture during this process in order to avoid the formation of clumps. 
The blend is then heated again to obtain a more homogenous mixture, which is then poured into an 11 cm by 8.4 cm container, and stored in a fridge overnight. Four different phantoms, namely fat, skin, blood and muscle, are realized by changing the percentage of the involved ingredients. Details of the involved quantities for the different human tissues are provided in Table 1. Measurement Setup for Experimental Validations A specific measurement setup is considered to perform the accurate experimental validation of the biological phantom, which is realized through two sequential steps, namely: 1. The dielectric characterization of each phantom layer; 2. The performance characterization of the multi-layer phantom on a prototype of wearable antenna working in contact with the human body. The experimental setup is equipped in the Microwave Laboratory at the University of Calabria; it includes an open-ended coaxial probe (type DAK-3.5 built by SPEAG [16]), which is connected to Vector Network Analyzer (VNA) Anritsu, model MS 4647A. The above setup is able to measure the complex dielectric permittivity (dielectric constant and tangent loss) in the range from 1 to 200, within a frequency interval going from 200 MHz up to 20 GHz. It introduces a measurement uncertainty equal to ±2%, as reported in the Calibration Certificate provided by the Calibration Laboratory of SPEAG. The fields radiated by the open-ended probe interact with the material in contact with its interface. The VNA is able to measure the magnitude and the phase of the reflected signal, which is directly related to the dielectric properties of the contact material. To achieve accurate results, the Medium Under Test (MUT), namely the biological phantom in this specific case, must appear as a semi-infinite layer, i.e., it must have a greater extension than the diameter of the sensor opening. Furthermore, it is important that no air gap exists between the sensor and the sample. Due to the gel-like nature of the sample, the above conditions are easily guaranteed, as a good contact can be obtained between the probe and the MUT, by applying a little pressure on to the probe. As a first step, the above instruments are calibrated using a distilled water sample at room temperature, specifically equal to 23 • C at the time of measurement. After the calibration phase, the complex permittivity of the reference materials is measured to gauge the instrument accuracy. Table 2 presents the percentage variance observed in the measured complex permittivity of the reference materials (deionized water and air) after the calibration. Repeated measurements were conducted for different frequencies, it can be seen that all the measurements are extremely close to the expected value. Experimental Results In this section, the experimental results assessing the reliability of the proposed tissuemimicking phantoms are presented. Specifically, in the first subsection, the measurement results relative to the complex permittivity (both dielectric constant as well as conductivity) of the realized phantoms are reported and validated through reference models; while, in the second subsection, the experimental validation of the multi-layer phantom simulating the human arm is performed by verifying the proper response of a wearable sensor designed to operate for communication in direct contact with a human person. 
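The conductivity values reported in the next subsection follow from the imaginary part of the measured complex relative permittivity through the standard relation σ = 2πf·ε0·ε″. A minimal sketch of that conversion, including the ±2% probe uncertainty quoted above, is given here; the permittivity values in the example are placeholders rather than measured data from this work.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def conductivity(freq_hz, eps_imag):
    """Effective conductivity (S/m) from the imaginary part of the relative permittivity."""
    return 2.0 * np.pi * freq_hz * EPS0 * eps_imag

# Placeholder reading at 2.4 GHz (NOT a measured value from this study):
f = 2.4e9
eps_real, eps_imag = 38.0, 11.0          # hypothetical complex relative permittivity
sigma = conductivity(f, eps_imag)
print(f"sigma = {sigma:.2f} S/m")

# Propagating the +/-2% uncertainty quoted for the open-ended coaxial probe setup:
sigma_lo, sigma_hi = conductivity(f, eps_imag * 0.98), conductivity(f, eps_imag * 1.02)
print(f"uncertainty band: {sigma_lo:.2f} - {sigma_hi:.2f} S/m")
```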
Dielectric Characterization of Realized Phantoms Following the procedure described in Section 2.1, dielectric measurements are performed by adopting the test setup illustrated in Figure 1. Each measurement was repeated ten times, to ensure data reliability and measurement repeatability, and the mean value for each set of measurements was assumed. The resulting measured data for dielectric permittivity (real and imaginary part) as well as conductivity of each biological phantom are reported in Figures 2-4, respectively. The dielectric characterization was performed over a wide frequency range, from 500 MHz up to 5 GHz, so as to construct a big database including the dielectric parameters of the bio-phantoms, to be successfully adopted for a large variety of applications, including biosensors, bio-imaging and tissue engineering. The aforementioned frequency range covers most of the frequency bands used for common biomedical and diagnostic applications as well as off-body and on-body communication. The measured values for the dielectric parameters are successfully validated by comparison with data taken from reference models in the literature [14,17]. As a demonstration, the above comparison is reported in Table 3 for the 2.4 GHz frequency, operating within the Industrial, Scientific, Medical (ISM) band. It is assumed as the working frequency for the wearable body sensor described in the following subsection.
Wearable Body Sensor for Tissue Engineering Application The final validation presented in this subsection is aimed at demonstrating the close mimicking features of the realized biological phantoms, as compared to real body tissues. Specifically, a multi-layer phantom is first realized by following the experimental procedure described in Section 2; then, a prototype of miniaturized wearable sensor realized by the research group at University of Calabria [18], and operating within the ISM frequency band (2.4-2.5 GHz), is adopted as a test body sensor to validate its functionality when working in the presence of a human body (simulated by the multi-layer phantom). In order to realize the multi-layer phantom, thin slices are cut from the realized samples of the various body layers (skin, blood, muscle, fat), which are subsequently stacked and then placed on a polycarbonate sheet (Figure 5), with the aim to provide a mechanical support as well as an easy handling during testing. Once its fabrication was complete, the effectiveness of the multi-layer phantom was tested by measuring the reflection response (return loss) of the wearable body sensor placed on it (Figure 6a). The result is successfully compared (Figure 7) with that obtained from measurements directly performed on a voluntary person (Figure 6b).
It can be observed that the antenna exhibits a resonant frequency equal to 2.43 GHz, when placed on the multi-layer phantom, and 2.465 GHz, if operating on the human body. The usable impedance bandwidth is equal to 102 MHz (4.2%) with the phantom, and 105 MHz (4.26%) in the presence of a real human body. On the other hand, the measurement in air exhibits a resonance frequency at 2.53 GHz and an impedance bandwidth of 64 MHz (2.5%). The measured curves reported in Figure 7 reveal the excellent performance of the biological multi-layer phantom, whose response is almost identical to that provided by a real human arm, thus opening to a variety of useful biomedicine applications, such as biosensors, where pre-clinical tests can be successfully implemented by adopting a proper tissue-mimicking phantom, thus avoiding direct measurements on human persons. Discussion A simple and low-cost procedure, adopting easily accessible materials, has been presented for the realization of gel-like human phantoms that excellently mimic the behavior of real human tissues. Liquid phantoms are the most commonly adopted in the scientific community, as they are the easiest and most flexible to fabricate, with volume and consistency being easily controllable. Solid phantoms may be used for lifetime extension, but they require more complicated and less flexible procedures. Gelatin phantoms can provide semi-solid solutions, requiring less realization time, while preserving a relatively long temporal stability. Moreover, they can be successfully adopted for imaging and biocompatibility applications.
Following the above motivations, we have successfully implemented an easy technique to realize gel-like tissue-mimicking phantoms. First, the fabrication procedure has been accurately described, and particularized for the following tissues: skin, muscle, blood and fat. Successively, the dielectric characterization of the fabricated biological phantoms has been performed, and parameters (dielectric permittivity and conductivity) from the experimental stage have been successfully validated with existing models in the literature, thus confirming the accurate behavior of the realized tissue phantoms. As a subsequent step, a multi-layer phantom mimicking the human arm has been realized by properly sizing and stacking the single prototyped biological layer. Finally, a wearable body sensor useful for biometric security applications has been adopted to demonstrate the perfect agreement between the sensor response provided in the presence of the fabricated multi-layer phantom and that obtained with a real human arm. The achieved results present the reliable application of the proposed gel-like human phantoms as tissue-mimicking materials for in vitro studies and prediction of in vivo bioeffects at microwave and millimeter-wave frequencies.
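As a quick consistency check on the sensor figures quoted above, the fractional bandwidths and the phantom-to-arm resonance shift can be recomputed directly from the reported resonant frequencies and bandwidths; the short sketch below only re-derives those percentages from the numbers already given in the text.

```python
# Re-derive the fractional bandwidths and the phantom-vs-arm detuning quoted above.
def fractional_bw(bw_hz, f0_hz):
    return 100.0 * bw_hz / f0_hz

cases = {
    "multi-layer phantom": (2.43e9, 102e6),
    "human arm":           (2.465e9, 105e6),
    "air":                 (2.53e9, 64e6),
}
for name, (f0, bw) in cases.items():
    print(f"{name:20s} f0 = {f0/1e9:.3f} GHz, BW = {fractional_bw(bw, f0):.2f} %")

shift = (2.465e9 - 2.43e9) / 2.465e9
print(f"phantom-to-arm resonance shift: {100*shift:.1f} %")
```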
4,641
2021-04-01T00:00:00.000
[ "Physics" ]
The Role of Optimal Electron Transfer Layers for Highly Efficient Perovskite Solar Cells—A Systematic Review Perovskite solar cells (PSCs), which are constructed using organic–inorganic combination resources, represent an upcoming technology that offers a competitor to silicon-based solar cells. Electron transport materials (ETMs), which are essential to PSCs, are attracting a lot of interest. In this section, we begin by discussing the development of the PSC framework, which would form the foundation for the requirements of the ETM. Because of their exceptional electronic characteristics and low manufacturing costs, perovskite solar cells (PSCs) have emerged as a promising proposal for future generations of thin-film solar energy. However, PSCs with a compact layer (CL) exhibit subpar long-term reliability and efficacy. The quality of the substrate beneath a layer of perovskite has a major impact on how quickly it grows. Therefore, there has been interest in substrate modification using electron transfer layers to create very stable and efficient PSCs. This paper examines the systemic alteration of electron transport layers (ETLs) based on electron transfer layers that are employed in PSCs. Also covered are the functions of ETLs in the creation of reliable and efficient PSCs. Achieving larger-sized particles, greater crystallization, and a more homogenous morphology within perovskite films, all of which are correlated with a more stable PSC performance, will be guided by this review when they are developed further. To increase PSCs’ sustainability and enable them to produce clean energy at levels previously unheard of, the difficulties and potential paths for future research with compact ETLs are also discussed. Introduction In addition to its numerous appealing photoelectronic properties and potentially low manufacturing costs, the photovoltaic industry is at present particularly interested in exploring organic-inorganic combination perovskites that feature a framework of ABX 3 [1][2][3].In the past few years, perovskite-based solar cells (PSCs) have exhibited an unparalleled surge in effectiveness, rising from 3.8% in 2009 to 22.7% in 2018 [4] and recently reaching 26.1% in 2023 [5].This is the very first occasion that a novel solar cell manufacturing process has demonstrated the potential to rival existing commercially available solar cells in such a short period.Furthermore, the primary obstacles to their widespread commercialization are being progressively removed.These obstacles may be due to the instability of perovskite solar cells with respect to moisture, light etc.In order to improve their moisture resistance, the encapsulation of perovskite materials by using fluoropolymers has been reported, enabling the materials to retain 95% of their efficiency by controlling the degradation of the perovskite in the presence of moisture [6,7].This finding suggests that long-term stability can be achieved by integrating the artificial impact of a contact sterilization strategy with the development of new, reliably stable crystals [8,9].Achieving completely solution-based approaches, low production costs, and techniques for a streamline production process remains challenging [10,11].Regarding the toxicological concern of lead (Pb), unleaded compounds such as MASnI 3−x Br x and MASnI 3 have been demonstrated, which exhibited significantly poorer photovoltaic presentations in comparison to MAPbI 3 , suggesting the importance of Pb [12].However, the tiny quantity of lead halide perovskite in 
these systems causes small Pb losses to have an impact on human living circumstances [13].Consequently, there is an exciting prospect for portable and mobile energy sources because of the thorough research available on them and quick advancements in their efficiency; this should be referred to as the perovskite age rather than just a perovskite fever [14].Electron transport materials (ETMs), which transfer electrons generated by photosynthesis from photoactive layers to the cathode, have a major impact on the efficiency of photovoltaic systems. Based on various materials, methods, and features, there are many metal oxides that have been used as ETMs in the reported literature on perovskite solar cells.Regarding the materials, for example, titanium dioxide (TiO 2 ), zinc oxide (ZnO), and tin oxide (SnO 2 ) are reported as ETMs in most of the planar architecture.However, each of these metal oxides have their own advantages and disadvantages [4,15,16].Regarding methods, spin coating or free spin coating and printing or dipping methods are employed in order to improve the coverage of the electrode as well as enhance the electron mobility [11].Regarding features, ETLs must possess a low trap density, high light transmittance, and energy level matching, as shown in Figure 2 [11]. This review deals with the compilation of recent developments in perovskite solar cells with respect to ETMs.As discussed earlier, TiO 2 , ZnO, and SnO 2 have commonly been used in recent research; however, the implementation of a variety of passivation strategies enhances their efficiency, stability, and processability differently.These passivation strategies, including additive/dopant engineering, thermal and solvent engineering, and interface engineering, are compiled in this review for each of these familiar ETMs.The benefits of SnO 2 over TiO 2 in terms of thermal processing, preparation techniques, and the nature of the materials (such as crystalline, amorphous, or nanoparticle), which are directly connected to the efficiency of the fabricated device, are discussed [17][18][19][20]. Other than the power conversion efficiency, the instability of PSCs when in contact with external stimuli such as humidity, light, or an electric field causes a severe breakdown of the perovskite crystals and plays a crucial role in their large-scale production [21][22][23][24][25].For example, the UV-induced degradation of devices greatly affects the perovskite layer, causing carrier losses, which affect the efficiency of the device [26].In order to overcome these issues, many attempts have been made by researchers, such as encapsulation, changing the HTL, using dopant-free HTLs, and ion migration [27][28][29][30][31][32][33][34][35].BaSnO 3 film doped with lanthanum (La) has also been used as an ETL in order to reduce the destruction caused by UV light, which resulted in 90% of the effectiveness being retained [36,37]. 
Therefore, in order to improve PSCs, it is important to understand their structure and to study the materials and features of ETMs. Regarding the structure, perovskite solar cells fall into different classes, such as mesoporous or planar structures. Mesoporous PSCs consist of ITO/hole-blocking layer/mesoporous layer/perovskite absorber/hole transport layer/metal. Mostly, mesoporous TiO2 or Al2O3 is used as the mesoporous layer. Initially, an efficiency of 9.7% was achieved by using mesoporous TiO2 with a CH3NH3PbI3 absorber, which was further improved to 10.9% by using a mixed-halide perovskite absorber (CH3NH3PbI3−xClx). By implementing different approaches such as the two-step coating method for making CH3NH3PbI3 [38] or solvent engineering in the preparation of CH3NH3Pb(I1−xBrx)3 (x = 0.1-0.15), researchers have enhanced the efficiency to 15% and 16.2%, respectively [39]. This power conversion efficiency has now reached 22.2% through the use of printable mesoscopic perovskite solar cells (p-MPSCs) with mesoporous layers of semiconducting titanium dioxide [40]. In the case of planar structures, depending on the location of the ETLs and HTLs, regular (negative-intrinsic-positive, NIP) and inverted (positive-intrinsic-negative, PIN) structures have been classified, as shown in Figure 1.
Initially, titanium dioxide (TiO2) was used as the ETL in NIP structures, whereas poly(3,4-ethylenedioxythiophene) doped with poly(styrene sulfonate) (PEDOT:PSS) was used as an HTL in PIN structures [41]. Although both architectures can currently achieve high power conversion efficiencies (PCEs) above 20-22%, NIP-type PSCs have produced significantly higher efficiencies than PIN-type architectures [42,43]. This might be the consequence of the lower open-circuit voltage (Voc) for PIN-type PSCs as a result of the perovskite's inappropriate doping state close to its N-type interface, which raises the non-radiative recombination rate [44].

In the case of NIP-type PSCs, an 11.4% PCE was initially achieved for the cell structure comprising FTO/compact TiO2/perovskite/Spiro-OMeTAD/Au. By implementing different approaches and different deposition methods for the perovskite layer, such as the dual-source vapor deposition method [45], the sequential deposition method [46], and the doping of TiO2 using gold or yttrium [47], an efficiency of 19.3% was reached by 2014. However, researchers have now achieved efficiencies >20% by using different passivation strategies. For example, passivation of the interface between SnO2 and the perovskite by using hydroxyethylpiperazine ethane sulfonic acid achieved a PCE of 20.22% [48], and the doping of chlorine into SnO2 brought the PCE to 25.8% in 2021 [49].
In the case of PIN-type PSCs, PEDOT:PSS [poly(3,4-ethylenedioxythiophene):polystyrene sulfonate] and PC61BM or PC71BM ([6,6]-phenyl-C61/71-butyric acid methyl ester) are used as the HTL and ETL, respectively. Their ability to be prepared at low temperatures, the non-requirement of HTL dopants, and their compatibility with organic electronic manufacturing techniques give p-i-n solar cells an edge over n-i-p ones. At the initial stage, by 2013, the PIN-type device in the sequence ITO/PEDOT:PSS/CH3NH3PbI3/PC61BM/Al resulted in an efficiency of 3.9%. Implementing various approaches such as the one-step deposition method, the sequential deposition method, annealing, and solution processing methods led to improved efficiencies of 5.2%, 7.4%, 9.8%, and 11.5%, respectively [50][51][52]. Further, the casting method and the doping of HI into perovskite solutions enabled researchers to reach PCEs of 17.7% [53] and 18.1%, respectively, by making pinhole-free perovskite films [54]. Finally, the efficiency reached 18.9%, the highest reported during 2015 [55]. However, a recent report using a polymer based on carbazole phosphonic acid (Poly-4PACz) as the HTL in PIN-type PSCs enhanced the efficiency to 24.4% [49].

The selection of a suitable HTL is also important for the efficiency of PSCs. In order to reduce the recombination rate, low spatial contact is needed between the HTL and the perovskite. Moreover, the highest occupied molecular orbital (HOMO) energy level in the inorganic p-type semiconductor should be positioned properly with respect to the valence band of the perovskite layer to enable proper charge transport and hole collection for obtaining a better current density [56]. Since this manuscript deals with the efficiency of PSCs with respect to the electron transport layer, details about the selection of different HTLs and the issues, challenges, and passivation strategies of HTLs are not covered in this review; this information can be found in the literature [56,57].

Therefore, it is concluded that both NIP- and PIN-type architectures exhibit high efficiencies when combined with different methodologies. However, NIP types provide significantly higher efficiencies than PIN types, since NIP types provide higher Voc and fill factor (FF) values. The observed discrepancy is likely located at the p-type interface, where the PIN-type architecture has more difficulty extracting holes.
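To make the two planar architectures easier to compare at a glance, the layer sequences of the example NIP and PIN stacks quoted above can be written out explicitly. The following sketch is purely illustrative: the layer names are taken from the device structures cited in this section, and the small helper function is hypothetical rather than part of any published toolchain.

```python
# Illustrative sketch: the example NIP and PIN stacks quoted in this section,
# listed from the light-incident side to the back electrode.
# The "describe" helper is a hypothetical convenience, not a standard API.

NIP_STACK = ["FTO", "compact TiO2 (ETL)", "perovskite absorber",
             "Spiro-OMeTAD (HTL)", "Au"]

PIN_STACK = ["ITO", "PEDOT:PSS (HTL)", "CH3NH3PbI3 absorber",
             "PC61BM (ETL)", "Al"]

def describe(name: str, stack: list[str]) -> str:
    """Return a one-line summary of a device stack."""
    return f"{name}: " + " / ".join(stack)

if __name__ == "__main__":
    print(describe("NIP (regular)", NIP_STACK))
    print(describe("PIN (inverted)", PIN_STACK))
```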
Systematic Literature Review One of the most basic needs of the modern world is energy.Fossil energy resources are currently the main source of the world's ever-rising energy demand.Fossil fuel combustion generates greenhouse gas emissions that endanger the Earth's ecosystems by triggering global warming.To replace fossil fuels, it is therefore highly desirable to investigate alternative, carbon-free, renewable energy sources.Solar energy is a desirable electricity alternative because it is the most practical renewable energy source that might be able to meet the world's energy requirements shortly.A solar cell is a device that directly converts solar radiation into electrical power.Solar cells are robust, dependable, and long-lasting because they do not have moving components and can operate silently and without creating any pollution [58][59][60][61].Sustainable electrical solutions that collect environmental resources of energy (thermal, mechanical, and radiant energy) are sought after to continually power or recharge Internet-of-Things devices.Solar cells are very stable and may be produced at a low cost, among other advantageous features.Because of these qualities, solar cells are expected to be used as a long-term source of power for space probes and satellites [62]. Owing to their light absorption as well as their charge-transporting properties, siliconbased devices were focused on in earlier research [63,64].However, their toxicity and production costs limited their bulk-scale production and thus urged for the development of a new absorber.Methylammonium lead halides (CH 3 NH 3 PbX 3 ), so called perovskite materials, then emerged as a new light absorber as they can overcome the above said limitations of silicon as well as providing flexibility [65,66].In addition to the merits of perovskite, such as the tunable band gap, high carrier mobility, high optical absorption coefficient, and longer diffusion length of carriers, it also has challenges of instability.In order to enhance the stability of PSCs, additive engineering, for example, with ionic liquid additives; compositional engineering, for example, the addition of cesium iodide (CsI); interface modification using different lead salts such as lead sulfate/lead phosphate; and different methods of dopant engineering have been carried out [67]. Regarding the ETM, especially for planar architectures, well-known metal oxides (TiO 2 , SnO 2 , and ZnO) are used; however, for inverted architectures, [6,6]-phenyl-C61-butyric acid methyl ester (PC61BM) and fullerene (C60) are commonly used as ETMs [68][69][70].Owing to their poor filming ability and low stability, PCBMs were replaced by polymers and achieved an efficiency of 20.86% [71].However, finding a novel ETM with appropriate energy levels, improved stability, especially towards light and humidity, and high electron mobility is still in demand.Therefore, this review mainly focuses on the development of ETMs mostly in planar architecture and the existing challenges and solutions to overcome the limitations of bulk-scale productions.The related research articles were collected and analyzed, and we compared the efficiencies of the reported PSCs with respect to the techniques or passivation strategies used. 
Resources for the Systematic Literature Review This systematic literature review (SLR) precisely followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) criteria to methodically study the integration of transfer layers for highly efficient perovskite solar cells.Guaranteeing an organized and transparent review process was the goal.All articles published between 2018 and 2023 were included in the review, which was conducted using credible databases such as Google Scholar, IEEE Xplore, Scopus, and PubMed.Articles on processing high-efficiency perovskite solar cells' transfer layers for optimal electron transfer were required to meet the inclusion criteria.After a rigorous selection procedure that followed the PRISMA guidelines for systematic reviews, a total of 60 articles were included.PRISMA guidelines were followed, and a thorough and methodical search strategy was used.Predefined search terms, including "Passivation of Perovskite Solar Cells", "Surface passivation of Electron Transport Layer or Interface Layer", "Analyzing the Perovskite solar cells with Optimal electron transfer layer", "Role of Optimal electron transfer layer for Perovskite solar cells", and "Perovskite solar cells with Optimal electron transfer layer", were used to find relevant articles.Reputable databases, including Google Scholar, IEEE Xplore, Scopus, and PubMed, were searched.One thousand eight hundred articles were initially obtained from Google Scholar; two hundred ninety from IEEE Xplore; eight hundred forty from Scopus; and eighty-five from PubMed.After a thorough screening process that involved removing duplicates and determining their relevance, 280 articles were found to be eligible for additional review.PRISMA guidelines were followed in the final selection of 60 articles, guaranteeing a consistent and thorough evaluation based on the predetermined inclusion criteria. Research Questions 2.2.1. RQ1: Which Is the Most Efficient Electron Transport Layer for Perovskite Solar Cells? Electron Transport Layers (ETLs) in Perovskite Solar Cells: The remarkable power conversion efficiency (PCE) and the promise of low-cost, scalable manufacture achievable with perovskite solar cells (PSCs) have attracted a lot of attention.Because they make it easier to harvest and transport photogenerated electrons, ETLs are essential to PSCs.Additionally, they aid in adjusting the interface, balancing energy levels, and reducing charge recombination inside the cell. Optimal ETL Thickness: The effect of the ETL thickness on PSC performance has been thoroughly investigated by researchers.One noteworthy work used atomic layer deposition (ALD) to manufacture ultrathin titanium dioxide (TiO 2 ) coatings as superior ETLs.The main conclusions were as follows: Ultrathin TiO 2 Films: Thin layers of TiO 2 ranging in thickness from 5 to 20 nm were used in the study as ETLs. Efficiency: By utilizing an ideal 10 nm thick TiO 2 layer, the as-prepared PSCs on fluorine-doped tin oxide (FTO) substrates attained a noteworthy efficiency of 13.6%. Flexible Cells: With low-temperature-processed TiO 2 films at 80 • C, even flexible PSCs on polyethylene terephthalate (PET) substrates demonstrated an efficiency of 7.2%. 
High-Performance Mechanism: Many factors were considered responsible for these cells' success:
• The transmittance of the ultra-thin TiO2 layer was increased;
• The current leakage was minimal;
• The recombination rate and the resistance to charge transfer were decreased;
• The ZnO/SnO2 double layers outperformed all other ETLs in terms of the average power conversion efficiency, delivering 14.6% (best cell: 14.8%), which was 39% better than that of flexible cells made with SnO2-only ETLs in the same batch.

RQ2: How Can a High Power Conversion Efficiency of Perovskite Solar Cells Be Achieved?

It can be inferred that PSC production must master three key processes to reach this level of high efficiency and noticeable stability: (1) controlling the quality of the perovskite film; (2) creating the appropriate CTLs for the PSCs; (3) reducing flaws in the bulk and/or at the interfaces of the perovskite.

RQ3: What Role Does the Electron Transport Layer Play in a Perovskite Solar Device?

In n-i-p architectures, the ETL is essential for producing high-performance solar cells because it inhibits recombination and encourages the transfer of photogenerated electrons from the perovskite layer to the bottom electrode.

Requirements of an Ideal Electron Transport Material

The fill factor (FF), open-circuit voltage (Voc), and short-circuit current density (Jsc) have a direct correlation with the PCE. According to the principles of the photovoltaic effect carried over from traditional p-i-n semiconductor designs [33], the Voc results from the separation of the hole and electron quasi-Fermi levels throughout the whole device and is therefore affected by the energy-level distribution of both the perovskite absorber film and the charge-transporting layer [72]. The carrier recombination and spectral response of the light harvester and of the device are reflected in Jsc. The transport-medium mobility, the film morphology, and the bulk and contact recombination rates in the device can all be indicators of the FF, since it is directly related to charge extraction and transport. Careful selection and design of the adjacent ETL are required because the current standard perovskite materials, such as FAPbI3 and MAPbI3, are moisture-sensitive, thermally unstable, and chemically sensitive due to their strong Lewis acid characteristics. Ideally, the perfect ETL should satisfy each of the following specifications.

Electronic Properties: The lowest unoccupied molecular orbital (LUMO) level of the ETM should preferably be either somewhat lower than or equal to that of the perovskite-based material to facilitate electron selection. Due to the ambipolar transport characteristic of perovskite materials, a wider band gap and a deeper highest occupied molecular orbital (HOMO) than those of the perovskite active layer are needed to fulfill the electron-confinement and hole-blocking functionality [28]. Furthermore, compositional disorder in the material should be reduced, which will minimize the likelihood of ETL defects and thereby suppress carrier recombination. For example, when a highly ordered [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) layer was deposited using a solvent-induced annealing process, PSCs showed an impressive rise in Voc from 1.04 to 1.13 V.
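For reference, the quantities just listed combine into the power conversion efficiency through the standard relation PCE = Jsc × Voc × FF / Pin. The short sketch below evaluates it; the AM1.5G input power density of 100 mW/cm2 and the example values are assumptions for illustration, not data from any device discussed here.

```python
# Minimal sketch of the standard PCE relation: PCE = (Jsc * Voc * FF) / Pin.
# Assumes the AM1.5G standard input power density of 100 mW/cm^2.
# The example numbers below are illustrative, not from a specific device.

def power_conversion_efficiency(jsc_ma_cm2: float, voc_v: float, ff: float,
                                p_in_mw_cm2: float = 100.0) -> float:
    """Return PCE in percent from Jsc (mA/cm^2), Voc (V) and fill factor (0-1)."""
    p_out_mw_cm2 = jsc_ma_cm2 * voc_v * ff  # mA/cm^2 * V = mW/cm^2
    return 100.0 * p_out_mw_cm2 / p_in_mw_cm2

if __name__ == "__main__":
    # e.g. Jsc = 24 mA/cm^2, Voc = 1.10 V, FF = 0.80  ->  PCE ~ 21.1 %
    print(f"{power_conversion_efficiency(24.0, 1.10, 0.80):.1f} %")
```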
An electron mobility higher than that of the perovskite active layer is also necessary to rule out space-charge-limited effects, since any charge accumulation at the interface would accelerate the rate of deterioration [70].

Features of Film Morphology: A pinhole-free, dense ETL morphology is essential for highly efficient PSCs, as it prevents current leakage through small holes in the film and charge recombination at these electrode contacts. Otherwise, shunt paths become more likely, owing to the ambipolar conduction property of perovskite materials [28]. In addition, a high-quality material with few flaws is required to obtain outstanding PSCs with large Voc and FF values.

Hydrophobicity and Chemical Durability: To avoid chemical reactions with the nearby epitaxial layer and anode electrodes, an ideal ETM ought to have strong chemical durability. Furthermore, because hydrophobicity keeps humidity from penetrating and interacting with the perovskite, it is crucial for ETLs in PSCs. Moreover, the chemical interaction that exists between the ETM and the perovskite should be taken into consideration to achieve contact passivation of the perovskite film and lessen the interfacial carrier recombination brought on by defects and trap states at the electron-selective interfaces [72,73]. Additionally, because of the sensitive crystallization behavior of perovskites, selecting an ETL with an appropriate surface energy will be essential for typical n-i-p devices to improve the kinetics of consolidation and the overall appearance of the perovskite films produced.

While it is still very difficult to discover a single ETM that satisfies all of these requirements, several material classes and their hybrids have been researched to address PSC application requirements. Some of the crucial features of existing ETMs include the electron affinity, the valence band maximum, and the conduction band minimum (CBM).

Electron Transport Layers in Perovskite Solar Cells

In terms of defect states, charge transport methods, the electronic structure, thin-film manufacturing, and optoelectronic characteristics, metal oxides (MOxs) provide the most promising design [59]. They allow electron transit and obstruct hole transport to the corresponding electrode. Although MOxs reduce the voltage shunt that exists between the transparent electrode/HTL and the transparent electrode/perovskite interfaces, they have potential as materials for PSCs. A schematic representation of the role of the ETL in perovskite solar cells is given in Figure 2.
Titanium Dioxide (TiO2)

The TiO2 polymorphs known as anatase (tetragonal), rutile (tetragonal), and brookite (orthorhombic) have been extensively employed as photocatalysts [74] and in solar cells [75] due to their distinct crystalline phases and special characteristics. Due to its low cost, tunable electronic characteristics, and conduction band that closely matches that of perovskites, which facilitates electron delivery and collection, TiO2 is a particularly promising n-type ETL material for effective PSCs. Nevertheless, there are certain disadvantages to using TiO2 films in PHJ PSCs: (i) TiO2's poor conductivity and electron mobility make it undesirable for electron transport and collection [76,77]. (ii) When TiO2 is exposed to UV light, oxygen vacancies are produced at the material's interfaces and grain boundaries; these vacancies act as charge traps and significantly reduce the number of carriers generated by photons [42,43]. Consequently, the contact between the TiO2 and the perovskite causes significant instability, delaying the light responsiveness of the resultant devices [77]. A lot of money has been spent on modifying TiO2 compact layers (CLs) through interfacial designs and chemical doping in order to improve PSC performance [18] (Figure 3). The surface form and properties of the TiO2 CL of PSCs have a significant impact on the quality of the perovskite photosensitive layer in terms of crystal size, homogeneity, and surface coverage, which in turn impacts the solar power output [70].
Surface Modification with TiO2 Nanoparticles

The modification of the ETL surface has received considerable attention as a means of enhancing PSC performance and stability. The topological form of TiO2 films can be modified because TiO2 nanoparticles (NPs) have a greater specific surface area than TiO2 CLs. TiO2 NPs facilitate the effective injection of electrons and their transport, which can improve the balance of charge carriers. The TiO2 anatase phase is extensively used as an ETL in PSCs because it is simple to produce [78][79][80]. On the other hand, although the pure brookite phase of TiO2 is difficult to produce, it is the least studied phase. There is also hope for using TiO2's rutile phase as an ETL for PSC purposes. Currently, in PSCs and related device structures, considering their PCEs, materials based on [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) and organic materials such as self-assembling monolayers (SAMs), fullerene (C60), SnO2 NPs, and mp-TiO2 are utilized to combine with or modify TiO2 CLs, SnO2, and ZnO.
Mesoporous TiO2

The technique of fabricating mp-TiO2 films is often laborious and complex, involving the application of a TiO2 CL and then the production of mp-TiO2. mp-TiO2 necessitates a thermal sintering step at temperatures over 500 °C to optimize its electron mobility characteristics and eliminate polymer template particles, in addition to setting the crystallographic phase (anatase) of the oxide layer (Figure 3). This time-consuming, high-temperature technique limits the usage of mp-TiO2 in flexible PSCs produced through roll-to-roll production. Some researchers have studied how lithium-doped mp-TiO2 affects PSC effectiveness [34,81], and the PSCs showed better electrical properties because the lithium-doped mp-TiO2 reduces the electronically charged trap states and accelerates electron transit. The modified TiO2 coatings dramatically changed the electrical conductivity to improve charge extraction and inhibit charge recombination. Furthermore, the doped TiO2 thin film had a major effect on the nucleation of the perovskite layer. As a result, big grains formed and accumulated to create thick films with facetted crystallites. These PSCs containing inkjet-printed mp-TiO2 films had a PCE of 18.29%. Large-scale applications can benefit from the dependable and scalable alternative to spin coating offered by inkjet printing technology. A PCE of 17.19% was observed in PSCs [82-86] that contained mp-TiO2 films made of 50 nm sized NPs. These films showed encouraging functions. To create nanostructure-based ETL materials for PSC applications, a great deal of work has been invested. Following this, nanopillars were employed in PSCs as ETLs. Fast carrier extraction was made possible with effective TiO2 CL/mp-TiO2 nanopillar scaffolds, which reduced the recombination loss. Additional successful mp-TiO2-based PSCs have been reported to date.

Figure 4 summarizes the energy levels of the four phases of TiO2 with X-ray diffraction patterns and scanning electron microscopy (SEM) illustrations [11].
In order to achieve highly efficient TiO2/perovskite solar cells, surface passivation has been carried out by many researchers (Table 1). For example, interfacial recombination was significantly suppressed via passivation using PMMA:PCBM in TiO2-based PSCs. Utilizing chlorine capping on TiO2 in ITO/ETL/Cs0.05FA0.81MA0.14PbI2.55Br0.45/HTM/metal structures resulted in a PCE of 21.40% [87]. Contact passivation with chlorine-capped TiO2 colloidal nanocrystals reduced the interfacial recombination and enhanced the interface binding, exhibiting an efficiency of 20.1% [88]. The doping of sodium chloride (NaCl) into a water-based TiO2 solution was found to improve its conductivity, energy level matching, and charge extraction in the electron transport layer (ETL) for PSCs, thus reaching an output of 23.15% [16]. In the case of carbon-based perovskite solar cells (C-PSCs), the imperfections in the bulk perovskite and at the interface between the perovskite and the electron transport layer (ETL) may lead to undesired increases in trap-state densities and non-radiative recombination, which could restrict their performance. In such cases, the passivation of TiO2 using hydrogen peroxide significantly enhanced the PCE, to 16.23%. H2O2-treated TiO2 offers a practical way to enhance the interfacial bridging between TiO2 and the perovskite in C-PSCs. Moreover, such passivation strategies can also enhance long-term stability in ambient air without encapsulation [89]. In addition, doping TiO2 with different metals as oxides or sulfides also improved the efficiency of the devices. For example, in the case of mesoporous TiO2-based PSCs, Al2O3 has been used. Introducing aluminum oxide significantly suppressed the surface recombination and thus improved the efficiency [90]. In the case of sulfides, the doping of Na2S improved the conductivity of TiO2 layers. Both sodium (Na) and sulfide (S)
play an important role, in which Na increases the conductivity of TiO2 and S alters the wettability of TiO2. These synergetic effects passivate the defects as well as improve the crystallinity of the perovskite, and thus enhanced the efficiency to 21.25% [91]. The doping of TiO2 layers using Mg had a hole-blocking effect. Doping with Mg improved the optical transmission properties, upshifted the conduction band minimum (CBM), and downshifted the valence band maximum (VBM), with a better hole-blocking effect and a longer electron lifetime. Owing to these attributes, the resulting devices exhibited an efficiency of 12.28% [92]. Additionally, doping with indium (In) boosted the fill factor and voltage of perovskite cells. The indium-doped TiO2-based device consisting of Cs0.05(MA0.17FA0.83)0.95Pb(I0.83Br0.17)3 resulted in a 19.3% efficiency [93].

Tin Dioxide (SnO2)

Owing to its favorable optoelectronic properties, such as its broad optical bandgap, elevated electron mobility, remarkable transparency in the visible and near-infrared regions, suitable energetic alignment with perovskites, and effortless production of dense and transparent films through diverse methods, SnO2 is regarded as another feasible ETL that is commonly employed in PSCs [74,94]. Research by Miyasaka and colleagues [95] revealed that PSCs using low-temperature-processed SnO2 as an ETL led to a PCE of 13% with excellent stability. Another study claimed to have achieved a PCE of roughly 21% [64] by using a simple chemical bath that implanted SnO2 as an ETL in PSCs after processing. Surface passivation and the use of a bilayer structure are two methods for elemental doping and changing the surface. More significantly, elemental doping in SnO2 ETLs with different metal cations, including Li+ and Sb3+, demonstrated effective planar PSCs [59,73]. Additionally, by modifying the interface between the SnO2 and perovskite using a 3-aminopropyltriethoxysilane self-assembled monolayer, some researchers obtained effective PSCs with a PCE of 18% [96]. Binary alkaline halides have been employed in SnO2-based PSCs to apply the defect passivation approach [70]. Cesium, chlorinated Ti3C2TF, and ethylene diaminetetraacetic acid (EDTA) were used to modify SnO2 [97,98]. By improving the conduction band of the perovskite and facilitating a smoother interface between the SnO2 and the perovskite, effective planar PSCs with a PCE of 21.52% were generated using EDTA [80]. Chen et al. developed PSCs with a PCE of 13.52% [34,86] by employing simple spin coating to deposit SnO2 onto a TiO2 CL to patch fractures in the TiO2 hole-blocking layer. Recently, stable high-performance PSCs with a PCE of 22.1% were reported wherein the TiO2 CL was impacted by the SnO2 layer [83,86]. By implementing a solution interdiffusion process, a high-quality perovskite film was fabricated with a natural drying method (without spin coating or the assistance of antisolvent, gas, or a vacuum), which improved the efficiency [99] (Table 2). Mesoporous SnO2 ETLs were recently created using a new noncolloidal SnO2 precursor based on acetylacetonate. It was discovered that the halide residue in the film offers superior surface passivation to improve the hole-blocking property and is crucial to the SnO2's thermal durability [11] (Figure 5).
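Because the preceding paragraphs quote many individual PCE values for TiO2- and SnO2-based ETL strategies, it can be convenient to collect them in a small lookup structure when comparing approaches. The sketch below simply re-tabulates numbers already cited above, with abbreviated labels paraphrased for brevity; it is an illustrative aggregation, not an authoritative dataset.

```python
# Illustrative aggregation of PCE values quoted in the preceding paragraphs
# for TiO2- and SnO2-based ETL strategies.  Values are as cited in this
# review; the structure itself is just a convenience for comparison.

REPORTED_PCE = {
    "TiO2 + chlorine capping (ITO/ETL/CsFAMA/HTM/metal)": 21.40,
    "Cl-capped TiO2 colloidal nanocrystals (contact passivation)": 20.1,
    "NaCl-doped water-based TiO2": 23.15,
    "H2O2-treated TiO2 (carbon-based PSC)": 16.23,
    "Na2S-doped TiO2": 21.25,
    "Mg-doped TiO2": 12.28,
    "In-doped TiO2 (CsMAFA perovskite)": 19.3,
    "SnO2, low-temperature processed": 13.0,
    "SnO2 via chemical bath deposition": 21.0,
    "EDTA-modified SnO2": 21.52,
    "SnO2 on TiO2 compact layer (crack patching)": 13.52,
    "SnO2 layer on TiO2 CL (stable high-performance PSC)": 22.1,
}

# Rank the strategies by reported PCE, highest first.
for label, pce in sorted(REPORTED_PCE.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{pce:5.2f} %  {label}")
```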
Representative entries from Table 2 include an aqueous-solution-processed 2D TiS2 electron transport layer in planar perovskite solar cells (18.90% [100]), perovskite photovoltaic modules achieved via cesium doping of MAPbI3 (18.26% [10]), and SnO2 modified with RbCl and potassium polyacrylate (K-PAM) in ITO/SnO2/(FAPbI3)1−x(MAPbBr3)x devices (24.07% [101]).

Because of its large surface area, ease of synthesis, and low cost of production, zinc oxide (ZnO) is a great artificial semiconductor component. Moreover, ZnO has been studied the most as a CL in PSCs because of its superior optoelectronic capabilities [20]. To improve electron transmission from the perovskite layer to the ZnO ETL, researchers added a SAM between the two materials [63]. This allowed them to achieve outstandingly durable PSCs. It is possible to efficiently prevent perovskite degradation by introducing a SnO2 layer between the ZnO and perovskite layers. The PCEs of these PSCs reached as high as 12.17% with minimal repeatability. ZnO has a basic surface with a high isoelectric point (pH > 8.7), which is sufficient to remove protons from the acidic MA cation and encourage breakdown [11].

For photovoltaic (PV) devices, interface engineering in organometal halide PSCs has proven to be an effective means of improving stability and performance. Zinc oxide (ZnO) has long been recognized as a potential layer for electron transport in solar cells, and it can also be used in flexible electronics. Nevertheless, ZnO's reactivity with the perovskite coating during the annealing process limits its use in PSCs (Figure 6). Due to the high-temperature (>450 °C) processing in producing TiO2-based ETLs, the fabrication of flexible devices is limited. Owing to its high electron mobility, low processing temperature, excellent optical transparency in the visible spectrum, and energy level matching with perovskites, zinc oxide (ZnO) has been considered as an alternative ETL to TiO2. However, achieving good efficiencies is hampered by the thermal instability of perovskite films placed directly on ZnO. Perovskite coatings on ZnO are known to break down as the post-annealing temperature rises above 70 °C.
Lowering the temperature during annealing will result in partial crystallization and poor morphology of the perovskites. Therefore, the passivation of ZnO has become attractive in recent research [24,102]. For example, the surface passivation of zinc oxide using magnesium oxide and protonated ethanolamine (EA) produces highly efficient, hysteresis-free, and stable PSCs with a PCE of 21.1% [15]. MgO doping resolves the instability of the ZnO/perovskite interface. Moreover, EA promotes effective electron transport from the perovskite to the ZnO, further fully eliminating PSC hysteresis, and MgO inhibits interfacial charge recombination, thereby improving cell performance and stability [15]. However, the doping of zinc sulfide (ZnS) on the ZnO surface (ZnO-ZnS) opens up a new channel for electron transport, accelerating electron transfer and lowering interfacial charge recombination. This results in a champion efficiency of 20.7% with better stability and little hysteresis (Table 3). It has been shown that ZnS improves PSC performance by acting as a passivating layer and a cascade ETL [103].
Aluminum-doped ZnO nanoparticles can improve the thermal stability of the ETL. In addition, PCBM (phenyl-C61-butyric acid methyl ester) can also be added to solve the problem of reduced short-circuit current density and significant photocurrent hysteresis. These modifications resulted in a PCE of 17% [104]. Interestingly, passivation using Nb2O5 dramatically enhanced the stability of perovskite films over 20 days under ambient conditions and also exhibited an efficiency of 14.57% under simulated solar irradiation. This passivation using Nb2O5 enhanced the crystallinity of the perovskite and improved the stability of the devices [105]. A PCE of nearly 19.81% was achieved by applying interface engineering to ZnO using monolayer graphene (MLG) [61]. The introduction of MLG at the ETL/perovskite interface enhanced both the photovoltaic and carrier extraction capabilities while simultaneously shielding the perovskite layer from degradation at high temperatures, hence contributing to the device's stability. Moreover, the efficiency was enhanced to 21% by passivating further with 3-(pentafluorophenyl)-propionamide (PFPA) [61].

In the case of ZnO-based PSCs, high stability with a PCE > 18% was achieved through the post-treatment of ZnO using ethanolamine [106]. Thus, the in situ passivation of ZnO improved the quality of the perovskite compared to that of a SnO2/perovskite structure.

In addition to TiO2, SnO2, and ZnO, there are some other ETLs reported in the literature [109]. Very recently, UV-inert ZnTiO3 was reported as an electron-selective layer in planar PSCs. ZnTiO3 is a semiconductor with a perovskite structure that exhibits weak photocatalysis but good chemical stability. The indium-doped tin oxide ITO/ZnTiO3/Cs0.05FA0.81MA0.14PbI2.55Br0.45/Spiro-MeOTAD/Au device enhanced photostability and displayed a stable power conversion efficiency of 19.8%. These novel ETLs offer a new family of electron-selective materials with exceptional UV stability [107].

An amorphous tungsten oxide/tin dioxide hybrid electron transport layer has also been reported, which can efficiently block holes from leaking through the pinholes and cracks of the tin dioxide to the indium tin oxide. This promotes charge extraction and impedes the electron-hole recombination process at the hetero-interface. Furthermore, superior electron transport is achieved in comparison to conventional electron transport layers because of the increased mobility of amorphous tungsten oxides and the creation of a cascading energy-level sequence between the amorphous tungsten oxide and the tin dioxide. A higher power conversion efficiency of 20.52% has been demonstrated by PSCs based on a hybrid SnO2/a-WO3 ETL [108] (Table 3).
Polymers If utilized as an ETL scaffold, polymers can give perovskite absorbers the best possible morphologies and robust humidity resistance.However, because of their weak conductivity limits or insulating nature, mesoporous polymer scaffolds are typically employed as templates rather than ETLs in PSCs [18][19][20].For example, a mesoporous graphene/polymer (mp-GP)/Cs 2 CO 3 ETL can be produced at low temperatures for high-performance PSCs to enhance electron transport.The granular-like polyaniline, also known as PANI, works together with the conductive graphene network structure to perform tasks concurrently, as follows: (1) it has well-defined pores that function as quick electromagnetic frequencies; (2) it provides a permeable micro-void space for the layers of activity to infiltrate, resulting in a fully crystalline polycrystalline external; and (3) because of the chemical inactivity and packaging of the perovskite crystals, the addition of mp-GP as an ETL demonstrates increased efficiency in PSCs since the 2D version of graphite offers a solid 3D structure that protects the perovskite component from water infiltration and aggressive interface development when operating at a high frequency.Benefiting from the previously mentioned characteristics, these unencapsulated PSCs showed an impressive PCE of 13.8%, as well as exceptional chemical and thermal durability, as evidenced by a hardly perceptible drop in the PL boiling effectiveness after thirty minutes of heat annealing in air at 150 • C [86].Polyethylene glycol was also used as a moisture-resistant component and the efficiency was recovered [110]. Future Directions and Conclusions Perovskite solar cells with regular/planar structures exhibit efficiencies above 25%.For further development, there are many factors that need to be considered, such as improving the perovskite morphology and crystallinity (large grain size), and achieving compatibility between the ETL and perovskite absorber.In addition, the stability of the devices, low-cost fabrication, and the fabrication of flexible solar cells are other issues that remain hindrances to their widespread commercialization. 
(a) Perovskite morphology: Because of the persistently high defect density in solution-processed films, effective methods for passivating these defects both in the bulk and on the surface are needed in order to achieve an efficiency of greater than 25% for commercialization. Understanding the surface morphology of both the ETL and the perovskite layer, as well as their interface, is very important before processing. Even though many attempts have been made to improve the morphology or crystallinity of perovskites in order to minimize defects, reducing the recombination rate is still challenging. In addition, there is a lack of techniques or tools to qualitatively investigate or to quantify the density of perovskite defects before and after passivation. The existing steady-state PL method is limited to radiative recombination, and challenges still exist for non-radiative components [111]. In order to achieve high efficiency and high-quality perovskite films with a large grain size, both an electron diffusion length that largely exceeds the optical penetration depth and a high electron mobility are required. To capture more photons, additional optimizations like thickening the perovskite and adding an anti-reflection layer might be beneficial [112].

(b) Open-circuit voltage (Voc): The loss of open-circuit voltage (Voc) plays a crucial role in limiting the efficiency of perovskite solar cells. By precisely managing the perovskite preparation process, bulk impurities and structural flaws can be reduced, and non-radiative recombination losses can be avoided by controlling or engineering the layer interfaces. In this way, it is possible to approach the full Voc of ~1.34 V for MAPbI3.

(c) Stability: PSC instability has been shown to be most aggressively caused by humidity because of the strong interaction between water molecules and the perovskite material.
In general, ETLs have the problems of moisture sensitivity and poor film morphology. External factors such as humidity, light, heat, and electric fields severely damage the perovskite crystals by triggering chemical reactions or by allowing ion migration to occur easily through defect sites [113]. Isolating the device from the environment, using hydrophobic back-contact materials, or encapsulating it can all be used to prevent or slow this form of degradation [114]. Encapsulation is a technique that is used to suppress charge-driven degradation. However, encapsulation alone fails to stop these molecules from penetrating, and effective mitigation techniques for charge accumulation, such as minimizing the grain-boundary defects in perovskite crystals, should be developed in order to stop irreversible degradation and enhance the material's stability. A number of significant advancements have also been made in the area of long-term stability, such as the demonstration of solid-state perovskite solar cells, two-step spin-coating techniques, compositional engineering, solvent-based approaches, and the use of low-dimensional (2D, quasi-2D, and 2D/3D) perovskites [113]. In order to fix organic cations on grain boundaries, and thus inhibit ion movement and ultimately significantly increase the operational stability of perovskite solar cells, a covalent bonding approach has recently been developed. Perovskites can be stabilized through ion redistribution and the release of stored charges during the nighttime via a cyclic operation that simulates real operating conditions. Therefore, this covalent bonding approach must be optimized using different chemical doping methods, which may enhance the stability of the fabricated PSCs.

(d) Toxicity of Pb2+: Lead (Pb2+) is still used at the B-cation site in perovskite solar cells, even in the most advanced devices. Because lead is a dangerous substance, using it could have negative effects on the environment and could possibly make its way into the human food chain. One approach is the management of Pb2+ by chelating it with thiol or phosphonic acid derivatives, which stop the leakage of toxic lead. Another option is the fabrication of lead-free devices. As a result, a lot of research has been conducted on lead-free substitute perovskite materials. Tests have been conducted on perovskite solar cells based on a variety of elements, including antimony, copper, germanium, bismuth, and others. Tin seems to be the best option because of its comparable electronic structure and ionic radius. As a result, the lead ion in the B-site can be directly replaced without causing a large phase shift. The PCE of tin-based perovskite cells is approximately 10-12%, which is substantially less than that of lead-containing perovskites. However, the drawback of tin is that it undergoes oxidation from Sn2+ to Sn4+. Therefore, doping with suitable elements or chemicals needs to be optimized.

(e) Commercialization: There are still some major issues stopping the large-scale commercialization of perovskite solar cells. The current manufacturing techniques used in lab-scale projects are not suitable for large-scale production. This is being addressed with a search for techniques that are compatible with roll-to-roll processing, allowing high throughput.
In conclusion, researchers and scientists are developing next-generation PSCs with enhanced PCE and long-term stability in an effort to solve these difficulties. Furthermore, to completely unlock the high inherent electrical quality that perovskites offer, appropriate passivation procedures, including dopant engineering, solvent engineering, interface engineering, and heat engineering, must be developed. Perovskite has the potential to surpass other PV technologies in the future with the help of methodical collaboration between a variety of scientific, engineering, and entrepreneurial sectors.

Figure 2. Schematic representation of the role of the ETL in perovskite solar cells.

Table 1. Descriptions of different surface alterations in TiO2-based devices and their PCEs.

Table 2. Descriptions of different surface alterations/device architectures and their PCEs.

Table 3. Descriptions of different surface alterations in ZnO-based and other ETL-based devices and their PCEs.
Constructor theory of probability Unitary quantum theory, having no Born Rule, is non-probabilistic. Hence the notorious problem of reconciling it with the unpredictability and appearance of stochasticity in quantum measurements. Generalizing and improving upon the so-called ‘decision-theoretic approach’, I shall recast that problem in the recently proposed constructor theory of information—where quantum theory is represented as one of a class of superinformation theories, which are local, non-probabilistic theories conforming to certain constructor-theoretic conditions. I prove that the unpredictability of measurement outcomes (to which constructor theory gives an exact meaning) necessarily arises in superinformation theories. Then I explain how the appearance of stochasticity in (finitely many) repeated measurements can arise under superinformation theories. And I establish sufficient conditions for a superinformation theory to inform decisions (made under it) as if it were probabilistic, via a Deutsch–Wallace-type argument—thus defining a class of decision-supporting superinformation theories. This broadens the domain of applicability of that argument to cover constructor-theory compliant theories. In addition, in this version some of the argument's assumptions, previously construed as merely decision-theoretic, follow from physical properties expressed by constructor-theoretic principles. Introduction Quantum theory without the Born Rule (hereinafter: unitary quantum theory) is deterministic [1]. Its viability as a universal physical theory has long been debated [1][2][3][4]. A contentious issue is how to reconcile its determinism with the unpredictability and appearance of stochasticity in quantum measurements [2,4,5]. Two problems emerge: (i) how unpredictability can occur in unitary quantum theory, as, absent the Born Rule and 'collapse' processes, single measurements do not deliver single observed outcomes ( §1b), and (ii) how unitary quantum theory, being non-probabilistic, can adequately account for the appearance of stochasticity in repeated measurements ( §1c). These problems also arise in the constructor theory of information [6] ( § §2 and 3). In that context unitary quantum theory is one of a class of superinformation theories, most of them yet to be discovered, which are elegantly characterized by a simple, exact, constructortheoretic condition. Specifically, certain physical systems permitted under such theories-called superinformation media-exhibit all the most distinctive properties of quantum systems. Like all theories conforming to the principles [7] of constructor theory, superinformation theories are expressed solely via statements about possible and impossible tasks, and are necessarily nonprobabilistic. So a task being 'possible' in this sense means that it could be performed with arbitrarily high accuracy-not that it will happen with non-zero probability. Just as for unitary quantum theory, therefore, an explanation is required for how superinformation theories could account for unpredictable measurement outcomes and apparently stochastic processes. To provide this, I shall first provide an exact criterion for unpredictability in constructor theory ( §4); then I shall show that unpredictability necessarily arises in superinformation theories (including quantum theory) as a result of the impossibility of cloning certain states-thereby addressing problem (i). 
Then, I shall generalize and improve upon an existing class of proposed solutions to problem (ii) in quantum theory-known as the decision-theoretic approach [1,[8][9][10], by recasting them in constructor theory. This will entail expressing a number of physical conditions on superinformation theories for them to support the decision-theoretic approach-thus defining a class of decision-supporting superinformation theories, which include unitary quantum theory ( § §5-7). As I shall outline ( §1d), switching to constructor theory widens the domain of applicability of such approaches, to cover certain generalizations of quantum theory; it also clarifies the assumptions on which Deutsch-Wallace-type approaches are based, by revealing that most of the assumptions are not decision-theoretic, as previously thought [1,8], but physical. (a) Constructor theory Constructor theory is a proposed fundamental theory of physics [7], consisting of principles that underlie other physical theories (such as laws of motion of elementary particles, etc.), called subsidiary theories in this context. Its mode of explanation requires physical laws to be expressed exclusively via statements about which physical transformations (more precisely, tasks- §2) are possible, which are impossible, and why. This is a radical departure from the prevailing conception of fundamental physics, which instead expresses laws as predictions about what happens, given dynamical equations and boundary conditions in space-time. As I shall explain ( §2), constructor theory is not just a framework (e.g. [11], or category theory [12]) for reformulating existing theories: its principles are proposed physical laws, which supplement the content of existing theories. They express regularities among subsidiary theories, including new ones that the prevailing conception cannot adequately capture. They thereby address some of those theories' unsolved problems and illuminate the theories' underlying meaning, informing the development of successors-just as, for instance, the principle of energy conservation explains invariance properties of the dynamical laws, supplementing their explanatory content without changing their predictive content, providing criteria to guide the search for future theories. In this work, I appeal to the principles of the constructor theory of information [6]. They express the regularities in physical laws that are implicitly required by theories of information (e.g. Shannon's) as exact statements about possible and impossible tasks, thus giving a physical meaning to the hitherto fuzzily defined notion of information. Notions such as measurement and distinguishability, which are problematic to express in quantum theory, yet are essential to the decision-theoretic approach, can be exactly expressed in constructor theory. (b) Unpredictability The distinction between unpredictability and randomness is difficult to pin down in quantum theory, especially unitary quantum theory, but it can be naturally expressed in the more general context of constructor theory. Unpredictability occurs in quantum systems even given perfect knowledge of dynamical laws and initial conditions. 1 When a perfect measurer of a quantum observablê X-say the x-component of the spin of a spin-1/2 particle-is presented with the particle in a superposition (or mixture) of eigenvectors ofX, say |0 and |1 , it is impossible to predict reliably which outcome (0 or 1) will be observed. 
But in unitary quantum theory, a perfect measurement of X̂ is merely a unitary transformation on the source S_a (the system to be measured) and the target S_b (the 'tape' of the measurer):

|x⟩_a |0⟩_b → |x⟩_a |x⟩_b , x ∈ {0, 1},

which implies, by the linearity of quantum theory,

(α|0⟩ + β|1⟩)_a |0⟩_b → α |0⟩_a |0⟩_b + β |1⟩_a |1⟩_b

for arbitrary complex amplitudes α, β. As no wavefunction collapse occurs, there is no single 'observed outcome'. All possible outcomes occur simultaneously: in what sense, then, are they unpredictable? Additional explanations are needed; e.g. Everett's [13] is that the observer differentiates into multiple instances, each observing a different outcome, whence the impossibility of predicting which one [1,5]. Such accounts, however, can only ever be approximate in quantum theory, as they rely on emergent notions such as observed outcomes and 'universes'. Also, unpredictability is a counterfactual property: it is not about what will happen, but about what cannot be made to happen. So, while the prevailing conception struggles to accommodate it, constructor theory does so naturally. Just as the impossibility of cloning a set of non-orthogonal quantum states is an exact property [14], I shall express unpredictability exactly as a consequence of the impossibility of cloning certain sets of states under superinformation theories (§4). This distinguishes it from (apparent) randomness, which, as I shall explain, requires a quantitative explanation.

(c) The appearance of stochasticity

Another key finding of this paper is a sufficient set of conditions for superinformation theories to support a generalization of the decision-theoretic approach to probability, thereby explaining the appearance of stochasticity, namely that repeated identical measurements not only have different unpredictable outcomes but are also, to all appearances, random. Specifically, consider the frequencies of each observed outcome x in multiple measurements of a quantum observable X̂ on N systems, each prepared in a superposition or mixture ρ of X̂-eigenstates |x⟩. The appearance of stochasticity is that, for sufficiently large N, the frequencies do not differ significantly (according to some a priori fixed statistical test) from the numbers Tr{ρ|x⟩⟨x|} (equality occurring in the limiting case of an ensemble (§6)). To account for this, the Born Rule states that the probability that x is the outcome of any individual X̂-measurement is Tr{ρ|x⟩⟨x|}, thus linking, by fiat, Tr{ρ|x⟩⟨x|} with the frequencies in finite sequences of experiments. In unitary quantum theory no such link can, prima facie, exist, as all possible outcomes occur in reality. How can that theory inform an expectation about finite sequences of experiments, as its Born-Rule-endowed counterpart can? The decision-theoretic approach claims to explain how [1,[8][9][10][15][16][17]. It models measurements as deterministic games of chance: X̂ is measured on a superposition or mixture ρ of X̂-eigenstates; the reward is equal (in some currency) to the observed outcome. Thus, the problem is recast as that of how unitary quantum theory can inform decisions of a hypothetical rational player of that game, satisfying only non-probabilistic axioms of rationality. The decision-theory argument shows that the player, knowing unitary quantum theory (with no Born Rule) and the state ρ, reaches the same decision, in situations where the Born Rule would apply, as if they were informed by a stochastic theory with Born-Rule probabilities Tr{ρ|x⟩⟨x|}. This explains how Tr{ρ|x⟩⟨x|} can inform expectations in single measurements under unitary quantum theory.
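As an aside, the quantum special case just described is easy to check numerically. The short numpy sketch below (variable names are mine, purely illustrative) implements the measurement interaction |x⟩_a|0⟩_b → |x⟩_a|x⟩_b as a CNOT-type unitary, applies it to a superposition, and reads off the numbers Tr{ρ|x⟩⟨x|}; nothing beyond elementary quantum mechanics is assumed.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Measurement interaction |x>_a |0>_b -> |x>_a |x>_b:
# a CNOT with the source as control and the tape as target
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

alpha, beta = 0.6, 0.8                     # arbitrary real amplitudes, alpha^2 + beta^2 = 1
source = alpha * ket0 + beta * ket1
psi_out = U @ np.kron(source, ket0)        # tape starts in the receptive state |0>

# psi_out = alpha|00> + beta|11>: both outcomes are present, none is singled out
print(np.linalg.matrix_rank(psi_out.reshape(2, 2)))   # 2, i.e. the state is entangled

# The numbers Tr{rho |x><x|} are just the squared amplitudes of the source state
rho = np.outer(source, source.conj())
f = [np.trace(rho @ np.outer(k, k)).real for k in (ket0, ket1)]
print(f)                                   # approximately [0.36, 0.64]
```

At this stage the numbers f are merely properties of the state; it is the argument of §§5-7 that explains in what sense they can inform decisions.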
One must additionally prove, from this, that unitary quantum theory is as testable as its Born-Rule-endowed counterpart [1,15,18]. Thus, the decision-theoretic approach claims to explain the appearance of stochasticity in unitary quantum theory without invoking stochastic laws (or axioms), rather as Darwin's theory of evolution explains the appearance of design in biological adaptations without invoking a designer. It has been challenged, especially in regard to testability [4,19], and defended in, for example, [1,15,18]. Note that this work is not a defence of that approach; rather, it aims at clarifying and illuminating its assumptions (showing that most of them are physical) and at broadening its domain of applicability to more general theories than quantum theory. In my generalized version of the decision-theoretic approach, I shall define a game of chance under superinformation theories ( §7) and then identify a sufficient set of conditions for them to support decisions (under that approach) in the presence of unpredictability ( §6). These conditions define the class of decision-supporting superinformation theories (including unitary quantum theory). Specifically, they include conditions for superinformation theories to support the generalization f x of the numbers Tr{ρ|x x|} ( §5) corresponding to Born-Rule probabilities. That is to say, my version of the decision-theory argument explains how the numbers f x can inform decisions of a player satisfying non-probabilistic rationality axioms under certain superinformation theories ( §7). Those theories would account for the appearance of stochasticity at least as adequately as unitary quantum theory. (d) Summary of the main results Switching to constructor theory yields three interrelated results: (1) The unpredictability of measurements in superinformation theories is exactly distinguished from the appearance of stochasticity, and proved to follow from the constructor-theory generalization of the quantum no-cloning theorem ( §4). (2) A sufficient set of conditions for superinformation theories to support the decision-theoretic argument ( § §5-7) is provided, defining a class of decision-supporting superinformation theories, including unitary quantum theory. Constructor theory emancipates the argument from formalisms and concepts specific to (Everettian) quantum theory-such as 'observed outcomes' or 'relative states'. (3) Most premises of the decision-theory argument are no longer controversial decisiontheoretic axioms, as in existing formulations, but follow from physical properties implied by exact principles of constructor theory. In § §2 and 3, I summarize as much of constructor theory as is needed; in §4, I present the criterion for unpredictability; in § §5 and 6, I give the condition for superinformation theories to permit the constructor-theoretic generalization of the numbers f x = Tr{ρ|x x|}; in §7, I present the decision-theory argument in constructor theory. Constructor theory In constructor-theoretic physics the primitive notion of a 'physical system' is replaced by the slightly different notion of a substrate-a physical system some of whose properties can be changed by a physical transformation. Constructor theory's primitive elements are tasks (as defined below), which intuitively can be thought of as the specifications of physical transformations affecting substrates. Its laws take the form of conditions on possible/impossible tasks on substrates allowed by subsidiary theories. 
As tasks involving individual states are rarely fundamental, more general descriptors for substrates are convenient. (a) Attributes and variables The subsidiary theory must provide a collection of states, attributes and variables, for any given substrate. These are physical properties of the substrate and can be represented in several interrelated ways. For example, a traffic light is a substrate, each of whose eight states (of three lamps, each of which can be on or off) is labelled by a binary string (σ r , σ a , σ g ): σ i ∈ {0, 1}, ∀i ∈ {r, a, g}, where, say, σ r = 0 indicates that the red lamp is off, and σ r = 1 that it is on. Similarly, for i = a (amber) and i = g (green). Thus, for instance, the state where the red lamp is on and the others off is (1, 0, 0). An attribute is any property of a substrate that can be formally defined as a set of all the states in which the substrate has that property. So, for example, the attribute red of the traffic light, denoted by r, is the set of all states in which the red lamp is on: An intrinsic attribute is one that can be specified without referring to any other specific system. For example, 'having the same colour lamp on' is an intrinsic attribute of a pair of traffic lights, but 'having the same colour lamp on as the other one in the pair' is not an intrinsic attribute of either of them. In quantum theory, 'being entangled with each other' is, likewise, an intrinsic attribute of a qubit pair; 'having a particular density operator' is an intrinsic attribute of a qubit that has, for instance, undergone an entangling interaction. The rest of the quantum state, in the Heisenberg picture, describes entanglement with the other systems that the qubit has interacted with, and so is not an intrinsic attribute of the qubit. A physical variable is defined in a slightly unfamiliar way as any set of disjoint attributes of the same substrate. In quantum theory, this includes not only all observables (which are representable as Hermitian operators), but many other constructs, such as any set {x, y} where x and y are the attributes of being, respectively, in distinct non-orthogonal states |x and |y of a quantum system-i.e. the eigenvalues of two non-commuting observables. Whenever a substrate is in a state in an attribute x ∈ X, where X is a variable, we say that X is sharp (on that system), with the value x-where the x are members of the set X of labels 3 of the attributes in X. As a shorthand, 'X is sharp in a' shall mean that the attribute a is a subset of some attribute in X. In the case of the traffic light, 'whether some lamp is on' is the variable P = {off , on}, where I have introduced the attributes off = {(0, 0, 0)} and on, which contains all the states where at least one lamp is on. So, when the traffic light is, say, in the state (1, 0, 0) where only the red lamp is on, we say that 'P is sharp with value on'. Also, we say that P is sharp in the attribute r (red, defined above), with value on-which means that r ⊆ on. In quantum theory, a substrate can be a quantum spin-1/2 particle-e.g. an electron. The z-component of the spin is a variable, represented as the set of two intrinsic attributes: that of the z-component of the spin being 1/2 and −1/2. That variable is sharp when the qubit is in a pure eigenstate of the observable corresponding to the z-component of the spin and is non-sharp otherwise. 
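The traffic-light bookkeeping can be made concrete in a few lines of Python. The sketch below (helper names are mine, purely illustrative) encodes states as triples, attributes as sets of states, and a variable as a set of disjoint attributes, together with a check of when a variable is sharp.

```python
from itertools import product

# States of the traffic light: (sigma_r, sigma_a, sigma_g), with 1 meaning 'lamp on'
STATES = set(product((0, 1), repeat=3))

# Attributes are sets of states
red = frozenset(s for s in STATES if s[0] == 1)      # the red lamp is on
off = frozenset({(0, 0, 0)})
on = frozenset(STATES - {(0, 0, 0)})                 # at least one lamp is on

# A variable is a set of disjoint attributes, e.g. P = {off, on}
P = {off, on}

def sharp_value(variable, state):
    """Return the attribute of `variable` that contains `state`, if any."""
    for attribute in variable:
        if state in attribute:
            return attribute
    return None

def sharp_in(variable, attribute):
    """'variable is sharp in attribute': the attribute is a subset of one of its members."""
    return any(attribute <= member for member in variable)

print(sharp_value(P, (1, 0, 0)) == on)   # True: P is sharp with value 'on'
print(sharp_in(P, red))                  # True: P is sharp in the attribute red, with value 'on'
print(sharp_in({red, frozenset(STATES) - red}, on))   # False: 'is the red lamp on?' is not sharp in 'on'
```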
(b) Tasks A task is the abstract specification of a physical transformation on a substrate, which is transformed from having some physical attribute to having another. It is expressed as a set of ordered pairs of input/output attributes x i → y i of the substrates. I shall represent it as: 4 A = {x 1 → y 1 , x 2 → y 2 , . . .}. 3 I shall always define symbols explicitly in their contexts, but for added clarity I use the convention: small Greek letters (γράμματα) denote states; small italic boldface denotes attributes; CAPITAL ITALIC BOLDFACE denotes variables; small italic denotes labels; CAPITAL ITALIC denotes sets of labels; CAPITAL BOLDFACE denotes physical systems; and capital letters with arrow above (e.g. C) denote constructors. The {x i } are the legitimate input attributes; the {y i } are the output attributes. A constructor for the task A is defined as a physical system that would cause A to occur on the substrates and would remain unchanged in its ability to cause that again. Schematically, Input attribute of substrates Constructor − −−−−−− → Output attribute of substrates, where constructor and substrates jointly are isolated. This scheme draws upon two primitive notions that must be given physical meanings by the subsidiary theories, namely: the substrates with the input attribute are presented to the constructor, which delivers the substrates with the output attribute. A constructor is capable of performing A if, whenever presented with the substrates with a legitimate input attribute of A (i.e. in any state in that attribute), it delivers them in some state in one of the corresponding output attributes, regardless of how it acts on the substrate with any other attribute. A task on the traffic light substrate is {on → off }; and a constructor for it is a device that must switch off all its lamps whenever presented when any of the states in on. In the case of the task {off → on}, it is enough that, when the traffic light as a whole is switched off (in the state (0, 0, 0)), it delivers some state in the attribute on-say by switching on the red lamp only, delivering the state (1, 0, 0)-not necessarily all of them. In quantum information, for instance, a unitary quantum gate can be thought of as implementing a one-to-one possible task on the qubits that it acts on-its substrates. The physical system implementing the gate and the substrates constitute an isolated system: the gate is the same after the task as before. (Impossible tasks do not have constructors and thus cannot be thought of as corresponding to gates obeying quantum theory.) (c) The fundamental principle A task T is impossible if there is a law of physics that forbids it being carried out with arbitrary accuracy and reliability by a constructor. Otherwise, T is possible, which I shall denote by T . This means that a constructor capable of performing T can be physically realized with arbitrary accuracy and reliability (short of perfection). Catalysts and computers are examples of approximations to constructors. So, 'T is possible' means that T can be brought about with arbitrary accuracy, but it does not imply that it will happen, as it does not imply that a constructor for it will ever be built and presented with the right substrate. Conversely, a prediction that T will happen with some probability would not imply T's possibility: that 'rolling a seven' sometimes happens when shooting dice does not imply that the task 'roll a seven under the rules of that game' can be performed with arbitrarily high accuracy. 
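In the same toy setting, a task can be written down directly as a set of input/output attribute pairs and a candidate device checked against it. The sketch below (again with illustrative names; the lambdas stand for idealized constructors, which in reality are only ever approximated) encodes the tasks {on → off} and {off → on} discussed above; the check mirrors the definition of being capable of performing a task: every state in a legitimate input attribute must be delivered in some state of the corresponding output attribute.

```python
from itertools import product

STATES = set(product((0, 1), repeat=3))
off = frozenset({(0, 0, 0)})
on = frozenset(STATES - {(0, 0, 0)})

# A task is a set of ordered pairs of input/output attributes
task_switch_off = {(on, off)}
task_switch_on = {(off, on)}

def performs(device, task):
    """True if, for every legitimate input attribute, each state in it is
    delivered in some state of the corresponding output attribute."""
    return all(device(s) in out for (inp, out) in task for s in inp)

# An idealized constructor for {on -> off}: switch every lamp off
print(performs(lambda s: (0, 0, 0), task_switch_off))         # True

# For {off -> on} it is enough to deliver *some* state in 'on', e.g. red only
print(performs(lambda s: (1, 0, 0), task_switch_on))          # True

# A device that only switches the red lamp off does not perform {on -> off}
print(performs(lambda s: (0, s[1], s[2]), task_switch_off))   # False
```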
Non-probabilistic, counterfactual properties-i.e. about what does not happen, but could-are central to constructor theory's mode of explanation, as expressed by its fundamental principle: I. All (other) laws of physics are expressible solely in terms of statements about which tasks are possible, which are impossible, and why. The radically different mode of explanation employed by this principle permits the formulation of new laws of physics (e.g. constructor information theory's ones). Thus, constructor theory differs in motivation and content from existing operational frameworks, such as resource theory [11], which aims at proving theorems following from subsidiary theories, allowing their formal properties to be expressed in the resource-theoretic formalism. Constructor theory, by contrast, proposes new principles, not derivable from subsidiary theories, to supplement them, elucidate their physical meaning and impose severe restrictions ruling out some of them. In addition, while resource theory focuses on allowed/forbidden processes under certain dynamical laws, constructor theory's main objects are impossible and possible tasks; the latter are not just allowed processes: they require a constructor to be possible. However, resource theory might be applicable to express certain constructor-theoretic concepts. For instance, as remarked, the notion of a chemical catalyst, as recently formalized in resource theory [20], is related to that of a constructor. A constructor is distinguished among general catalyst-type objects in that it is required to be capable of performing the task reliably, repeatedly and with perfect accuracy. Hence an ideal constructor is not itself a physical object, but the limiting case of an infinite sequence of possible physical objects. Each element of the sequence is an approximation to a constructor, performing the task to some finite accuracy. For example, the task of copying a string of letters is a possible task; a perfect copier is never realized in reality; but one can in principle build increasingly accurate approximations to it: a template copier is inaccurate; for higher accuracies, one would need to include some error-correction mechanism. Thus, that a task is possible is a manner of speaking about the realizability of each element of such a sequence, except for the limiting, perfect constructor that is never physically realizable, because of the inevitability of errors and deterioration under our physical laws. Hence principle I requires subsidiary theories to have two crucial properties (holding in unitary quantum theory): (i) they must support a topology on the set of physical processes they apply to, which gives a meaning to a sequence of approximate constructions, converging to an exact performance of T; (ii) they must be non-probabilistic-as they must be expressed exclusively as statements about possible/impossible tasks. For instance, the Born-Rule-endowed versions of quantum theory, being probabilistic, do not obey the principle. (d) Principle of locality S 1 ⊕ S 2 is the substrate consisting of substrates S 1 and S 2 . Constructor theory requires subsidiary theories to provide the following support for such a combination. First, if subsidiary theories designate any task as possible which has S 1 ⊕ S 2 as the input substrate, they must provide a meaning for presenting S 1 and S 2 to the relevant constructor as the substrate S 1 ⊕ S 2 . Second, they must conform to Einstein's [21] principle of locality in the form: II. 
There exists a mode of description such that the state of S 1 ⊕ S 2 is the pair (ξ , ζ ) of the states 5 ξ of S 1 and ζ of S 2 , and any construction undergone by S 1 and not S 2 can change only ξ and not ζ . Unitary quantum theory satisfies principle II, as is explicit in the Heisenberg picture [22,23]. In that picture, the state of a quantum system is, at any one time, a minimal set of generators for the algebra of observables of that system, plus the Heisenberg state [24,25]. As the latter never changes, it can be abstracted away when specifying tasks: any residual 'non-locality' in that state [26] does not prevent quantum theory from satisfying principle II. The parallel composition A ⊗ B of two tasks A and B is the task whose net effect on a substrate M ⊕ N is that of performing A on M and B on N. When A ⊗ T is possible for some task T on some generic, naturally occurring substrate (as defined in [6]), A is possible with side effects, which is written A . (T represents the side effect.) Note that in quantum theory the constructor performing a possible task A ⊗ B on M ⊕ N may generate entanglement between the two substrates. For example, consider two tasks A and B including only one pair of input/output attributes on some qubit, and let a be the output attribute of A, b that of B. LetP a be the quantum projector for a qubit to have the attribute a andP b for attribute b. The constructor performing A ⊗ B is required to deliver the two qubits in a quantum state in the +1-eigenspace of the projectorP a ⊗P b (where the small ⊗ denotes here the tensor product between quantum operators). Such eigenspaces in general include states describing entangled qubits. Constructor theory of information I shall now summarize the principles of the constructor theory of information [6]. These express exactly the properties required of physical laws by theories of (classical) information, computation and communication-such as the possibility of copying-as well as the exact relation between what has been called informally 'quantum information' and 'classical information'. 6 First, one defines computation media. 7 A computation medium with computation variable V (at least two of whose attributes have labels in a set V) is a substrate on which the task Π (V) of performing a permutation Π defined via the labels V is possible (with or without side effects), for all Π . Π (V) is a reversible computation. 8 Information media are computation media on which additional tasks are possible. Specifically, a variable X is clonable if for some attribute namely cloning X is possible (with or without side effects). 9 An information medium is a substrate with at least one clonable computation variable, called an information variable (whose attributes are called information attributes). For instance, a qubit is a computation medium with any set of two pure states, even if they are not orthogonal [6]; with orthogonal states it is an information medium. Information media must also obey the principles of constructor information theory, which I shall now recall. (a) Interoperability Let X 1 and X 2 be variables of substrates S 1 and S 2 , respectively, and X 1 × X 2 be the variable of S 1 ⊕ S 2 whose attributes are labelled by the ordered pair (x, x ) ∈ X 1 × X 2 , where X 1 and X 2 are the sets of labels of X 1 and X 2 , respectively, and × denotes the Cartesian product of sets. 
The interoperability principle is elegantly expressed as a constraint on the composite system of information media (and on their information variables): III. The combination of two information media with information variables X 1 and X 2 is an information medium with information variable X 1 × X 2 . (b) Distinguishing and measuring These are expressed exactly in constructor theory as tasks involving information variableswithout reference to any a priori notion of information. A variable X is measurable if a special case of the distinguishing task (3.2) is possible (with or without side effects)-namely, when the original source substrate continues to exist 11 in some attribute y x and the result is stored in a target substrate: where x 0 is a generic, 'receptive' attribute and 'X' = {'x': x ∈ X} is an information variable of the target substrate, called the output variable (which may, but need not, contain x 0 ). When X is sharp on the source with any value x, the target is changed to having the information attribute 'x', meaning S had attribute x . A measurer of X is any constructor capable of performing the task (3.3) for some choice of its output variable, labelling and receptive state. 12 Thus, it also is a measurer of other variables: for example, it measures any subset of X, or any coarsening of X (a variable whose members are unions of attributes in X). Two notable coarsenings of X 1 × X 2 are: X 1 + X 2 , where the attributes (x 1 , x 2 ) are re-labelled with numbers x 1 + x 2 (and combined accordingly), and X 1 X 2 , where the attributes (x 1 , x 2 ) are re-labelled with numbers x 1 x 2 (and likewise combined). I shall consider only non-perturbing measurements, i.e. y x ⊆ x in (3.3). Whenever the output variable is guaranteed to be sharp with a value 'x', I shall say, with a slight abuse of terminology, that the measurer of X "delivers a sharp output 'x'". (c) The 'bar' operation Given an information attribute x, define the attributex (x-bar) as the union a : a⊥x a of all attributes that are distinguishable from x. With this useful tool one can construct a Boolean information variable, defined as {x,x} (which, as explained below, is a generalization of quantum projectors). Also, for any variable X, define the attribute u X . = x∈X x. The attributeū X is the constructor-theoretic generalization of the subspace spanned by a set of quantum states. For example, consider an information variable X = {0, 1} where 0 and 1 are the attributes of being in particular eigenstates of a non-degenerate quantum observableX (which also has other eigenstates). Then,ū X is the attribute of being in any of the possible superpositions and mixtures (prepared by any possible preparation 13 ) of those two eigenstates ofX. (d) Consistency of measurement In quantum theory repeated measurements of physical properties are consistent in the following sense. Consider the variable X = {0, 1} defined above. Let 2 be the attribute of being in a particular eigenstate ofX orthogonal to both 0 and 1. All measurers of the variable Z = {u X , 2} are then also measurers of the variable Z = {ū X , 2}, so that all measurers of the former, when given any attribute a ⊆ū X , will give the same sharp output 'u X '. The principle of consistency of measurement requires all subsidiary theories to have this property: IV. Whenever a measurer of a variable Z would deliver a sharp output when presented with an attribute a ⊆ū Z , all other measurers of Z would too. It follows [6] that they would all deliver the same sharp output. 
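The quantum example running through this subsection (and the next) can be verified numerically. The sketch below is a minimal illustration, assuming a qutrit with basis |0⟩, |1⟩, |2⟩ and X = {0, 1}: it builds the projector corresponding to ū_X = span{|0⟩, |1⟩} and confirms that a superposition of |0⟩ and |1⟩ lies in ū_X while not being in u_X, so that X itself is not sharp in it but {ū_X, 2} is sharp with value ū_X.

```python
import numpy as np

k0, k1, k2 = np.eye(3)            # qutrit basis |0>, |1>, |2>
psi = (k0 + k1) / np.sqrt(2)      # a superposition of |0> and |1>

P0, P1, P2 = (np.outer(k, k) for k in (k0, k1, k2))
P_span01 = P0 + P1                # projector for the attribute u_X-bar = span{|0>, |1>}

def is_sharp(projector, state, tol=1e-12):
    """For a pure state, an attribute (projector) is sharp iff the state lies in its subspace."""
    return abs(state.conj() @ projector @ state - 1.0) < tol

print(is_sharp(P0, psi), is_sharp(P1, psi))   # False False: X = {0, 1} is not sharp in psi
print(is_sharp(P_span01, psi))                # True: {u_X-bar, 2} is sharp with value u_X-bar
print(is_sharp(P2, psi))                      # False: psi is not in the |2> subspace
```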
(e) Observables As (from the definition of 'bar')x ≡x, attributes x with x =x have a useful property, whence the following constructor-theoretic generalization of quantum information observables: an information observable X is an information variable such that whenever a measurer of X delivers a sharp output 'x' the input substrate really has the attribute x. 14 A necessary and sufficient condition for a variable to be an observable is that x =x for all its attributes x. For example, the above-defined variable Z = {u X , 2} is not an observable (a Z-measurer delivers a sharp output 'u X ' even when presented with a state ξ ∈ū X \u X , where '\' denotes set exclusion), but {ū X , 2} is. (f) Superinformation media A superinformation medium S is an information medium with at least two information observables, X and Y, that contain only mutually disjoint attributes and whose union is not an information observable. Y and X are called complementary observables. For example, any pair of orthogonal states of a qubit constitutes an information observable, but no union of two or more such pairs does: its members are not all distinguishable. Superinformation theories are subsidiary theories obeying constructor theory and permitting superinformation media. From that simple property it follows that superinformation media exhibit all the most distinctive properties of quantum systems [6]. In particular, the attributes y in Y are the constructor-theoretic generalizations of what in quantum theory is called 'being in a superposition or mixture' of states in the complementary observable X. (g) Generalized mixtures Consider an attribute y ∈ Y and define the observable X y . = {x ∈ X : x ⊥y}. (In quantum theory X could be the photon number observable in some cavity, |1 1| + 2|2 2| + · · · , and y the attribute of being in some superposition or mixture of some of its eigenstates, e.g. (1/ √ 2)(|0 + |1 ). In that case X y would contain two attributes, namely those of being in the states |0 0| and |1 1|, respectively.) One proves [6] that: (1) X is non-sharp in y as x ∩ y = o, ∀x ∈ X (where ' o' denotes the empty set), and X y contains at least two attributes. (2) Some coarsenings of X are sharp in y, just as in quantum theory-where the state (1/ √ 2)(|0 + |1 ) is in the +1-eigenspace of the projector |0 0| + |1 1|. The observable {ū X y ,ū X y }, u X y = x∈X y x, is the constructor-theoretic generalization of such a projector, and it is sharp in y, with valueū X y . As in quantum theory, any measurer of X presented with y, followed by a computation whose output is «whether the outcome was one of the 'x' with x ⊥y», will provide a sharp output 'ū X y ', corresponding to «yes». (I adopt the convention that a 'quoted' attribute is the one that would be delivered by a measurement of the un-quoted one, with suitable labelling. Likewise for variables.) In summary: Quantum Theory Constructor Theory |y is an eigenstate of an observablê (1) and (2) as y does, as follows: Let 'X' y . = {'x' ∈ 'X': x ∈ X y }. (i) 'X' y is not sharp in b y . (If it were, with value 'x', that would imply, via the property of observables, that y ⊆ x, contrary to the defining property that , then y could be distinguished from x, contrary to assumption). (iii) b y ⊆ū 'X' y . For a measurer of {ū 'X' y ,ū 'X' y } applied to the target substrate of an X-measurer is also a measurer of {ū X y ,ū X y }; hence, when presented with y, it must deliver a sharp output 'ū 'X' y '. 
By the property of observables, {ū 'X' y ,ū 'X' y } must be sharp in b y , with valueū 'X' y . By the same argument, X y is not sharp in a y , i.e. a y ∩ x = o; also a y ⊥x, ∀x ∈ X y ; and {ū X y ,ū X y } is sharp in a y with valueū X y . (h) Intrinsic parts of attributes The attributes a y and b y are not intrinsic, for each depends on the history of interactions with other systems. (In quantum theory, S a and S b are entangled.) However, because of the principle of locality, given an information observable X, one can define the X-intrinsic part [a y ] X of the attribute a y as follows. Consider the attribute (a y , b y ) prepared by measuring X on system S a using some particular substrate S b as the target substrate. In each such preparation, S a will have the same intrinsic attribute [a y ] X , which I shall call the X-intrinsic part of a y , which is therefore the union of all the attributes preparable in that way. The same construction defines the 'X'-intrinsic part It follows, from the corresponding property of a y : that {ū X y ,ū X y } is sharp in [a y ] X with valuē u X y ; that [a y ] x ∩ x=φ; that [a y ] X ⊥x, ∀x ∈ X y . Similarly for the 'quoted' variables and attributes. In quantum theory, [a y ] X and [b y ] 'X' are attributes of having the reduced density matrices on S a and S b . Unlike in [30], they are not given any probabilistic interpretation. They are merely local descriptors of locally accessible information (defined deterministically in constructor theory [6]). (i) Successive measurements In unitary quantum theory, the consistency of measurement (see above) is the feature that when successive measurers ofX are applied to the same source initially in the state (α|0 + β|1 ), with the projector for «whether the two target substrates hold the same value» is sharp with value 1. In constructor theory, the generalization of that property is required to hold. Define a useful device, the X-comparer 16 C X . It is a constructor for the task of comparing two instances of a substrate in regard to an observable X defined on each: where The fact that a quantum C X would deliver a sharp «yes» if presented with the target substrates of successive measurements of an observable on the same source is what makes 'relative states' and 'universes' meaningful in Everettian quantum theory, because it makes the notion of 'observed outcome in a universe' meaningful even when the input variable X of the measurer is not sharp. The same holds in superinformation theories (figure 2). Unpredictability in superinformation media I can now define unpredictability exactly in constructor theory, and show how it arises in superinformation media. (a) X-predictor An X-predictor for the output of an X-measurer whose input attribute z is drawn from some variable Z (in short: 'X-predictor for Z') is a constructor for the task: where P = {p z } is an information observable whose attributes p z -each representing the prediction «the outcome of the X-measurer will be 'x' given the attribute z as input»-are required to satisfy the network of constructions in figure 3. B first prepares S a with the information attribute z ∈ Z specified by some information attribute s z ; then the X-measurer M X is applied to S a ; and then its target S b and the output of the predictor, p z , are presented to an 'X'-comparer C 'X' . If that delivers a sharp «yes», the prediction p z is confirmed. If it would be confirmed for all z ∈ Z, then P X is an X-predictor for Z. 
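For the quantum case, the claim that a comparer delivers a sharp «yes» on the targets (tapes) of two successive measurements of the same source, even when X is not sharp on that source, can be checked directly. The numpy sketch below is illustrative only: the comparer is represented simply by the projector onto "the two tapes hold the same value", applied to the state left by measuring a qubit source twice.

```python
import numpy as np

ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

def cnot(n, control, target):
    """CNOT acting on an n-qubit register (control and target are qubit indices)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = int("".join(map(str, bits)), 2)
        U[j, i] = 1.0
    return U

alpha, beta = 0.6, 0.8
psi = np.kron(alpha * ket[0] + beta * ket[1], np.kron(ket[0], ket[0]))  # source, tape1, tape2
psi = cnot(3, 0, 2) @ cnot(3, 0, 1) @ psi   # two successive measurements -> alpha|000> + beta|111>

# Projector onto "the two tapes hold the same value" (acting on qubits 1 and 2)
same = sum(np.kron(np.outer(ket[b], ket[b]), np.outer(ket[b], ket[b])) for b in (0, 1))
P_yes = np.kron(np.eye(2), same)

print(np.allclose(P_yes @ psi, psi))   # True: the comparer would output a sharp «yes»
```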
The exact definition of unpredictability is then: A substrate exhibits unpredictability if, for some observable X, there is a variable Z such that an X-predictor for Z is impossible. Hence unpredictability is the impossibility of an X-predictor for a variable Z. Note the similarity to 'no-cloning', i.e. the impossibility of a constructor for the cloning task (3.2) on the variable Z. (b) No-cloning implies unpredictability Indeed, I shall now show that superinformation theories (and thus unitary quantum theory) exhibit unpredictability as a consequence of the impossibility of cloning certain sets of attributes. Consider two complementary observables X and Y of a superinformation medium and define the variable Z = X y ∪ {y}. I show that there cannot be an X-predictor for Z. For suppose there were. The predictor's output information observable P would have to include the observable 'X' y . For, if z = x for some x ∈ X y , 'X' y has to be sharp on the target S b of the measurer with value 'x'; so the 'X'-comparer yielding a sharp «yes» would require p x = 'x'. (See §3: C 'X' is a measurer of {'x', 'x'} when 'X' is sharp on one of its sources with value 'x'.) When z = y, the X-predictor's output attribute p y must still cause the X-comparer to output the sharp outcome «yes»; also, P = 'X' y ∪ {p y } is required to be an information variable: hence either p y = 'x' for some 'x' ∈ 'X' y ; or 'x' ∈ū 'X' y . In the former case, again by considering C 'X' as a measurer of {'x', 'x'}, 'X' y would have to be sharp on the target S b of the X-measurer, with the value 'x'; whence y ⊆ x, contrary to the definition of superinformation. In the latter, as y ⊂ū X y , S b would have the attributeū 'X' y ( §3) so that the 'X'-comparer would have to output a sharp «no», again contradicting the assumptions. So, there cannot exist an X-predictor for Z, just as there cannot be a cloner for Z, because Z is not an information variable. Thus, unpredictability is predicted by the superinformation theory's deterministic 17 laws. Its physical explanation is given by the subsidiary theory. In Everettian quantum theory, it is that there are different 'observed outcomes' across the multiverse. But constructor theory has emancipated unpredictability from 'observers', 'relative states' and 'universes', stating it as a qualitative information-theoretic property, just as no-cloning is. X-indistinguishability equivalence classes Quantum systems exhibit the appearance of stochasticity, which is more than mere unpredictability. Consider a quantum observableX of a d-dimensional system S, with eigenstates |x and eigenvalues x. Successive measurements ofX on N instances of S, each identically prepared in a superposition or mixture ρ ofX-eigenstates, display the following convergence property: (i) for large N, the fraction of replicas delivering the observed outcome 18 'x' whenX is measured can be expected not to differ significantly (according to some a priori fixed statistical test) from the number Tr{ρ|x x|}; (ii) in an ensemble (infinite collection) of such replicas, each prepared in state ρ (a 'ρ-ensemble', for brevity), the fraction of instances that would give rise to an observed outcome 'x' equals Tr{ρ|x x|} [31]. But what justifies the expectation in (i)? A frequentist approach to probability would simply postulate that the number Tr{ρ|x x|} from (ii) is the 'probability' of the outcome x whenX is measured on ρ-which would imply (via ad hoc methodological rules; e.g. 
[18,32]) that Tr{ρ|x x|} could inform decisions about finitely many measurements. By contrast, in unitary quantum theory, it is the decision-theoretic approach that establishes the same conclusionwithout probabilistic assumptions. Absent that argument, the numbers Tr{ρ|x x|} are just labels of equivalence classes within the set of superpositions and mixtures of the states {|x }. I shall now give sufficient conditions on superinformation theories for a generalization of those equivalence classes, which I shall call X-indistinguishability classes, to exist on the set of all generalized mixtures of attributes of a given observable X. One of the conditions for a superinformation theory to support the decision-theoretic argument will be that they allow such classes ( §6). In quantum theory, the equivalence classes are labelled by the d-tuple [f x ] x∈X , where f x = Tr{ρ|x x|}. As the 'trace' operator need not be available in superinformation theories, to construct such equivalence classes I shall deploy fictitious ensembles. This is a novel mathematical construction; properties of such ensembles will be used to define properties of single systems without, of course, any probabilistic or frequentist interpretation. At this stage, the f x are only labels of equivalence classes. Additional conditions will therefore be needed for the f x to inform decisions in the way that probabilities are assumed to do in stochastic theories (including traditional quantum theory via the Born Rule). I shall give these in §7 via (a) X-indistinguishability classes I denote by S (N) a substrate N instances S ⊕ S ⊕ · · · S, consisting of N replicas of a substrate S. Let us fix an observable X of S, whose attributes I suppose with no relevant loss of generality to be labelled by integers: . , x N ): x i ∈ X} be the set of strings of length N whose digits can take values in X, each denoted by s . = (s 1 , s 2 , s 3 , . . . , s N ): s i ∈ X. X (N) = {s : s ∈ X (N) } is an observable of S (N) . In quantum theory, supposing thatX is an observable of a d-dimensional system S, X (N) might beX (N) =X 1 + dX 2 + d 2X 3 + · · · + d N−1X N , whose non-degenerate eigenstates are the strings of length N: |s . = |s 1 |s 2 · · · |s N |s i ∈ X. Fix an N. For any attribute x in X, I define a constructor D x to define attributes of the substrate S (N) , whose limit for N → ∞ will be used to define the X-indistinguishability classes. Consider the observable X It follows from the consistency of measurement ( §3) that: Therefore, crucially, for a given x, F(x) (N) can be sharp even if the observable X (N) is not. In the example above, for N = 3 and x = 0, 0 2/3 ∈ F(0) (3) is the set of all superpositions and mixtures of the eigenstates ofX (3) contained in X 0,2/3 , {|001 , |010 , |100 }:X (3) is not sharp in most such mixtures and superpositions. The observable F(x) (N) is key to generalizing quantum theory's convergence property, for the latter is due to the fact that there exists the limit of the sequence of attributes x f i (N) for N → ∞. Let me now recall the formal expression of the convergence property in quantum theory [31]. 19 Consider a state |z = x∈X c x |x with the property that |z ⊗N = c s 1 c s 2 · · · c s N |s is a superposition of states |s = |s 1 |s 2 · · · |s N |s i ∈ X each having a different f (x; s) = f i (N) . The convergence property is that for any positive, arbitrarily small ε: it answers «yes». 
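The convergence property can be illustrated numerically without any sampling: for |z⟩ = c_0|0⟩ + c_1|1⟩ one can sum, exactly, the squared amplitude carried by the strings |s⟩ in |z⟩^⊗N whose relative frequency of the digit 0 deviates from |c_0|² by more than ε, and watch it vanish as N grows. The sketch below is a minimal check of this; the values of f_0 and ε are arbitrary illustrative choices.

```python
from math import lgamma, log, exp

f0, eps = 0.36, 0.05      # f0 = |c0|^2; eps = width of the tolerance band (illustrative values)

def log_weight(N, k, p):
    """log of C(N,k) * p^k * (1-p)^(N-k): the squared amplitude carried by the
    strings of length N containing exactly k digits equal to 0."""
    return (lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)
            + k * log(p) + (N - k) * log(1 - p))

def weight_outside(N, p, eps):
    """Total squared amplitude of strings whose frequency of 0 lies outside the eps-band."""
    return sum(exp(log_weight(N, k, p))
               for k in range(N + 1) if abs(k / N - p) > eps)

for N in (10, 100, 1000, 10000):
    print(N, weight_outside(N, f0, eps))
# The weight outside the band shrinks towards zero as N grows: almost all the
# squared amplitude concentrates on strings with frequency close to f0.
```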
Thus, the proportion of instances delivering the observed outcome 'x' when X is measured on the ensemble state |z ∞ = lim N→∞ |z ⊗N is equal to Tr{ρ z |x x|} = |c x | 2 (where ρ z . = |z z|). Therefore, all states |z ⊗N with the property Tr{ρ z |x x|} = |c x | 2 will be grouped by x in the same set, as N → ∞, which can thus be labelled by Tr{ρ z |x x|}. This set is an attribute of a single system, containing all quantum states with the property that Tr{ρ z |x x|} = |c x | 2 = f x . The set of all superpositions and mixtures of eigenstates of X is thus partitioned equivalence classes, labelled by the d-tuple A sufficient condition on a superinformation theory for a generalization of these X-indistinguishability classes to exist under it, on the set of all generalized mixtures of attributes of a given observable X, is that it satisfies the following requirements: (N) , there exists the attribute of the ensemble of replicas of S defined as Φ is the limiting set of Φ (N) -which must exist, and its elements, which are real numbers, must have the property f ∞ ∈Φ f ∞ = 1, 0 ≤ f ∞ ≤ 1. The existence of the limit implies that those attributes do not intersect, i.e. the set F(x) (∞) = {x f ∞ : f ∞ ∈ Φ} is a (formal) variable of the ensemble-a limiting case of F(x) (N) , generalizing its quantum analogue. Given an attribute z of S, define z (N) . = N terms (z, z, . . . z) (in quantum theory, this is the attribute of being in the quantum state |z ⊗N ); introduce the auxiliary variable x∈X the X-partition of unity for the attribute z. 20 If z has such a partition of unity, it must be unique because of (E1). An X-indistinguishability equivalence class is defined as the set of all attributes with the same X-partition of unity: any two attributes within that class cannot be distinguished by measuring only the observable X on each individual substrate, even in the limit of an infinite ensemble. In quantum theory, x f contains all states ρ with Tr{ρ|x x|} = f . A superinformation theory admits X-partitions of unity (on the set of generalized mixtures of attributes in X) if conditions (E1) and (E2) are satisfied. A key innovation of this paper is showing how the mathematical construction of an abstract infinite ensemble (culminating in the property (E1)) can define structure on individual systems (via property (E2))-the attributes x f and the X-partition of unity-without recourse to the frequency interpretation of probability or any other probabilistic assumption. (b) X-partition of unity of the X-intrinsic part Consider now the attribute of being in the quantum state |z = c 0 |0 + e iφ c 1 |1 whose X-partition of unity is [|c 0 | 2 , |c 1 | 2 ]. In quantum theory, the reduced density matrices of the source and target substrate as delivered by an X-measurer acting on |z still have the same partition of unity. The same holds in constructor theory. Consider the X-intrinsic part [a z ] X ( §4) of the attribute a z 20 The existence of x f ∞ does not require there to be any corresponding attribute of the single system for finite N: x , prepending a measurer of X to each of the input substrates of D (N) x will still give a D (N) x , with the same labellings. Thus, the construction that would classify z as being in a certain X-partition of unity can be reinterpreted as providing a classification of [a z ] X , under the same labellings: the two classifications must coincide. If y has a given X-partition of unity, the X-intrinsic part [a z ] X of a z must have the same one. 
Likewise for the intrinsic part [b z ] X of b z (obtained as the output attribute on the target of the X-measurement applied to y): the 'X'-partition of unity of [b z ] X is numerically the same as the X-partition of unity of z. Conditions for decision-supporting superinformation theories For a given observable X and generalized mixture z of attributes in X, the labels [f (z) x ] x∈X of the X-partition of unity defined in §5 are not probabilities. Even though they are numbers between 0 and 1, summing to unity, they need not satisfy other axioms of the probability calculus: for instance, in quantum interference experiments they do not obey the axiom of additivity of probabilities of mutually exclusive events [33]. It is the decision-theory argument ( §7) that explains under what circumstances the numbers [f (z) x ] x∈X can inform decisions in experiments on finitely many instances as if they were probabilities. I shall now establish sufficient conditions on superinformation theories to support the decision-theory argument, thus characterizing decision-supporting superinformation theories. I shall introduce one of the conditions via the special case of quantum theory. Consider the x-and y-components of a qubit spin,X andŶ. There exist eigenstates ofX, i.e. |x 1 , |x 2 , and of Y, i.e. |y ± = (1/ √ 2)[|x 1 ± |x 2 ] that are 'equally weighted', respectively, in the x-and y-basisin other words, |x 1 , |x 2 are invariant under the action of a unitary that swaps |y + with |y − ; and |y ± are invariant under a unitary that swaps |x 1 with |x 2 . Moreover, there exist quantum states on the composite system of two qubits, which are likewise 'equally weighted' and have the special property that [|y + |y + ± |y − |y − ]. (6.1) I shall now require that the analogous property holds in superinformation theories. While in quantum theory it is straightforward to express this via the powerful tools of linear superpositions, in constructor theory expressing the same conditions will require careful definition in terms of 'generalized mixtures'. The conditions for decision-supporting information theories is that there exist two complementary information observables X and Y such that: (T1) The theory admits X-partitions of unity (on the information attributes of S that are generalized mixtures of attributes of X) and X a + X b -partitions of unity (on the information attributes of the substrate S a ⊕ S b that are generalized mixtures of the attributes in the observable X a + X b ). (2) There exist attributes x 1 , x 2 ∈ X that are generalized mixtures of the attributes {y + , y − }; and attributes y + , y − ∈ Y that are generalized mixtures of attributes {x 1 , x 2 } with the property that: There exists an attribute q that is a generalized mixture of attributes in S x = {(x 1 , x 1 ), (x 2 , x 2 )} and a generalized mixture of attributes in S y = {(y + , y + ), (y − , y − )}, such that S x 1 x 1 ,x 2 x 2 (q) ⊆ q, S y + y + ,y − y − (q) ⊆ q. In quantum theory, y ± are two distinguishable equally-weighted quantum superpositions or mixtures of the attributes in X y , such as |y ± . Similarly for |x 1 , |x 2 . The principle of consistency of measurement ( §3) implies that if an attribute y has an X-partition of unity with element f (y) x , for any permutation Π on X, Π (y) has X-partition of unity such that f (Π (y)) x = f (y) Π(x) . 
This is because presenting Π (y) to D (N) x (defined in §5) is equivalent to presenting y to the constructor (measurer) obtained prepending the computation Π to D (N) Let the attributes [a y± ] X ⊇ a y ± be the X-intrinsic parts of the attributes a y ± , where (a y ± , b y ± ) is the attribute of S a ⊕ S b prepared by measuring X on S a holding the attribute y ± , with S b as a target. R3. [a y+ This requirement is satisfied in quantum theory: a measurer of the observableX acting on a substrate S a in the state |y ± generates the states: whose reduced density operators on S a are the same, (1/2)[|x 1 x 1 | + |x 2 x 2 |], and are still 'equally weighted', thus invariant under swapping |x 1 and |x 2 . Consider now the observables of S ⊕ S: R4. There exists an attribute q that is both a generalized mixture of attributes in S x and a generalized mixture of attributes in S y : with the property that: S x 1 x 1 ,x 2 x 2 (q) ⊆ q and S y + y + ,y − y − (q) ⊆ q, (6.2) where the swap S on a pair of substrates acts in parallel in each separately. In quantum theory, q is the attribute See table 1 for a summary, and footnote 26 in appendix A.) They are satisfied by quantum theory, as I said, via the existence of states such as those for which (6.1) holds. Games and decisions with superinformation media The key step in the decision-theoretic approach in quantum theory is to model the physical processes displaying the appearance of stochasticity as games of chance, played with equipment obeying unitary (hence non-probabilistic) quantum theory. In the language of the partition of unity, applied to quantum theory, the game is informally characterized as follows. An X-measurer is applied with an input attribute y whose X-partition of unity is [f (y) x 1 , . . . f (y) x d ]. The gamedevice is such that, if the observable X is sharp in y with value x (so, under quantum theory, y is an eigenvalue-x eigenstate ofX), the reward is 'x' (in some currency); otherwise unpredictability arises ( §4). The player may pay for the privilege of playing the game, knowing y, the rules of the game and unitary quantum theory (i.e. a full description of the physical situation). One proves that a player satisfying the same non-probabilistic axioms of rationality as in classical decision theory [35], but without assuming the Born Rule or any other rule referring to probability, would place bets as if he had assumed both the identification between the f x and the Born-Rule probabilities, and the ad hoc methodological rule connecting probabilities with decisions [18,32]. I shall now recast the decision-theoretic approach in constructor theory. The key differences from previous versions are as follows. (i) They involved notions of observed outcomes and relative states, which are powerful tools in Everettian quantum theory. In constructor theory, instead, none of those will be relied upon. (ii) Some axioms that were previously considered decision-theoretic follow from properties of information media and measurements, as expressed in the constructor theory of information. (a) Games of chance From now on, I shall assume that we are dealing with a decision-supporting superinformation theory-i.e. a superinformation theory satisfying the conditions given in §6, with complementary observables X and Y. For simplicity, I shall assume X and Y to be real-valued, whereby X and Y are the set of real numbers. I shall require a slightly more detailed model of a 'game of chance', whose centrepiece is an X-adder. 
It is defined as the constructor Σ X performing the following computation on an information medium S ⊕ S p ⊕ S p : x∈X,p∈X p {(x, p, 0) → (x, p, x + p)}. (7.1) An X-adder is a measurer of the observable X + X p of S ⊕ S p labelled so that, when X is sharp in input with value x and X p is sharp in input with value 0, X p is sharp on the output pay-off substrate with value x. For any fixed p, this is also a measurer (as defined in (3.3)) of the observable X on S. 21 In quantum theory, it is realized by a unitary U : |x |p |0 → |x |p |p + x , ∀x, p. An X-game 22 of chance G X (z) is a construction defined as follows: (G1) The game substrate S (e.g. a die), with game observable X = {x : x ∈ X}, is prepared with some legitimate game attribute z, defined as any information attribute that admits an X-partition of unity (where X may be non-sharp in it) . (G2) S is presented to an adder together with two other pay-off substrates S p , representing the player's records of the winnings, with pay-off observable X p = {x : x ∈ X}. The first instance of S p (the input pay-off substrate) contains a record of the initial (pre-game) assets, in some x' Figure 5. The composition of two games G X (z) G X (z ) (top) is the game G X a + X b ((z, z )) (bottom). units; the other instance (the output pay-off substrate), initially at 0, contains the record of the winnings at the end of the game and it is set during the game to its pay-off under the action of the adder. Keeping track of both of those records is an artefact of my model, superfluous in real life, but making it easier to analyse composite games. (G3) The composition of two games of chance G X (z) G X (z ), with game substrates S a and S b , is defined as the construction where the output pay-off substrate of G X (z) is the input payoff substrate of G X (z ) (figure 5, top). In other words, the composite game G X (z) G X (z ) is an (X a + X b )-adder realized by measuring the observable X a + X p and the observable X b + X p separately. Thus, the composition of two games G X (z) G X (z ) is the game G X a +X b ((z, z )) with game substrate S a ⊕ S b (figure 5, bottom). (b) The player I model the player for G X (z) as a programmable constructor (or automaton) Γ whose legitimate inputs are: the specification of the game attribute z, the (deterministic) rules of the game (with the game observable X) and the subsidiary theory. Its program must also satisfy the following axioms: A1. Ordering. Given z and z', the automaton orders any two games G X (z) and G X (z )-the ordering is transitive and total. In this constructor-theoretic version of the decision-theory argument, this is the only classical decision-theoretic axiom required of the automaton. It corresponds to the transitivity of preferences in [1,8]. Its effect is to require Γ to be a constructor for the task of providing a real number V{G X (z)} ∈ X-the value of the game. Specifically, I define the value of the game G X (z), V{G X (z)}, as the unique v z ∈ X with the property that Γ is indifferent between playing the game G X (z) and the game G X (v z ). As the reader may guess, the key will turn out to be that attributes with the same partition of unity have the same value. A2. Game of chance. 
The only observables allowed to condition 23 the automaton's output are: (i) the observable for whether the rules of the game are followed; (ii) the observable D X p , defined as the difference between the observable X p of the first reward substrate before the game and the observable X p of the second reward substrate after the game, as predicted by the automaton (given the specification of the input attribute and the subsidiary theory Thus, whether other observables than the ones of axiom 2 may be sharp is irrelevant to the automaton's output. Otherwise, those observables would have to be mentioned in the program to condition the automaton's output (thus specifying a player for a different game). This fact shall be repeatedly used in §7c. In classical decision theory, any monotonic rescaling of the utility function causes no change in choices; so, without losing generality, I shall assume that, whenever the observable D X p is predicted by the automaton to be sharp on the pay-off substrate with value x, the automaton outputs a substrate holding a sharp v x = x. (c) Properties of the value function As I promised, axiom 1 and 2 imply crucial properties, which were construed as independent decision-theoretic axioms in earlier treatments, but here follow from the other axioms and the principles of constructor theory, under decision-supporting superinformation theories. Namely: (P1) Γ 's preferences must be constant in time. Otherwise, the observable 'elapsed time' would have to condition the program, violating axiom A2. 25 (P2) Substitutability of games [8,35]. The value of a game G X (z) in isolation must be the same as when composed with another game G X (z ). Otherwise, again, the automaton's program would have to be conditioned on the observable 'what games G X (z) is composed with', violating axiom A2. (P3) Additivity of composition. Setting V{G X (z)} = v z and V{G X (z )} = v z , P2 implies: 2) (P4) Measurement neutrality [10]. Games where the X-adder (under the given labellings) has physically different implementations have the same value-otherwise the observable 'which physical implementation' would have to be included in the automaton's program, violating axiom A2. Measurement neutrality, in turn, implies the following additional properties: a. The game G X (Π (z))-where the computation Π (for any permutation Π over X) acts on S immediately before the adder (figure 6)-is a Π (X)-adder, where Π (X) denotes the re-labelling of X given by Π . Hence, it can be regarded as a particular physical implementation of the game G Π(X) (z). Therefore, measurement neutrality implies: ((z a , z b )) (figure 5), by measurement neutrality the values of games on composite substrates must equal that of composite games: (The last step follows from (7.2).) c. Let [a y ] X and [b y ] X be the X-intrinsic parts [a y ] X ⊇ a y and [b y ] X ⊇ b y of the attributes a y and b y ( §4), where (a y , b y ) is the attribute of S a ⊕ S b prepared by measuring X on S a in the attribute y, with S b as a target. By prepending a measurer of X to the Xadder in the game G X (y), so that the measurer's source is the input of the X-adder (figure 7, bottom right), one still has the same X-adder. Thus, the X-game G X ([a y ] X ) (figure 7, top) has the same physical implementation as the X-game G X (y) (figure 7, bottom left). 
Similarly, by concatenating a measurer of X to an 'X'-adder so that the former's target is the game substrate of the latter, one obtains an X-adder overall, whereby the 'X'-game G 'X' ([b y ] X ) has the same physical implementation as the Xgame G X (y). The same applies if one considers G Π(X) (y), for any permutation Π over X. Measurement neutrality implies: d. Shift rule [8]. Define the uniform shift as the permutation T k = x∈X {x → x + k}, for real k. The game G X a +X b (z, k) (figure 8, right) is a particular implementation of the game G T k (X) (z) ( figure 8, left), where the T k (X)-adder is realized by measuring X a + X b on S a ⊕ S b in the attribute (z, k), for a fixed k ∈ X. By measurement neutrality and (7.4): The 'equal value' property. In quantum theory, a superposition or mixture of orthogonal quantum states, such that X-games played with them have the same value v, has value v [8]. A generalization of this property holds under decision-supporting superinformation theories. A crucial difference from the argument in [8] is that here there is no need to invoke observed outcomes or relative states to prove this property. having an X-partition of unity, with the property that q is a generalized mixture of Hwhereby q ⊆ū H . I shall now prove that the X-game with game attribute q also has value v: V{G X (q)} = v. In the trivial case that q = h 1 or q = h 2 , V{G X (q)} = v. Consider the case where, instead, q is a non-trivial generalized mixture of H = {h 1 , h 2 }: q ⊆ū H , q ∩ h i = o, q ⊥ h i , ∀i = 1, 2. In quantum theory, q is the attribute of being in a superposition or mixture of distinguishable quantum states |h 1 , |h 1 : h 1 |h 2 = 0-e.g. the state |q = (α|h 1 + β|h 2 ) for complex α, β. I shall show that G X (q) is equivalent to another game with value v, obtained by applying the following procedure on the game substrate S a . First, measure H on S a holding the attribute q, with an ancillary system S b as a target of the H-measurer. This delivers the substrate S a ⊕ S b in the attribute (a q , b q ). By the properties of the intrinsic part ( §3), [a q ] X still is a generalized mixture of the attributes in H: for any Π over X. Hence, by the additivity of composition (property (P3)), it also follows that: 7) for any permutations Π , Π over X. Consider the game G R(X) a +X b (q) where q satisfies conditions R4 ( §6) and the reflection R is the permutation over X defined by R = x∈X {x → −x}. The adder in the game is a measurer of R(X) a + X b , which, in turn, is an X-comparer (as defined in (3.4)) on the substrates S a , S b : it measures the observable «whether the two substrates hold the same value x», where the output 0 corresponds to «yes». By the consistency of successive measurements of X on the same substrate with attribute y ( §3), the output variable of that measurer is sharp with value 0 when presented with an attribute which, like q, has the property that q ⊆ū S x -where S x . = {(x 1 , x 1 ), (x 2 , x 2 )}. In quantum theory, as I said, the attribute q is that of being in the quantum state (1/ √ 2)[|x 1 |x 1 + |x 2 |x 2 ], which is a 0-eigenstate of the observable −X a +X b . Thus, by definition of value, the value of G R(X) a +X b (q) must be 0. On the other hand, as q ⊆ū S y , where S y . = {(y + , y + ), (y − , y − )}, and, by (7.7), both G R(X) a +X b ((y − , y − )) and G R(X) a +X b ((y + , y + )) have the same value, by the 'equal-value' property (e), one has: V{G R(X) a +X b (q)} = V{G R(X) a +X b ((y + , y + ))}. 
Hence: 0 = V{G R(X) a +X b (q)} = V{G R(X) a +X b ((y + , y + ))} = V{G R(X) (y + ) G X (y + )}, where the last step follows by (7.4). Finally, one obtains, as promised: V{G R(X) (y + )} = −V{G X (y + )}. thereby showing that a program satisfying the axioms in §7.1 is possible, under decisionsupporting superinformation theories. The automaton with that program must value the game using the f (y) x as if they were the probabilities of outcomes, without assuming (or concluding!) that they are. It places bets in the same way as a corresponding automaton would, if programmed with the same axioms of classical decision theory plus additional, ad hoc probabilistic axioms connecting probabilities with decisions (e.g. that the 'prudentially best option' maximizes the expected value of the gain [32]) and some stochastic theory with the f (y) x as probabilities of outcomes x (e.g. quantum theory with the Born Rule). My argument broadly follows the logic of Deutsch's or Wallace's formulation; however, individual steps rest on different constructor-theoretic conditions and do not use concepts specific to quantum theory or the Everett interpretation-e.g. subspaces, observed outcomes, relative states, universes and instances of the player in 'universes' or 'branches of the multiverse'. By comparison with (7.11): V{G X (y + )} = (x 1 + x 2 ) 2 , (7.12) which is a special case of (7.10) when the game attribute y + has an X-partition of unity whose non-null elements are f (y + ) x 1 = f (y + ) x 2 = 1/2. This, the central result proved by Deutsch and Wallace, is now proved from the physical, constructor-theoretic properties of measurements. The general, 'unequal weights', case follows by an argument analogous to [8], with some conceptual differences (see appendix A). What elements of reality the elements of a partition of unity in (7.10) represent depends on the subsidiary theory in question: it is not up to constructor theory (nor the decision-theory argument) to explain that. The decision-theoretic argument only shows that decisions can be made, when informed by a decision-supporting superinformation theory, under a non-probabilistic subset of the classical axioms of rationality, as if it were a stochastic theory with probabilities given by the elements in the partition of unity. In particular, in unitary quantum theory that argument is not (as it is often described) a 'derivation of the Born Rule', but an explanation of why, without the Born Rule, in the situations where it would apply, one must use the moduli squared of the amplitudes (and only those) to inform decisions. I have shown that the equivalent holds for any constructortheory-compliant subsidiary theory. Thus, such theories can, like unitary quantum theory, account for the appearance of stochastic behaviour without appealing to any stochastic law. Conclusion I have reformulated the problem of reconciling unitary quantum theory, unpredictability and the appearance of stochasticity in quantum systems, within the constructor theory of information, where unitary quantum theory is a particular superinformation theory-a non-probabilistic theory obeying constructor-theoretic principles. I have provided an exact criterion for unpredictability, and have shown that superinformation theories (including unitary quantum theory) satisfy it, and that unpredictability follows from the impossibility of cloning certain sets of states, and is compatible with deterministic laws. This distinguishes it from randomness. 
I have exhibited conditions under which superinformation theories can inform decisions in games of chance, as if they were stochastic theories-by giving conditions for decision-supporting superinformation theories. To this end, I have generalized to constructor theory the Deutsch-Wallace decision-theoretic approach, which shows how unitary quantum theory can inform decisions in those games as if the Born Rule were assumed. My approach improves upon that one in that (i) its axioms, formulated in constructorinformation-theoretic terms only, make no use of concepts specific to Everettian quantum theory, thus broadening the domain of applicability of the approach to decision-supporting superinformation theories; (ii) it shows that some assumptions that were previously considered as purely decision-theoretic, and thus criticized for being 'subjective' (namely, measurement neutrality, diachronic consistency, the zero-sum rule), follow from physical properties of superinformation media, measurers and adders, as required by the principles of constructor theory. So the axioms of the decision-theory approach turn out not to be particular, ad hoc axioms necessary for quantum theory only, as it was previously thought; instead, they are either physical, information-theoretic requirements (such as the principle of consistency of measurement, interoperability and the conditions for decision-supporting superinformation theories) or general methodological rules of scientific methodology (such as transitivity of preferences) required by general theory testing. Deutsch [18] has shown how all decision-supporting superinformation theories (as defined in this paper) are testable in regard to their statements about repeated unpredictable measurements. This paper and Deutsch's, taken together, imply that it is possible to regard the set of decisionsupporting superinformation theories as a set of theoretical possibilities for a local, non-probabilistic generalization of quantum theory (alternative to, for example, 'generalized probabilistic theories' [34]), thus providing a new framework where the successor of quantum theory may be sought.
17,382.4
2015-07-12T00:00:00.000
[ "Mathematics" ]
Toward Universal Neural Interfaces for Daily Use: Decoding the Neural Drive to Muscles Generalises Highly Accurate Finger Task Identification Across Humans Peripheral neural signals can be used to estimate movement-specific muscle activation patterns for the purpose of human-machine interfacing (HMI). The available HMI solutions, however, provide limited movement decoding accuracy that often results in inadequate device control, especially in the dynamic tasks context, and require extensive algorithm training that is highly subject-specific. Here, we show that dexterous movements can be identified with high accuracy using a physiology-derived and information-theoretically optimised feature space that targets the spatio-temporal properties of the spiking activity of spinal motor neurons (neural features), decomposed from the interference myoelectric signal. Moreover, we show that the movement decoding accuracy based on these neural features is not influenced by the muscle activation level, reaching overall >98% in the full range of forces investigated and from processing intervals as short as 30-ms. Finally, we show that the high accuracy in individual finger movement recognition can be achieved without user-specific models. These results are the first to show a highly accurate discrimination of dexterous movement tasks in a wide range of muscle activation levels from near-real time processing intervals, with minimal subject-specific training, and thus are promising for the translation of HMI to daily use. I. INTRODUCTION A fundamental role of the human nervous system is to generate goal-oriented behaviour. Behaviour, such as walking, eye-gazing or grasping, is expressed in the form of a controlled movement [1], [2]. In times of a rapid development of human-machine interfaces (HMIs) and common access to smart devices, considerable efforts are being made towards identifying neural correlates of movement intention [3]- [6] and movement execution [6]- [8]. The decoding of basic kinetic [9] and semantic [10] movement components, such as the amount of force exerted by a muscle or the actuation of a specific degree of freedom (DOF), is at the core of HMIs, with applications for healthy users as well as patients with motor impairments. Movement-decoding paradigms have the potential to replace the currently available, indirect interfacing that makes use of a 'medium' device, such as a switch or a keyboard, in order to perform an operation, and introduce direct machine control, with which the user can operate a device using different features and levels of muscle excitation [11]- [13]. The direct interfacing not only can change the way we use smart devices recreationally, but it can also bring numerous clinical perks, such as novel rehabilitation strategies and improved neuroprosthetic control. However, fulfilling the demand for deconstructing the movement into its basic spatio-temporal components remains challenging [14]- [17]. Some attempts for non-invasive HMIs rely on electroencephalographic signals (EEG) recorded from the central nervous system (CNS) for translating the movement intention into commands to external devices [14], [18], [19].
These solutions provide insufficient spatio-temporal resolution of movement correlates, and thus yield limited accuracy and robustness. The CNS coordinates voluntary movements through the activity of spinal motor neurons that innervate the muscle tissue which, when excited, contracts to generate force [20]. An approach alternative to EEG for neural decoding of movements is thus based on peripheral decoding via the identification of the behaviour of spinal motor neurons [21]. Decoding can be achieved with the decomposition of non-invasive, high-density EMG signals (HD-EMG) [22], [23]. HD-EMG, which record muscles' activity from several closely spaced electrodes over the skin, can be decomposed into motor unit action potentials that correspond to the discharge timings of the innervating motor neurons (i.e., the neural drive to the muscle) [24]. The decomposition can be performed with online implementations and therefore is suitable for HMIs [25], [26]. While motor neuron discharge timings have been previously exploited for movement recognition purposes in classic machine learning paradigms [21], [27]- [29], there have been no attempts to extract physiology-inspired characteristics from decoded motor neuron behaviour to estimate user motion. By investigating the physiological features leading to the movement generation, we aim to obtain a truly robust direct interfacing that could substantially increase the translational potential of movement decoding methods. In this paper, we extract features from decoded motor neuron behaviour that physiologically explain force generation as a combination of recruitment and rate coding of motor units [30], [31]. We present accurate tracking of movement phases in finger control, based on information on the behaviour of motor neurons identified from the decomposed HD-EMG signal. Moreover, we propose that this neural information conveys universal movement execution code in humans, so that the identification of movement from peripheral neural correlates is possible without user-specific models. By demonstrating that a fully non-invasive derivation of neural correlates of movement is feasible and demonstrating the proof of concept of the peripheral neural control universality in humans, we provide a significant step forward towards a better integration of HMI systems in our daily living. B. HD-EMG DECOMPOSITION The HD-EMG signals were resolved into series of discharge timings of active motor neurons following the Convolution Kernel Compensation (CKC) method [32]. The CKC algorithm is based on a convolutive data model of surface myoelectric signals in which the multi-channel EMG signal is convolved in the spatio-temporal domain by estimates of separation vectors for individual motor units. Activity of motor neurons is modelled by series of discrete delta functions that correspond to their respective discharge times. The CKC approach was chosen since it is proved to assure a good approximation of complete Motor Unit (MU) discharge patterns during low-level force-varying contractions. The decomposition results were held to the standards of signal-based metrics of accuracy -the pulse-to-noise ratio (PNR), assuring that only the MUs decomposed with PNR > 30dB were analysed. C. FEATURE SPACE To describe the global myoelectric activity, we calculated the Root Mean Square (RMS) of each channel of the HD-EMG recording. RMS is an estimator of EMG amplitude and provides information on the muscle activation level. 
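As a concrete illustration of the RMS feature just described, the short sketch below computes the per-channel RMS of an HD-EMG array in non-overlapping processing windows. The function name and the sampling rate are illustrative assumptions and not details taken from the paper; the 30-ms default window matches the processing interval used for feature extraction later in the text.

```python
import numpy as np

def rms_features(emg, fs=2048, win_ms=30):
    """Per-channel RMS of an HD-EMG array in non-overlapping windows.

    emg    : array of shape (n_channels, n_samples) with the interference EMG.
    fs     : sampling rate in Hz (assumed value, for illustration only).
    win_ms : processing-window length in milliseconds (30 ms in the paper).
    Returns an (n_channels, n_windows) matrix of RMS amplitudes, i.e. the
    global activation feature to which the electrode ranking and the 70%
    activation threshold described next can be applied."""
    win = int(fs * win_ms / 1000)
    n_win = emg.shape[1] // win
    windowed = emg[:, :n_win * win].reshape(emg.shape[0], n_win, win)
    return np.sqrt(np.mean(windowed ** 2, axis=2))
```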
To characterise the neural activity, we estimated pooled motor neuron spike trains based on the number of spikes (spike count -SC), the standard deviation of the times of motor neuron discharges (STD), and the average inter-spike-interval (ISI) measures. This set of variables targeted both peripheral and central properties of the active motor units, with SC being directly proportional to the force exerted by the muscle during a contraction, and STD and ISI representing spike train metrics that quantify the temporal and topological similarities and dissimilarities of all-or-none neural cortical events [33]. The feature space was participant-specific and contained information from n EMG sensors that measured the muscle activity over the time interval T . T was processed in uni-form k-size windows. In this paper, the size of k was set to 30-ms (or 30-and 150-ms, respectively, for two-conditional interference EMG signal processing), representing a trade-off between the real-time processing of myoelectric signals and an accurate phase-and-DOF discrimination. The final structure of the feature matrix was three-dimensional (n × m × l), comprising n rows corresponding to EMG electrodes (n = 64), m columns corresponding to the processing windows (where m = T k ), and the l th dimension representing different feature types (i.e. SC, STD and ISI for the neural input (l = 3), and RMS for the interference EMG (l = 1)). To enhance the spatial selectivity of the feature space, we ranked all individual electrodes based on their average RMS score. Following ranking, an activation threshold was imposed on the electrode space which allowed for retaining the features characterising the signals that scored ≥70% of the maximum average RMS value. The electrodes that did not meet the activation thresholding condition were set to zero in the feature space. D. MUTUAL INFORMATION FOR THE FEATURE SPACE OPTIMISATION To decrease the redundancy within the neural feature space, we applied the Information Theory's Mutual Information (MI) measure to compute the entropies of individual features as well as the joint MI between features [34]. The MI between each of the random variables was estimated by binning (bin size being equal to processing window) that allowed for the approximation of their probability density functions. Next, the MI was calculated according to the fol- where s and r are discrete random variables, I (s; r) is the MI between them, and the summations are calculated over the appropriately discretized values of the s and r. For each bin, the joint probability distribution p(s, r) was estimated by counting the number of cases that fell into a particular bin and dividing that number with the total number of cases. The same technique was applied for the approximation of the marginal distributions p(s) and p(r). E. CLASSIFICATION We used two types of classification inputs and three types of classification tasks. In the first classification task we tested the capacity of using peripheral signals for effective movement tracking (i.e. muscle contraction stages discrimination) of a single-DOF activation. In the second task, which constituted an extension of the results obtained with the first task, we tested whether the information contained in peripheral signals allowed for effective movement tracking concurrently with the accurate differentiation between different DOFs. In the third task, we assessed the classification of DOFs activation only by pooling all force levels. 
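A brief sketch may help make the neural features of Section C and the mutual-information screening of Section D concrete. The displayed MI equation appears to have been lost in extraction ("calculated according to the fol-"); the code below uses the standard definition I(s;r) = Σ p(s,r) log[ p(s,r) / (p(s)p(r)) ], which is what the surrounding description of binned joint and marginal distributions implies. Function names and the bin count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def neural_features(spike_times, t_start, t_end):
    """SC, STD and mean ISI of the pooled motor-neuron spike train in one window.

    spike_times : 1-D array of pooled discharge times (in seconds) of the
                  decomposed motor units; t_start/t_end delimit the window."""
    s = np.sort(spike_times[(spike_times >= t_start) & (spike_times < t_end)])
    sc = s.size                                   # spike count
    std = s.std() if sc > 1 else 0.0              # spread of the discharge times
    isi = np.diff(s).mean() if sc > 1 else 0.0    # mean inter-spike interval
    return sc, std, isi

def mutual_information(s, r, bins=16):
    """Binned estimate of I(s; r) (in bits) between two feature series."""
    joint, _, _ = np.histogram2d(s, r, bins=bins)
    p_sr = joint / joint.sum()                    # joint probability per bin
    p_s = p_sr.sum(axis=1, keepdims=True)         # marginal of s
    p_r = p_sr.sum(axis=0, keepdims=True)         # marginal of r
    nz = p_sr > 0                                 # skip empty bins (0 log 0 = 0)
    return float(np.sum(p_sr[nz] * np.log2(p_sr[nz] / (p_s @ p_r)[nz])))
```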
To compare the accuracy achieved with the conventional, global HD-EMG approach and that of the proposed neural approach for movement decoding, we performed a comparative analysis on the respective types of input in the first and the third discriminative tasks. We excluded the interference EMG analysis in the second classification task based on the very poor performance of this approach in the first task, which was a less complex version of the second task. F. PERFORMANCE ASSESSMENT To test and compare performances of global and neural discriminative approaches, a classification into discrete classes (i.e. fingers' movements and/or phases of fingers' movements) was performed using a linear discriminant analysis (LDA) [35] with Monte-Carlo cross-validation (MCCV) [36], [37]. In a dataset of n instances, an MCCV procedure randomly generated without replacement k subsets, each of which contained m instances, where n = k × m. In each iteration, a different k subset is held out to estimate the classification error while the remaining k − 1 subsets are used for training. A fixed 5:1 train-test ratio was kept throughout the conducted discriminative performance analysis. To examine the universality of the peripheral information, we performed a classification into the discrete classes using LDA with one out of two variants of a leave-one-subjectout cross-validation (LOOCV) [38], [39]. In classic LOOCV, a single observation corresponds to an individual participant's dataset which is used as classification validation set. In the complete LOOCV variant, called LOOCV (1), we used a single observation as the validation set and the remaining observations (8) as the training set. In the incomplete LOOCV variant, called LOOCV (2), we randomly selected without replacement 5% of a single-observation validation set and included it in the training set along the remaining observations. This introduced a small amount of 'same-source' observation to the classifier's training that could increase the classification accuracy due to sharing of the feature space's pattern with the validation set. G. STATISTICAL ANALYSIS Normality of the data was assessed by the Shapiro-Wilk test. When the null hypothesis of skewed distribution was rejected, the independent samples t-test was used for determining whether samples in different groups (e.g., different DOFs) originated from the same distribution. When this null hypothesis was rejected, post-hoc pair-wise comparisons were performed with the Wilcoxon signed. Post-hoc analysis was performed with the Tukey test. The threshold for significance throughout the analysis was set to p < 0.05. A. DECODING MOVEMENT PHASES Decoding the peripheral correlates of volitional movements requires identifying the fundamental spatio-temporal components of the neural activation to muscles. For this purpose, muscles' electrical activity can be decomposed into FIGURE 1. Intended movement generation and signal decomposition. The movement intention triggers the motor neuron activity (i.e. the neural drive to the muscle), exciting the muscle fascicles. The change in muscle fascicles' length influences the recruitment and discharge rate of motor units (MUs), allowing for a proportional actuation of the intended DOF. The myoelectric signal elicited within the muscle during DOF actuation is acquired using a high-density electrode grid placed on the skin overlaying the active muscle, resulting in the EMG recording. 
The HD-EMG signal can be then decomposed into neural drive that corresponds to a particular movement. the neural drive down-streamed from the spinal cord via motor neurons. We explored the feasibility of distinguishing movements (DOFs) during their contraction stages (increase/decrease/stable force) based on motor neuron activity decoded from myoelectric signals (Fig.1). The number of spikes and sparseness of the spike trains vary as a function of MU recruitment and rate coding, which are the two physiological mechanisms that determine force modulation. Moreover, each identified MU can be characterised by its spatial location, as estimated by the amplitude distribution of its MU action potential recorded with the HD-EMG grid (see Supplementary Fig. 3). We calculated an averaged RMS value on spike trains in order to score the activation level of each HD-EMG electrode, and then set an activation threshold on the electrode space so that only the signals from the electrodes overlying the most active compartments of the FDS muscle were included in the feature space for each contraction. We then compared the discriminative capacity of global EMG features (conventional myocontrol) vs MU-based features in different phase-and-DOF classification tasks. To keep the dimensionality of the two classification inputs consistent, we applied the activation thresholding on the electrodes when constructing both neural and global HD-EMG feature spaces for all the classification tasks. Since the neural drive was described by several features (see Methods section), we calculated the mutual information (MI) and Poisson probability density function (PDF) to minimise redundancy of the neural feature space (see Methods section). In Fig. 2a-b (as well as in Appendix, Supplementary Fig. 2.) we show that the features extracted from the neural drive to the muscle were complementary in both movement phase-and-DOF characterisation contexts. The feature extraction for the neural input was conducted in 30-ms processing windows, introducing a small delay with respect to processing the signals in real-time. As a comparison, current HMIs used for control of rehabilitation devices, such as prostheses, usually work on intervals of 200-ms. Since the behaviour of MUs changes as a function of the contraction force and of the direction in force modulation, the extracted neural input allowed for tracking the movement phases with high accuracy ( Fig. 3a; x 97%, +/− 2.81, specificity = 97.9%, sensitivity = 98.9%). This result demonstrates that the proposed neural features characterised the motor tasks in their specific qualities as to capture differences that are indistinguishable when assessing the global EMG. To characterise the HD-EMG signal, we applied the electrode thresholding and extracted the RMS from the signals obtained from the most active FDS compartments. The RMS extraction for the global input was conducted in 30-ms (for comparison with neural features) and 150-ms (considering the optimal window duration for EMG-based classification of movement phases [42]) intervals. The global EMG input processed in 30-ms windows yielded very poor discriminative performance of movement phases ( Fig. 3a; x 22%, +/− 6.21, specificity = 37.9%, sensitivity = 51.9%). As expected, with extending the window duration to 150-ms, the phase classification improved (x 51%, +/− 7.95, specificity = 49%, sensitivity = 52.8). 
However, contrary to the neural input, the global EMG did not allow for distinguishing between the same forces exerted during a decreasing and increasing muscle contraction regardless of the processing window size. The cause for a considerable difference in the classification accuracies between the neural and EMG inputs was the insufficient temporal resolution provided by the HD-EMG signal during transient phases of the contraction, explained in Fig. 3b. Therefore, the use of the decoded neural drive substantially expanded the identification of discriminative aspects of motor tasks with respect to the global EMG. We next validated the possibility of tracking the muscle contraction stages while concurrently discriminating between different finger tasks. For this purpose, we classified individual DOFs (4) and their movement phases (3) (with muscle rest as an additional class, for a total of 13 classes). Fig. 4a shows the phase-and-DOF classification results for neural input averaged across all participants. The accuracy of the proposed neural approach in discriminating both individual finger movements and movement phases concurrently was very high (96.1%, +/− 0.061), with a specificity of 98.3% and a sensitivity of 98.8%. For comparison, the global EMG failed to concurrently classify movement phases and DOF, as expected based on the poor accuracy when classifying only movement phases (results not shown). Our findings prove, for the first time, that a non-invasive interface with muscles can provide a highly accurate recognition of volitional multi-DOF finger movements together with tracking all of the movements' phases with a delay smaller than any previously proposed HMI control system. B. RECOVERING FINGER CLASSES FROM CONTRACTION STAGES Joint classification of DOF and movement phases allows for a highly accurate temporal identification of the activation and de-activation of an individual DOF, in a way that is robust to changes in force. To show the feasibility of accurate finger movement recognition when exerting variable forces, we computed the accuracy of DOF classification based on all movement phases. When using global EMG in variable force conditions, the classification error rate is high specifically when force varies around small values. Indeed, as it can be seen on the top of the Fig. 4b, an across-phase comparison proved that the DOF discrimination based on the global HD-EMG features processed in 150-ms windows was variable, with the accuracy highly reliant on the contraction stage used as an input to the classifier (total accuracy range (TAR): 43.6% for the descending phase input to 92% for the plateau phase input, x 71.25, +/− 20.19). When using global EMG on a much shorter processing interval of 30-ms the performance further decreased, as expected (results not shown). Conversely, the discrimination based on neural input features processed in 30-ms windows resulted in high accuracy regardless of the movement phase (TAR: 93% for the ascending phase input to 97.5% for the descending phase input, x 95, +/− 2.73). On average, in the complete contraction context (condition labelled as MIX; full force range), the neural approach resulted in 98 +/− 0.3% DOF discrimination accuracy, and the global EMG resulted in 88 +/− 2.1% DOF discrimination accuracy while processed in 150-ms intervals (p = 0.04) and 75% +/− 3.7 while processed in 30-ms intervals. 
Therefore, the neural approach allowed a 10% more accurate DOF classification than the global EMG approach, when classification was performed on complete finger flexion data. Beside the average classification accuracy over all force, the difference in accuracy between the two classification inputs was as high as >50% in the variable-force contraction stages, which are the most common stages in natural use of an HMI. These results are consistent with the findings presented in the previous section, and in agreement with previous research in showing that the global EMG properties are insufficient for an accurate DOF discrimination prior to reaching a steady contraction stage (see Discussion). In summary, we showed for the first time that the recognition of individual fingers can be achieved with the accuracy of ∼98% for the full range of contraction forces investigated, and for an extremely short processing window (30-ms). This indicates that intended dexterous movements can be detected with a temporal resolution of 30-ms for any force exerted in an isometric contraction, with consistently very high accuracy. The difference in performance with respect to global EMG (which showed <90% accuracy on average and <50% accuracy in specific task phases, for a much longer processing window of 150-ms) is substantial and shows a realistic potential for the proposed approach to be carried forward into large-scale user applications. C. GENERALISATION ACROSS INDIVIDUALS Having demonstrated that the neural approach provides a successful multi-phase-and-DOF classification at all phases of the isometric muscle contraction, we then addressed the question whether the neural drive decoded with the EMG decomposition can be used as a universal movement code across humans without (or with minimal) user-specific training, and if so, how well does it generalise in comparison with the global EMG features. For this purpose, we classified different finger movements (4) and their contraction stages (3) based on a leave-one-out cross-validation model (LOOCV -see Methods) across subjects using neural and global EMG inputs. First, we built the LOOCV (1) from a training set comprising the data of eight out of nine participants, with the test set comprising the 'left-out' participant's data. The results of our LOOCV (1) analysis, presented in the top row of Fig. 4c, show a large decrease in the classification accuracy on average when using the general model instead of the subject-specific model, for both types of the classification input (x accuracy based on LOOCV(1): neural input = 40.33% +/− 18.5, global EMG input = 6.84%, +/− 3.1). As can be inferred from the muscle activation heat maps presented in the Appendix, Supplementary Fig. 3., the spatial organisation of the muscle innervation detected from the skin level is the main limitation of the generalisation process, as it can differ significantly between people. Next, we constructed the LOOCV (2) for which the training set comprised the LOOCV (1) training data with the addition of 5% of a left-out participant's data (corresponding to just 6-s of recordings for each subject), and the test set the remaining 95% data from nine out of the ten participants. This testing condition corresponded to a universal interface based on a database of training data with the addition of a very small amount of user-specific training. As presented in Fig. 
4c, the average accuracy of the inter-participant DOFand-phase discrimination based on as little as 5% participantspecific neural information increased over two-fold with respect to the LOOCV(1) variant, and was only slightly lower than that of the average subject-specific classification showed in Fig. 4a (x accuracy LOOCV(2): 90%, +/− 0.38, x accuracy -subject-specific training: 96.1%, +/− 0.061, respectively). Conversely, the average LOOCV(2)-based classification accuracy when using the EMG input remained very poor, as it was in the LOOCV(1) variant (x accuracy LOOCV(2): 11.38%, +/− 7.21). These findings imply that the peripheral neural information manifests universal properties across subjects. The small subject-specific information needed to exploit this generalisation across individuals relates to the effect of the subjects' anatomy on the distribution of action potential amplitudes on the skin surface. IV. DISCUSSION We propose a technique for decoding an intended movement from the neural drive to the muscle. We observed that unlike the conventional peripheral movement decoding that uses global EMG signal features, our approach enabled a highly accurate, near-real time classification (30-ms data processing interval) of finger movements at any force level with successful movement phase discrimination. This result was obtained by using physiology-inspired neural drive features that displayed different statistical properties for both different finger flexions and muscle contraction stages. Our pattern recognition of EMG signals and neural drive was complemented by an additional analysis of the universality of the peripheral neural information in humans, in which we discriminated the movements and their phases after excluding or limiting the amount of participant-specific information. A. EMG VS NEURAL DRIVE Our study builds on previous work that applied non-invasive peripheral myoelectric analysis to study muscle activation patterns and movement control [20], [24], [40], [41]. Whereas the majority of the past studies used the indirect approach based on global features of the surface EMG to characterise a movement, we studied the neural information both indirectly with standard EMG signal analysis, and directly by decomposing HD-EMG signals into the neural drive to the muscles. Our results for individual finger movements when considering only a stable portion of force production are in agreement with recent findings reporting dexterous movement recognition from EMG signals [43]- [46], with the neural drive slightly outperforming the EMG input. While the gross EMG signal is sufficient for an accurate differentiation between movements at stable force level, the transient EMG (corresponding to ASC and DESC movement phases, or increase/decrease force levels) has non-stationary properties with variable mean and covariance [47], [48], making reliable pattern extraction difficult when contraction force varies. Nonetheless, control of HMIs is fundamentally based on variable-force contractions, causing difficulties for conventional EMG-based control systems. In agreement with previous research, we showed that the information carried in the global EMG features is insufficient for an accurate movement classification during the contraction onset and relaxation. This confirms that in order for the EMG-based intended movement decoding to be correct, it has to be delayed until a stable force level is achieved and halted right after [49]. 
While the problem of peripheral transient movement classification has been addressed in the past, the proposed solutions targeted transient decoding during the initial movement phase of force increase only [42], [50], provided validation for gross movement recognition but not dexterous [51], or validated the algorithms on a low amount of movements not exceeding 2 DOFs [52], [53]. Importantly, none of the previous studies achieved the performance we present here. Moreover, all previous studies focused on processing windows of hundreds of milliseconds. Our direct movement decoding method based on peripheral neural activity is highly advantageous in terms of the temporal structure and variability in comparison with the conventional surface EMG solutions. Together with the high-density electrode setup, our approach allows for establishing a movement analysis framework that offers a superior spatio-temporal resolution of muscle activity that accompanies movement execution. In the context of daily living, the direct neural decoding can significantly enhance the HMI systems' robustness. Providing an extremely high classification accuracy of >98%, based on near-real time 30-ms processing intervals, it allows for an accurate intended actuation of different DOFs with a minimal delay accompanying the movement class switching. B. DECODING AND TRACKING INTENDED MOVEMENT WITH PERIPHERAL NEURAL CORRELATES The ultimate neural determinant of motion is the firing of spinal motor neurons that excites the contractile tissue [30], [54]. Thus, at the peripheral level, muscles and (by proxy) movements are controlled by the motor neurons' timings of action potentials discharges. This neural information has been successfully used for classifying gross grasps and wrist movements [21], [27], [28]. Here, we designed a set of features extracted from motor neuron activation timings that represents the temporal structure of motor neuron activation. The force exerted by a muscle during a voluntary contraction depends on the number of active motor units (i.e. motor unit recruitment), and the rates at which the motor neurons discharge action potentials (i.e. rate coding) [30], [31]. Concurrent changes in these two properties control the force generated by the muscle: an increase (decrease) in force follows the increase (decrease) in motor unit recruitment and motor neuron firing rate. Recruitment is difficult to track accurately by EMG decomposition since the decomposition does not identify all active motor units but only a subset [32]. For this reason, our quantitative motor unit analysis showed that a similar number of units was identified across different finger flexion tasks (Fig. 2c-d, Appendix, Supplementary Fig. 1.). On the other hand, rate coding can be detected with good temporal resolution even from a small subset of identified motor units. The features we proposed for characterising the neural drive reflect both mechanisms. The first extracted feature was the sum of spikes, which is an estimate of the strength of the neural drive to the muscle based on the subset of identified units. This feature is proportional to the force exerted by the muscle [47], and therefore contains information for discriminating steady (PLAT) movement phases from transient (ASC, DESC). The second feature was the standard deviation of the timings, which measures the temporal spread of the detected action potentials in the processing window. 
This feature proved helpful in discriminating between different contraction stages and DOFs in cases when the sum of spikes was equal for different conditions. As for the third and final feature, we extracted the mean inter-spike interval in order to determine whether spikes were fired in bursts or continuously. Recruitment information was mainly associated to the spatial distribution of the detected motor unit action potentials in different finger flexion tasks, thus it was addressed with the spatial activation thresholding of the feature space. Physiological information embedded in the selected feature space could not be obtained from the global EMG, which explains the large difference in performance between the proposed method and conventional EMG classification, especially for challenging classification tasks (such as for the same force during and increasing or decreasing trend). C. TESTING THE UNIVERSALITY OF PERIPHERAL NEURAL INFORMATION AMONG HUMANS Understanding the neural principles underlying the generation of a voluntary action remains a major neuroscientific goal [55]. Towards this end, we identified a set of physiologyinspired spatio-temporal properties of motor neuron activity that, collectively, explained force generation during various isometric dexterous contractions. Because motor unit recruitment and rate coding accompany muscle activation across all humans [30], [48], we hypothesized that the proposed neural feature set, which is based on these two mechanisms, would generalize across participants. The relatively accurate movement discrimination achieved with the leaveone-subject-out cross validation using a linear classification algorithm indicated that the selected features are associated to movement in a similar way across the investigated subjects; yet, we observed a decrease in performance with respect to subject-specific training. We infer that this was due to our feature space comprising the information related to the spatial distribution of action potential amplitudes, which is influenced by the volume conductor properties and therefore by the participant's anatomy. Eliminating this feature may increase the generality of the model across individuals, however the discriminative power of our approach would decrease since motor unit recruitment cannot be easily tracked with only temporal features. Interestingly, we found that by training the classifier on a small amount of participant-specific data, we achieved classification accuracy close to the ideal case of subject-specific training. In the context of neural interfacing, this finding implies that the extensive training periods usually required for an accurate calibration of HMIs [56], [57] can undergo significant reduction, shifting the paradigm towards more effective and user-friendly systems. Her research interests include neural signal processing, bio-inspired algorithms development, and information theoretical analysis of neural data. She received the EPSRC Studentship Award from Imperial College London. He was a part of a team of researchers that pioneered techniques for non-invasive spinal interfacing for rehabilitation applications. The same team has successfully introduced a novel, technology driven surgical paradigm for limb reconstruction which is now becoming a clinical state of the art. His research interests include bio-signal processing, advanced control algorithms, prosthetics, robotics, translational neurorehabilitation, and neural control of movement. 
DARIO FARINA (Fellow, IEEE) was a Full Professor with Aalborg University and the University Medical Center Göttingen, Georg-August University, where he founded and directed the Department of Neurorehabilitation Systems, acting as the Chair in Neuroinformatics of the Bernstein Focus Neurotechnology Göttingen. He is currently a Full Professor and a Chair in neurorehabilitation engineering with the Department of Bioengineering, Imperial College London. His research interests include biomedical signal processing, neurorehabilitation technology, and neural control of movement. He has been elected Fellow of AIMBE, ISEK, and EAMBES. He was a recipient of the 2010 IEEE Engineering in Medicine and Biology Society Early Career Achievement Award and the Royal Society Wolfson Research Merit Award. He has been the President of the International Society of Electrophysiology and Kinesiology (ISEK). He is also the Editor-in-Chief of the official Journal of this Society, the Journal of Electromyography and Kinesiology. He is also an Editor of Science Advances, the IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, the IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, Wearable Technologies, and the Journal of Physiology.
7,424.2
2020-08-11T00:00:00.000
[ "Engineering", "Computer Science" ]
Design and Implementation of a Gesture-Aided E-Learning Platform In gesture-aided learning (GAL), learners perform specific body gestures while rehearsing the associated learning content. Although this form of embodiment has been shown to benefit learning outcomes, it has not yet been incorporated into e-learning. This work presents a generic system design for an online GAL platform. It is comprised of five modules for planning, administering, and monitoring remote GAL lessons. To validate the proposed design, a reference implementation for word learning was demonstrated in a field test. 19 participants independently took a predefined online GAL lesson and rated their experience on the System Usability Scale and a supplemental questionnaire. To monitor the correct gesture execution, the reference implementation recorded the participants’ webcam feeds and uploaded them to the instructor for review. The results from the field test show that the reference implementation is capable of delivering an e-learning experience with GAL elements. Designers of e-learning platforms may use the proposed design to include GAL in their applications. Beyond its original purpose in education, the platform is also useful to collect and annotate gesture data. Introduction The COVID-19 pandemic demanded a variety of adaptations in our daily lives, including in matters of education. With physical locations for learning and teaching closed, remote electronic educational technology (e-learning) substituted presence teaching in large parts of the world [1,2]. It is more important than ever to improve and innovate e-learning to provide better and more fruitful learning experiences. E-learning and respective platforms have already been broadly discussed in academia. In their literature review, Arkorful & Abaidoo [1] summarize e-learning's main advantages. Their results-which rely in large part on the work of Holmes & Gardner [2]-underline that e-learning provides (i) more flexibility in time and space for learners, (ii) increases the accessibility of knowledge, and (iii) reduces communication barriers by facilitating discussion forums. In particular, learners' individual learning speeds are better accommodated as they can progress at their own pace [1]. For teachers, e-learning helps to overcome scarcities of teaching equipment [1]. Maatuk et al. [3] describe the challenges that come with implementing e-learning at a university level. They mention the technical and financial requirements for both providers and learners. Moreover, they find that the technological savviness of students influences the learning outcome. E-learning platforms also need to consider copyright protection and require professional development [3]. Overall they find that students are positively disposed towards e-learning and that they think that it improves their learning experience [3]. The basic building blocks of any e-learning experience are learning objects, i.e., the digital files that generate e-learning activities [4,5]. Learning objects come in a variety of digital formats, including e-books, 2D and 3D animations, cases. The sharing of learning objects is a common occurrence in e-learning communities [4]. Data Model While the learning content itself remains in its original form (e.g., text, sound, video), it is communicated with the addition of a gesture. This gesture does not replace, but rather enhances the learning content. Thus, we call this combination of learning content and gesture gesture-enhanced content (GEC). 
Consequently, a lesson is an ordered list of GECs. The instructor is responsible for defining GECs and lessons, while the learner executes the lesson by performing all GECs within it and creating GEC executions by doing so. The resulting data model is depicted in Figure 1. Modules The e-GAL design is composed of 5 modules. Figures 2 and 3 illustrate how the modules are connected and how learners and instructors respectively are supposed to interact with the system. Content Catalog The content catalog is a database that holds learning content items in one or multiple formats (e.g., text, audio, video). If the content items are large in size, the catalog should carry file references rather than the actual learning content data, or a database management system that supports large fields should be used to avoid performance issues. Gesture Catalog The second database is the gesture catalog. It holds pre-recorded reference gestures in one or more file formats, which may vary depending on what was used to record the gestures. However, the gesture data must be sufficient to animate a humanoid avatar (see lesson player module). Ideally, the gestures are recorded with a high-quality motion-capture system to produce the best possible reference. Lesson Configurator The lesson configurator is a web-based service with a graphical user interface (GUI) for instructors that allows them to combine learning content items and gestures into individual GECs. Multiple GECs can be organized into lessons, and additional lesson parameters (e.g., lesson speed) can be set. Lesson Player Learners interact with the platform via the web-based lesson player. It replays GECs by depicting a humanoid avatar alongside content items. The avatar is animated using the gesture reference data from the catalog. Alongside the gesturing avatar, one or more output ports (e.g., text display, speaker output) replay the content items. Monitoring Module As mentioned in the introduction, research indicates that gestures need to be performed correctly for GAL to provide benefits [9]. The monitoring module records motion data using some type of sensor (e.g., accelerometer, video) and transfers them to the instructor for review.
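Before the choice of motion sensor is discussed, the data model of Figure 1 and the catalog references above can be summarised in a compact sketch; the class and field names are illustrative guesses, not taken from the reference implementation described later.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GEC:
    """Gesture-enhanced content: one learning content item plus one gesture."""
    content_ref: str                       # item in the content catalog
    gesture_ref: str                       # pre-recorded gesture in the gesture catalog

@dataclass
class Lesson:
    """A lesson is an ordered list of GECs with its playback parameters."""
    gecs: List[GEC]
    speed_s: float = 3.0                   # lesson speed: time between two GECs
    random_seed: Optional[int] = None      # seed used to shuffle the GEC order

@dataclass
class GECExecution:
    """A learner's recorded execution of one GEC, reviewed by the instructor."""
    gec: GEC
    learner_id: str
    motion_data_ref: str                   # e.g. the uploaded webcam clip in the log
    rating: Optional[str] = None           # instructor's assessment of the gesture
```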
The choice of motion sensor depends on the gesture's range of motion. For instance, if gestures are only performed with hands, a wrist-mounted inertial measurement unit might suffice to retrace the performed gesture [26]. Full-body gestures on the other hand may require a more complex measurement setup. The recorded motion data, along with metadata about the learner and the performed gesture, get uploaded into the log. The log holds all data about past GEC executions and provides an interface for the instructor to look at the motion data and assess the correctness of the gestures. Reference Implementation We demonstrated and evaluated a reference implementation of the proposed e-GAL system design (see Section 2). The learning task of this reference implementation was to learn a series of German language words. The design's modules were deployed in a microservice pattern [27] and implemented as follows. The content catalog consisted of 64 German language words which is a subset of the words used in Mathias et al. [28]. In addition to the textual representation, synthesized speech by Google's WaveNet-based text-to-speech engine [29] was added. Both text and speech were stored in a PostgreSQL 13 database [30]. For each word, a representative gesture (cf. [28]) was recorded using the full-body motion-capture system XSENS MTw Awinda [31]. After recording, each gesture was exported into an FBX file to be suitable for animating the avatar. The FBX file reference for each gesture was stored in the gesture catalog database. In the implementation of the lesson configurator ( Figure 4), instructors could combine a word with a gesture by drag-and-drop in their browser. An important feature was the ability to preview gestures on the fly since labels were rarely sufficient for describing what a gesture looked like. Available lesson parameters included the lesson speed, i.e., the time between two GECs, and a randomization seed with which the order of GECs was shuffled. Furthermore, instructors could generate individualized hyperlinks with which students could start the lesson. The lesson player ( Figure 5) of the reference implementation was a Unity 3D [32] application running in a WebGL environment. It featured a robot-like avatar ("Y-Bot" [33]) on a neutral background. When the learner started the lesson, the Unity application was loaded alongside the necessary lesson data, namely the learning content items and the gestures' FBX files. After loading, the lesson player played each GEC one after the other by simultaneously displaying the word and playing the sound clip (see Video S1). Then, after a small delay, the avatar performed the gesture. This was repeated for each GEC until the lesson was completed. It was assumed that the learner sits behind their desk and in front of their screen during learning. Their computer's webcam, therefore, was most likely to capture at least the upper body. During each GEC, the participant was recorded and after each GEC, the recorded video clip was annotated with the GEC execution ID and queued up for upload to the monitoring modules log. The instructor could access and rate the videos in the monitoring module's web interface ( Figure 6). Evaluation of Reference Implementation A system test was conducted to assess the e-GAL reference implementation's capability to facilitate remote GAL. We want to note that we do not claim to measure actual learning progress, as this would require more sophisticated methods from other fields closer to neurology. 
Rather, this study aims to answer the research question of whether e-learning can deliver GAL, and in the course validating the proposed e-GAL design. Participant recruitment: 20 people were recruited by email for the system test. Each participant received an individualized link that allowed them to take the prepared lesson at any time and place during the 2-week trial period in July 2021. One person could not finish the experiment due to technical difficulties with the web application. Ultimately, we used data from 12 female and 7 male participants with a mean age of 36.6 (σ = 9) ranging from 23 to 53 years. The majority of the participants worked in technical affine companies, therefore, a basic knowledge of using web applications was assumed. Each participant gave their informed consent to be recorded before starting the experiment. Experiment design: An instance of the reference implementation (see Section 3.1) was made accessible online. The authors, acting as instructors, created a lesson containing six GEC items with the lesson configurator. Video S1 in the Supplementary Material contains a screen recording of the full lesson. The gestures were chosen based on whether they could be performed while sitting behind a desk. The GECs order was randomized, with the same randomization seed for each participant. Participants could access the lesson via their personal invitation link. After displaying the informed consent form and instructions, participants had the chance to preview their webcam feed to make sure they were comfortable with what was being recorded. After accepting, the lesson player started in their web browser. The lesson started shortly after, and one GEC after another was played. Between GECs was a break of three seconds, during which participants were supposed to imitate the avatar while reading the displayed word out loud. The webcam started recording when a new GEC was played and ended 2.5 s later. The videos were stored locally and queued up immediately for background upload to the monitoring module's log.
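The control flow of one lesson run, as just described, can be outlined as follows. The actual lesson player is a Unity 3D/WebGL application, so this is only a sketch of the sequencing with print-based stand-ins for the real components; in the real player the webcam recording runs concurrently with the word, audio and avatar playback rather than as a separate sequential step.

```python
import time

# Print-based stand-ins for the real player components; they only trace each step.
def show_word(content_ref): print("display word", content_ref)
def play_audio(content_ref): print("play speech clip for", content_ref)
def animate_avatar(gesture_ref): print("avatar performs gesture", gesture_ref)
def record_webcam(seconds): print("record webcam for", seconds, "s"); return "clip"
def queue_upload(clip, gec_id): print("queue", clip, "for upload, GEC execution", gec_id)

def run_lesson(gecs, repeats=4, gap_s=3.0, record_s=2.5):
    """Sequencing of one lesson run: for each GEC, show the word, play the
    audio, animate the avatar and record the learner's webcam, then pause 3 s
    while the learner imitates the gesture; the whole list is repeated 4 times."""
    for _ in range(repeats):
        for gec_id, (content_ref, gesture_ref) in enumerate(gecs):
            clip = record_webcam(record_s)   # concurrent with playback in the real player
            show_word(content_ref)
            play_audio(content_ref)
            animate_avatar(gesture_ref)      # performed after a small delay in the real player
            queue_upload(clip, gec_id)
            time.sleep(gap_s)                # break in which the learner imitates the gesture
```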
After all GECs of the lesson had been repeated 4 times, the player stopped and redirected the participant's browser to a German translation of the SUS, originally introduced by Brooke [34] and translated by Reinhardt [35]. The SUS questionnaire comprises the ten items in Appendix A Table A1, which are answered with a five-point Likert scale [36] ranging from "strongly disagree" to "strongly agree". To calculate the SUS, each response is scored from one to five points, from one point for "strongly disagree" to five points for "strongly agree". The next step is to adjust the points of the questions: for each odd-numbered question, one point is subtracted from its score, and for each even-numbered question, its score is subtracted from five. Next, we add up the adjusted points of the ten questions and multiply this sum by 2.5. Finally, we get a usability score for each respondent, ranging from 0 (worst) to 100 (best). Afterward, the participants filled in a questionnaire that asked them about their remote lesson experience (RLE). The questionnaire contained five items, which are presented in Appendix B Table A2 and could be answered with a five-point Likert scale [36] ranging from "strongly disagree" to "strongly agree". These questions were supposed to identify any problems in the presentation of the learning content or gestures. For evaluation, we subtract one from each response. Next, we take the mean over all participants per question and get a score ranging from 0 (strongly disagree) to 4 (strongly agree). Additionally, an open question allowed participants to freely comment on their thoughts regarding the platform. Interpretation of the results: To assess the learners' acceptance of the system, we follow Bangor et al. [37,38] and use three different rating scales for interpreting the SUS results. Adjective rating: According to Bangor et al. [37,38], the SUS score can be converted into an adjective rating to interpret its results. They show that the results of a seven-point Likert scale correlate with SUS scores and can therefore be useful for interpretation. The findings of Bangor et al. [37] show that the SUS score has a mean of 12.5 when the adjective "Worst Imaginable" is used to describe a system, 20.3 when using "Awful", 35.7 when using "Poor", 50.9 when using "Ok", 71.4 when using "Good", 85.5 when using "Excellent" and 90.9 when using "Best Imaginable". Except for "Worst Imaginable" and "Awful", all of these adjectives are significantly different and are therefore of interest for the interpretation of the results. For example, if the SUS score is 75, we would classify our platform as "Good". Grade scale: Bangor et al. [37] introduce the so-called university grade analog, in which the SUS scores are related to school/university grades. According to this grading scale, a SUS score between 90 and 100 is an A, between 80 and 90 a B, between 70 and 80 a C, between 60 and 70 a D, and a score below 60 an F. Acceptability rating: Moreover, to decide whether the platform is usable or suitable to provide GAL, we follow Bangor et al. [37,38] and use the acceptance ranges they provide. The authors rate a system with a SUS score below 50 as "Not Acceptable" and above 70 as "Acceptable". Between a score of 50 and 70, Bangor et al. [37,38] state that the system should be improved and evaluate it as "Marginal". This group can be further divided into "Low Marginal" (SUS score between 50 and 62.6) and "High Marginal" (SUS score between 62.6 and 70).
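The scoring and interpretation rules just described can be condensed into a short sketch. The formulas and thresholds follow the text (SUS scoring, the RLE rescaling, and Bangor et al.'s grade and acceptability ranges); the function names and the example responses are ours.

```python
from typing import List

def sus_score(responses: List[int]) -> float:
    """SUS score from ten Likert responses (1 = strongly disagree ... 5 = strongly agree)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    adjusted = [(r - 1) if i % 2 == 0 else (5 - r)   # items 1, 3, 5, ... are odd-numbered
                for i, r in enumerate(responses)]
    return 2.5 * sum(adjusted)                        # 0 (worst) ... 100 (best)

def sus_grade(score: float) -> str:
    """University grade analog after Bangor et al. [37]."""
    for bound, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= bound:
            return grade
    return "F"

def sus_acceptability(score: float) -> str:
    if score < 50:
        return "Not Acceptable"
    if score < 62.6:
        return "Low Marginal"
    if score < 70:
        return "High Marginal"
    return "Acceptable"

def rle_item_score(responses: List[int]) -> float:
    """Mean over participants for one RLE item, rescaled to 0 ... 4."""
    return sum(r - 1 for r in responses) / len(responses)

# Example: a respondent scoring 75 is graded "C" and rated "Acceptable".
s = sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])
print(s, sus_grade(s), sus_acceptability(s))
```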
In sum, the adjective rating, the grade scale, and the acceptability rating are suitable to answer the question of whether learners accept the e-GAL reference implementation. Regarding the RLE responses, we consider an average of 3.0 to be sufficient. At this level, there is general agreement that the respective lesson element was comprehensible. An exception is question 5 ("I felt insecure during the lesson."), which is reverse-coded to check the consistency of the participants' answers. The optional free-text comments are mapped to concepts by means of a small-scale inductive content analysis [39]. The videos of the GEC executions are visually compared against the reference gesture by the authors. Based on the difference, the GEC executions are labeled "Correct" (no discernible difference), "Poor" (recognizable as the reference gesture, but with errors, e.g., not moving the head along with the waving hand), and "Wrong" (not recognizable as the reference gesture). Videos that failed to show the gesture clearly (e.g., because the participant was out of frame) were also labeled "Wrong". Figure 7 shows boxplots of the SUS scores across all participants and for female (12) and male (7) participants respectively. The median and mean SUS score was 75, with no differences between genders. Consequently, the reference implementation received a C on the grade scale and a "Good" according to the adjective rating scale. On the acceptability rating scale, the reference implementation was rated "Acceptable". Interestingly, the individual SUS scores varied considerably, with values between 42.5 and 97.5. Therefore, we also report the SUS scores at the individual level to better understand the results. Figure 8 represents each participant's SUS score located on each of the three scales: (a) shows that four out of the 19 participants rated the platform with the worst grade F (21%), one with a D (5.3%), six with a C (31.6%), three with a B (15.8%) and five with the best grade A (26.3%). When applying the adjective rating scale (b), we find that one participant rated the platform as "Poor" (5.3%), seven as "Ok" (36.8%), six as "Good" (31.6%), two as "Excellent" (10.5%), and three as "Best Imaginable" (15.8%). Finally, (c) illustrates the acceptability rating and shows that for one participant the reference implementation was "Not Acceptable" (5.3%), for three it was "Low Marginal" (15.8%), for one it was "High Marginal" (5.3%) and for fourteen it was "Acceptable" (73.7%).
Figure 9 illustrates the results regarding the items used to evaluate the RLE (Table A2). When the participants were asked if the word to learn was clearly readable and audible (Questions 3 and 4), they tended to strongly agree, with a score of 3.8 for both questions. When asked whether they were able to focus on the lesson's content (Question 1) or whether they were able to imitate the avatar's gestures (Question 2), the scores were somewhat lower, at 3.5 and 3.1 respectively. Furthermore, with a mean value of 1.6, the participants answered that they generally did not feel insecure during the lesson (Question 5). Webcam Videos After the trial period ended, the log contained 491 GEC executions. These were more than the anticipated 456 videos. During labeling, it became apparent that some participants stopped and restarted mid-lesson. Based on these videos, 340 (69.2%) GEC executions were rated "Correct", 95 (19.3%) were rated "Poor", and 56 (11.4%) were rated "Wrong". The majority (54.3%) of "Poor" and "Wrong" GEC executions occurred during two gestures: "Aufmerksamkeit" (eng.: "attention"; putting a hand behind an ear and leaning back) and "Papier" (eng.: "paper"; crumpling a piece of paper and throwing it away). Free-Text Comments Nine out of the nineteen participants opted to give a free-text comment about their thoughts on the lesson experience. Table 1 shows how often a concept was mentioned in the comments. Discussion This study set out to answer the research question: Can e-learning platforms facilitate gesture-aided learning remotely? In the case of the e-GAL reference implementation, we assume this to be confirmed if (a) instructors can effectively plan and monitor gesture-enhanced lessons, (b) learners are able to comprehend and imitate gestures and learning objects during lessons, and (c) learners accept the system. Ad (a): The lesson configurator offered instructors access to 64 learning content items and 64 distinct gestures. With these materials, a lesson containing 6 GECs was successfully created.
The gestures could be previewed and selected according to the assumed learning environment (i.e., the learner sitting behind a desk). Regarding lesson monitoring, the monitoring module's log successfully collected videos of all 491 GEC executions. Instructors were able to label all videos using the monitoring module's web interface. Ad (b): Learners were able to access the lesson with the invitation link that was sent out by the instructor. In the responses to the RLE questionnaire, there was general agreement (3.8 out of 4) that the learning content was comprehensible in both text and speech. Interestingly, two participants noted that they did not use the text but rather listened exclusively to the audio. Agreement was slightly lower (3.1 out of 4) on the comprehensibility of the gestures. A possible reason why gesture comprehensibility (Question 2) was rated lower may be connected to the two least well-performed GECs. "Aufmerksamkeit" required the participant to lean backward, which was not easily discernible against the solid-grey background of the lesson player. A better-designed 3D environment may communicate changes in depth better. As for the second badly performed GEC, "Papier" involved both palms touching each other. The avatar's extremities lacked collision boxes; therefore, its hands clipped into each other rather than touching. This was interpreted differently by participants, some touching their forearms or bumping their fists. Furthermore, the XSENS skeletal model provides only a rough positioning of the hands. By adding collision boxes to the avatar and including better hand sensors, the communication of gestures that feature more intricate hand movements could be improved. Broader gestures, like waving a hand, were more accurately imitated. Furthermore, the system could have better indicated the right time to imitate the gesture, especially as most of the GEC executions rated "Wrong" seemed to stem from the participant not being aware that they should imitate at that moment. In the end, 69.2% of gesture executions were labeled "Correct". Ad (c): The e-GAL reference implementation was rated "Acceptable" and "Good", and received the letter grade C on the System Usability Scale. The evaluation of the reference implementation is limited insofar as it only considers the perspective of the learner and lacks feedback from instructors. While they were functional enough to define and monitor the experiment lesson, the lesson configurator and monitoring modules were not demonstrated in the same way the lesson player was. To summarize, we consider all three of the requirements stated and discussed above to be fulfilled, and we thus consider the system test successful. During labeling, a second potential use case for the reference implementation emerged: it can collect and label large amounts of gesture data remotely and with little effort. The main issue in the video clips from the system test was that the framing of the participant was inconsistent and their webcam quality varied. This could, however, be solved by better instructing the participants and by consistently checking the framing before and during the lesson. Limitations This study is concerned with the technical viability of e-GAL; it therefore does not say anything about the influence of this mode of learning on learning outcomes. Claims of this kind would require a different study design and neurological evidence.
Moreover, the lesson used in the study lacked pedagogical considerations (see [40]), which made it unfit to produce and measure actual learning outcomes. Finally, the evaluation of the lesson configurator lacks the perspective of educators who are not in higher education. Future Work More research on the pedagogy of e-GAL applications is needed. This includes determining the overall effectiveness of e-GAL, which parameters (e.g., repetition and order of GECs, lesson tempo) need to be adjustable, and how the avatar's and the 3D environment's designs affect learning outcomes. It should also be investigated which types of learning content work best with e-GAL. Future platforms could incorporate machine learning models for pose estimation (e.g., [41,42]) and/or quality assessment [43] of the performed gesture. Such automated methods could be used, for example, to support or replace the instructor's subjective rating or to provide real-time feedback to the student. Furthermore, instead of the webcam as a motion sensor, future systems could use wearable motion sensors to allow students more mobility. Feature requests such as avatar customization, the option to see oneself during the lesson, and immediate feedback during the lesson were mentioned by some participants. These features are realizable for the reference implementation. Lastly, e-learning platforms usually involve a variety of stakeholders such as content creators, educational institutions, and designers [44]. The e-GAL design could be extended or embedded into existing e-learning platforms to accommodate these stakeholders (e.g., interfaces for content creators to add new gestures from other motion-capture systems). Interfaces to existing learning object repositories could produce interesting new GECs. Conclusions We proposed a system design for e-GAL platforms with three design goals. A reference implementation following the design was demonstrated and evaluated in a field test. After interpreting the results of the SUS and RLE, the user comments, and the number of video clips labeled "Correct", we determined that the e-GAL reference implementation met all three design goals, consequently demonstrating the ability of the proposed system design to facilitate an acceptable e-GAL experience. Additionally, the reference implementation showed itself to be useful for collecting and annotating video clips of gesture executions, which can be used, for instance, to generate large gesture datasets for machine learning. The e-GAL design can be used to implement e-GAL applications or as the basis for further research into the topic of gesture-aided e-learning, especially its pedagogical implications. Appendix B Table A2 (fragment of the RLE questionnaire, German original with English translation): 3. Die gesprochenen Wörter waren klar zu verstehen. (The spoken words were clearly audible.) 4. Ich konnte die angezeigten Wörter problemlos lesen. (I was able to read the displayed words clearly.) 5. Ich fühlte mich unsicher während der Lektion. (I felt insecure during the lesson.)
6,753.6
2021-12-01T00:00:00.000
[ "Computer Science" ]
3D Numerical Simulation of Hydro-Acoustic Waves Registered during the 2012 Negros-Cebu Earthquake The paper investigates the propagation of hydro-acoustic waves caused by the underwater earthquake that occurred on 6 February 2012 between the Negros and Cebu islands in the Philippines. Hydro-acoustic waves are pressure waves that propagate at the sound celerity in water. These waves can be triggered by a sudden vertical sea-bed movement, such as that due to underwater earthquakes. The results of three-dimensional numerical simulations, which solve the wave equation in a weakly compressible sea-water domain, are presented. The hydro-acoustic signal is compared to an underwater acoustic signal recorded during the event by a scuba diver who was about 12 km from the earthquake epicenter. Introduction Although in many applications sea water is correctly assumed to be incompressible, its weak compressibility cannot be neglected when studying some specific phenomena. When the rigid bottom below the water layer moves quickly, as in the case of an underwater earthquake, both gravity and acoustic waves are generated in the fluid. The former, propagating as transient free-surface perturbations (i.e., tsunamis), are able to carry huge energy, with devastating consequences when approaching the coasts. Tsunami waves move at a celerity of 100-200 m/s at oceanic depths. The latter, named hydro-acoustic waves, propagate at the sound celerity in water, approximately 1500 m/s, and their fundamental period is the time needed to travel four times the distance from the bottom to the free surface. Given that their propagation speed is significantly larger than that of tsunamis, hydro-acoustic waves are tsunami precursors. Many analytical [1][2][3][4] and numerical [5][6][7][8][9] studies indicate that a fast sea-bed motion triggers pressure waves that propagate in the water layer. In-situ acoustic measurements [10][11][12] during submerged earthquakes have confirmed the possibility of recording the generated hydro-acoustic waves. However, further research is needed before these acoustic signals can be used as tsunami precursors in real-time systems. As pointed out by [13], the Negros-Cebu 2012 earthquake compressed the overlying water column, triggering the propagation of hydro-acoustic waves. In their study, the authors analyzed an underwater acoustic signal recorded incidentally by a scuba diver during the earthquake. The recorded audio revealed a specific spectral signature a few seconds after the earthquake, which has been attributed to hydro-acoustic waves by comparison with the results of a two-dimensional numerical simulation. The aim of the present paper is to extend their study by performing a three-dimensional computation that includes the entire fault zone and the surrounding bathymetry. The aim of the research is to evaluate whether a more detailed modelling, which takes into account the full three-dimensional features of the bathymetry and of the earthquake, can provide relevant insight into the phenomenon. The next section describes the earthquake event and the spectral analysis of the recorded audio signal. Section 3 describes the numerical model implemented to reproduce the hydro-acoustic waves generated by the earthquake and propagated to the measuring point, and the comparison between the numerical results and the in-situ measurements. Finally, conclusions are given.
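For orientation, the orders of magnitude quoted above can be reproduced in a few lines. The values are illustrative: the 500 m depth anticipates the Tañon Strait bathymetry discussed below, and the tsunami celerity uses the standard shallow-water estimate √(gh).

```python
import math

g, h, c_s = 9.81, 500.0, 1500.0              # gravity, water depth, sound celerity in water
tsunami_celerity = math.sqrt(g * h)          # ~70 m/s at 500 m depth (100-200 m/s at oceanic depths)
acoustic_period = 4.0 * h / c_s              # fundamental hydro-acoustic period: ~1.3 s
acoustic_frequency = 1.0 / acoustic_period   # ~0.75 Hz for the first acoustic mode
print(tsunami_celerity, acoustic_period, acoustic_frequency)
```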
In-Situ Observation of the Negros-Cebu 2012 Earthquake On 6 February 2012, Negros Island in the Visayan region of the central Philippines was struck by a 6.7 Mw earthquake. The epicenter (latitude 9.99° N, longitude 123.21° E, 11 km depth, from USGS data) was located a few kilometers east of the coast of central Negros, in the vicinity of the coastal towns of La Libertad and Tayasan (see Figure 1). The earthquake caused damage to infrastructure on the east coast of Negros Island and killed at least 50 people. The paper [14] describes in detail the seismo-tectonics of this event, aiming at characterizing the earthquake fault by studying onshore structural field observations, analyzing earthquake data and interpreting offshore seismic profiles. This detailed analysis provided seismic information on the earthquake, which is used as the source term of the hydro-acoustic wave model presented in this paper. The Tañon Strait is the water body that divides the islands of Negros and Cebu, where the earthquake occurred. The strait is about 161 km long, and its width varies from 5 to 27 km. The deepest area of the strait covers the region around the earthquake epicenter and the position of the scuba diver who recorded the underwater audio; here the maximum water depth is almost uniform, around 500 m. A tsunami was generated by the 6.7 Mw earthquake, inundating the coasts of both islands, fortunately without relevant damage. During this event, an Italian scuba diver, Mr. Riccardo Scultz, was diving near the coast of Cebu island, recording with a GoPro camera. His position has been reconstructed to be at 9.97° N, 123.36° E, at a water depth of 11 m. He reported first noticing a vibration of the sea floor and fish suddenly behaving differently; then he heard an anomalous noise. A few minutes later he was out of the water on the boat with the other scuba divers; they perceived the tsunami waves and sailed offshore. Both the video and audio records provided by the scuba diver have been carefully analyzed in [13]. The authors reconstructed the time sequence of the events from the video and audio record and computed the seismic and hydro-acoustic wave arrival times using the mean pressure-wave celerities in the ground and in water. The distance between the epicenter and the measurement location is 12 km; therefore, the seismic and hydro-acoustic waves are expected to arrive approximately 4 and 8 s after the quake, respectively, considering a wave celerity of 3000 m/s in the ground and 1500 m/s in water. In the previous work, [13] performed a time-series analysis of the audio record, subdividing it into windows of 34 s: one before the computed arrival time of the hydro-acoustic waves, one immediately after, and others later. We report in Figure 2 the frequency spectrum of the recorded signal before and after the computed arrival time of the acoustic waves. Looking at the frequency spectrum of the time series before the earthquake, it can be noted that the amplitude oscillations are in the frequency range of 80-180 Hz, which can be considered the frequency band of the background noise. Considering the time interval immediately after the estimated arrival time of the hydro-acoustic waves (lower panel of Figure 2), the acoustic signal also oscillates at lower frequencies, i.e., 10-50 Hz.
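The arrival-time estimate and the windowed spectral analysis described above can be sketched as follows. The sampling rate, the function name and the synthetic placeholder signal are assumptions on our part; the original recording and the processing code of [13] are not published.

```python
import numpy as np

# Arrival-time estimate used in the text: 12 km epicentral distance,
# ~3000 m/s in the ground and ~1500 m/s in water.
distance = 12_000.0
t_seismic = distance / 3000.0          # ~4 s
t_hydroacoustic = distance / 1500.0    # ~8 s

def window_spectrum(audio, fs, t_start, duration=34.0):
    """Amplitude spectrum of one 34 s analysis window of the recorded audio."""
    i0 = int(t_start * fs)
    seg = audio[i0:i0 + int(duration * fs)]
    seg = seg - seg.mean()                               # remove the DC offset
    spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    return freqs, spec

# Example with synthetic data: compare a pre-event window with a window
# starting at the estimated hydro-acoustic arrival time.
fs = 48_000                               # assumed sampling rate of the camera audio
audio = np.random.randn(120 * fs)         # placeholder for the real recording
f_before, s_before = window_spectrum(audio, fs, t_start=0.0)
f_after, s_after = window_spectrum(audio, fs, t_start=t_hydroacoustic)
```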
Since this specific spectral signature, recorded 8 s after the earthquake, has not been detected within other time intervals of the signal, it is assumed to be related to the hydro-acoustic waves generated by the sea-bottom displacement. By simulating the generation and propagation of hydro-acoustic waves in the vertical section through the epicenter and the scuba-diver position, [13] have demonstrated that these pressure waves show a spectral signature similar to that recorded. Therefore, in this paper, in order to investigate further the modelling of the generation and propagation of hydro-acoustic waves, a 3D numerical simulation has been carried out. Numerical Simulations Three-dimensional numerical simulations that cover the entire seismic fault zone have been implemented. In the next subsection, details of the model are given; in the following one, results are presented in terms of pressure-wave time series at the measuring point, i.e., the approximate location of the scuba diver. Description of the Numerical Model The model solves the wave propagation in a weakly compressible, inviscid fluid, where the waves are generated by a sea-bed motion. In the framework of the linear theory, the governing equation for the fluid potential Φ(x, y, z, t) is Φ_tt = c_s² (∇²Φ + Φ_zz) (1), where ∇² is the Laplacian in the horizontal plane x, y, subscripts with independent variables denote partial derivatives, and c_s is the celerity of sound in water, set to 1500 m/s. The free-surface boundary condition, which includes both the dynamic and the kinematic conditions, is set as Φ_tt + g Φ_z = 0 at z = 0 (2), where g is the gravity acceleration. The waves are generated by imposing the following condition at the sea-floor boundary: Φ_z + ∇h · ∇Φ = −h_t at z = −h (3), where h is the water depth, given by the bottom topography at rest h_b(x, y) net of the earthquake bottom displacement ζ(x, y, t), i.e., h(x, y, t) = h_b(x, y) − ζ(x, y, t) (4). From (4), the water-depth time variation h_t is zero everywhere except in the earthquake zone. In order to reproduce the sea-bed velocity due to the earthquake (ζ_t), the Okada formula [15] has been used to calculate the sea-floor static deformation from the principal seismic parameters. This deformation has been assumed to occur over a time interval of 1 s. Following the study of [14], the fault zone is characterized by a 210° strike, 47° dip, 90° rake and a fault length of 20 km. The numerical domain between the free surface and the sea floor is confined by two artificial surfaces at the lateral boundaries, separating the domain from the open sea, where the approximate radiation condition, valid for planar waves orthogonal to the boundary, is imposed: Φ_n + (1/c_s) Φ_t = 0 (5), where n indicates the direction normal to the considered boundaries. Figure 3 shows the bathymetry of the Tañon Strait in plan view (left) and in 3D view (right). The available nautical charts of the area have been used to extract the bathymetric data, which have been interpolated over a regular grid, equally spaced every 100 m in the x and y directions, and used as the bottom surface of the numerical domain. A 3D mesh with linear elements has been built, with maximum and minimum element sizes of 100 m and 0.5 m, respectively. A scaled geometry has been used to distinguish between the horizontal and vertical directions; indeed, in the x and y directions a mesh up to 10 times coarser is used. The mathematical problem is solved using the Finite Element Method, with the MUltifrontal Massively Parallel sparse direct Solver (MUMPS). Time integration is carried out for 100 s, with a ∆t of 0.005 s.
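For illustration only, the sketch below integrates equation (1) with finite differences on a vertical slice of constant depth, forced through the bottom condition (3) with a prescribed bed velocity. It is not the finite-element/MUMPS setup used in the paper: the free-surface condition (2) is replaced by the pure acoustic approximation Φ = 0 (which discards surface gravity waves), the radiation condition (5) is replaced by crude fixed lateral walls, and the source geometry is invented.

```python
import numpy as np

# 2-D (x, z) leapfrog scheme for phi_tt = c_s**2 * (phi_xx + phi_zz), cf. equation (1),
# forced through the sea-bed condition (3) with a prescribed bed velocity.

c_s = 1500.0                   # sound celerity in water [m/s]
h, Lx = 500.0, 20_000.0        # depth and horizontal extent [m] (illustrative)
dx = dz = 25.0
dt = 0.005                     # same time step as in the paper; CFL number ~0.42 < 1
nx, nz = int(Lx / dx) + 1, int(h / dz) + 1
nt = int(10.0 / dt)            # simulate 10 s

x = np.linspace(0.0, Lx, nx)

def bed_velocity(t):
    """Uplift velocity of a ~5 km wide patch lasting 1 s (invented source shape)."""
    return (1.0 if t <= 1.0 else 0.0) * np.exp(-((x - Lx / 2) / 2500.0) ** 2)

phi_prev = np.zeros((nx, nz))  # z-index 0 = sea bed, nz-1 = free surface
phi = np.zeros((nx, nz))
receiver = []                  # pressure proxy p = -rho*phi_t near the surface

for n in range(nt):
    t = n * dt
    lap = np.zeros_like(phi)
    lap[1:-1, 1:-1] = ((phi[2:, 1:-1] - 2 * phi[1:-1, 1:-1] + phi[:-2, 1:-1]) / dx**2
                       + (phi[1:-1, 2:] - 2 * phi[1:-1, 1:-1] + phi[1:-1, :-2]) / dz**2)
    phi_next = 2 * phi - phi_prev + (c_s * dt) ** 2 * lap
    phi_next[:, -1] = 0.0                                    # free surface (acoustic limit)
    phi_next[0, :] = phi_next[-1, :] = 0.0                   # crude fixed lateral walls
    phi_next[:, 0] = phi_next[:, 1] - dz * bed_velocity(t)   # sea bed: phi_z = zeta_t
    receiver.append(-1000.0 * (phi_next[nx // 4, -2] - phi[nx // 4, -2]) / dt)
    phi_prev, phi = phi, phi_next
```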
Comparison with In-Situ Records In order to compare the numerical results with the recorded audio, the former have been extracted at the approximate location where the video and audio signals were measured during the earthquake. Mr. Scultz reported that he was diving at approximately 11 m of water depth; he was not there for scientific purposes and recorded the video and audio incidentally during the earthquake; therefore, the water depth during the measurement is assumed to be 11 m, although it is not known precisely. Figure 4 shows the comparison. We report the recorded spectral signal (Figure 4a) and those reproduced numerically (Figure 4c), including the results of the two-dimensional simulation of [13] (Figure 4b). The three spectra in Figure 4 report the moving average of the original signals, using a frequency window of 2 Hz. Both numerical simulations confirm that the hydro-acoustic waves generated by the 2012 Negros-Cebu earthquake exhibit a spectral signature very similar to that recorded. Differences between the signals have several causes. Firstly, the recorded signal has been considered over a time interval of 34 s but was not filtered from the background noise. Secondly, the coastline reproduced in the numerical models is assumed to be a fully reflective boundary, i.e., an impermeable wall. Moreover, the sea floor has been considered impermeable, so the effect of unconsolidated sediment on the hydro-acoustic wave propagation has been neglected. The uncertainty in the exact recording position and water depth would also affect the quality of the comparison since, as shown by [16], the water depth and the distance from the epicenter significantly influence the pressure field. We remark here that the aim of the simulation was to investigate the frequency band of the hydro-acoustic wave energy. The comparison of the spectral signals is only qualitative, because it was carried out without knowledge of the frequency response of the GoPro camera, which was contained in an impermeable pocket. The 3D model result confirms that hydro-acoustic waves propagate within the frequency range 10-50 Hz. Considering the full 3D bathymetry, a slightly broader spectrum can be noted in comparison with the result of the 2D simulation. Thus, the present results appear to be more similar to the measurements than those obtained using the 2D computations. As stated by [12,16], the water depth and the sea-floor morphology affect the hydro-acoustic wave propagation. However, in the present case the sea floor does not present seamounts or trenches between the epicenter and the recording location, which justifies even a 2D simulation along the vertical section. Conclusions The spectral analysis of the underwater audio recorded during the Negros-Cebu 2012 earthquake revealed a specific signature that was attributed by [13] to the measurement of propagated hydro-acoustic waves. This paper extends the previous work by performing a 3D numerical simulation over a sea-floor domain that includes the entire seismic fault. The sea-bed displacement has been reconstructed using the Okada formula [15] and the seismic data elaborated by [14]. The time interval needed to reach the computed static deformation has been assumed to be 1 s. A parametric analysis varying the velocity of the sea-bed motion has been carried out in 2D, i.e., considering only the vertical section of the water layer that connects the center of the seismic zone with the position of the recording scuba diver.
The result of the present 3D analysis confirms that the hydro-acoustic wave energy is distributed in the frequency range of 10-50 Hz, as stated by [13] and confirmed by the audio record. Given the fault length and the bathymetry, which is uniform along the strike direction of the earthquake, the 3D model provides a slight improvement in the reproduction of the acoustic signal for this event. Funding: This research received no external funding.
3,099.6
2019-07-09T00:00:00.000
[ "Environmental Science", "Geology", "Physics" ]
Prescriptive Unitarity with Elliptic Leading Singularities We investigate the consequences of elliptic leading singularities for the unitarity-based representations of two-loop amplitudes in planar, maximally supersymmetric Yang-Mills theory. We show that diagonalizing with respect to these leading singularities ensures that the integrand basis is term-wise pure (suitably generalized, to the elliptic multiple polylogarithms, as necessary). We also investigate an alternative strategy based on diagonalizing a basis of integrands on differential forms; this strategy, while neither term-wise Yangian-invariant nor pure, offers several advantages in terms of complexity. Introduction and Overview Generalized unitarity has proven an extremely powerful framework for the representation of scattering amplitudes at large multiplicity and/or loop order. The basic idea is that any loop integrand -a rational differential form on the space of internal loop momenta-can be viewed as an element of a basis of standardized Feynman loop integrands. Provided the basis of integrands is large enough, it can be used to represent all the scattering amplitudes of any theory and spacetime dimension. This idea has a long history (see e.g. [1,2]); it was formalized and used to famous effect in e.g. [3][4][5][6][7][8][9], and has been recently refined, generalized, and put to use for many impressive applications (see e.g. [10][11][12][13][14][15]). Among the many advantages of this approach is that the basis of integrands, so long as it is large enough, is sufficient to represent literally all amplitudes (arbitrary multiplicity and states) in a wide class of theories at any loop order. Thus, the integrands in the basis need only be integrated once and for all-reusable for any process of interest. As loop integration has been (and remains) among the hardest problems in perturbative quantum field theory, this is a very important feature. This makes clear the importance of choosing 'good' integrands for a basis-the precise measures of which have evolved greatly with time. (Roughly speaking, a good basis would consist of integrands which can be integrated 'most easily' or which result in the 'simplest' expressions.) Another advantage to unitarity is that coefficients of particular amplitudes with respect to a basis can be computed in terms of mostly (and often wholly) on-shell data-specifically, on-shell functions [8,9,[16][17][18][19][20][21]. When these on-shell functions are leading singularities, they have no internal degrees of freedom. Historically, leading singularities have been defined as maximal co-dimension residues (of polylogarithmic differential forms); more recently, this definition has been modified and generalized to include any full-dimensional compact contour integral of a scattering amplitude integrand [22]. Leading singularities have played a key role in the development of our modern understanding of quantum field theory (see e.g. [5,15,[23][24][25][26][27][28][29][30]), and many of the remarkable aspects of scattering amplitudes (their simplicity, and wide range of symmetries) were discovered in this context. For example, the BCFW recursion relations for tree-amplitudes were first discovered in this setting [3,31,32], as was the infinite-dimensional Yangian symmetry of planar maximally supersymmetric Yang-Mills theory (sYM) [33][34][35], and the correspondence between on-shell functions in sYM and residues of the positroid volume-form in Grassmannian manifolds [36,37]. 
When leading singularities are used to determine the coefficients of loop amplitudes with respect to some integrand basis, generalized unitarity becomes a relatively simple problem of linear algebra-matching the 'cuts' of field theory against the corresponding cuts of the integrand basis. Until recently, however, it was unclear if leading singularities represented complete information about perturbative scattering amplitudes even in the simplest theories. The reason for this uncertainty lies in the fact that, for sufficiently large multiplicity and/or loop order, scattering amplitude integrands in most theories are not 'dlog' differential forms [38][39][40][41][42][43][44][45][46] and cannot be characterized by (maximal co-dimension) residues alone. For such cases, the traditional definition of leading singularity becomes incomplete; and the most typical strategy to deal with non-polylogarithmic contributions has been to use the highest co-dimension residues that exist (sub n -leading singularities), and then use sufficient numbers of off-shell evaluations to match a loop integrand functionally on the remaining degrees of freedom. (Examples of such strategies being used can be found in [15,16].) The result of this approach, however, has many obvious disadvantages; in particular, it results in representations of amplitudes that (at least term-by-term) involve references to arbitrary choices (for the off-shell evaluation) which can break many of the niceties that scattering amplitudes are known to posses. Before we discuss any concrete examples, it is worth highlighting a conventional difference between this work relative to virtually all existing literature: we have chosen to write all loop integration measures in terms ofd 4L whered:= d/(2π) (by analogy to ' '). As such, many of our results differ by powers of (2πi) relative to those found elsewhere. This choice is motivated by the fact that an integrand normalized to have unit residues with respect to the measure d 4L will have unit contour integrals with respect tod 4L . As such, most formulae appear identical to other literature; a notable exception, however, is the case of sub-leading singularities, for which our convention requires relative factors of i. To illustrate how prescriptive unitarity can work when there are elliptic contributions, consider the elliptic double-box integrand for massless, scalar ϕ 4 -theory in four dimensions: Above, ( i |a) represents an ordinary, scalar inverse propagator expressed in dualmomentum coordinates (the details of which we review below) andd:= d 2π . At any rate, (1.1) is an 8-dimensional (rational) differential form on the space of loop momenta. Taking a contour integral which puts all seven propagators on-shell, however, results in an elliptic differential form on the remaining variable 1 : where we have used α to represent the final loop momentum variable, y 2 (α) is an irreducible quartic (with coefficients that depend on the momenta of the particles involved), and c y is a factor introduced to render y 2 (α) monic. The precise details are not important to us now; but it is easy to see that (1.2) represents an elliptic differential form, without any further 'residues' on which we may define a traditional leading singularity. Recently [22], a broader definition of leading singularity has been introduced to include any full-dimensional compact contour-integral of a scattering amplitude integrand. 
With this new definition, we can in fact define a leading singularity for the double-box integrand by integrating (1.2) over, for example, the a-cycle, Ω a , of the elliptic curve: here, ϕ is a cross-ratio in the roots r i of the quartic (the details of which are not important to us now). If we wanted to chose a basis of integrands which would be normalized to 'match' this leading singularity prescriptively, we would need to normalize the double-box integrand accordingly: Thus, a prescriptive representation of the 10-particle scattering amplitude in this theory at two loops would involve a term While this example may seem overly trivial (especially considering that the original scalar integrand in (1.1) is literally a term in the Feynman expansion!), the re-writing of it according to prescriptive unitarity according to (1.5) has a remarkable feature: the now-normalized basis integrand I db is in fact dual-conformally invariant and pure in the sense defined by the authors of [47]-as such, it is arguably the simplest possible form of the integral (and, presumably, the easiest to integrate). (For a broader discussion of integrand 'purity', we suggest the reader consult refs. [47,48]; for the present, we merely mention this to emphasize that such integrands, and the differential equations that they satisfy (see e.g. [49]) have been defined so as to manifest many remarkable properties.) Organization and Outline In this work, we generalize and expand upon the discussion above to the case of two-loop amplitudes in planar, maximally supersymmetric (N = 4) Yang-Mills theory (sYM). A closed formula for all such amplitude integrands was first derived in ref. [15] (representing an early application of what became known as 'prescriptive' unitarity [16]), which succeeded despite the presence of elliptic integrals due to carefully-made (but arbitrary) choices for off-shell evaluations in combinations of leading and subleading singularities. The resulting representations given in [15,50,51] involved terms that were neither Yangian-invariant, nor integrands that were pure. In section 2 we review the salient elements of two-loop prescriptive unitarity, as well as the novel generalization of elliptic leading singularities introduced in [22]. In section 3, the main result of this paper, we revisit the prescriptive unitarity story in the light of our recent work. In particular, we derive two novel representations of amplitudes in planar sYM at two loops, both defined completely prescriptively and unambiguously. The first, described in section 3.1, involves a prescriptive integrand basis chosen by diagonalization on leading singularities (in the new, broader sense); it is intrinsically homological, and results in a representation of amplitudes that, termby-term, involves Yangian-invariant coefficients and pure integrals. In section 3.2, we describe an alternative representation based instead on a cohomological diagonalization of the integrand basis. The resulting form is simpler in many ways (especially algebraically), but involves coefficients that are not Yangian-invariant and a basis of loop integrands that are not generally pure. Review: Prescriptive Integrand Bases for 2-Loop sYM In this section, we briefly review the ingredients of the representation of twoloop integrands in planar sYM as described in ref. [15]. More complete details can be found in [15,16]. 
For what we need in the following sections, the details of how numerators are chosen for the double-pentagons and pentaboxes will not be critical to us-except for the role played by the double-box integrands as 'contact terms' of these basis elements. Bases of Dual-Conformal Integrands: General Structure A very useful (and arguably accidental) feature of planar integrands at two loops is that a complete and not over-complete basis of dual-conformal integrands exists. This is in contrast to one loop or three or more loops, for which dual-conformality apparently requires over-completeness (see e.g. [52][53][54]). At two loops, a dual-conformal basis can be chosen that consists of three classes of integrands-the double-boxes, pentaboxes, and double-pentagons: (2.1) To be clear, these pictures represent the corresponding set of scalar, massless Feynman propagators and the indices {i, j} ∈ {1, 2} indicate particular choices of loopdependent numerators. All the integrands in our basis can be normalized so-as to be dual-conformal (and when possible, pure). For those integrands with exclusively residues of maximal co-dimension, they are normalized to have unit leading singularities on a choice of such a contour, and made to vanish on all such defining contours for all other integrands in the basis. Most of the integrands, however, have support on double-box sub-topologies which have elliptic structures and therefore cannot be realized as dlog differential forms. It is useful to bear in mind that the loop-independent numerators and the overall normalization of the integrands in (2.1) have considerable flexibility. In particular, even after imposing dual-conformality, the space of possible numerators is relatively large. For the pentabox integrands, dual-conformal, loop independent numerators span a six-dimensional space, which may be decomposed into four contact-term numerators (proportional to one of the four inverse-propagators associated with the edges of the pentagon side of the integrand), and two complementary 'top-level' degrees of freedom (schematically indexed by i ∈ {1, 2}). Graphically, this ambiguity reflects the fact that the double-box topology can be obtained by contracting one of the edges {a, b, c, d}, Algebraically, this can be understood by decomposing the vector space of numerators into the following basis, where Y 1,2 are the solutions to the quadruple cut ( 1 |a) = ( 1 |b) = ( 1 |c) = ( 1 |d) = 0, whose precise form does not concern us here. Thus, each pentabox integrand, is not fully-specified until the four (loop-independent, but kinematic-dependent) 'constants' n i have been specified. The top-level normalization, n i 0 is determined by requiring that (some combination of) the polylogarithmic leading-singularity which encircles the eight Feynman propagators of the pentabox is unity. Triangular Structure of Basis Integrands' Contact-Terms In prescriptive unitarity, the basis of integrands is chosen to be diagonal with respect to some choice of leading singularities or 'cuts' (or, as we will explore later, with respect to forms). To see how this works, consider the double-pentagon integrands. Among all the integrands in the basis (2.1), only the double-pentagons have support on the so-called 'kissing-box' leading singularities of field theory: Here, the indices {i, j} ∈ {1, 2} label the two solutions (for each loop separately) to the cut-equations which put these eight propagators on-shell. 
Thus, we choose the 2×2 non-contact-term degrees of freedom of the double-pentagon integrands to be unit on the corresponding contour integrals (and to vanish on the other solutions). As no other integrands in the basis have support on these cuts-there being no other integrands with a corresponding subset of propagators-we are ensured to match field theory on all these contour integrals manifestly, regardless of how the rest of the integrand basis is chosen. However, notice that matching the kissing-box contours (2.4) in field theory using double-pentagon integrands would be safe regardless of the particular choices for the contact terms of the double pentagon. Thus, there remains a (6 2 − 4 =)32-dimensional space of possible double-pentagon integrands for our basis even after the top-level degrees of freedom have been chosen, reflecting the 4 contact term ambiguities for each loop's numerator. Of these, 16 contact-term degrees of freedom will be fixed by the requirement that the double-pentagon integrands vanish on the leading singularities upon which the pentaboxes get normalized; the remaining 16 correspond to (potentially elliptic) double-boxes, about which we will have much more to say. The pentabox integrands, in turn, have a bit more subtlety in their definition. For one thing, because the pentabox integrands have only two non-contact-term degrees of freedom in their numerators, they cannot be used to match all four pentabox leading-singularities: where, as before {i, j} ∈ {1, 2} label the particular solutions to the cut-equations which put the eight propagators on-shell. Relatedly, it is not possible to make the double-pentagon integrands vanish on all pentabox contours. This apparent tension is in fact fairly trivial and easy to resolve: we merely need to make some choice of two of the pentabox leading singularities (or two independent combinations thereof), on which to normalize the pentabox integrands, and then require that the doublepentagons vanish on each of these contours. Once this is done, all four of the pentabox leading singularities will be matched in the basis: two, manifestly by the pentaboxes, and the complementary pair matched indirectly by residue theorems involving the double-pentagon integrands. This may seem magical, but follows necessarily from the completeness of the integrand basis. What remains open, however, are the questions of the double-boxes: how should they be normalized, and how should the double-box contact terms of the pentaboxes and double-pentagons be defined? For all the double-box integrands which do support residues of maximal co-dimension (that is, any which involves at least one three-particle vertex), the strategy above may be iterated once more without further subtlety, using a 'composite' leading singularity. (For amplitudes with fewer than ten particles, all double-boxes have support on such additional 'cuts'; and a complete basis can be made polylogarithmic and 'pure' in this way.) For double-box integrands without (traditional) leading singularities, however, the description above falls short (or at least, is incomplete). We will not review how this question was resolved by the authors of ref. [15,16,55], in part because we will find more elegant solutions here. 
For the sake of our current investigation, however, let us assume that a spanning set of maximal co-dimension, polylogarithmic contours have been chosen to define the top-level degrees of freedom of the pentabox and double-pentagon integrands; as such, we may take for granted that all the pentabox and kissing-box leading singularities of field theory are matched by these integrands in the basis-leaving only the question of matching the sub-leading singularities associated with non-polylogarithmic double-boxes. That is, we need only to address the double-box integrands-to determine their normalizations, their coefficients in field theory, and how to 'diagonalize' the pentaboxes and double-pentagons with respect to these choices. Let us therefore consider these integrands in some detail. Elliptic Double-Box Integrands Up to a normalizing factor denoted n db , the double-box integrands in the DCIpower-counting basis may be defined to be Here, we have included a conventional factor in the numerator which ensures dualconformal invariance, and we have used dual-momentum coordinates for which the massless external momenta are given by p a =:x a+1 −x a (with cyclic labeling understood), and with the association of i ⇔ x i . In terms of dual-momentum coordinates, the Lorentz invariants are defined by (a|b): What should the normalization of this integral be so as to make the representation of field theory amplitudes maximally transparent? One answer comes from the fact that the full-dimensional compact contour integral of the amplitude in planar sYM-which 'encircles' the seven poles corresponding to the propagators of (2.6) and then uses one of the fundamental cycles of the elliptic curve-is Yangian invariant [22]. Let us briefly review this story here, but in the general case-where the momenta flowing into the corners of the box are arbitrary. Elliptic Leading Singularities (& Other On-Shell Functions) Let us start with the sub-leading singularity associated with a contour in field theory with the topology of a double-box integral as shown in (2.6). That is, we'd like to define the sub-leading singularity associated with 2 where α represents the single on-shell degree of freedom of the sub-leading singularity, and ± denotes which of the two branches of the seven-cut equations is chosen. This could easily be represented in the Grassmannian according to [56,57], but we prefer to disambiguate its structure by writing it explicitly. To do this, we express db ± (α) as simple-pole-enclosing contour integral over the co-dimension-six 'kissing triangle' function, for which the non-vanishing term involves three R-invariants, which is a special case of the general expression given in Appendix A and B of [15], written here in terms of R-invariants with (shifted) momentum super-twistor [58] arguments defined as To compute the double-box sub-leading singularity, we consider a 'residue' contour encircling the pole at a c d f = 0, which is a quadratic condition involving both α and β. Without any loss of generality, we are free to solve this condition for β; the (two) branches of solutions to a c d f = 0 are then given by where y 2 (α) is a quartic polynomial defined as 11) and the factor of 1/c 2 y is included to render the quartic y 2 (α) monic by construction. The residue of the kissing-triangle hexa-cut function at β → β * ± follows by noting that Notice that the ± sign on the left hand side reflects the choice of the branch of the cut equations, and y(α) is a square root of the quartic (2.11). 
Since there are two different solutions to a c d f = 0, it makes sense to define 'the' double-box sub-leading singularity db(α) as the difference of the kissing-triangle on the two contours. We therefore define db + (α) − db − (α) =:db(α) =: where in the second equality we have defined a useful 'hatted' double-box function where the inverse of the square root of the quartic has been factored out. Let us now return to (2.7). The double-box on-shell function has various factorization channels corresponding to the six amplitudes appearing in its definition. Each of these channels corresponds to a simple pole of db(α) at, say, α = a i ∈ C on which we may define an additional contour about a d log pole. According to prescriptive unitarity, each such residue is topologically equivalent to a 'pentabox' leading singularity, which we denote by pb i . We can, therefore, expand our next-to-leading singularity in a basis of those factorizations into pentaboxes according to where the α-independent coefficient of the basis element without any extra simple poles, denoted by db 0 , is defined as: While it is not manifest in (2.15), the coefficient db 0 is, by construction, independent of α. This trivial-yet important-point follows directly from Liouville's theorem of elementary complex analysis: as we have removed every pole, including the pole at α = ∞, the 'function' db 0 is entire on the Riemann sphere, that is, it must be a constant. At the risk of being overly illustrative, let us emphasize that this implies that we may equivalently write where α * is an arbitrarily chosen point. The recent work [22] defines an 'elliptic' leading singularity by integrating the double-box seven-cut differential form db(α) over either the a or b 'cycle' of the elliptic curve defined by y 2 (α), To perform the integral over the a-cycle, (or any cycle for that matter) it is quite useful to factorize the monic quartic polynomial defined in (2.11) in terms of its roots, where, for positive kinematics [59], the roots form two complex conjugate pairs {r 1 , r 2 } and {r 3 , r 4 }, and are ordered so that Re(r 1 ) > Re(r 3 ) and Im(r 1,3 ) > 0. Choosing the branch cuts to connect each complex conjugate pair of roots, we now define the a cycle contour, Ω a , to enclose the cut between r 1,2 . To compute the full a-cycle integral we use the two formulae and where our definitions of the complete elliptic integrals of the first and third kinds, K[ϕ] and Π[q, ϕ], respectively, are in agreement with Mathematica's. We have provided a different-and perhaps more easily generalizable-representation of these elliptic period integrals in terms of Lauricella functions in appendix A. The attentive reader will notice that these two integral formulae differ from the results quoted in [22]; this discrepancy is a consequence of our use ofd = d/(2π) throughout this work. Following the discussion in [22], observe that in the definition (2.14) of the double-box one-form, the coefficient of the pentabox pb i is the same as in (2.16), but for the appearance of α * in place of α. Upon performing the integration, if we choose α → r 4 the terms involving K[ϕ]pb i thus cancel, and we are left with If instead we consider the b-cycle, say the one encircling a branch cut which connect two roots with different real part (e.g. r 1 and r 3 ), we have 23) which is the same with the first expression after the exchange r 2 ↔ r 3 . 
As discussed at greater length in [22], both expressions (2.22) and (2.23) are non-trivially Yangian invariant, as may be verified by direct computation using, for example, the level one generators written in momentum super-twistor space. New Prescriptive Representations: Two Approaches We shall now argue that the results of the previous section suggest two natural prescriptions for the normalization of the double-box integrand which avoid entirely the arbitrary choices of the original prescriptive unitarity program. Homological Diagonalization-with Respect to Contours One natural choice for the normalization of the double-box integrand n db would be analogous to the choice made in the introduction for scalar ϕ 4 theory. That is, we may choose to normalize the integrand so that it integrates to 1 on the contour associated with, say, the a-cycle of the elliptic curve; since the loop-momentumdependent part of the integrand is proportional to 1/y(α), this fixes its normalization, using (2.20), to be This choice of normalization ensures that we match the elliptic leading singularity in field theory, e a , manifestly with the double-box integrand directly Notice that this representation, however, does not match the b-cycle leading singularity of the amplitude manifestly at all! Although we have normalized the double-boxes so that the a-cycle contours of field theory are manifest, the result on the b-cycle is, rather, which was given in (2.23). This is not in fact a problem: as we will see, the pentaboxes (and double-pentagon integrands) do have support on the b-cycle integrals (even after their contact terms have been fixed), and exactly give the necessary contributions to reproduce the correct b-cycle leading singularity in (2.23). (With hindsight, this 'magic' can be seen to follow directly from the completeness of the integrand basis.) Let us now discuss the implications of prescriptive unitarity for the contact-term rules of the pentabox and double-pentagon integrands. As always, prescriptivity requires that our integrands be diagonal in a choice of contours; therefore, in the homological scheme, the contact terms are determined by the requirement that all other integrands in our basis vanish identically on all elliptic Ω a -cycle contours associated with double-box sub-topologies. To see how this works in practice, consider a pentabox integrand which contains a double-box contact-term: where n i ρ is the coefficient of the term in the numerator proportional to ( 1 |ρ). Schematically, the full contribution of the pentabox on the a-cycle contour can be written as where a ρ corresponds to the pole where ( 1 |ρ) = 0. Notice that the leading term follows directly from the fact that the integral is normalized to have unit leading singularity on some pentabox contour. Diagonalization of the basis according to homologyleading singularities-fixes this contact-term coefficient, n i ρ , of the pentabox by the criterion that (3.6) vanishes. Namely, we must choose One important thing to note is that the pentabox integrands with these contactterms chosen do not vanish on the b-cycle contours. This is a good thing!-as the normalization we have chosen for the double-boxes makes the correctness of the amplitude integrand on the b-cycle extremely non-manifest in this representation. 
It is not hard to see, however, that when these contact terms are used, they generate on the b-cycle exactly the terms needed to cancel the 'wrong' elliptic-Π terms involving the pentabox leading singularities that arise from the double-box integrands in (3.3). Furthermore, the contact terms of the pentabox integrands also contribute the correct pieces involving the pentabox leading singularities appearing in e_b, (2.23), once the corresponding contributions from the double-pentagons are included (so as to match all pentabox leading singularities).

This diagonalization strategy has several obvious advantages. For one thing, it is morally the direct realization of 'prescriptive unitarity' according to a choice of leading singularities. Moreover, as emphasized in the introduction, it should have the property that all integrands defined in this way are pure, and all coefficients are Yangian-invariant. Nevertheless, there are several reasons to be dissatisfied with this basis of integrands. For example, it deeply obscures the fact that the integrand is a rational differential form in loop momenta. Choosing basis integrands whose normalization depends on the roots of quartics makes this representation fairly unwieldy in practice (at least for most computer algebra packages). Therefore, we are motivated to consider a slightly different strategy, with huge advantages in terms of (algebraic) complexity, but which abandons the desire for a pure integrand basis and requires the use of non-Yangian-invariant coefficients.

Cohomological Diagonalization (with Respect to Forms)

The attentive reader may already have guessed an alternative strategy for matching amplitudes in sYM: namely, according to the various differential forms that appear in the double-box sub-leading singularity db(α) in (2.14). Considering what the co-dimension-seven contour of the scalar double-box integrand (2.6) which encircles its seven propagators results in, it would be natural to choose n_db to be +i, and to match its coefficient in the representation of field-theory amplitudes to be c_y db_0. (With this factor of c_y implicit in the definition (3.8), this on-shell function is little-group neutral, if not Yangian-invariant.) Moreover, it is easy to see that this combination of integrand and coefficient automatically matches the leading term in e_b in (2.23). The only terms missing from both leading singularities are those involving the pentabox leading singularities pb_i. These not-yet-matched pieces of e_{a,b} can easily be seen to arise from the pentabox and double-pentagon contributions. Consider again how the contact terms appear in the pentabox integrands' contributions to the a-cycle, say, as in (3.10). Taking the contact-term numerator n^i_ρ → 0 will leave a non-vanishing contribution from the first term in (3.10) on both cycles. It is easy to see that the missing differential form (which results in the combination of the elliptic functions K and Π) in (2.21) is exactly that which appears in the pentabox integrands already, and the coefficients of these integrands are precisely the pb_i needed to reproduce the 'missing' pieces in the elliptic leading singularities e_{a,b}. This definition of the double-box and the corresponding rule for the contact terms of the pentabox and double-pentagon integrands result in an extremely simple prescription for the integrand. Moreover, it is morally equivalent to a choice of diagonalization with respect to the various (local) differential forms in loop-momentum space.
As such, we call such a prescription a cohomological choice for our basis. Despite the obvious advantages, we have checked that the coefficient of the double-box normalized in this way (namely, c_y db_0) is not Yangian-invariant. Moreover, the double-box integrands are not pure. We strongly suspect that the pentabox and double-pentagon integrands (which contain elliptic contact-term components) are similarly non-pure. Thus, despite the algebraic and conceptual simplicity of the cohomological approach just described, we suspect that the homological prescriptive representation will ultimately prove the superior one for integration.

Consistency Checks for Amplitude Integrands

As already mentioned in section 2, the pentabox integrands lack the requisite number of degrees of freedom to match all pentabox cuts of field theory term by term. However, the expressions for the elliptic leading singularities e_{a,b} involve them separately, arising at poles in different locations in the α-plane. Thus, our description above regarding the pentaboxes' roles in these leading singularities does not alone ensure that we have matched everything in field theory. The missing ingredients, of course, are the kissing-box leading singularities times double-pentagons. These terms, when combined with those of the pentaboxes, ensure that all the pentabox leading singularities do get matched correctly, and individually. Thus, we have been somewhat schematic in our analysis above, relying on the fact that these parts of the amplitude integrands are guaranteed to work correctly once taken in combination. Although this is ensured to work, we have checked it completely in the case of the 10-particle N³MHV amplitude. Specifically, we have checked that both diagonalization procedures described above (the homological and the cohomological) result in integrand representations that exactly match the results of BCFW recursion, say. Thus, we are confident this procedure is free of any overlooked subtleties.

Smooth Degenerations

The reader may have considered our preference for normalizing integrands with respect to the a-cycle Ω_a rather arbitrary. It was not entirely so. As discussed in [22], the a-cycle integral of the elliptic curve smoothly degenerates to 1 in all relevant cases (or kinematic limits). Thus, our analysis above, if applied to the case of double-box integrands that do support polylogarithmic contour integrals, reduces naturally to ordinary polylogarithmic ones. (Recall that a unit-residue integrand for d^{4L} is equivalent to a unit-contour integrand for \(\hat{d}^{4L}\).) Thus, we can apply the results of this work to truly general double-boxes, and thereby construct pure-integrand bases that happen to be 'd log' whenever the double-boxes' elliptic curves turn out to be degenerate.

Generalizations: More Loops and Calabi-Yau Manifolds

Beyond two loops (and for non-planar theories at two loops), non-polylogarithmic structures beyond elliptic integrals abound [39,44-46,60]. Examples of scalar integrals with such structures include the three-loop traintrack and wheel integrals, the maximal cuts (sub-leading singularities encircling all propagators) of which are known to involve Calabi-Yau 2- and 3-folds, respectively. To see how the corresponding non-polylogarithmic leading singularities can be incorporated analogously to what we have described for the elliptic case, let us briefly outline the structure expected for the traintrack contribution.
On the maximal-cut surface encircling all 10 propagators of the traintrack, the sub²-leading singularity should take the form (4.2), where y²(α, β) is an irreducible quartic in both variables {α, β} simultaneously [44]. As with the double-box sub-leading singularity (2.7), the traintrack in sYM will have multiple co-dimension-one simple poles around which there are elliptic sub-leading singularities. We may express this by decomposing (4.2) into a form where the terms in the sum, el^a_i and el^b_i, are elliptic sub-leading singularities arising from single-pole factorizations of the original traintrack. These in turn can be further decomposed analogously to (2.14), resulting in an expression of the form (4.4), where the pl_{a_i,b_j} are the 'penta-ladder' polylogarithmic leading singularities associated with simultaneous factorizations in two different loops. The number and detailed form of terms appearing in this decomposition will depend on the number of factorization channels of the traintrack (which depends on multiplicity), but the basic structure is clear: (4.4) is nothing but a decomposition of the sub²-leading singularity (4.2) into a basis of differential forms involving one or two simple poles, respectively, with superfunction coefficients. (For the scalar traintrack contribution to the three-loop 12-particle amplitude in scalar ϕ⁴ theory, only the leading term tr_0 (which happens to be (−1)) is required, as there are no factorization channels of the participating ϕ⁴ tree-amplitudes.)

The generalization of this analysis to the three-loop wheel integral, which involves a Calabi-Yau three-fold surface, is relatively straightforward, resulting in a decomposition of the sub³-leading singularity into: a top-level, irreducible volume form times some 'CY₃' leading singularity; three separate sums of K3 volume forms with single simple poles times 'K3' leading singularities; three double-nested sums of elliptic integrals with two simple poles times 'elliptic' leading singularities; and, finally, a triple sum of terms with simple poles times polylogarithmic leading singularities. It is worth mentioning that, unlike the three-loop traintrack, which is known to have support in sYM (as argued in [44]), the three-loop wheel is not any single component of an amplitude in planar sYM; as such, it is possible that the CY₃ leading singularity vanishes. This makes its evaluation an important open challenge, left for future work. (While we are unaware of closed analytic formulae for the period integrals that would be required for such a check, analogous to those in (2.20) and (2.21) for the elliptic case, we are relatively optimistic that numerical integration will work for these low-dimensional cases.)

The implications of these higher-dimensional Calabi-Yau leading singularities for prescriptive unitarity should be clear. In particular, we suspect that if the three-loop basis of planar integrands outlined in [16] were diagonalized homologically, the result would be a complete representation of amplitudes involving term-wise 'pure' integrals times Yangian-invariants; and the cohomological diagonalization of this basis into separate forms should be extremely straightforward to implement from decompositions as in (4.4).

Conclusions and Future Directions

In this work we have made use of the new, broadened definition of leading singularities (beyond the polylogarithmic case) to derive two new prescriptive representations of two-loop scattering amplitude integrands in planar sYM.
This analysis was illustrative of a more general strategy, with applications well beyond the planar limit and to theories with less (or no) supersymmetry. In many ways, our results directly reflect the primary goals of prescriptive unitarity: constructing loop-integrand bases that are diagonal in a spanning set of contours. For scattering amplitudes free of non-polylogarithmic structures, this strategy directly reproduces the d log differential forms of the traditional approach; but as we have seen, the generalization beyond polylogarithms is extremely natural. We have also motivated a different strategy: how to construct a prescriptive integrand basis diagonal with respect to cohomology; the resulting basis may not be pure and the coefficients required may not be Yangian-invariant, but the ultimate representation of loop integrands is dramatically simpler from an algebraic point of view.

We have described how this story illustrates a broader one, with applications well beyond the case of elliptic leading singularities in planar theories at two loops. It would be extremely interesting to apply these lessons more widely to generate (purportedly) 'pure' master integrals for applications beyond the planar limit, and to theories without supersymmetry. Although we have not proven the 'purity' of this broader class of integrals, and although the strategies and techniques required to efficiently exploit the differential structure of pure integrals are still being developed (even in the elliptic case, but see e.g. [61-63]), we strongly suspect that the prescriptive bases we have constructed will prove computationally valuable as master integrals for diverse applications. For example, constructing such bases for massive theories now appears straightforward, and undoubtedly has more immediate relevance for real physical applications (see e.g. [62-66]). But we leave such analyses to future work.

A. Hypergeometric Representations of the Elliptic Integrals

Although the representations for the complete elliptic integrals given in (2.20) and (2.21) above are fairly standard ones (with relatively efficient implementations in Mathematica, for example), it is worthwhile to outline an alternative form for these integrals, which we hope has some promise to generalize beyond the elliptic case. In this appendix, we outline how these elliptic periods can be expressed in terms of Lauricella hypergeometric functions. (We refer the reader to e.g. [67-69] for some discussions of these functions in the context of Feynman integrals.) Consider first the elliptic period integral given in (2.20):
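For orientation only (a textbook identity quoted for reference, not the appendix's own result), the complete elliptic integral of the first kind in Mathematica's convention is itself the simplest one-variable instance of such hypergeometric rewritings; the Lauricella functions F_D generalize the Euler-type integral below to several variables.

```latex
% Standard identity (Mathematica convention, parameter m): quoted here for reference only.
\[
  K[m] \;=\; \frac{\pi}{2}\,{}_{2}F_{1}\!\Big(\tfrac{1}{2},\tfrac{1}{2};1;m\Big)
        \;=\; \frac{1}{2}\int_{0}^{1}\frac{dt}{\sqrt{t\,(1-t)\,(1-m\,t)}}\,.
\]
```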
8,642.2
2021-02-03T00:00:00.000
[ "Physics" ]
Suspended Structures Reduce Variability of Group Risk-Taking Responses of Dicentrarchus labrax Juvenile Reared in Tanks Structural enrichment is considered a useful tool to improve the welfare conditions of captive fish by deliberately increasing the physical heterogeneity and complexity of captivity environments. However, the potential effects of structural enrichment on the stress response at the group level and on social interactions have not been well studied yet. In this study, we demonstrate that suspended vertical structures (U-shaped ropes) can reduce behavioural variability among fish groups (tank level) of European seabass (Dicentrarchus labrax) juveniles. Differences in behavioural responses during group risk-taking tests (e.g., number of passes per fish) between treatments were detected, and these responses in seabass in enriched captive conditions were more homogeneous among tanks compared to fish from non-enriched tanks. These results suggest a positive effect of the structural enrichment on social stabilisation and response to stressful events at the tank level in seabass. However, further research is still needed to improve the knowledge of the potential effects of structural enrichment on fish welfare and aquaculture management, considering different enrichment designs, intensities, and strategies according to farming conditions, biological needs, and preferences of the fish species and life-stage reared in captivity. Introduction Aquaculture is growing faster than other major food production sectors, and the increasing demand for cultured seafood requires further improvement of fish farming practices [1]. Many challenges must be overcome during the rearing process to obtain a final product with the desired characteristics suitable for sale [2]. Among these challenges, the welfare of captive fish can be severely affected by a wide number of stressors involved in routine husbandry [3]. In this sense, the application of environmental enrichment (EE) can help to improve the welfare of captive fish [4,5]. Among different EE strategies, the addition of physical structures in the rearing environment (i.e., structural enrichment) increases the heterogeneity and complexity of the rearing environment, providing diverse effects on the growth performance, physiology, behaviour, and health of farmed fish [6,7]. Vertically suspended plant-fibre ropes have been successfully used as an enrichment strategy for cultured gilthead seabream (Sparus aurata). Arechavala-Lopez et al. [8] reported positive effects of vertical ropes on juvenile seabream, such as aggressiveness reduction and modifications in spatial distribution, which led to fewer interactions with the experimental net-pen and better fin condition. In a similar study, Arechavala-Lopez et al. [9] demonstrated that vertical ropes could enhance seabream cognition, exploratory behaviour, and brain physiological functions in experimental rearing tanks. Adding suspended ropes to experimental sea cages of on-growing seabream increased the spatial use of fish in the net-pen [10]. Nevertheless, the existence of null or contradictory effects of structural enrichment on behaviour, physiology, and growth performance on other fish species of aquaculture interest highlights the need for further species-specific studies on the design and particular effects of different enrichment strategies [4,7]. 
Fish have the ability to modify behavioural and physiological traits to cope with stressful conditions at the individual level [11], so there are individuals within a population with different chances to succeed in the potentially unfavourable conditions of the rearing environment [12]. In this sense, two broad traits, the proactive and reactive stress coping styles, have been reported in fish [13]. Reactive fish are shier individuals that progressively adapt to stress conditions, whereas more aggressive and bolder individuals are considered proactive. Since the coping style is an inter-individual characteristic, reactive and proactive fish can cohabit in the same group. Moreover, farmed fish may establish a hierarchical system in which some individuals (dominants) may have total control over food, shelter, mates, and territory [14]. Indeed, dominant-subordinate relationships can affect physiological status and animal responsiveness [15], triggering social interactions and potential stress [16]. Social stress can be considered the result of physical contact between animals (e.g., high density, lack of space, and agonistic interaction) and psychological components, such as hierarchical instability and submission [16]. Therefore, the social environment can be a considerable source of stress, impairing the welfare of fish, but EE can help to stabilise hierarchies and social interactions within a group [17]. One way to evaluate the effects of EE on the ability of fish species to cope with stressful rearing conditions, and to ensure good welfare, is through the characterisation of risk-taking behaviour [18][19][20][21]. Therefore, the aim of this study was to assess the potential effects of suspended structural enrichment on risk-taking behavioural responses at the group level (as a proxy of social stabilisation) of European seabass (Dicentrarchus labrax) juveniles reared in experimental tanks. Seabass is one of the most important fishes in terms of aquaculture potential in Europe [22], and it is well known that the early life stages of this species are especially sensitive to acute stressors [23]. Increasing the knowledge on the possible effects of structures as EE will provide important tools to improve the welfare of captive fish, leading not only to ethical but also to economic benefits for fish farmers [4]. Experimental Design and Settings A total of 420 seabass (mean standard length ± SD: 9.8 ± 1.1 cm; mean body weight ± SD: 16.5 ± 5.6 g) were obtained from a commercial hatchery (Aqüicultura Balear S.A.-Culmarex, Palma de Mallorca, Spain) and acclimated to laboratory conditions for one week at the Laboratory of Marine Research and Aquaculture (LIMIA) in Port d'Andratx, Spain. Then, seabass were randomly distributed in 6 circular tanks (water volume 150 L, Figure 1A) in groups of 70 and maintained at a temperature of 20 ± 1 °C with a light-dark (12:12 h) photoperiod. The salinity was 38 PSU, and dissolved oxygen was kept close to saturation by aeration through diffusion stones. The tanks were provided with mechanical filters, with a flow-through seawater system, UV sterilisation, and compressed air supply. Three tanks were enriched with 3 plant-fibre ropes hanging from one edge of the tank to the other, two parallel ropes (130 cm) and one larger perpendicular rope (170 cm), all of them at different depths and similar distances among them (Figure 1B). 
The other three tanks did not present structural enrichment and were considered the control or non-enriched (NE) treatment. The choice of this type of enrichment was made based on previous studies on seabream [8][9][10] but also regarding the swimming behaviour of the species, given that seabass make vertical movements in the water column and the horizontal ropes might represent an obstacle/challenge. They were fed a commercial pelleted diet (sinking pellets; 2% of their body mass) specific for seabass (Skretting ® 106 Perla MP, Stavanger, Norway) daily by hand at 13:00 h. All tanks were thoroughly cleaned daily by siphoning faeces and uneaten pellets. The seabass juveniles were maintained under these experimental conditions for 60 days. Group-Based Risk-Taking Test Fish were individually PIT-tagged (Trovan ® , Aalten, The Netherlands) on the 30th day of the experiment and maintained in the same conditions (EE vs. NE) for another 30 days (totalling 60 days). Each fish was caught and anaesthetised by submersion in an aqueous solution of tricaine methane sulfonate (MS-222, 75 mg L-1, immersion period: 1-2 min). Then, a passive integrated transponder tag (PIT-tag) was implanted in the visceral cavity of each fish. 
Then, all 70 fish from each tank were exposed to a risk-taking test, a group-based test that consists of assessing the ability to explore a new risky area [24], which has been previously demonstrated to be a consistent and effective method for seabass [19,21]. Two circular cages (60 cm diameter × 50 cm depth) connected by a tubular passage (10 cm diameter × 20 cm length) were settled inside a bigger tank (10,000 L) with the same water conditions as the previous experimental period. One cage was provided with unattainable food to encourage passage, and it was considered the risky area (Figure 1C). A PIT-tag detection antenna (diameter 100/125 × 620 mm, Trovan ® , The Netherlands) was located around the middle of the tunnel, which allowed for monitoring individual passages from one cage to the other. Each group of fish from each tank was left in the safe area (empty cage) for 1 h and 15 min. They were acclimated during the first period of 15 min, during which they were not allowed to pass through the tunnel. During the following 60 min, the number of movements of each individual between cages was determined through antenna detections. The test was repeated four times (every 4 days for 16 days) in order to assess the effects of EE on the learning capabilities of fish from different treatments. The effects of EE on the heterogeneity of behavioural parameters were also assessed by estimation of the coefficient of variation (CV, %). In order to remove "false" passes (fish remaining near the antenna) [25], a segmented regression with an a priori unknown breakpoint was used to identify the time period that could be considered new "real" passes along the tunnel (breakpoint = 19.45 s). Therefore, real passes per individual were recorded in each test, together with the total number of individuals, which allowed us to estimate the total number of passes divided by the number of unique individuals crossing the antenna. Statistical Analyses A Bayesian approach was followed to fit generalised linear mixed-effects models (GLMMs, R library "MCMCglmm") [26], which were used to test for differences in the number of fish passes through the tunnel among tanks, among weeks, and between treatments. The zero-inflated Poisson distribution was considered, accounting for the type of data that was being fitted. The GLMM included week and treatment as fixed effects and the identity of the fish and the tank as random intercept terms. In this model, we used the entire data set without considering differences in the size of the fish because it was previously tested, and no size effect was found on the number of passes through the antenna. The parameters, 97.5% credibility intervals, and p-values were estimated using a Bayesian Markov chain Monte Carlo approach with uninformative priors. We set the number of iterations to 500,000 after discarding the initial 1,000 iterations (burn-in period); 1 out of every 100 of the remaining iterations was kept to prevent autocorrelation (thinning strategy), obtaining 4,990 posterior samples. The convergence of the MCMC chains was assessed by visual inspection of the chain trace plots. The adjusted repeatability (Adjusted-R) was estimated as the quotient of the between-individual variance (the variance across random intercepts attributed to the individuals: Vind) and the sum of Vind and the within-individual or residual variance (the variance associated with the tank and measurement error) for a given behavioural trait in accordance with previous studies [27]. 
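As an illustration of the two data-handling steps just described (a hypothetical sketch rather than the authors' script; the column names and toy detections are invented), raw antenna detections can be collapsed into "real" passes using the 19.45 s breakpoint and then summarized per tank with the coefficient of variation:

```python
# Hypothetical sketch of the detection filtering and per-tank CV summary described above.
# Input: one row per antenna detection, with invented columns 'tank', 'fish_id', 'time_s'.
import pandas as pd

BREAKPOINT_S = 19.45  # detections closer in time than this are treated as the same pass

def real_passes(detections: pd.DataFrame) -> pd.Series:
    """Count 'real' passes per fish: a new pass requires a gap longer than the breakpoint."""
    def count(times):
        times = sorted(times)
        gaps = [b - a for a, b in zip(times, times[1:])]
        return 1 + sum(g > BREAKPOINT_S for g in gaps)
    return detections.groupby(["tank", "fish_id"])["time_s"].apply(count)

def tank_summary(passes: pd.Series) -> pd.DataFrame:
    """Mean passes per detected fish and coefficient of variation (CV, %) for each tank."""
    g = passes.groupby("tank")
    return pd.DataFrame({"mean_passes": g.mean(),
                         "cv_percent": 100.0 * g.std(ddof=1) / g.mean()})

# Toy example (seconds since the start of the 60 min test; values are invented):
df = pd.DataFrame({
    "tank":    ["EE1"] * 5 + ["NE1"] * 5,
    "fish_id": [1, 1, 2, 2, 3,   1, 1, 1, 2, 3],
    "time_s":  [12.0, 15.0, 100.0, 450.0, 900.0,  5.0, 300.0, 620.0, 640.0, 3000.0],
})
print(tank_summary(real_passes(df)))
```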
Ethical Statement All procedures with fish were approved by the Ethical Committee of Animal Experimentation (CEEA-UIB, Spain; Ref. 85/02/18) and carried out strictly by trained and competent personnel, in accordance with the European Directive (2010/63/UE) and Spanish Royal Decree (RD53/2013) to ensure good practices for animal care, health, and welfare. Results The developed model showed that trial (time) and treatment (EE or NE) had a significant effect on the total number of passes during the trials, whereas the interaction effect of both variables had no effect on the variable ( Table 1). The mean repeatability per individual resulting from this model was 0.37 (the confidence interval ranged from 0.18 to 0.81). The total number of passes (counts) detected by the antenna for all tagged individuals increased over time, where some differences in the magnitude and the coefficient of variation (CV) could be observed between EE and NE fish during the experimental tests, being more notable during the third week's test, but showing similar values at the other trials (Figure 2A,B). The mean number of tagged fish individuals detected by the antenna exploring the new area increased over time in consecutive tests in both treatments ( Figure 2C). However, CV differed between treatments, remaining similar over time for EE fish groups and increasing in NE fish groups during the test ( Figure 2D). In addition, the mean number of passes (counts) per fish in the EE tanks gradually increased over the experimental tests, whereas this pattern was not clear for NE fish tanks ( Figure 2E), and the CVs in EE tanks were lower than the CVs of NE fish tanks, although in both cases, the variations showed a similar pattern over time ( Figure 2F). Discussion Structural enrichment is considered a useful tool to improve the welfare conditions of captive fish by increasing the heterogeneity and complexity of captive environments [7]. The aim of this study was to assess the effects of suspended structural enrichment on the group risk-taking behaviour of European seabass juveniles in rearing tanks. This was approached by comparing fish reared in tanks with suspended ropes as enrichment and fish reared in non-enriched environments. We demonstrated that suspended vertical structures (U-shaped ropes) can positively affect the group risk-taking behaviour of captive seabass but also reduce the variations among groups. This behavioural homogeneity at the tank level indicates a more homogeneous group response to stressful situations, which might correspond to better adaptation or a more stable social structure within the fish group. Unstable social hierarchies can have consequences on fish welfare due to dominant/subordinate conflicts (territoriality, mating), competence for resources, and aggressive interactions, triggering social stress in fish groups and the impairment of fish welfare [15,16]. 
This can be seen particularly in Figure 2F, where the CV is much higher in the NE group, meaning that the number of counts in NE was due to a few individuals crossing very often. Conversely, in the EE group, the CV is very low, demonstrating that all individuals behaved similarly at the group or tank level. Previous studies on seabream demonstrated that vertical structures had direct effects on fish behaviour in rearing conditions, reducing aggressiveness, increasing the effective space used, and promoting the spatial exploration and cognitive abilities of captive fish [8][9][10]. In addition, beneficial effects of the presence of a specific coloured substrate (as a means of EE) were reported on the growth, behaviour, and stress response of gilthead seabream [28][29][30][31]. The authors suggested that such positive effects may be related to altered social interactions, indicating the establishment of a less stressful social organisation in enriched-reared fish groups [29]. The relative lack of strong behavioural effects between the different treatments in our study could be related to the type or design of the EE strategy chosen. Some authors demonstrated that the level or intensity of physical structures (i.e., number of structures) influences the social stability and agonistic interactions of territorial species, such as black rockfish (Sebastes schlegelii) and fat greenling (Hexagrammos otakii) [32], Nile tilapia (Oreochromis niloticus) [33], redbreast tilapia (Tilapia rendalli) [34], and convict cichlid (Amatitlania nigrofasciata) [35]. The proper enrichment level can significantly accelerate the formation of social stability, whereas other specific enrichments may slow down this process [32]. In fact, some authors clearly pointed out the positive effects of environmental enrichment in reducing the maladaptive risk-taking behaviour of farmed fish [36,37]. Therefore, the effect of physical enrichment on risk-taking behaviour and social stability can be intensity-dependent, and the direction and magnitude may depend on resources, life stages, and rearing conditions. Further research is still needed to improve the knowledge of the potential benefits of structural enrichment on diverse aspects of fish welfare and aquaculture management, taking into consideration not only the farming conditions and systems but also the biological needs and preferences of fish species reared in captivity [4]. The structural enrichment used in this experiment can be easily implemented in rearing tanks and at other life stages, especially during the grow-out phase normally performed in open-sea cages, but different levels and designs must be explored. U-shaped suspended plant-fibre ropes have some extra benefits when compared with other environmental enrichment structures, namely, being affordable for aquaculture companies and biodegradable, with no negative impacts on the environmental footprint. To conclude, our results suggest a positive effect of structural enrichment on group risk-taking behaviour, which might be influenced by social stabilisation, improving the response to stressful events at the tank level and thus increasing the welfare of seabass.
4,379.2
2022-05-31T00:00:00.000
[ "Biology", "Environmental Science" ]
Impact of Von Willebrand Factor on Bacterial Pathogenesis Von Willebrand factor (VWF) is a mechano-sensitive protein with crucial functions in normal hemostasis, which are strongly dependent on the shear-stress-mediated defolding and multimerization of VWF in the bloodstream. Apart from bleeding disorders, higher plasma levels of VWF are often associated with a higher risk of cardiovascular diseases. Herein, the disease symptoms are attributed to the inflammatory response of the activated endothelium and share high similarities to the reaction of the host vasculature to systemic infections caused by pathogenic bacteria such as Staphylococcus aureus and Streptococcus pneumoniae. The bacteria recruit circulating VWF, and by binding to immobilized VWF on activated endothelial cells in blood flow, they interfere with the physiological functions of VWF, including platelet recruitment and coagulation. Several bacterial VWF binding proteins have been identified and further characterized by biochemical analyses. Moreover, the development of a combination of sophisticated cell culture systems simulating shear stress levels of the blood flow with microscopic visualization also provided valuable insights into the interaction mechanism between bacteria and VWF-strings. In vivo studies using mouse models of bacterial infection and zebrafish larvae provided evidence that the interaction between bacteria and VWF promotes bacterial attachment, coagulation, and thrombus formation, and thereby contributes to the pathophysiology of severe infectious diseases such as infective endocarditis and bacterial sepsis. This mini-review summarizes the current knowledge of the interaction between bacteria and the mechano-responsive VWF, and corresponding pathophysiological disease symptoms. INTRODUCTION Vascular hemostasis is a life-saving mechanism, which balances coagulation, thrombogenesis, and fibrinolysis in response to vascular injuries and inflammatory processes. Key elements of hemostasis are the Weibel-Palade bodies (WPBs), which represent defense vesicles constitutively produced by the endothelium of the vessel walls. The vesicles are filled with vasoactive substances, immune defense modulators, and proteins involved in coagulation (1,2). In addition to megakaryocytes, endothelial WPBs are the main source of Von Willebrand factor (VWF). This glycoprotein mediates platelet activation, anchorage of thrombocytes to the subendothelial collagen, and induction of plasma haemostasis via factor VIII (3,4). Moreover, VWF promotes cell migration in angiogenesis via interaction with different cell surface receptors and induction of signaling pathways (5). The high importance of VWF for balanced hemostasis is conveyed by the appearance of bleeding disorders such as von Willebrand disease, caused by an inherited quantitative or functional VWF deficiency (3). VWF constantly circulates in the bloodstream at concentrations between 8.0 and 14.0 µg/mL (3,6). However, vasoactive hormones such as epinephrine and vasopressin, as well as thrombin, histamine, and numerous other mediators of inflammation and/or thrombosis, induce the release of VWF in response to vascular injury or inflammatory stimuli. The released VWF increases the plasma levels of this protein, and some proportion of VWF is temporarily retained on the cell surface and binds to collagen of the exposed subendothelial matrix (7,8). 
This subendothelial immobilization is also significantly strengthened by the endothelial glycocalyx in a heparanase-sensitive manner (9). VWF is a mechano-sensitive protein, which responds to shear stress-mediated forces by conformational changes. Shear stress is defined as the force exerted by the blood flow on blood vessel walls. This stress generates a response in the vascular wall, characterized by release of endothelial mediators, which in turn stimulate structural remodeling through activation of gene expression and protein synthesis (10). The shear stress-derived conformational changes of VWF are crucial for the biological function of VWF in hemostasis. Upon exposure to the shear forces in the bloodstream, the immobilized VWF unfolds into large protein strings, thereby exposing further functionally important binding sites (7,11,12). In particular, the defolded A1 binding site mediates adhesion of platelets and recruits them via binding to the platelet glycoprotein GPIbα (11,13,14). This VWF-platelet interaction finally results in factor VIII-induced fibrin incorporation and in stabilization of the generated thrombi. Elevated VWF levels are directly associated with cardiovascular diseases (CD) in high-risk groups such as the elderly and diabetes patients (15). Alongside tissue plasminogen activator (t-PA) and D-dimer of fibrinogen, VWF is characterized as one of three biomarkers directly associated with atherosclerotic lesions and coronary heart disease (16,17). This unveils the thrombus-generating activity of elevated VWF concentrations as one of the dominant causative factors for coronary heart disease (18). In addition to the role of VWF in CD, VWF serves as a ligand binding site for bacteria which cause life-threatening local and systemic infectious diseases, such as Staphylococcus aureus and Streptococcus pneumoniae (19,20). S. aureus is a human pathogenic bacterium causing, among other conditions, infective endocarditis and prosthetic heart valve infection (21,22). In this respect, shear-force-mediated adhesion of staphylococci to VWF is directly associated with coagulation and typical disease symptoms (23,24). Similarly, S. pneumoniae, a commensal colonizing the upper respiratory epithelium and a major cause of community-acquired pneumonia in elderly and immunocompromised patients (25,26), has also been recurrently isolated from heart valve endocardium of patients suffering from subacute endocarditis (27,28). Furthermore, an increasing number of clinical case studies report that up to one-third of patients suffer from major adverse cardiac effects (MACE) and vascular impairments within months and even years after recovering from severe pneumococcal infections such as pneumonia and septicemia (29)(30)(31). The similarities observed between VWF-release-associated CD and the symptoms induced by bacterial infections have prompted the development of infection models and sophisticated visualization techniques over the last decade. With these models, the pathomechanistic function of some crucial bacterial virulence factors in VWF-mediated disease progression could be deciphered. BACTERIAL BINDING TO VWF UNDER SHEAR FLOW The release of VWF from endothelial WPBs is induced by host-derived hormones such as epinephrine and histamine and other plasma factors and is also triggered by pathogenic bacteria (32). For example, in 1991, Sporn et al. 
were the first to observe that the intracellular pathogen Rickettsia rickettsii, the main cause of the Rocky Mountain spotted fever, induces the release of VWF from WPBs of cultured endothelial cells [(33), Table 1]. Moreover, in our previous studies, we demonstrated that luminal VWF secretion from WPBs of human lung endothelial cells is significantly increased in response to pneumococcal adherence and the cytotoxic effects of the pneumococcus toxin pneumolysin (45). These results strongly suggest that in vivo, the interaction between circulating bacteria in the bloodstream and the endothelial vasculature might directly lead to elevated VWF plasma levels. In this respect, the scientific question was raised whether the released VWF is directly subverted by the bacteria for their own benefit, i.e., as a binding site at the host endothelium, for platelet aggregation, or for interference with the host coagulation. Indeed, Herrmann et al. were the first to demonstrate the binding of S. aureus bacteria to VWF-coated surfaces and VWF in suspension (46). A short time later, a heparin-sensitive bacterial binding to soluble VWF was also reported for coagulase-negative Staphylococcus species, often associated with infections of prosthetic devices [(40), Table 1]. Bacterial adhesion to the vascular endothelium is of high importance for the pathology of blood-borne infections, since this promotes bacterial settlement, induces inflammatory responses, and facilitates bacterial transmigration and dissemination into deeper tissue sites. It became obvious that blood-flow-induced conformational changes of the VWF molecule, which are crucial for the physiological function of VWF in the bloodstream, might also be of high relevance for VWF-mediated bacterial adhesion. For a long time, it remained a technically challenging task to unravel details of the bacterial interaction with the mechano-sensitive VWF under shear stress conditions. But meanwhile, a variety of model systems have been established that enable the simulation of different physiological shear stress situations including sophisticated visualization techniques [for review, please refer to Bergmann and Steinert (47)]. The first experimental studies on the binding of multimerized VWF to platelets were performed with "Cone-and-Plate" viscometers in combination with flow cytometric quantifications (48). Viscometer-generated shear stress application was also combined with ristocetin incubation of VWF. Ristocetin is an antibiotic produced by Amycolatopsis lurida, and is still used as the gold standard in the diagnostics of von Willebrand disease (49). Ristocetin binds to VWF in a shear-stress-independent manner, thereby inducing the exposure of the VWF-mediated platelet binding site for thrombocyte recruitment and aggregation (49). With the objective of quantitatively analysing the specific protein-ligand interaction with VWF under a defined medium flow, several surface-coating technologies have been established that create so-called "functionalized surfaces." For example, Mascari and Ross quantified the detachment of staphylococci from collagen in real time using a parallel-plate flow chamber combined with phase-contrast video-microscopy and digital image processing (50). The results provided evidence that staphylococci adhere directly to multimerized VWF strings and attach to collagen of the exposed subendothelium in blood-borne infections (34). 
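For orientation on the flow conditions such chambers impose, the wall shear stress in a parallel-plate flow chamber can be estimated with the standard laminar-flow formula τ = 6µQ/(w·h²); the viscosity, flow rate, and chamber dimensions below are arbitrary example values, not parameters taken from the cited studies.

```python
# Illustrative estimate of wall shear stress in a parallel-plate flow chamber
# (tau = 6 * mu * Q / (w * h^2) for fully developed laminar flow between plates).
# All numerical values below are example values only, not from the cited experiments.

def wall_shear_stress(mu_pa_s: float, q_m3_s: float, width_m: float, height_m: float) -> float:
    """Return wall shear stress in Pa."""
    return 6.0 * mu_pa_s * q_m3_s / (width_m * height_m ** 2)

mu = 1.0e-3              # dynamic viscosity of the perfusion medium [Pa*s], water/plasma-like
q = 0.5e-6 / 60.0        # flow rate: 0.5 mL/min converted to m^3/s
w, h = 5.0e-3, 0.25e-3   # chamber width 5 mm, gap height 0.25 mm

tau_pa = wall_shear_stress(mu, q, w, h)
print(f"{tau_pa:.3f} Pa = {10.0 * tau_pa:.2f} dyn/cm^2")  # 1 Pa = 10 dyn/cm^2
```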
In addition to the biochemical interaction studies, several in vivo mouse infection models employing vwf gene-deficient mice and platelet-depleted mice enable evaluation and monitoring of systemic consequences associated with hemostatic processes. The in vivo analyses revealed that S. aureus bacteria directly attach to cell-bound VWF of the endothelial vasculature (23,51). Moreover, visualization of bacterial mouse infection via intravital microscopy confirmed that bacteria, which attached to VWF strings, resist shear stress-mediated clearance by the blood flow [(19), Figure 1]. Deeper insight into the pathophysiological consequences of the pneumococcus interaction with VWF was also provided by infection analyses using zebrafish larvae. Danio rerio serves as a suitable in vivo model, sharing high morphologic and functional similarity to the human endothelial tissue and both intrinsic and extrinsic coagulation pathways (52)(53)(54). Microscopic real-time visualization of larval infection confirmed the recruitment of endothelial-derived VWF to circulating pneumococci and VWF-mediated attachment to the endothelial vessel walls (20). BACTERIAL VWF BINDING PROTEINS AND BINDING MECHANISMS The bacterial interaction with components of hemostasis in vivo points to the presence of specific bacterial surface proteins, which mediate binding to VWF. Protein A (SPA) of S. aureus was identified as a bacterial VWF-binding protein. SPA elicits binding activity to both the soluble and the surface-immobilized VWF [(35), Table 1]. Six years later, the VWF binding sites of protein A were narrowed down to the IgG-binding domain (55). Using single-molecule atomic force microscopy (AFM), Viela et al. further demonstrated that VWF binds very tightly to SPA via a force-sensitive catch-bond mechanism, which involves force-induced structural changes in the SPA domains (36,56). Meanwhile, protein similarities led to the assumption that several bacterial virulence factors may use this binding mechanism to resist clearance by high shear stress during infections (57). In addition to SPA, a second staphylococcal VWF binding protein (VWbp) with coagulase activity was identified from a phage display library screen in 2002 [ (37,38), Table 1]. Studies using functionalized-surface technology revealed that in contrast to SPA, VWbp appears to be of significant relevance for VWF recruitment rather than under static conditions (58). Likewise, pneumococci bind VWF under static conditions, and also recruit globular circulating VWF via the surface-exposed enolase [(20), Table 1]. This protein also mediates binding of pneumococci to plasminogen and to extracellular nucleic acids, both of which promote bacterial attachment to epithelial and endothelial cells (59). Moreover, similar to the staphylococcal VWbp, the VWF binding site for the pneumococcal enolase is located within the defolded A1 domain of VWF (19,20,60). All bacterial VWF-binding proteins identified so far are listed in Table 1. In addition to the analyses of perfused VWF protein-coated, functionalized surfaces, the group of Schneider et al. established an air pressure-driven, unidirectional, and continuous pump system manufactured by the company ibidi® (19). 
In contrast to the formerly described flow systems that are employed to analyse protein-protein interactions under shear stress conditions, the ibidi® pump technology enables sterile long-term cultivation of VWF-producing endothelial cells, which can be incubated with bacteria and microscopically analyzed in real time. As a result, this air-driven microfluidic pump device enabled the analyses of staphylococcal interaction with VWF on endothelial cell surfaces under shear stress conditions (19) and was also used to establish a pneumococcus cell culture infection model of primary endothelial cells in flow (20,61). With this system, the attachment of pneumococci to multimerized VWF strings on the endothelial cell surface was successfully visualized and quantitatively evaluated. In accordance with the VWF binding characteristics of S. aureus, VWF binding to pneumococci is heparin-sensitive and depends on the amount of polysaccharide capsule expression (20). It is of note that pneumococcal attachment to VWF strings is also characterized by remarkable bond stability for longer time periods even at high shear flow parameters, which might be promoted by a concerted action of several additional, yet unidentified VWF-binding proteins (20). In addition, results of surface plasmon resonance binding studies and cell culture infection studies in flow revealed that the pneumococcus enolase interacts with both globular circulating VWF and VWF strings with comparable avidity. Based on the observation that multi-adhesive proteins such as the bacterial enolase are already detected on the surface of various bacterial species, it can be assumed that the bacterial interaction with VWF is part of a general mechanism with pivotal relevance for pathophysiology. EFFECT OF STAPHYLOCOCCAL AND STREPTOCOCCAL INTERACTION WITH VWF ON COAGULATION AND VASCULAR DISEASES As summarized in Table 1, VWF binding to bacteria has only been studied in detail for staphylococci and streptococci. Taking clinical symptoms into account, different functional aspects of the bacterial interaction with VWF can be directly or indirectly correlated with at least three severe infectious diseases: infective endocarditis, bacterial sepsis, and cardiovascular complications. Infective endocarditis is regarded as a paradigm of bacterial diseases associated with vascular inflammation and VWF interaction (24). Most cases of acute infective endocarditis are caused by S. aureus and are associated with a mortality rate of up to 100% if left untreated (21,22). Compared to that, infective endocarditis caused by S. pneumoniae is rare but no less severe (27,28). Infection of the heart valves is initiated by the attachment of circulating bacteria to the endocardium and the formation of bacterial vegetations, which are embedded in fibrin and platelets ( Figure 1A). During disease progression, the vegetations induce further inflammatory processes, which result in ulceration, rupture, and necrosis of the valve cusps (62,63). Experimental shear stress determination using native porcine aortic valve models revealed that even in a healthy human vasculature, the systolic shear stress at the heart valve leaflet can reach up to 21.3 dyn/cm² at the aortic site and up to 92 dyn/cm² at the ventricular site (64,65). 
Similar to the activation of specific proinflammatory and procoagulant protein expression patterns of endothelial cells, the hemodynamic forces also promote the activation of endocardial Notch-dependent signaling pathways in the endocardial cells of the atrio-ventricular valve (66). The observed magnitude of shear stress is sensed by the mechanoresponsive VWF and induces stretching and multimerization of VWF proteins. Thereby, VWF displays crucial binding sites for bacterial surface adhesins and mediates bacterial attachment to the heart valve. In line with this, visualization of staphylococcal mouse infection via 3D confocal microscopy confirmed the adhesion of fluorescent S. aureus to murine aortic valves (23). The mouse infection studies further demonstrated that following valve damage, VWF and fibrin are both deposited on the damaged valve endocardium and serve as attachment sites for S. aureus [(23, 51), Figure 1A]. Moreover, endothelial cell culture infections and intravital microscopy of bacterial mouse infection confirmed that staphylococci and pneumococci resist shear stress-mediated clearance by the blood flow by binding to VWF strings at the endothelial vessel walls (19,20,51). As the disease progresses, the VWF-mediated bacterial attachment also promotes the recruitment of large amounts of platelets, capturing S. aureus at the valve surface [ (23,24,67,68), Figures 1A,B]. The observation that among the staphylococci, only S. aureus and S. lugdunensis are able to bind VWF might, in part, explain why these bacteria are more effective in causing endocarditis than other staphylococci (41). Bacterial VWF binding is also involved in the formation of large platelet aggregates within the blood circulation. In this respect, the formation of bacteria-induced platelet aggregates and the depletion of clotting factors from blood represents a crucial pathomechanism, which is directly attributed to disease symptoms typical for bacterial sepsis. For example, staphylococcal sepsis is associated with an increase in coagulation activity and enhanced thrombosis ( Figure 1B; Table 1). It is assumed that the Staphylococcus-induced dysregulated activation of systemic thrombosis leads to thrombotic microangiopathy, which is associated with an accelerated fibrinolysis and bleeding tendency, referred to as disseminated intravascular coagulation [DIC, (69)]. Moreover, this bacterial mechanism is also assumed to directly induce the formation of abscesses [ (39,(70)(71)(72)(73), Figure 1B]. A similar formation of blood clots, reaching up to 10 µm in diameter, was observed in pneumococcus infection of Danio rerio larvae (20). Based on these data, we suppose that the VWF-mediated bacterial aggregate formation in the blood circulation of the zebrafish causes a partial or complete occlusion of the larval microvasculature. Thus, in severe cases of staphylococcal and pneumococcal septicaemia, the vascular occlusion of small blood vessels throughout the body represents a life-threatening disease symptom, which might lead to multiorgan failure, resulting in high mortality rates of up to 50% [ (74)(75)(76)(77), Figure 1B]. Bacterial aggregate formation in sepsis and infective endocarditis, in particular, is also a prime example of the strong connection between the hemostatic system and innate immunity, which is referred to as immune thrombosis (78). 
It has furthermore been proposed that the infection-induced coagulase activity mediates bacterial capture within a fibrin meshwork, which enables this pathogen to disseminate via thromboembolic lesions and to resist opsonophagocytic clearance by host immune cells (73). On the other hand, platelets are the crucial mediators of the innate defense against staphylococci by releasing microbicidal proteins from alpha granules that kill the bacteria (79). At first view, it appears contradictory that bacteria induce a clotting mechanism, which was originally developed as an antibacterial immune defense mechanism of the host. However, the biochemical and physiological attributes of the fibrin meshwork formed by staphylocoagulases are thought to be distinct and less solid than those generated by thrombin (80). Therefore, instead of containing the infection, immune thrombosis might rather create the optimal environment for bacteria to survive and to evade the immune defense of the host (24). It is supposed that the bacterial infection mechanism leading to vascular dysfunction and enhanced activation of inflammation might also be implicated in developing cardiovascular complications ( Figure 1C). An increasing number of clinical studies supports the observation that pneumococci induce vascular inflammation of the endothelial vessel wall, including the aorta (81), and that severe pneumococcal infections such as pneumonia and septicemia lead to a higher risk for major adverse cardiac effects (MACE) such as myocardial infarction, ischemic stroke, and arterial thrombosis (29)(30)(31). Since elevated VWF plasma levels are known to be associated with an increased risk for MACE (15), the endothelial VWF release induced by pneumococcal attachment and by pneumolysin activity might be partially responsible for the pathologic effects on the cardiovasculature (45). As a further explanation, functional variants of VWF have been identified, which elicit differences in the protein conformation and shear sensitivity. These variants are associated with increased platelet aggregate size, and their occurrence correlates with a higher risk of thromboembolism, including myocardial infarction and stroke (82). In line with these observations, it can be assumed that bacterial interaction with VWF might affect the hemostatic function in various ways, i.e., by steric hindrance of the platelet binding site, by alteration of the VWF conformation, and by inhibition of dimerization and multimerization activities, thereby increasing the risk for cardiovascular complications. CONCLUSIONS VWF is a life-saving key component of coagulation and immune thrombosis in response to vascular injury and inflammation. Bacterial interaction with VWF is of high medical and scientific importance since this interaction is directly associated with specific clinical manifestations and long-term complications of infectious diseases. It has been demonstrated that binding of S. aureus and S. pneumoniae to VWF strings is controlled by hydrodynamic flow conditions. So far, at least three bacterial pathomechanisms involving host-derived VWF can be named: (i) binding to multimerized VWF strings mediates bacterial attachment to endothelial surfaces in blood flow, a major prerequisite of bacterial colonization, inflammation, and dissemination. 
(ii) VWF recruitment facilitates bacterial capture within clotted blood, thereby preventing bacterial clearance via immunothrombosis; and (iii) recruitment of intravascular VWF induces bacterial aggregate formation, which leads to occlusion of microcapillaries and impaired blood supply. Although several sophisticated technologies, such as microfluidic systems and binding-force determinations, have already provided highly valuable insights into the cell-biological and biochemical details, the multifactorial complexity of the bacterial interaction with VWF remains a challenging subject of ongoing scientific research. AUTHOR CONTRIBUTIONS SB and MS contributed to the conception of the text and wrote the text. IR generated the figure and critically revised the text. All authors contributed to manuscript revision, and read and approved the submitted version.
4,792.2
2020-09-03T00:00:00.000
[ "Biology", "Medicine" ]
Obesity-associated genetic variants in young Asian Indians with the metabolic syndrome and myocardial infarction Objective Associations between obesity-related polymorphisms and the metabolic syndrome in 485 young (≤ 45 years) Asian Indian patients with acute myocardial infarction (AMI), and 300 matched controls were assessed. Methods Genetic variants included the adiponectin 45T→G and 276G→T, LEPR K109R and Q223R, MC4R-associated C→T and FTO A→T polymorphisms. Results The metabolic syndrome, as defined by NCEP ATP III and IDF criteria, was diagnosed in 61 and 60% of patients, respectively. No relationship was found between the obesity-associated polymorphisms and the metabolic syndrome, or between AMI patients and controls. The MC4R-associated TT genotype occurred more frequently in patients with lower triglyceride levels (p = 0.024), while the adiponectin 45 TT genotype occurred more commonly in patients with normal fasting glucose levels (p = 0.004). The LEPR Q223R TT genotype was associated with low high-density lipoprotein (HDL) cholesterol levels (p = 0.003). Conclusion The metabolic syndrome occurs commonly in young Asian Indian patients with AMI. No relationship was found between any obesity-associated polymorphism and the metabolic syndrome. Particular genotypes may exert protective or disadvantageous effects on individual components of the metabolic syndrome. Obesity is recognised as a serious chronic disease, the prevalence of which is increasing worldwide. Several studies have demonstrated a strong association between obesity and insulin resistance. This relationship frequently leads to diabetes mellitus, hypertension, dyslipidaemia and vascular inflammation, all of which promote the development of atherosclerotic cardiovascular disease. [1][2][3] The co-occurrence of these metabolic risk factors has given rise to the metabolic syndrome. Although there are several definitions for the metabolic syndrome, the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) 4 and the International Diabetes Federation (IDF) 5 definitions are the most widely used. Fundamental to the syndrome is the close interaction between abdominal fat patterning, total body adiposity, and insulin resistance. Both definitions include central obesity as one of the criteria, but with different waist circumference cut-offs. The IDF definition of the metabolic syndrome, in fact, proposes central obesity as an essential element, with specific waist circumference thresholds set for different ethnic groups. In addition, the metabolic syndrome is reportedly present in 60% of individuals who are obese, 6 while an increased waist circumference, taken in isolation, identifies up to 40% of individuals who will develop the syndrome within five years. 7 Obesity is believed to be a heritable trait. However, the genes that contribute to the less severe but more common forms of obesity have been difficult to identify. Several studies have demonstrated a possible role for the adiponectin gene in obesity, insulin resistance and type 2 diabetes. [8][9][10][11] Adiponectin, which is an adipocyte-derived cytokine, has been shown to be down regulated in obesity, particularly in those individuals with visceral obesity, while adiponectin levels correlate inversely with insulin resistance. [12][13][14] Furthermore, preliminary data suggest that high adiponectin concentrations are associated with a favourable cardiovascular outcome. 
[15][16] It has been estimated that between 30 and 70% of the variability in adiponectin levels is determined by genetic factors. 17 Although several polymorphisms in the adiponectin gene have been described, data linking these variants to obesity and related conditions are inconclusive. Leptin is another important protein associated with obesity, and functions by inhibiting food intake and stimulating energy expenditure. 18 Leptin levels are regulated by various proteins, one of which is the leptin receptor (LEPR). Several polymorphisms are found in the LEPR gene but earlier studies, which examined potential associations between LEPR gene polymorphisms and obesity, failed to report conclusive results. 19,20 Nevertheless, both the adiponectin and the LEPR genes remain potential candidates in the aetiology of obesity and coronary heart disease (CHD). Other genes have also been associated with obesity in the general population, most notably the melanocortin-4-receptor (MC4R) and the fat mass and obesity-associated (FTO) genes. Activation of the MC4R gene reduces body fat stores by decreasing food intake and increasing energy expenditure. Functional mutations in the MC4R gene are reportedly associated with hyperphagia, early-onset obesity and hyperinsulinaemia, while polymorphisms near the MC4R gene are linked to increased body mass index (BMI) and abdominal girth. 21,22 Variants in the FTO gene have been associated with obesity in a genome-wide study, 23 and have been shown to predispose to diabetes through an effect on BMI. 24 AFRICA In the present study, we examined single nucleotide polymorphisms (SNPs) in the adiponectin, LEPR, MC4R and FTO genes in association with obesity in a cohort of young South African Asian Indian patients with acute myocardial infarction (AMI), with and without the metabolic syndrome. This study group is of particular interest due to the high incidence of both premature CHD and the metabolic syndrome in the South African Indian population. 25 The genetic variants selected for study included the adiponectin 45T→G (rs2241766) and 276G→T (rs1501299), the LEPR K109R (rs1173100) and Q223R (rs1173101), the MC4Rassociated C→T (rs17882313) and the FTO A→T (rs9939609) polymorphisms. We also compared the frequencies of these polymorphisms in the AMI subjects with the frequencies found in a control group of people free of CHD, to assess the potential of these polymorphisms as risk factors for CHD. Methods A total of 485 Asian Indian subjects presenting with AMI was studied. The study population and demographic profiles have been described in detail previously. 25 Briefly, subjects eligible for inclusion were men and women aged 45 years or younger, who were admitted with a diagnosis of AMI based on the Joint European Society of Cardiology/American College of Cardiology Committee definition. 26 Both the NCEP ATP III and the IDF definitions were used to assess the prevalence of the metabolic syndrome. The investigation conforms to the principles outlined in the Declaration of Helsinki. Informed consent was obtained from all individuals in the study and approval was granted by the Ethics Committee of the Faculty of Health Sciences, Nelson R Mandela School of Medicine, University of KwaZulu-Natal. Blood samples were collected from all AMI patients within 48 hours of admission after an overnight fast. 
Total cholesterol, triglycerides, high-density lipoprotein (HDL) cholesterol and glucose levels were determined using standard enzymatic methods on a Beckman UniCel DxC800 auto analyser. Anthropometric measurements, including BMI and waist circumference, were used to define obesity. The BMI was calculated as weight (kg) divided by height squared (m²), according to World Health Organisation guidelines. 27 A BMI ≥ 30 kg/m² was used as the cut-off to indicate obesity. Waist circumference, which is considered the most practical way to assess central obesity, was measured midway between the lowest rib and the iliac crest on standing subjects, using a soft tape measure. The central obesity threshold limits proposed by both the NCEP ATP III (males > 102 cm, females > 88 cm) and IDF (males ≥ 90 cm, females ≥ 80 cm) definitions were used to define the metabolic syndrome. The control group comprised 300 healthy age-matched Asian Indian subjects drawn from the same community as the patients. None of these subjects suffered from cardiovascular disease or had any associated clinical risk factors. All were non-smokers and none was obese. Blood for DNA analysis was collected from both AMI patients and control subjects in ethylenediaminetetra-acetic acid (EDTA) tubes, and stored at -20°C until DNA isolation by standard techniques. Genotyping of all six SNPs was performed with TaqMan SNP allelic discrimination pre-designed assays (rs2241766: catalogue no C 26426077_10; rs1501299: catalogue no C 7497299_10; rs1173100: catalogue no C 7586955_10; rs1173101: catalogue no C 7586956_10; rs17882313: catalogue no C 32667060_10; rs9939609: catalogue no C 30090620_10) using an ABI 7500 thermal cycler (Applied Biosystems). All results were called automatically. Approximately 10% of all samples were genotyped on more than one occasion and showed 100% concordance. Statistical analysis. Data were analysed using STATA, version 11 (StataCorp LP, TX). The Pearson chi-squared test or, when appropriate, the Fisher exact test was used to test associations between independent categorical exposures and outcomes. Associations were expressed as odds ratios with 95% confidence intervals. Where values were of a continuous nature, the t-test was used to compare mean values between groups. Differences were considered statistically significant when p < 0.05. Results. The biochemical and clinical characteristics of male and female subjects are shown in Table 1. The study population comprised 485 patients, 86% of whom were males. Compared with males, females had significantly higher mean baseline glucose (10.9 ± 5.37 vs 8.44 ± 4.36 mmol/l, p = 0.005) and HDL cholesterol levels (1.14 ± 0.35 vs 0.95 ± 0.28 mmol/l, p ≤ 0.0001). Triglyceride levels did not differ between genders. The metabolic syndrome was diagnosed in 61% of patients according to the NCEP ATP III criteria [males (59%), females (76%)] and in 60% of patients according to the IDF criteria. With respect to BMI measurements, 19% of all subjects were classified as obese (BMI ≥ 30 kg/m²), with proportionally more females (32%) than males (17%, p = 0.002). Forty-five per cent of patients had an increased waist circumference (visceral obesity) based on the NCEP ATP III definition of the syndrome (mean abdominal girth 108.73 ± 10.81 cm). In the 290 patients with the metabolic syndrome according to the IDF criteria, the mean abdominal girth was, as expected, lower (102.50 ± 10.32 cm), because of the lower limits set for waist circumference measurements in the Asian population. 
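As an illustration of the association testing summarised in the statistical analysis paragraph above, the following sketch computes an odds ratio with a Woolf-type 95% confidence interval together with a chi-squared (or Fisher exact) p-value for a 2 × 2 genotype-by-outcome table. The counts, variable names and the Python/scipy implementation are illustrative assumptions only; they are not the study's data, and the authors' analyses were performed in STATA 11.

```python
# Minimal sketch (hypothetical counts): genotype-vs-outcome association test
# with an odds ratio, a Woolf (log) 95% confidence interval and a chi-squared
# or Fisher exact p-value, mirroring the statistical approach described above.
import math
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = carriers / non-carriers of a genotype,
# columns = metabolic syndrome present / absent (not the study's real counts).
table = [[120, 175],   # carriers:     with MetS, without MetS
         [ 80, 110]]   # non-carriers: with MetS, without MetS

a, b = table[0]
c, d = table[1]

# Odds ratio with a Woolf (log) 95% confidence interval.
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

# Pearson chi-squared test; switch to Fisher's exact test for small expected counts.
chi2, p_chi2, dof, expected = chi2_contingency(table)
if expected.min() < 5:
    _, p_value = fisher_exact(table)
else:
    p_value = p_chi2

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.3f}")
```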
The genotype and allele frequency distributions of the adiponectin, LEPR, MC4R and FTO gene polymorphisms in relation to the metabolic syndrome, as defined by the NCEP ATP III and IDF criteria, are shown in Tables 2 and 3, respectively. The frequencies of all polymorphisms were in Hardy-Weinberg equilibrium. No significant relationship was found between any of the obesity-associated polymorphisms and the metabolic syndrome, irrespective of the definition used, nor were there any differences in polymorphic frequencies with respect to gender, or between patients and controls for both definitions of the syndrome (data not shown). The relationship between the obesity-associated polymorphisms studied here and the clinical and biochemical components of the metabolic syndrome was assessed. Significant associations are shown in Table 4. The TT genotype of the MC4R-associated polymorphism was found more frequently in patients with lower triglyceride levels (OR 1.55; 95% CI 1.05-2.28; p = 0.024), while the major TT genotype of the adiponectin 45T→G polymorphism occurred significantly more frequently in patients with normal fasting blood glucose levels (OR 1.92; 95% CI 1.21-3.09; p = 0.004). The TT genotype of the LEPR Q223R polymorphism was associated with low HDL cholesterol levels (OR 2.35; 95% CI 1.28-4.52; p = 0.003).
[Table 3: Genotype and allele frequencies of obesity-associated polymorphisms in patients with and without the metabolic syndrome as determined by the IDF definition.]
A possible synergistic relationship between the six polymorphisms studied and the metabolic syndrome was analysed by counting the number of variant alleles carried by each individual. No relationship was observed between the cumulative polymorphic load and the metabolic syndrome by either definition, or in the control subjects. Discussion. The metabolic syndrome is a common finding in young South African Indians with AMI, irrespective of the defining criteria used (61% for NCEP ATP III and 60% for IDF). Male subjects were in the majority, but proportionally more females were found to have the metabolic syndrome (p = 0.007 and 0.02 for NCEP ATP III and IDF, respectively). These results are in agreement with previous studies, which reported a greater prevalence of the metabolic syndrome in women compared with men. 28,29 Another interesting observation in our young patients was the increased prevalence of visceral obesity with the metabolic syndrome, as assessed by waist circumference measurements. Although only 19% of subjects in this study cohort were considered to be obese based on BMI measurements, it has recently been reported that in South Asians the risk level for the development of an adverse metabolic profile, with respect to body fat, is reached at a much lower BMI (21 kg/m²) than in Europeans (≥ 30 kg/m²). 30 Furthermore, the predominance of visceral adipose tissue with increasing waist circumference in Asian Indians 31 has been shown to be associated with a higher prevalence of the metabolic syndrome, compared with African-Americans, in whom subcutaneous fat predominates. 32 Therefore, the higher frequency of obesity and the metabolic syndrome in the South African Asian Indian patients in this study may explain in part the accelerated onset of atherosclerotic disease in these subjects compared with other ethnic groups. This concurs with other studies on CHD in Asian Indians, in whom about half of all myocardial infarctions occurred in individuals under the age of 50, with 25% being under 40 years of age. 
33 With respect to the polymorphic variants in the adiponectin, LEPR, MC4R and FTO genes, no significant differences in allele frequency or genotype distribution were observed between patients with the metabolic syndrome, irrespective of the definition used, compared to those who did not have the syndrome. A previous study by Filippi et al. 34 demonstrated a significant association between the adiponectin 276G→T polymorphism and the early onset of CHD. No similar relationship was found in our young patients with AMI for either the adiponectin 45T→G or 276G→T polymorphisms, which agrees with the findings of Jung et al. in their Korean subjects. 35 Given the complexity of the metabolic syndrome and the lack of clarity surrounding its definition, most reports to date have been restricted to the examination of the relationship between variant polymorphisms and the individual criteria of the metabolic syndrome, rather than with the syndrome as a whole. For example, individuals carrying the adiponectin 276G→T polymorphism were found to be associated with type 2 diabetes mellitus in the Japanese population. 36 In European subjects, conflicting results have been reported on the relationship between the LEPR Q223R polymorphism and obesity. 37,38 Common polymorphisms in the MC4R and the FTO genes have, however, been shown to be predictive of obesity and diabetes. 39 In the current study, the TT genotype of the adiponectin 45T→G gene occurred more frequently in patients with normal fasting blood glucose levels (p = 0.004), suggesting a protective influence of the major allele, while the TT genotype of the MC4R-associated polymorphism was found more frequently in patients with low triglyceride levels (p = 0.024). In contrast, the TT genotype of the LEPR Q223R polymorphism was strongly associated with low HDL cholesterol levels (p = 0.003). To the best of our knowledge, this is the first time that these findings have been reported in the Asian Indian population, and clearly warrant further evaluation in larger studies of different ethnic backgrounds. No other associations were found for any of the other polymorphic variants examined with any individual criteria of the NCEP ATP III and IDF definitions of the metabolic syndrome, including obesity. Several limitations of this study merit consideration. Serum adiponectin and leptin levels were not measured, and therefore the functional significance of the polymorphisms studied cannot be assessed. Previous findings on the association between adiponectin polymorphisms and serum adiponectin levels have been contradictory, 40,41 suggesting that serum levels may not necessarily reflect the overall amount of adiponectin in the body or its concentration in the interstitial space. With respect to subject numbers, the present study population was too small to achieve adequate statistical power in the case of some polymorphisms. However, the restriction of the study group to young patients drawn from a homogeneous population base limits the effects of non-genetic determinants, and provides an advantage in the evaluation of genetic polymorphic variation and associative comparisons. Conclusions Our results show that the metabolic syndrome, as defined by both the NCEP ATP III and IDF criteria, is a common occurrence in young Asian Indian patients with AMI. 
Although obesity occurs frequently in these individuals, no significant association was found between any of the obesity-associated polymorphisms studied and the metabolic syndrome, or obesity as determined by either waist circumference or BMI. This lack of association most likely reflects the complex pathogenesis of obesity, which involves environmental factors in addition to genetic components. Certain genotypes, most notably the TT genotype of the MC4R-associated polymorphism and the TT genotype of the adiponectin 45T→G polymorphism, may, however, exert a protective effect on individual components of the metabolic syndrome, such as blood glucose and triglyceride levels, whereas others, such as the TT genotype of the LEPR Q223R polymorphism, were associated with adverse HDL cholesterol levels. We thank Ms A Murally for typing this manuscript.
3,813.6
2011-02-01T00:00:00.000
[ "Medicine", "Biology" ]
The Imaging X-ray Polarimetry Explorer (IXPE) and New Directions for the Future : An observatory dedicated to X-ray polarimetry has been operational since 9 December 2021. The Imaging X-ray Polarimetry Explorer (IXPE), a collaboration between NASA and ASI, features three X-ray telescopes equipped with detectors sensitive to linear polarization set to 120 ◦ . This marks the first instance of a three-telescope SMEX mission. Upon reaching orbit, an extending boom was deployed, extending the optics and detector to a focal length of 4 m. IXPE targets each celestial source through dithering observations. This method is essential for supporting on-ground calibrations by averaging the detector’s response across a section of its sensitive plane. The spacecraft supplies power, enables attitude determination for subsequent on-ground attitude reconstruction, and issues control commands. After two years of observation, IXPE has detected significant linear polarization from nearly all classes of celestial sources emitting X-rays. This paper outlines the IXPE mission’s achievements after two years of operation in orbit. In addition, we report developments for future high-throughput X-ray optics that will have much smaller dead-times by using a new generation of Applied Specific Integrated Circuits (ASIC), and may provide 3D reconstruction of photo-electron tracks. Introduction Cyclotron emission, synchrotron emission, and non-thermal bremsstrahlung [1][2][3] are the most common emission processes in X-ray astronomy providing polarized radiation.Even if emitted as intrinsically non-polarized thermal radiation, radiation can become polarized via scattering in accretion disks, blobs, and accreting columns, which are structures commonly found in astrophysical sources [4,5]. Moreover, X-ray polarimetry can probe isolated neutron stars such as magnetars, as well as neutron stars in binary systems, uncovering the long-sought quantum electrodynamics effect of vacuum birefringence [6][7][8].Despite theorists' expectations for the reasons mentioned above, until very recently the only notable detection was the measurement of polarization from the Crab Nebula [9].At the time this was a significant measurement, as it confirmed for the first time the extension of synchrotron emission to X-rays in this source. In fact, a new generation of X-ray detectors [10][11][12], the Gas Pixel Detectors (GPDs), has allowed polarization to be measured by means of the photoelectric effect in gas.Using this device, we designed a space mission providing sensitive measurement in the classical energy band of X-ray astronomy.Although some Chinese colleagues had previously launched a CubeSat mission equipped with a single Gas Pixel Detector (GPD) and a collimator before IXPE, achieving low-significance results on bright galactic sources over months-long observing times [13][14][15][16][17], it has become evident that sensitive polarimetry requires a substantial number of detected photons.This level of sensitivity can be achieved through the use of X-ray mirrors. Imaging polarimetry's advancements and the launch of the Imaging X-ray Polarimetry Explorer (IXPE) [18,19] have made X-ray polarimetry a standard tool in astrophysics, akin to its use at other wavelengths for the first time.IXPE's data are publicly accessible, allowing every scientist to utilize this newly available resource. 
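As a rough illustration of the measurement principle behind the Gas Pixel Detector mentioned above, a photoelectric polarimeter recovers the source polarization from the azimuthal distribution of photoelectron emission angles, which follows a cos 2φ modulation curve. The sketch below fits such a curve to simulated angles and converts the fitted modulation into a polarization degree using an assumed modulation factor; the numbers are toy values and this is not IXPE flight or analysis software.

```python
# Minimal sketch (toy data, not IXPE software): fit a cos(2*phi) modulation curve
# to photoelectron emission angles and convert the modulation to a polarization degree.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

mu = 0.3          # assumed modulation factor of the detector at this energy
p_true = 0.2      # assumed source polarization degree
phi0_true = 0.5   # assumed polarization angle [rad]

# Draw emission angles from N(phi) ~ 1 + mu*p*cos(2*(phi - phi0)) by rejection sampling.
phi = rng.uniform(0.0, np.pi, 200_000)
keep = rng.uniform(0.0, 1.0 + mu * p_true, phi.size) < 1.0 + mu * p_true * np.cos(2 * (phi - phi0_true))
phi = phi[keep]

counts, edges = np.histogram(phi, bins=36, range=(0.0, np.pi))
centers = 0.5 * (edges[:-1] + edges[1:])

def modulation_curve(x, amplitude, m, phi0):
    # Standard azimuthal response of a photoelectric polarimeter.
    return amplitude * (1.0 + m * np.cos(2.0 * (x - phi0)))

popt, _ = curve_fit(modulation_curve, centers, counts, p0=(counts.mean(), 0.1, 0.0))
_, m_fit, phi0_fit = popt

print(f"fitted modulation m = {m_fit:.3f}, polarization degree p = m/mu = {m_fit / mu:.3f}")
print(f"fitted polarization angle = {np.degrees(phi0_fit):.1f} deg")
```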
While in the future we aim to conduct experiments with optics having a large throughput, the present ASIC of IXPE suffers from a high dead time. Although a five-fold improved ASIC has already been obtained by INFN, a much larger step forward remains necessary for very large optics. Below, we describe how a new generation of digital ASICs with parallel readout allows for a drastic reduction in dead time, accompanied by the possibility of 3D imaging of the photoelectron track. Such ASICs make it possible to devise a photoelectric X-ray polarimeter with a dead time compliant with future high-throughput X-ray missions. The IXPE Mission in Summary. IXPE, the 14th Small Explorer (SMEX) NASA mission, carried out in partnership with ASI, was built under the supervision of NASA-MSFC (the PI institution; Philip E. Kaaret serves as PI, with Martin Weisskopf as emeritus). INAF, INFN, and the industrial partner OHB-Italia devised, built, tested, and calibrated the three Detector Units (plus one spare unit) containing the GPD, the filter and onboard calibration system, and the payload computer, named the Detector Service Unit (DSU). The IXPE mission, along with its optics and instrumentation [18][19][20], is shown operating in orbit in Figure 1. NASA-MSFC fabricated and calibrated three mirror modules [20], plus one spare unit, with the contribution of Nagoya University (thermal shields). An instrument located in the service module at the mirror focal plane, provided by ASI and composed of three detector units [19,21], is separated from the mirrors by a focal length of 4 m. The IXPE spacecraft has a global positioning system (GPS), allowing for the timing of the events with µs accuracy. Two star trackers (rear and front) are also employed to correct images after dithering, using photon-by-photon ground transmission. An X-ray shield, in conjunction with stray-light collimators on top of each Detector Unit (hereafter DU), absorbs cosmic background X-ray photons originating from outside the field of view. An ion-UV filter is located on top of each DU [22]. The DU calibration system [23] is composed of commercial 55Fe radioactive sources (see Figure 2) with a Kα line at 5.89 keV and a Kβ line at 6.5 keV. Polarized radiation at 3 keV (by means of a silver target) and at 5.9 keV is produced through 45° Bragg reflection off a graphite mosaic crystal (Cal-A). Unpolarized 5.9 keV and 6.5 keV X-rays (spot ∼3 mm and flood ∼15 × 15 mm) are provided by sources Cal-B and Cal-C, respectively. Finally, source Cal-D uses a silicon target that produces a wide beam at 1.7 keV (Si Kα) thanks to a 55Fe source. In addition to the calibration system, the filter and calibration wheel (FCW) hosts a filter made of kapton for high-flux sources and an aluminum cap used for gathering the background. A residual miscalibration of a few (2-3) tens of eV is irreducible [24]; this is possibly caused by the gas gain decrement due to ions and secondary electrons attaching to the exposed dielectric surface of the Gas Electron Multiplier (GEM) (charging). Because this effect is rate- and energy-dependent, it may differ between flight calibration and observation of celestial sources [24]. An extensible boom covered with a thermal sock, together with thermal shields for the mirrors, completes the payload system. 
The IXPE mirrors (see Figure 3a) were fabricated using the classical technique of replica of electro-formed nickel-cobalt shells.The main design of the IXPE mission was based on Pegasus-XL fairing; thus, the very small thickness of the mirror shell allows for both light weight and the necessary effective area (see Figure 3b).Eventually, the Falcon-9 launcher was adopted after a competitive tender.The Falcon 9 rocket is shown in Figure 4.The IXPE DUs performed as expected after on-ground calibration using both polarized and unpolarized monochromatic X-ray sources (see [24]).After extensive ground calibration at INAF-IAPS [25], the three flight DUs were electrically integrated into the flight Detector Service Unit at the same laboratories (see Figure 5) on the optical bench.The instrumentation underwent extensive laboratory testing, including all of the available payload operation modes.The initial analysis of IXPE data revealed that the source counting rate measured by Detector Unit 1 was somewhat higher compared to those measured by Detector Units 2 and 3.The initial response matrices did not accurately account for the differing pressures of the gas mixture inside the detectors.Consequently, this led to a variance in the detected photon flux at a given energy (notably at 1 keV, referred to as the normalization) when observing celestial sources with DU2 and DU3.This issue has since been addressed with updated response matrices that more accurately reflect the time-dependent absorption of dimethyl ether by components within the Gas Pixel Detector (GPD). Indeed, the efficiency of the three detectors slightly diminishes with time because of the absorption of dimethyl ether by the epoxy used for sealing the detector body (Supreme 10HT by Masterbond) and possibly by the beryllium window support structure.The internal gas pressure is asymptotic, with a slow time constant of 2-3 years and a fast time constant of 1 month, as shown in [24]. However, the modulation factor is slightly better due to the increased track length, meaning that the decrease in sensitivity is not dramatic.The introduction of weights (the asymmetric tracks weigh more) [26] provides 13% better sensitivity with respect to the unweighted analysis.HEASARC analysis tools allow weighted analysis to be available to the general user.In addition, a neural network weighted analysis approach [27][28][29] was developed, with an improvement of about 8% with respect to the standard weighted moment analysis [30]. IXPE was designed to fit in a Pegasus-XL launcher.After the launch, we discovered a boom motion due to sunlight-to-night thermal expansion (see Figure 6).We used the portion of the orbit with active star trackers (front or rear) and the temperature sensors on the payload to model the (∼1 arcmin) shift.Eventually, this very accurate modeling was included in the flight pipeline to make it transparent to the general user. In contrast to the first two years of operation, when the IXPE collaboration was carried out based on the observation plan, general observers with a competitive tender managed by HEASARC now decide on the new observations.The IXPE collaboration consisted of about 190 scientists, including about 90 participants from about thirteen countries worldwide. 
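Referring back to the slow loss of detector efficiency described above, where the internal gas pressure relaxes with a fast time constant of about one month and a slow one of 2-3 years, the following sketch evaluates a simple double-exponential relaxation model. The asymptotic level and the two amplitudes are made-up placeholders, not measured IXPE values.

```python
# Minimal sketch (illustrative parameters only): double-exponential relaxation of
# the GPD filling-gas pressure toward an asymptotic value, as described in the text.
import numpy as np

def relative_pressure(t_days, p_inf=0.90, a_fast=0.04, tau_fast=30.0,
                      a_slow=0.06, tau_slow=900.0):
    """Pressure relative to the beginning-of-life value at time t (days).

    p_inf, a_fast and a_slow are assumed placeholders; tau_fast ~ 1 month and
    tau_slow ~ 2-3 years follow the time constants quoted in the text.
    """
    t = np.asarray(t_days, dtype=float)
    return p_inf + a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

for t in (0, 30, 365, 730):
    print(f"day {t:4d}: relative pressure ~ {relative_pressure(t):.3f}")
```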
Table 1 summarizes the sources observed during the first two years. The largest group is devoted to binary neutron star and blazar science. The magnetar and SNR group required the largest observing time. Bright-source observations are followed by dim sources because of the small size of the onboard memory and the constraints of the S band used for receiving data at the ASI Ground Station located at Malindi, Kenya. A gray filter is used to cope with very high fluxes; we successfully used it during the observations of Sco X-1 and of the target-of-opportunity source Swift J1727.8-1613. Table 2 shows the celestial sources for which a significance larger than 6σ was obtained from a quick-look analysis of their polarization (a minimal sketch of such a significance estimate is given at the end of this section). This is a very limited list, as this analysis does not resolve the polarimetry in angle, energy, or time, and no background rejection [32] or subtraction was applied. Indeed, once the full capabilities of IXPE were exploited, we detected significant polarization for a much larger number of sources (about 70%).
Table 2. Quick-look analysis results providing polarimetry with a significance larger than 6σ. ⋆ Cas A, Tycho SNR, and SN 1006 show significant polarization when angularly resolved. † NGC 4151 and the Circinus galaxy show significant polarization when the background and energy selection are correctly taken into account. [Table excerpt, WG1: Crab Nebula and pulsar, Vela PWN, MSH 15-52, G21.5-0.9.]
The Main Limitation of IXPE. Although IXPE has been a significant success (see Appendix A), its achievements in X-ray polarimetry suggest potential for further improvement. The limited effective area of the mirrors restricted the ability to conduct comprehensive 'population studies'. In practice, only the brightest X-ray sources of each category were within reach. Future designs aim for much larger mirror areas, as envisioned for eXTP and Athena. Although Athena does not include a polarimeter, its design goal is to achieve a square meter of effective area. However, such large telescopes cannot utilize the current ASIC technology of IXPE, even considering recent advancements, as noted in [33] (see Section 3). Additionally, IXPE's results indicate that a promising direction for future missions would involve wide-band X-ray polarimetry extending beyond 8 keV. This approach would enhance the analysis of celestial sources where reflection (from disks, tori, winds, molecular clouds, etc.) plays a significant role in the spectrum, a facet that IXPE is barely able to examine. The employment of large multi-layer optics could enable study of the transport of radiation in magnetized plasma at cyclotron line energies in binary pulsars, or of the interplay between the power-law emission and the hard energy tails characteristic of magnetars. Importantly, an improved capacity to handle high fluxes could significantly reduce calibration times, which for IXPE required 40 days per detector operating continuously. This efficiency is crucial, as many missions must limit calibration time to adhere to schedules. 
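As anticipated above, the sketch below illustrates how a quick-look significance estimate of the kind behind the 6σ threshold can be formed from event-by-event Stokes parameters, together with the conventional background-free minimum detectable polarization, MDP99 ≈ 4.29/(μ√N). The event sample and modulation factor are assumptions for illustration; this is not the HEASARC-distributed IXPE analysis software, and the mission tools additionally apply event weights and background rejection, which the sketch omits.

```python
# Minimal sketch (not the HEASARC/IXPE analysis tools): event-based Stokes
# estimators, polarization degree/angle and the MDP99 sensitivity estimate.
import numpy as np

def polarization_summary(phi, mu):
    """phi: photoelectron emission angles [rad]; mu: assumed modulation factor."""
    n = phi.size
    # Event-by-event Stokes estimators, corrected by the modulation factor.
    q = np.mean(2.0 * np.cos(2.0 * phi)) / mu
    u = np.mean(2.0 * np.sin(2.0 * phi)) / mu
    pd = np.hypot(q, u)                      # polarization degree
    pa = 0.5 * np.arctan2(u, q)              # polarization angle [rad]
    mdp99 = 4.29 / (mu * np.sqrt(n))         # background-free MDP at 99% confidence
    sigma_pd = np.sqrt(2.0 / n) / mu         # approximate 1-sigma error on PD
    return pd, np.degrees(pa), mdp99, pd / sigma_pd

# Illustrative event sample (in practice this would come from a cleaned event list).
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, np.pi, 50_000)        # unpolarized toy data
pd, pa_deg, mdp99, significance = polarization_summary(phi, mu=0.3)
print(f"PD = {pd:.4f}, PA = {pa_deg:.1f} deg, MDP99 = {mdp99:.4f}, ~{significance:.1f} sigma")
```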
A Possible Path to the Future of X-Ray Polarimetry. One of the main drawbacks of the GPD currently flying onboard IXPE is its large dead time [19,21], though this is mitigated by a new version of the ASIC [33]. As a matter of fact, a drastic reduction in dead time is already possible thanks to a new generation of ASICs allowing parallel readout with digital information on the pulse amplitude. These ASICs, developed by an international collaboration and derived from the MEDIPIX family, are the TimePIX3 [34] and the most recent TimePIX4 [35]. Their design allows for data-driven, dead-time-free operation up to 40 Mpixels s⁻¹ cm⁻² for TimePIX3 and up to 3.5 Mpixels s⁻¹ mm⁻² for TimePIX4. These ASICs (see Figure 7a) allow for a sparse readout as well as simultaneous per-pixel measurement of the time of arrival (with a resolution of 1.56 ns for TimePIX3 and <200 ps for TimePIX4) and of the time over threshold, with the latter being proportional to the charge content of each pixel. TimePIX3 features 65,536 pixels in a square pattern, with a pixel pitch of 55 µm and a noise of 60 electrons rms. A practical implementation of TimePIX3 as the front-end of a gas detector is the GridPix configuration [36], where the multiplication stage is obtained by applying precise photolithographic techniques to build a metallic Micromegas grid above the sensitive ASIC plane, at a distance of a few tens of µm (see Figure 7b, [37]). In principle, this design allows for full 3D photoelectron track reconstruction. We previously proved the suitability of this approach for increased polarization sensitivity [38]. Before this practical implementation becomes mature enough for a space experiment, it is first necessary to: (1) prove the performance in terms of the modulation factor and the absence of spurious modulation; (2) determine the energy resolution; (3) prove the resistance against heavy-ion interactions with the gas; and (4) build a sealed detector body. These steps will be carried out in the near future. Conclusions. IXPE is now a real, flown polarimetry mission, and is discovering and explaining new physical phenomena in previously known X-ray sources. In addition, it is helping to disentangle geometry from physics, thereby delivering what scientists have been promising for decades, since the first rocket launches and the discoveries of OSO-8. Thanks to the perseverance of many scientists, we are now in possession of a rapidly developing observational tool to better understand a wide variety of X-ray sources and their environments. The expectations of theory can be tested with the help of accurate X-ray polarimetry. The same scientists are studying a new detector based on a modern ASIC, which promises to overcome the main limitations of the current ASIC employed onboard IXPE. Acknowledgments: The author acknowledges the IXPE Science collaboration, of which he is a part, and discussions with Klaus Desch, Jochen Kaminsky, Markus Gruber, and Vladislavs Plesanovs from the University of Bonn regarding GridPIX detectors. Conflicts of Interest: The author declares no conflicts of interest. Appendix A. 
Selected Scientific Results of IXPE. Appendix A.1. Pulsar Wind Nebulae and Radio Pulsars. Pulsar wind nebulae (PWNe) shine in X-rays emitted via the synchrotron process. Bubbles of plasma accelerated up to 10-100 TeV and magnetic fields produced by a spinning neutron star interact with the interstellar medium. These are responsible for the complex morphologies seen in X-rays. The Crab Nebula was the only source for which OSO-8 detected polarized radiation in the 1970s [9], thanks to its collimated Bragg diffraction polarimeter, and it has more recently been re-detected by PolarLight [13,14]. Angularly resolved polarimetry from IXPE observations has already been published for the Vela PWN [39] (see Figure A1a), the Crab Nebula and its pulsar [40] (see Figure A1b), and MSH 15-52 and its pulsar [41] (see Figure A1c). The polarization maps obtained by IXPE for these PWNe are shown in Figure A1. The high level of polarization in Vela (up to 67-72%), the Crab PWN (up to 45-50%), and MSH 15-52 (up to 70%), along with the direction of the magnetic field, shows that turbulence is much less effective than expected. The IXPE image of the Vela PWN shows that the polarization structure is symmetric about the projected pulsar spin axis, which corresponds to its direction of proper motion. For the Crab PWN, the integrated polarization degree is 20% and the polarization angle is about 145°. While the polarization degrees are consistent between IXPE and OSO-8, the polarization angle shows a small but statistically significant difference from the 154° measured by OSO-8 [9]. Such a difference could be due to a change in the morphology of the inner structure of the Crab Nebula. In MSH 15-52, the magnetic field follows the thumb, fingers, and other linear structures. The polarization reaches about 70% at the end of the jet, while the magnetic field is less ordered at the base of the inner jet. IXPE further investigated the polarization properties of the Crab and MSH 15-52 pulsars, facilitated by its imaging capabilities. For the Crab Pulsar, after subtracting the residual nebular component under the pulsar point spread function (PSF), the phase-resolved polarization shows a significant detection only at the center of the main (P1) pulse, where it is 15% with a polarization angle of about 105°. The phase-integrated polarization degree of the Crab Pulsar is 2.6 +2.7 −2.6 %. Such a small polarization is in contrast with most of the existing pulsar models [42,43]. For MSH 15-52, a single significant polarization bin at the maximum of the phase-resolved lightcurve is interpreted as a possible extension of its radio emission. Appendix A.2. 
Supernova Remnants At the time of writing, IXPE had observed five supernova remnants so far: Cas A (see Figure A2a), Tycho SNR (see Figure A2b), SN 1006 north-east rim (see Figure A2c), RCW86, and RX J1713; however, only the first three have been published [44][45][46] In order to measure the polarization of Cas A [44] and Tycho SNR [45], we first selected an energy range between the calcium/argon line and the iron line, where the thermal emission is expected to be at a minimum.On the contrary, no lines are present in the SN 1006 NE limb [46], and the energy range that maximized the source-to-background ratio was selected.We then performed analysis on a pixel-by-pixel basis (see Figure A2).The results for Tycho and Cas A were inconclusive; thus, we adopted a different technique.Assuming a circular symmetry for the polarization direction, we recalculated the Stokes parameter for each event [47] by calculating a new zero for the direction of the photoelectrons and its position angle with respect to the rotated celestial coordinates, taking the center of both supernovae as the origin.This procedure resulted in new values for the Stokes parameters, providing an overall signal for the signal in all regions corresponding to the tangential and radial Q and U Stokes parameters.For every annular or circular region selected, we found that the polarization was tangential.Because synchrotron emissions require a magnetic field perpendicular to the polarization angle, we discovered that for Cas A and Tycho SNR, just as in the radio wavelength, the magnetic field has a radial global orientation.X-rays are actually emitted close to the accelerating shock fronts, and the 10-100 TeV electrons responsible for this emission have a short lifetime due to cooling.Further, interstellar magnetic fields in the outer shock (and in the reverse shock in Cas A) are eventually compressed tangentially, meaning that the instability mechanism should act quickly to realign the magnetic field in the radial direction.The tangential polarization degree for the whole Cas A emission is 1.8% ± 0.3%, which is smaller than in the radio band.The corresponding average polarization degrees for the sole synchrotron emission, considering the external shock rim, are 2.5% and 5%.For Tycho SNR, the global tangential polarization degree is 3.5% ± 0.7%, corresponding to 9.1% ± 2.0% for the synchrotron component, while for the external rim it is 11.9% ± 2.2%.For Tycho, the levels of polarization are larger than those in the radio band.It is worth noting [45] that in Tycho SNR the west non-circular region containing the stripes shows a significant expected polarization (∼23%), possibly indicating the presence of nonlinear diffusive shock acceleration [48].SN 1006 shows larger polarization than radio emissions, with an average value of about 20% for the whole shell.As in the other SNRs, the direction of the magnetic field is perpendicular to the rim.As a matter of fact, all of the SNRs show a smaller polarization degree with respect to the maximum obtainable by synchrotron emission (≈80%). Appendix A.3. 
Accreting Stellar-Mass Black Holes The first black hole binary system observed by IXPE was Cyg X-1 [49].During this first point, Cyg X-1 was in a low and hard state, and the polarization found in the IXPE energy band, at ∼4%, was much larger than expected based only on the orbital inclination.This suggests a disk with its most internal part observed more edge-on than expected-a sort of warped disk.A hint of an increase in polarization with energy was found in the data as well (see Figure A3a).The other important result is that the polarization angle was found to be parallel to the radio jet (see Figure A3b).Because most of the emitted X-rays are due to the corona in the low and hard state, the polarization direction excludes a lamppost geometry (see Figure A3c).In such a geometry, the polarization angle should be perpendicular to the radio jet.The corona geometry must be sandwiched against the disk, while the polarization can be either parallel or perpendicular to the disk but the jet cannot be parallel to the disk.Thus, this is the first time that the inner flow toward the black hole has been observed to be perpendicular to the jet direction.A sandwich corona excludes the aborted jet origin, and points to plasma instabilities across the surface.Other black holes were observed, as indicated in Table 1.The most puzzling are 4U1630-47 [50,51] and Cyg X-3 [52].4U1630-47 was observed at two different levels of luminosity in a high soft state where the disk emission dominates.Its complex behavior challenges a simple geometrically thin and optically thick disk model.Cyg X-3 shows polarization perpendicular to the radio ejection, thought to be due to reflection from the circumnuclear material and a polarization degree as high as ∼25%. Appendix A.4. Accreting White Dwarfs and Neutron Stars During the first two years of IXPE operation, we observed both low-magnetized neutron star binaries (LMNSB) and X-ray binary pulsars, with the latter being more polarized than the former.This was not unexpected, as the magnetic field is much larger for pulsars (few 10 12 Gauss) and the photon opacity is anisotropic with respect to the magnetic field direction.In LMXRB, instead, residual polarization may derive from the scattering of primary radiation either on the accretion disk that extends down to the neutron star's surface, from the spreading layer (the layer of material accreting onto the neutron star's surface, which is approximately perpendicular to the accretion disk), or from the boundary layer, which is the parallel layer between the truncated disk and the neutron star surface.The sources observed thus far are listed in Table 1.Among these, Cyg X-2 [53], XTE J1702-462 [54], GX5-1 [55], and Sco X-1 [56] are called "Z sources" because of the characteristic "Z" shape in the color-color diagram.For "Atoll", the observed sources were GS 1826-238 [57], GX 9 + 9 [58], and 4U1830-303 [59]. X-ray pulsars show a much smaller polarization degree (∼10-15%) than was expected (∼60-80%) [5,60,61].The reason for this may be that the reprocessing geometry [62] is much more complex than the simple "fan" or "pencil" model, which involves only simple columns and hot spots at the poles. 
The low polarization degree found in the archetypal wind-accreting high-mass X-ray binary system Vela X-1 [63], as in the other X-ray pulsars, could be related to the inverse temperature structure of the neutron star atmosphere. In Vela X-1, the low polarization degree may also be due to the evolution of the polarization degree with energy (a 90° rotation within the IXPE band) and with pulse phase. Despite the smaller-than-expected observed polarization, thanks to IXPE it was possible to disentangle the physics from the geometry by applying the rotating vector model derived from radio polarimetry. For the first time, we measured the magnetic obliquity (the angle between the magnetic dipole axis and the spin axis) together with the position angle of the projection of the spin axis on the plane of the sky. Interestingly, an orthogonal rotator, with a magnetic obliquity close to ∼90°, was found by IXPE [64].
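As a sketch of the rotating vector model invoked above, which ties the swing of the polarization position angle with pulse phase to the magnetic obliquity and the viewing geometry, the function below evaluates the classical RVM relation. The angles in the example are arbitrary illustrative values, not fitted IXPE results.

```python
# Minimal sketch: the classical rotating vector model (RVM) for the polarization
# position angle as a function of pulse phase (illustrative geometry only).
import numpy as np

def rvm_position_angle(phase, alpha, zeta, phi0=0.0, psi0=0.0):
    """Polarization position angle [rad] versus pulse phase [rad].

    alpha: magnetic obliquity (angle between magnetic and spin axes)
    zeta:  angle between the spin axis and the line of sight
    phi0, psi0: phase and position-angle offsets
    """
    num = np.sin(alpha) * np.sin(phase - phi0)
    den = np.sin(zeta) * np.cos(alpha) - np.cos(zeta) * np.sin(alpha) * np.cos(phase - phi0)
    return psi0 + np.arctan2(num, den)

# Illustrative near-orthogonal rotator: alpha ~ 90 deg, observer at zeta = 60 deg.
phase = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
psi = rvm_position_angle(phase, alpha=np.radians(88.0), zeta=np.radians(60.0))
for ph, ps in zip(phase, psi):
    print(f"phase {np.degrees(ph):6.1f} deg -> PA {np.degrees(ps):7.1f} deg")
```
Appendix A.5. 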
Appendix A.6.Radio-Quiet AGNs and Sgr A ⋆ Accretion disks in AGNs emit mostly in the UV-optical energy band, and the primary X-ray emission is thought to be due to inverse Compton radiation in a hot corona embedding the colder accretion disk [69].Such a geometry can produce polarized radiation [70], and from the degree of polarization it is possible to derive information on the geometry of the corona.An aborted jet origin is derived from a lamppost corona while the presence of instabilities is derived from a corona sandwiching of the accretion disk.An angle of polarization parallel to the disk axis, detected as the direction of the commonly present weakly emitting extended radio emission, is the signature of a corona sandwiching the disk.This is the case for NGC4151; indeed, the measured polarization (4.9 ± 1.1)% is thought to be entirely due to the reflection from the accretion disk.Only the upper limits [71] were found for NGC-5-23-16.Interestingly, for IXPE observation of the Circinus galaxy, a Compton-thick AGN which is observed almost edge-on with respect to its symmetry axis, confirms the presence of a thick obscuring torus as a neutral reflector due to polarization [58] (28 ± 7%).In fact, for this AGN the polarization direction is normal with respect to to the weak radio jet (see Figure A5).Much closer to us, our galactic supermassive black hole is a very dim X-ray source with occasional fast flares.Cold molecular clouds shining in X-rays may reflect [72] photons emitted in the past from Sgr A ⋆ .Thus, the reflected and observed radiation should be polarized [73], with the polarization vector indicating the origin of the radiation and, eventually, the Sgr A ⋆ .IXPE has established this for certain (see Figure A6) [74].Appendix A.7. Blazars and Radio Galaxies The IXPE energy band is particularly suitable for analyzing blazars' polarimetry.Blazars with X-rays either in the synchrotron peak (high-synchrotron peaked HSP) or the Inverse Compton (IC) peak (low-synchrotron peaked LSP) can be probed using polarimetry.Based on the sensitivity of IXPE, only HSP blazars were found to be polarized [75][76][77], while LSP blazars such as BL-Lac were found to be unpolarized [78].The upper limits remain too high to discriminate hadronic versus leptonic models as the origin of the hlIC peak [79], which was not totally unexpected given their lower fluxes.Interestingly, an observation of BL Lac (LSP) during a flare showed significant polarization, with X-rays moved into the synchrotron peak [80]. Restricting ourselves to HSP blazars such as Mrk 501 and Mrk 421, we note that for Mrk 501 IXPE observation [75] showed a polarization degree of ∼10%, which is twice as much as in the optical band, with the polarization angle directed along the jet.Together with a modest, if not null, polarization variability, these characteristic features are considered the signature of an energy-stratified shock acceleration process. The first IXPE observation of Mrk 421 showed a polarization vector that was not coincident with the jet direction [76] but rather with a polarization degree of (15 ± 2)%, ∼3 times larger than that observed in the optical-infrared-mm region.Another later observation surprisingly showed a polarization angle that was rotating quickly with time [77] (see Figure A7a).This rotation indicates the presence of a helical magnetic field (see Figure A7b) in addition to energy-stratified shock acceleration. Fe. 
Cal sources are used during flight operations and Earth occultation.Cal-C and Cal-D provide the final gain correction for energy determination. Figure 2 . Figure 2. The filter and calibration wheel (FCW) inside each detector unit for onboard calibration.In addition to the calibration system, the FCW hosts a filter made of kapton for high-flux sources and an aluminum cap used for gathering the background. Figure 3 . Figure 3. (a) Top view of a mirror fabricated by NASA-MSFC and (b) effective area of each flight mirror [20]. Figure 4 . Figure 4.The Falcon-9 rocket and its its firing before being attached at the launch pad. Figure 5 . Figure 5.The three detector units integrated into the Detector Service Unit on the optical bench. Figure 6 . Figure 6.Boom motion due to thermal-elastic expansion along the orbit, as accurately modeled and corrected thanks to post facto reconstruction to remove the dithering [31]. Funding: The Imaging X-ray Polarimetry Explorer (IXPE) is a joint US and Italian mission.The US contribution is supported by the National Aeronautics and Space Administration (NASA) and led and managed by its Marshall Space Flight Center (MSFC), with industry partner Ball Aerospace (contract NNM15AA18C).The Italian contribution is supported by the Italian Space Agency (Agenzia Spaziale Italiana, ASI) through the contract ASI-OHBI-2022-13-I.0, the agreements ASI-INAF-2022-19-HH.0 and ASI-INFN-2017.13-H0, and its Space Science Data Center (SSDC) with agreements ASI-INAF-2022-14-HH.0 and ASI-INFN 2021-43-HH.0, as well as by the Istituto Nazionale di Astrofisica (INAF) and the Istituto Nazionale di Fisica Nucleare (INFN) in Italy.This research used data products provided by the IXPE Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC) at the NASA Goddard Space Flight Center (GSFC).The author acknowledges funding from the Ministry of Universities and Research (PRIN 2020MZ884C) "Hype-X: High Yield Polarimetry Experiment in X-rays", the INAF MiniGrant "New generation of 3D detectors for X-ray polarimetry: simulation of performances" and the MAECI Contribution 2023-2025 Grant "RES-PUBLICA Research Endeavour for Science Polarimetry with the University of Bonn in Liason with INAF for Cosmic Applications".Data Availability Statement: IXPE data are publicly available at https://heasarc.gsfc.nasa.gov/docs/heasarc/missions/ixpe.html(accessed on 17 March 2024). Figure A2 . Figure A2.(a) Magnetic−field map for the Cas A SNR (Image credit: NASA/CXC/SAO/NASA/ MSFC/Vink et al.).The region in green has a higher-confidence measurement.The magnetic field is mostly radial.(b) IXPE polarization map for Tycho SNR[45].The polarization directions show a mostly radial magnetic field.(c) Polarization map for the SN 1006 NE limb[46].The polarization directions show a mostly magnetic field perpendicular to the limb. Figure A3 . Figure A3.(a) The polarization degree shows a possible increase with energy.(b) The polarization angle is parallel to the radio jet.This discovery (1) establishes that the disk axis is parallel to the jet and (2) that the corona geometry cannot be a lamppost.(c) The different expected polarization degrees and angles for different corona models.All figures are from [49]. Figure A4 . 
Figure A4.(a) Spectro−polarimetry for 4U0142 + 61.Crosses indicate the measured values and stars indicate the model (the equatorial belt-condensed surface RCS models are indicated by the stars).Contours enclose the 68.3% confidence level.The gray shaded area and the black arrow indicate the direction of the proper motion and its uncertainty [65].(b) Spectro-polarimetry of 1RXS J1708, showing the 50% confidence regions for joint measurement of the polarization degree and angle.Green crosses and orange stars show the prediction of the two different possible emission regions' structures [66].(c) Spectro-polarimetry of SGR1806.The crosses indicate the measures.The model is frozen from the one determined by XMM (black body plus power law).The contours are 68.3% and 99% [67]. Figure A5 . Figure A5.The polarization angle of the Circinus Galaxy is directed along the accretion disk traced by the H 2 O maser shown in the figure; Together with the presence of reflection spectrum from cold matter, this is the signature of an obscuring torus responsible for the observed polarization.The polarization degree and angles contours represent 68%, 90% and 99% confidence level.Based on a comparison of the simulation and the observed polarization, the aperture of the torus is 45-55 • [58]. Figure A6 . Figure A6.(a) Polarimetry map of the molecular clouds in the vicinity of the galactic center.This mapping allows the past X-ray flares of the galactic center to be reconstructed based on their polarization degree and angle.(b) Measurements taken by IXPE show that Sgr A ⋆ was 10 6 times brighter in the X-ray wavelength some 200 years ago.The mapping of different molecular clouds could allow for the determination of whether a single flare or a multiple flares occurred in the past [74]. Figure A7 . Figure A7.(a)The rotation of the polarization angle in Mrk 421 measured in X-rays is much faster (80 • -90 • /day) than that previously measured in the optical band[77] (8 • -9 • /day) for this source.(b) Energy-stratified shock acceleration is active in an environment embedded with a helicoidal magnetic field[77]. Table 1 . Celestial sources observed by IXPE during the first two years of operations.
7,943.6
2024-03-25T00:00:00.000
[ "Physics", "Environmental Science" ]
A Multi-Sensorial Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied Environments One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot by integrating different state-of-the-art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. This makes it possible to improve the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by solving the scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can be easily incorporated into the SLAM system, obtaining a local 2.5D map and a footprint estimation of the robot position that improves the 6D pose estimation through the EKF. We present experimental results with two different commercial platforms, and validate the system by applying it to their position control. Introduction Research on autonomous aerial robots has advanced considerably in the last decades, especially in outdoor applications. Enabled by MEMS inertial sensors and GPS, Unmanned Aerial Vehicles (UAVs) that show an impressive set of flying capabilities in outdoor environments have been developed, ranging from typical flight manoeuvres [1] to collaborative construction tasks [2] or swarm coordination [3], among many other applications. Although technological progress has made possible the development of small Micro Aerial Vehicles (MAVs) capable of operating in confined spaces, indoor navigation is still an important challenge for a number of reasons: (i) most indoor environments remain without access to external positioning systems such as GPS; (ii) the onboard computational power is restricted; and (iii) the low payload capacity limits the type and number of sensors that can be used. However, there is a growing interest in indoor applications such as surveillance, disaster relief or rescue in GPS-denied environments (such as demolished or semi-collapsed buildings). In these scenarios it is often preferable to use low-cost MAVs that can be easily replaced in case of breakage, damage or total loss. From the navigation point of view, state estimation of the six degrees of freedom (6-DoF) of the MAV (attitude and position) is the main challenge that must be tackled to achieve autonomy. The inaccuracy and high drift of MEMS inertial sensors, the limited payload for computation and sensing, and the unstable and fast dynamics of air vehicles are the major difficulties for position estimation. So far, the most robust solutions are based on external sensors, as in [4], where an external trajectometry system directly yields the position and orientation of the robot, or in [5], where an external CCD camera provides the measurements. However, these solutions require prior preparation of the environment and are not applicable to unknown spaces, in which the MAV must rely on its own onboard sensors to navigate. 
The problem of building a map of an unknown environment from onboard sensor data while simultaneously using the data to estimate the robot's position is known as Simultaneous Localization and Mapping (SLAM) [6]. Robot sensors have a large impact on the algorithm used in SLAM. In the last ten years, a large amount of research has been devoted to 2D laser range finders [7,8], 3D lidars [9,10] and vision sensors [11,12]. Recent research pays attention to many other alternatives that can be leveraged for SLAM, such as light-emitting depth cameras [13], light-field cameras [14], and event-based cameras [15], as well as magnetic [16], olfaction [17] and thermal sensors [18]. However, these alternative sensors have not yet been considered in the same depth as range and vision sensors to perform SLAM. Although there have been significant advances in developing accurate and drift-free SLAM algorithms for ground robots using different sensors and algorithms [6,19], attempts to achieve similar results with MAVs have not been as successful. MAVs face a number of specific challenges that make developing algorithms for them much more difficult: limited sensing payload and onboard computation, indirect relative position estimation, fast dynamics, the need to estimate velocity, and constant motion, among others. In aerial navigation, sensors must be carefully chosen due to mobility, design and payload limitations. All aerial robots are endowed with an Inertial Measurement Unit (IMU) that combines accelerometers, gyroscopes and sometimes magnetometers to measure orientation, angular rate and accelerations. However, tracking the MAV position using dead reckoning based on the IMU (known as an Inertial Navigation System, INS) produces large errors and drift [20], so the solution always involves fusing these measurements with other exteroceptive sensors. When GPS is available, the fusion of IMU and GPS information results in INS/GPS systems that yield satisfactory results in outdoor environments [21]. Nevertheless, in GPS-denied environments other alternatives should be explored. Due to their low weight and consumption, most commercial MAVs incorporate at least one monocular camera, so Visual SLAM (VSLAM) methods have been widely used [22][23][24][25]. However, most of these works have been limited to small, well-lit workspaces with distinct image features, and thus may not work as well for general navigation in GPS-denied environments. In addition, the computational time is too high for the fast dynamics of aerial robots, making it difficult to control them. Furthermore, despite their greater consumption and weight, range sensors such as RGB-D cameras [26,27] or 2D range sensors [28] have also been used on MAVs due to their direct distance detection. In particular, the high working rate of current 2D laser scanners, along with their direct and accurate range detection, makes them a very advantageous sensor for indoor aerial navigation. Several works, such as [29,30], fuse laser and IMU measurements to obtain 2D maps and to estimate the 6-DoF pose of the MAV. In this paper, we propose a SLAM system for GPS-denied environments that can be easily configured according to the onboard sensors of low-cost MAVs. This work is motivated by a recent research line of the RobeSafe Research Group [31] of the University of Alcalá (Spain), whose final aim is the development of a heterogeneous swarm of robots for disaster relief. The swarm consists of an Unmanned Ground Vehicle (UGV) and one or more MAVs for accessing difficult areas. 
Low-cost MAVs and their on-board sensors have been chosen so that they can be easily replaced in the event of breakage or loss. Besides, the SLAM system is executed by a remote PC onboard the UGV, offsetting the low onboard computing capacity of the MAVs and facilitating their replacement. This paper focuses on the development of the SLAM system for the MAVs based on IMU, vision and laser sensors (depending on their availability) and fusing different state-of-the-art SLAM methods. The system robustly obtains the 6-DoF pose of the MAV within a local map of the environment. We consider a minimum sensory configuration based on a frontal monocular camera, an IMU and an altimeter. Two recent and robust monocular VSLAM techniques (LSD-SLAM [32] and ORB-SLAM [33]) have been tested and compared when applied to aerial robots, whose onboard sensors make it possible to solve the scale ambiguity and to improve the pose estimation through an Extended Kalman Filter (EKF). When available, a 2D laser sensor is used to obtain a local 2.5D map and a footprint estimation of the pose of the MAV, which improve the pose estimation in environments with few visual features and/or in low light conditions. The system has been validated with two low-cost commercial drones with different sensor configurations and a remote control unit, using a distributed node system based on the Robot Operating System (ROS) [34]. The experimental results show that sensor fusion improves position estimation and the obtained map under different test conditions. The paper is organized as follows: Section 2 describes the overall system, including the hardware platforms used and the software architecture. The SLAM approach is explained in detail in Section 3. Section 4 presents the experimental results. Finally, in Sections 5 and 6 the discussion and conclusion of this work are presented. System Overview We face the problem of autonomous MAV state estimation as a software challenge, focusing on high-level algorithm integration rather than on specific hardware. Some state-of-the-art SLAM methods have been tested, compared and fused under different sensor configurations and environment conditions, allowing conclusions to be drawn about their capability to be applied to aerial robots. For this reason, we use low-cost commercial platforms and an open-source development environment (ROS), so that sensor drivers and some algorithms can be used without further development. Assumptions and Notation We consider a quadrotor freely moving in any direction in ℜ³ × SO(3). The three main coordinate frames considered are: (1) the drone coordinate frame {D}, attached to its body; (2) the drone stabilized frame {S}, whose attitude (roll/pitch angles) is corrected to be parallel to the floor; and (3) the world coordinate frame {W}, which matches the initial position of the drone and is always at ground level. All the coordinate systems are defined as right-handed, as shown in Figure 1. Orientation is specified in Euler XYZ (roll-pitch-yaw) angles notation, (φ, θ, ψ). 
Throughout this document, the following notation is used to distinguish actual, measured and commanded variables: the actual variable is written without emphasis, the corresponding measured variable is marked with a bar accent, and the commanded one with a hat accent. For example, v̂z is the commanded vertical velocity, while v̄z and vz are the measured and the actual vertical velocities, respectively. Hardware Platforms One of the main specifications of the proposed SLAM system is that it can be configured to be implemented with different low-cost MAVs, using their onboard sensors. Usually, the software running onboard a commercial MAV is not accessible, and the control software is neither open-source nor documented in any way. Indeed, many low-cost MAVs are primarily sold as high-tech toys and can only be commanded from their own applications running on a smartphone or a laptop. However, other platforms provide a Software Development Kit (SDK) for programming applications by sending commands through an ad-hoc wireless LAN network set up by the drone, opening the door to research work. 
Some of these MAVs, whose drivers are available as ROS packages and can therefore be used easily, are the Pelican and Hummingbird quadrotors of Ascending Technologies [35], the CrazyFlie of Bitcraze [36], the Matrice 100 of DJI [37], the Erle-copter of Erle Robotics [38], and the AR-Drone and Bebop drones of Parrot [39]. On the other hand, most commercial MAVs incorporate an IMU, a sensor for measuring height (ultrasound and/or barometer) and at least one monocular horizontal camera. This conforms the minimal onboard sensor configuration required by our proposed SLAM system. Some drones with limited onboard computational capacity but a greater payload can easily incorporate a light 2D laser rangefinder, which is considered an additional sensor that greatly improves the results of the SLAM system under certain environmental conditions. For our experiments, we have chosen two commercial platforms that meet the following requirements: (1) low cost and small dimensions; (2) minimal onboard sensory configuration consisting of IMU, altimeter and frontal camera; and (3) an SDK for programming applications by sending commands and reading sensors through a wireless local network (in both cases, with a driver available in the ROS framework). These platforms, shown in Figure 2, are: 1. The Parrot Bebop (Parrot, Paris, France). This is a light (400 g) and small (33 × 38 × 3.6 cm) drone, ideal for indoor applications. It is equipped with a frontal "Fisheye" camera and another vertical camera, which is used internally for stabilization and horizontal velocity estimation. Besides, it has an ultrasonic altimeter, a 3-axis accelerometer, two gyroscopes and a barometer. It incorporates an onboard controller (dual-core Parrot P7 processor), a quad-core graphics processor, 8 GB of flash memory and a Linux distribution. The ROS framework provides the bebop_autonomy package [40] as a driver for communicating with the drone through its wireless local network. 2. The Erle-Copter (Erle Robotics, Vitoria, Spain). This drone weighs 1.3 kg and its size is 36 × 34 × 9.5 cm. It has a greater payload than the Bebop (1 kg) and an onboard open brain (ErleBrain2) based on a Raspberry Pi with the ROS framework and the APM autopilot. This allows a light 2D laser sensor such as the Hokuyo URG-04LX to be carried onboard, and we have added the hokuyo_node ROS package [41] to easily read and transmit the laser measurements to the remote processor. The Erle-Copter can be accessed and commanded from ROS using the mavros package [42]. It is important to remark that the proposed SLAM approach can be applied to other similar platforms, and so we treat the drone as a black box, using only the available W-LAN communication channels to access and control it. Namely, the following inputs/outputs are used in our SLAM system:
• A command output channel, to send the drone control packages u with the desired velocities along the x and y axes, the vertical speed and the yaw rotational velocity, all of them defined with respect to the drone stabilized frame {S}: u = (v̂x, v̂y, v̂z, ψ̂̇ ) (1)
• A video input channel, to receive the video stream of the forward-facing camera.
• A navigation input channel, to read onboard sensor measurements. The minimum data required by our system are:
- Drone orientation as roll, pitch and yaw angles (φ, θ, ψ) with respect to the world frame {W}.
- Drone height h, obtained from the altimeter.
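A minimal sketch of this black-box interface is given below. It is written with rospy and assumes the Bebop topic names exposed by the bebop_autonomy driver (/bebop/cmd_vel, /bebop/image_raw and /bebop/odom); these names, and the use of the Odometry message as the "Navdata" source, are assumptions made for illustration and would be replaced by the equivalent mavros topics on the Erle-Copter.

# Sketch of the black-box MAV interface used by the remote SLAM system (assumed topic names).
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image
from nav_msgs.msg import Odometry

class MavInterface:
    def __init__(self):
        # Command output channel: desired vx, vy, vz and yaw rate in the stabilized frame {S}.
        self.cmd_pub = rospy.Publisher('/bebop/cmd_vel', Twist, queue_size=1)
        # Video input channel: frames of the forward-facing camera, consumed by the VSLAM module.
        rospy.Subscriber('/bebop/image_raw', Image, self.on_image, queue_size=1)
        # Navigation input channel: attitude and height ("Navdata"), consumed by the EKF.
        rospy.Subscriber('/bebop/odom', Odometry, self.on_navdata, queue_size=1)
        self.last_image = None
        self.last_navdata = None

    def on_image(self, msg):
        self.last_image = msg        # handed to LSD-SLAM / ORB-SLAM

    def on_navdata(self, msg):
        self.last_navdata = msg      # orientation (roll, pitch, yaw) and height

    def send_command(self, vx, vy, vz, yaw_rate):
        # Control package u = (vx_hat, vy_hat, vz_hat, yaw_rate_hat) of Equation (1).
        u = Twist()
        u.linear.x, u.linear.y, u.linear.z = vx, vy, vz
        u.angular.z = yaw_rate
        self.cmd_pub.publish(u)

if __name__ == '__main__':
    rospy.init_node('mav_black_box_interface')
    iface = MavInterface()
    rospy.spin()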
Other measurements can be incorporated into the SLAM system when available. For example, the Bebop drone incorporates a downward camera that, by means of an optical-flow based motion estimation algorithm calculated onboard, provides relatively precise estimates of the horizontal velocities in the stabilized coordinate frame {S}, (v̄x, v̄y). This measurement can be easily incorporated through the Extended Kalman Filter, as will be shown in a later section. Even more importantly, the availability of an onboard 2D laser rangefinder, as in the case of the Erle-copter (Figure 2b), allows these measurements to be introduced into a scan matcher module of the SLAM system, which notably enhances the estimation results. Software Architecture As stated above, the developed system is distributed between the flying unit (MAV) and a remote ground station, as shown in Figure 3.
The flying unit is treated as a black box, and includes at least one onboard controller/processor that executes an autopilot and performs the reading and sending of the onboard sensor measurements. When a laser sensor is present, as in the Erle-Copter, it is connected to the onboard processor, which needs to be able to read and send these measurements through the network created by the drone. The SLAM module is executed on the ground station, as well as the control and planning modules, the last one being out of the scope of this paper. The use of ROS as development framework facilitates the setting up of this distributed system. The SLAM system consists of three major modules [43]: (1) a scan matching algorithm that uses laser readings to obtain a 2.5D map of the environment and a 3-DoF pose estimation of the footprint of the MAV on the map; (2) a monocular visual SLAM system that obtains a 6-DoF pose estimation; and (3) an Extended Kalman Filter that fuses these estimations with the navigation data provided by the onboard sensors of the MAV to obtain a robust 6-DoF estimation of the position of the robot. In order to test the SLAM system in autonomous flying conditions, a simple PID controller has been developed. This controller allows the MAV to reach selected reference poses in order to autonomously track a desired path. Figure 3. Distributed system: the flying unit (1) reads the sensors (blue blocks with continuous line correspond to the minimum sensory configuration and blue blocks with dashed line to optional sensors) and (2) receives commands for the autopilot (green block); the ground station executes the SLAM system (yellow blocks), the controller (red block) and the planner, which is out of the scope of this paper.
SLAM Method Description In the following subsections, we describe the modules of the SLAM system shown in Figure 3. For the scan matcher and VSLAM modules, we analyze and compare state-of-the-art techniques that can be applied to aerial robots. The estimates of these modules are integrated with the rest of the onboard sensors using the EKF module. Monocular Visual SLAM In order to perform the simultaneous localization and mapping of the environment by means of visual information, the process of Visual Odometry (VO) must be accomplished. VO is the process of determining the position and orientation of a robot by analyzing the associated camera images, and it is thus used to estimate the 6-DoF position of the MAV. The VO approaches can be classified into two main categories based on the number of cameras used [44]: monocular and stereo VO methods. A stereo pair is the minimum camera configuration for solving the scale ambiguity problem [45], so monocular VO methods used on single-camera MAVs have the implicit problem of scale ambiguity. In this work, a method for calculating the scale of the estimated pose and map based on additional MAV sensors is proposed. On the other hand, monocular VSLAM methods that simultaneously recover camera pose and scene structure from video can be divided into two classes [46]: (a) feature-based methods, which first extract a set of feature observations from the image and then compute the camera position and scene geometry as a function of these feature observations; and (b) direct methods (dense or semi-dense), which optimize the geometry directly on the image's pixel intensities, which enables using all the information in the image. Most monocular VO algorithms for MAVs [47][48][49] rely on PTAM [50]. PTAM is a feature-based VSLAM algorithm that achieves robustness by tracking and mapping many (hundreds of) features (Figure 4a). It runs in real time by parallelizing the mapping and motion estimation tasks. However, PTAM was designed for augmented reality applications in small desktop scenes and it does not work properly in large-scale environments. Recently, more robust and efficient monocular VSLAM methods have been proposed in the literature [51]. In [52], a semi-direct monocular visual odometry algorithm, Semi-direct Visual Odometry (SVO), is presented. This algorithm has been successfully applied to MAVs with a downfacing camera and outputs a sparse 3D reconstructed environment model, but it is not designed to work with forward facing cameras. In [32], the authors describe a direct monocular VSLAM algorithm for building consistent, semi-dense reconstructions of the environment, the LSD-SLAM method (Figure 4b). LSD-SLAM employs a pose graph optimization which explicitly allows for scale drift correction and loop closure detection in real time. LSD-SLAM employs three parallel threads after initialization takes place: tracking, depth map estimation, and map optimization. A modified version of LSD-SLAM was later presented in [53] for a stereo camera setup. Later, Dense Piecewise Parallel tracking and Mapping (DPPTAM) [54] was released as a semi-dense direct method similar to LSD-SLAM, but including a new thread that performs dense reconstructions using segmented super-pixels from indoor planar scenes. Finally, in [33], a keyframe-based monocular VSLAM system with ORB features that can estimate the 6-DoF pose and reconstruct a sparse environment model is presented (ORB-SLAM, Figure 4c). The main contributions of ORB-SLAM are the usage of ORB features in real time, re-localization with invariance to viewpoint and a place recognition module that uses bags of words to detect loops.
After a study of these state-of-the-art monocular VSLAM methods [51], we decided to implement and compare two of these algorithms in our system [55]: LSD-SLAM and ORB-SLAM, both available as ROS packages and with execution times that make them suitable for MAVs. The scale ambiguity problem, implicit in both monocular methods, has been overcome by using the measurements of other onboard MAV sensors and comparing them with the VSLAM estimates. Taking into account that the scale factor is different for the x, y and z axes, the best solution is to use the altimeter to estimate the z-axis scale and the laser sensor, when available, to estimate the x-axis and y-axis scales as follows: scale_z = h_altimeter / h_VSLAM ; scale_x = dx_laser / dx_VSLAM ; scale_y = dy_laser / dy_VSLAM (2) where h_altimeter is the direct altimeter measurement, dx_laser and dy_laser are the laser range measurements in the x and y directions, and h_VSLAM, dx_VSLAM and dy_VSLAM are the equivalent estimates of the VSLAM system. As h_VSLAM = z_VSLAM, it can be seen that the real-scale z is directly the altimeter measurement, which is very accurate. If the laser is not available, we assume the same scale factor for the three axes and estimate it using the altimeter. The error committed under this assumption is negligible in narrow environments, such as the indoor ones for which our SLAM system has been designed. However, it could increase in wider outdoor environments. The scale factors are updated at each iteration of the SLAM system, because they may change over time.
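A minimal sketch of Equation (2) is given below, assuming that matched altimeter/laser measurements and the corresponding VSLAM estimates are already available; the function and variable names are illustrative and not part of the original implementation.

# Sketch of the per-axis scale correction of Equation (2).
def update_scale_factors(h_altimeter, h_vslam,
                         dx_laser=None, dx_vslam=None,
                         dy_laser=None, dy_vslam=None):
    """Return (scale_x, scale_y, scale_z) for the monocular VSLAM estimate."""
    scale_z = h_altimeter / h_vslam
    if dx_laser is not None and dy_laser is not None:
        # Laser available: independent scale factor per horizontal axis.
        scale_x = dx_laser / dx_vslam
        scale_y = dy_laser / dy_vslam
    else:
        # No laser: a single scale factor estimated from the altimeter.
        scale_x = scale_y = scale_z
    return scale_x, scale_y, scale_z

# Example: a VSLAM pose (in arbitrary units) rescaled to metric coordinates.
sx, sy, sz = update_scale_factors(h_altimeter=1.20, h_vslam=0.40,
                                  dx_laser=3.0, dx_vslam=1.1,
                                  dy_laser=2.4, dy_vslam=0.9)
x_m, y_m, z_m = sx * 1.1, sy * 0.9, sz * 0.40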
In the following subsections, we show some partial results of the VSLAM module with scale factor correction using the two chosen algorithms, so that some conclusions can be drawn about their performance when applied to aerial robots. VSLAM with LSD-SLAM LSD-SLAM performs a highly accurate pose estimation from direct image alignment, and obtains in real time a 3D reconstruction of the environment as a pose graph of keyframes with associated semi-dense depth maps. We use these maps as local information for VO but, due to their computational and memory requirements, we discard these dense maps for reconstructing the environment. When the laser is available, we choose to use the 2.5D laser map for planning and control, because its better accuracy and lower computational requirements are more appropriate for controlling a plant with dynamics as fast as those of a MAV. In order to make an initial assessment of the performance of LSD-SLAM in visually adverse scenarios, we have chosen two test environments. The first one, shown in Figure 5a, is a storage area of our building with poor lighting. Figure 6a shows a view of the second scenario, which consists of a room followed by two L-shaped corridors. Corridors are visually unfavourable scenarios due to the lack of features. Figures 5c and 6c show the results for both scenarios, consisting of the 3D semi-dense map and the pose estimation obtained by the LSD-SLAM technique with scale correction. As a coarse evaluation, and using some metrics of the actual MAV trajectory, we observe the following undesirable effects: (1) in the storage area, the lack of light causes a clear lengthening in the estimation of the second segment of the trajectory, from 4 to approximately 5.5 m; (2) by contrast, in the corridor environment the pose estimation is highly accurate in the initial room, but a clear shortening effect of around 25% is observed in the corridors (from 22 m to 16 m in the first corridor and from 16 to 12 m in the second one), due to the loss of visual features. Furthermore, LSD-SLAM is very sensitive to pure rotational movements, such as the one at the end of the first corridor, where an error of about 15° is observed.
Figures 5d and 6d show the initial results obtained with ORB-SLAM with scale correction in the same test environments. They show in red colour the ORB-features map and in green colour the estimated trajectory. As a feature-based method, the greatest weakness of ORB-SLAM again lies in poorly featured areas such as corridors. In this case, the corridors are also shortened by about 35%, from 22 to 14 m the first one, and from 16 to 10 m the second one. However, this method is more accurate than LSD-SLAM in well-lit rooms and in the storage room scenario. In the latter case, the second segment is also lengthened due to the poor illumination, but in this case only from 4 to 4.4 m. Besides, ORB-SLAM is more precise than LSD-SLAM in pure rotational movements. Comparison In order to have a clearer comparison of both algorithms in terms of accuracy, an external public benchmark has been applied. The chosen benchmark is the "RGB-D SLAM Dataset and Benchmark" [56] of the Computer Vision Group from the Technische Universität München (TUM). This benchmark provides datasets with measurements from different sensors and the ground-truth pose of the camera. As output, it delivers a variety of measures of translational errors. For our tests, the dataset rgbd_dataset_freiburg1_xyz has been used. This dataset contains a video recorded from a monocular camera that describes smooth and rotation-free movements, which is ideal for an initial comparison of translational errors. In the results section of this paper a more thorough analysis is performed with our own datasets and benchmark. Table 1 shows the median of the results obtained when executing the two VSLAM algorithms five times each with the mentioned dataset. As the real scale cannot be calculated by the monocular algorithms by themselves, the estimations extracted from the dataset were pre-processed: the real scale was calculated with a Matlab script and supplied as an argument to the online evaluation tool, as sketched below.
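The following sketch illustrates the kind of pre-processing and translational-error computation described above: estimated positions are matched to ground-truth positions by nearest timestamp, a single scale factor is applied to the monocular estimate, and the RMSE of the translational error is computed. This is a simplified stand-in written for illustration, not the actual code of the TUM evaluation tool or of the Matlab script.

import numpy as np

def translational_rmse(est_t, est_xyz, gt_t, gt_xyz, scale, max_dt=0.02):
    """est_t/gt_t: timestamps (s); est_xyz/gt_xyz: Nx3 positions; scale applied to the estimate."""
    est_t, gt_t = np.asarray(est_t), np.asarray(gt_t)
    est_xyz = np.asarray(est_xyz) * scale
    gt_xyz = np.asarray(gt_xyz)
    errors = []
    for t, p in zip(est_t, est_xyz):
        i = np.argmin(np.abs(gt_t - t))          # nearest ground-truth sample
        if abs(gt_t[i] - t) <= max_dt:           # only compare matching timestamp pairs
            errors.append(np.linalg.norm(p - gt_xyz[i]))
    errors = np.array(errors)
    return np.sqrt(np.mean(errors ** 2)), len(errors)

# Example with synthetic data: a wrongly scaled, noisy copy of a straight-line trajectory.
t = np.linspace(0.0, 10.0, 300)
gt = np.column_stack([t * 0.1, np.zeros_like(t), np.ones_like(t)])
est = gt / 2.5 + np.random.normal(0.0, 0.002, gt.shape)
rmse, pairs = translational_rmse(t, est, t, gt, scale=2.5)
print("RMSE %.3f m over %d matched pairs" % (rmse, pairs))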
Each of the methods has a different number of compared pairs because not all of the estimated poses are compared, but only those whose timestamps match those given by the ground truth. According to these results, ORB-SLAM is slightly more accurate than LSD-SLAM in terms of translational errors. In addition, ORB-SLAM is more robust against pure rotational movements. This conclusion has been reached by means of a trial-and-error approach (LSD-SLAM loses the tracking more often than ORB-SLAM when the camera undergoes pure rotational movements). However, the dataset used has many visual features to extract, which benefits feature-based methods such as ORB-SLAM. LSD-SLAM, as a direct method, achieves better results in poorly featured environments. Besides, the initialization of LSD-SLAM is automatic, from initial random values given to the depth map, while ORB-SLAM needs a specific initialization stage to build a map of points of the environment before starting the tracking. As the SLAM system has been designed to execute the VSLAM module on a ground station (in our tests an Intel Core i7-3635QM, 2.4 GHz), these execution times (with median values of 36.2 ms for LSD-SLAM and 32.2 ms for ORB-SLAM) are both feasible for tracking the fast dynamics of a MAV. Scan Matcher When a 2D laser rangefinder is available on the MAV, it can be used to substantially improve the pose and map estimation under certain environment conditions. It is clear that VSLAM methods cannot work in low light conditions, and their results are poor in featureless environments such as corridors. By contrast, laser sensors work properly under all light conditions and provide high-frequency, direct range measurements of the environment. So, this sensor not only provides redundancy but also complementarity, and it can be easily included in the Bayesian framework of our SLAM system through the Extended Kalman Filter, as shown in Figure 3. The Scan Matcher module aligns consecutive scans from the laser to improve the MAV motion estimation of the EKF. Although many scan matching techniques have been developed and applied to SLAM for ground robots moving on flat surfaces [57], most of them require odometric information that is not available in MAVs. One of the simplest and most commonly used algorithms for laser scan matching is Iterative Closest Point (ICP) [58]. The main drawback of most ICP-based methods is the expensive search for point correspondences, which has to be done at each iteration. As a solution, methods based on feature-to-feature correspondences [59] or likelihood maps [60] have also been proposed. One of the benefits of the latter is that they do not require explicit correspondences to be computed, and they can be easily integrated into probabilistic SLAM techniques. Thus, one of the most reliable working solutions for laser-based SLAM is Gmapping [57], based on likelihood maps and Rao-Blackwellized particle filters. This algorithm is available as open-source software, and provides successful results for ground robots moving in typical planar indoor scenarios. However, it relies on sufficiently accurate odometry (not available in aerial robots) and is not applicable to platforms with significant roll and pitch motion.
However, the HectorSLAM project, developed by Team Hector [61] of the Technische Universität Darmstadt, presents a system for fast online generation of 2D maps that uses only laser measurements and a 3D attitude estimation system based on inertial sensing [62]. This method requires low computational resources and is widely used by different research groups because of its availability as open source based on ROS. More recent methods have been proposed in the last years, such as the one in [8], an initialization-free 2D laser scan matching method based on point and line feature detection. This method, however, requires a pre-processing stage to detect features from the scans, which reduces the working rate and is not optimal for fast systems such as a MAV. In this work, we adapt the HectorSLAM system to our scan matching module. It allows the system to obtain a 2.5D map and a 3-DoF estimation of the MAV footprint pose within the map, consisting of the (x, y) coordinates and the yaw angle ψ in the world coordinate frame {W}. The 3-DoF pose estimation is based on the optimization of the alignment of the beam endpoints with the map obtained so far [62]. The endpoints are projected into the current map and the occupancy probabilities are estimated. The scan matching is solved using the Gauss-Newton method, which finds the rigid transformation that best fits the laser beams with the map. A multi-resolution map representation is used to avoid getting stuck in local minima. In addition, an attitude estimation system based on the IMU measurements transforms the laser readings from the drone body frame {D} to the drone stabilized frame {S} (horizontal to the ground, as can be seen in Figure 1) in order to compensate the roll and pitch movements when obtaining the 2.5D map. Figures 5b and 6b show the map and footprint pose estimation obtained by the scan matcher in the same environments used for VSLAM. Although HectorSLAM provides good results in confined environments (Figure 5b), the lack of odometry information to detect horizontal movements (the only measurement the algorithm uses from the IMU is the attitude, in order to stabilize the laser measurements into the {S} coordinate frame) results in undesirable effects, such as a shortening of the corridors (from 22 m to 20 m in the first corridor of Figure 6b). These drawbacks are compensated in our complete SLAM system by the addition of visual and other onboard sensor information, as well as prediction estimates, as will be shown in the results section. Data Fusion with EKF In order to fuse all the available data, we use an EKF [63]. This EKF is also employed to compensate for the different time delays in the system, arising from the wireless LAN communication and the computationally complex visual processing. The proposed EKF uses the following state vector: χ_t = (x_t, y_t, z_t, vx_t, vy_t, vz_t, φ_t, θ_t, ψ_t, ψ̇_t)ᵀ, where (x_t, y_t, z_t) is the position of the MAV in m, (vx_t, vy_t, vz_t) the velocity in m/s, (φ_t, θ_t, ψ_t) the roll, pitch and yaw angles in degrees, and ψ̇_t the yaw rotational speed in deg/s, all of them in the world coordinate frame {W}. In the following sections, we define the prediction and observation models.
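A small sketch of how this ten-dimensional state vector could be laid out in code is shown below; the index names and the exact ordering inside the filter are assumptions made for illustration, based only on the definition given above.

import numpy as np

# Indices of the EKF state chi_t = (x, y, z, vx, vy, vz, roll, pitch, yaw, yaw_rate).
X, Y, Z, VX, VY, VZ, ROLL, PITCH, YAW, YAW_RATE = range(10)

def make_state(position, velocity, angles, yaw_rate):
    """Build chi_t from position (m), velocity (m/s), roll/pitch/yaw (deg) and yaw rate (deg/s)."""
    chi = np.zeros(10)
    chi[X:Z+1] = position
    chi[VX:VZ+1] = velocity
    chi[ROLL:YAW+1] = angles
    chi[YAW_RATE] = yaw_rate
    return chi

# Example: hovering at 1 m height, facing 90 degrees, with an illustrative initial covariance.
chi0 = make_state([0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 90.0], 0.0)
P0 = np.eye(10) * 0.1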
Prediction Model The prediction model is obtained from the full motion model of the quadcopter's flight dynamics and reaction to control commands derived in [49]. This model establishes that the horizontal acceleration of the MAV is proportional to the horizontal force acting upon the quadcopter, that is, the accelerating force minus the drag force. The drag is proportional to the horizontal velocity of the quadcopter, while the accelerating force is proportional to a projection of the z-axis onto the horizontal plane. On the other hand, the influence of the sent control command u = (v̂x, v̂y, v̂z, ψ̂̇ ) is described by a linear model, taking into account the angle direction criteria shown in Figure 1. We estimated the proportional coefficients K1 to K8 for the different platforms from data collected in a series of test flights. From Equations (7) to (12) we obtain the overall state transition function. Navigation Data Observation Model This model relates the state vector to the onboard measurements obtained through the navigation channel of the MAV (called "Navdata" in Figure 3). According to the minimal onboard sensor configuration described in Section 2.2, this Navdata channel must at least provide the drone height h, obtained from an altimeter (ultrasonic sensors in both of our platforms), and the drone orientation as roll, pitch and yaw angles (φ̄, θ̄, ψ̄), obtained from the gyroscope. This results in the minimal measurement vector (h̄, φ̄, θ̄, ψ̄)ᵀ. When other onboard measurements are available, this measurement vector can easily be enlarged in order to incorporate them. This is the case of the Bebop drone, which provides a measurement of the horizontal velocities of the drone (v̄x, v̄y) in the stabilized coordinate frame {S}, obtained through the downfacing camera. The roll and pitch angles measured by the gyroscope can be considered as direct observations of the corresponding state variables. Furthermore, we differentiate the height and yaw measurements to be used as observations of their respective velocities. And, if present, the velocity measurements of the downfacing camera have to be transformed from the stabilized coordinate frame {S} into the world frame {W}. These relations define the resulting measurement vector z_NAVDATA and the observation function h_NAVDATA(χ_t). VSLAM Observation Model When the VSLAM module successfully tracks a video frame, its 6-DoF pose estimation is transformed from the coordinate system of the front camera to the coordinate system of the drone {D}, leading to a direct observation of the drone's pose given by z_VSLAM,t = f(E_DC · E_C,t), where E_C,t ∈ SE(3) is the estimated camera pose, E_DC ∈ SE(3) is the constant transformation from the camera to the quadcopter coordinate system, and f : SE(3) → ℝ⁶ is the transformation from an element of SE(3) to the position and roll-pitch-yaw representation. Scan Matcher Observation Model The scan matcher obtains a 3-DoF estimation of the MAV footprint pose, which is considered as a direct observation of the corresponding state variables (x_t, y_t, ψ_t) through a linear observation model. PID Controller Once the estimated position of the MAV is provided by the EKF, a PID controller has been designed to control the movements of the MAV. A reference (x̂, ŷ, ẑ, ψ̂) is needed as the desired pose of the drone in world coordinates. The EKF provides the estimation of the pose, as shown in Figure 3. The difference between the reference and the estimated pose is the error that is minimized by the PID controller, by sending to the MAV an appropriate control command u = (v̂x, v̂y, v̂z, ψ̂̇ ). This allows the algorithm to drive the MAV along a series of points in the world coordinate frame {W}, so it can follow a specific trajectory, as will be shown in the results section.
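As an illustration of the controller just described, a minimal PID sketch is given below: the pose error is computed in {W}, its horizontal part is rotated into the stabilized frame {S} so that the command matches the definition of u, and a standard PID law is applied per component. The gains, the yaw-wrapping helper and the fixed period are illustrative assumptions, not the values tuned for the real platforms.

import math

def wrap_deg(a):
    """Wrap an angle difference to (-180, 180] degrees."""
    return (a + 180.0) % 360.0 - 180.0

class AxisPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per commanded degree of freedom (illustrative gains).
pid_x, pid_y = AxisPID(0.5, 0.0, 0.2), AxisPID(0.5, 0.0, 0.2)
pid_z, pid_yaw = AxisPID(0.8, 0.1, 0.0), AxisPID(0.02, 0.0, 0.0)

def control_command(ref, est, dt=0.05):
    """ref/est = (x, y, z, yaw_deg) in {W}; returns u = (vx, vy, vz, yaw_rate) in {S}."""
    ex, ey, ez = ref[0] - est[0], ref[1] - est[1], ref[2] - est[2]
    eyaw = wrap_deg(ref[3] - est[3])
    # Rotate the horizontal error from the world frame {W} into the stabilized frame {S}.
    yaw = math.radians(est[3])
    ex_s = math.cos(yaw) * ex + math.sin(yaw) * ey
    ey_s = -math.sin(yaw) * ex + math.cos(yaw) * ey
    return (pid_x.step(ex_s, dt), pid_y.step(ey_s, dt),
            pid_z.step(ez, dt), pid_yaw.step(eyaw, dt))

# Example: drive the drone towards the next reference point of the rectangle path.
u = control_command(ref=(1.2, 0.6, 1.0, 0.0), est=(1.0, 0.5, 1.0, 5.0))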
Delay Compensation For controlling a quickly reacting system such as a MAV, an accurate and delay-free state estimation is required. Delays in the estimation lead to poor control even if the estimation is correct. When using a low-cost MAV, the delay caused by the wireless communication channel must be taken into account, mainly in the transmission of the compressed camera images. The time required between the instant a frame is captured and the instant the respective calculated control signal is applied (i.e., the time required for encoding the image on the drone, transmitting it via wireless LAN, decoding it on the PC, applying visual SLAM, data fusion and control calculations, and transmitting the resulting control signal back to the drone) lies between 160 ms and 300 ms. Fortunately, these delays can be easily compensated within the EKF framework. Firstly, we chose an execution period for our SLAM system that ensures that a new video frame is processed on each iteration of the system. The VSLAM algorithm takes a median of 35 ms depending on the selected method, while the execution time of the scan matcher is about 10 ms. The EKF and PID controller execution times are much lower, of only a few milliseconds. So, an execution period of T = 50 ms has been chosen, so that t_SLAM+PID < T. The amount of time that (1) a new frame captured by the frontal camera needs to be transmitted via wireless LAN to the ground station, t_video_trans, and (2) a calculated control command needs to reach the MAV and take effect, t_command_trans, depend on the bandwidth used by nearby wireless LAN networks. If the onboard processor of the MAV is open-access (as in the case of the Erle-Copter), these times can be estimated by measuring two latency values using an echo signal to the drone, and they can be updated at regular intervals while the connection is active. In our test environment, t_video_trans typically lies between 50 ms and 200 ms and t_command_trans between 30 ms and 100 ms. In our system, these delays are rounded to integer multiples n and m of the SLAM execution period T. As the measurements of the Navdata channel are transmitted every 5 ms, and the readings and processing of the scan matcher are performed in less than 10 ms, these delays are neglected because they are lower than the execution period T. Figure 7 shows a time diagram with the main times and delays involved in one iteration of the SLAM system at time t = kT. The video frame processed by the VSLAM module at that time was captured by the drone at a time corresponding to n iterations before, and so it must be used to correct the estimation of the state χ_t=(k−n)T = χ(k − n). After that, the EKF is rolled forward up to the current time t = kT, using the prediction stage with the buffered previously sent commands and the correction stage with the buffered previous laser and Navdata measurements, to estimate the current state χ_t=kT = χ(k). Then, only the prediction stage is used to estimate the future pose of the MAV, χ_t=(k+m)T = χ(k + m), at the instant when the control command reaches the MAV, t = (k + m)T. This estimate is used to calculate the control command u_t=(k+m)T = u(k + m) that is sent to the MAV and stored in the buffer for future predictions.
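A schematic sketch of this delay-compensation loop is shown below. The predict/correct callables and the command/measurement logs are placeholders standing in for the actual EKF models and buffers, so the function names and data handling are illustrative assumptions rather than the original implementation.

T = 0.05   # SLAM execution period (s)

def compensate_delays(chi_delayed, k, n, m, cmd_log, meas_log,
                      ekf_predict, ekf_correct, z_vslam):
    """chi_delayed is the stored estimate chi(k-n) at the time the processed frame was
    captured; cmd_log[i] and meas_log[i] hold the command and the buffered Navdata/laser
    measurement of iteration i."""
    # 1. Correct the delayed state with the VSLAM observation of that old frame.
    chi = ekf_correct(chi_delayed, z_vslam)
    # 2. Roll the filter forward to the current time t = k*T with buffered data.
    for i in range(k - n, k):
        chi = ekf_predict(chi, cmd_log[i], T)
        chi = ekf_correct(chi, meas_log[i])
    # 3. Use the prediction stage only to reach t = (k + m)*T, when the new command takes effect.
    chi_future = chi
    for i in range(k, k + m):
        chi_future = ekf_predict(chi_future, cmd_log[min(i, len(cmd_log) - 1)], T)
    return chi, chi_future

# Example with trivial stand-in models (the state is reduced to a scalar here).
predict = lambda chi, u, dt: chi + u * dt
correct = lambda chi, z: 0.5 * (chi + z) if z is not None else chi
cmds, meas = [0.1] * 20, [None] * 20
now, future = compensate_delays(0.0, k=10, n=3, m=2, cmd_log=cmds, meas_log=meas,
                                ekf_predict=predict, ekf_correct=correct, z_vslam=0.05)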
Experimental Results In this section, we present the experimental results that validate our proposed SLAM system. Firstly, we show some practical aspects of our test bed, including the software implementation and the ground truth system used for validating the estimation results. After that, we show the experimental results obtained with the minimal configuration of the SLAM system, which includes only VSLAM, IMU and an altimeter. Finally, some tests that include laser measurements are shown, comparing the effect of using the different sensors separately and fused in the proposed EKF. Implementation and Ground Truth System Two different low-cost commercial MAVs have been used in our experiments: the Bebop drone (Figure 2a) and the Erle-Copter (Figure 2b), whose main features were detailed in Section 2.2. This proves that the proposed SLAM system can be implemented on different commercial platforms with different sensory configurations, with only minor modifications. In order to achieve this scalability, a node-based software network has been developed based on the ROS framework, using well-tested packages when available. Figure 8 shows this software implementation. Green blocks correspond to a priori available ROS packages: (a) "bebop_autonomy" [40] and "MAVROS" [42] for communicating with the autopilots of the Bebop and the Erle-Copter, respectively, sending control commands and receiving onboard sensor data; (b) "hokuyo_node" [41] to provide access to the Hokuyo laser range finder; (c) "hector_mapping" as a part of the Hector-SLAM stack [64] to implement the scan matcher module; and (d) "ORB-SLAM" [65] or "LSD-SLAM" [66] as implementations of the two tested VSLAM methods. The custom blocks provided for the SLAM and control systems, also implemented as ROS nodes, are shown in red colour. One of the major difficulties of the experimental setup was to design a ground truth system to validate the pose estimation based on an external sensor. Since a motion capture system that could obtain a reliable measurement of the actual six-degrees-of-freedom position of the MAV was not available, we have used a simplified system that allows us to approximately estimate some coordinates of the MAV's pose under certain assumptions. The value of this ground truth system is that it has no cumulative error. It is based on a monocular camera on the ceiling of the test area, as shown in Figure 9a.
Adding a pair of distinguishable artificial markers to the MAV (two coloured circles, as shown in Figure 9b), it is possible to estimate the (x, y, z) and ψ coordinates of the MAV, using some basic geometric relationships under the assumption that the drone always lies in a horizontal plane (which is quite realistic because we configured the drone to move slowly in the horizontal plane). Under this assumption, the distance between the markers gives the height z of the MAV, and the x, y and ψ coordinates can be easily extracted. Figure 8. Software implementation of the SLAM system: green blocks correspond to a priori available ROS packages; custom blocks provided to the SLAM and control systems are shown in red. As a disadvantage, this ground truth system only covers an open test area of 5 × 5 m. This is sufficient for testing the SLAM system with the minimal sensor configuration (based on VSLAM and onboard sensors), by commanding some square trajectories to the control system and comparing the estimated trajectories with the ground-truthed ones. However, this test environment (that we call the "Ground-truthed test environment") is not suitable for validating the SLAM system when laser measurements are also used, because the walls are too far away to be detected by our laser sensor. For this reason, we have used two other different scenarios to validate the complete SLAM system, coinciding with the two environments in which we performed the initial tests of the VSLAM and scan matcher modules. These are the storage area shown in Figure 5 (called the "Storage room test environment") and the lab and corridors sequence shown in Figure 6 (called the "Corridors test environment"). In these last environments we do not have a ground truth system, but we can use some metric measurements to validate the trajectory estimations.
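A sketch of the geometry behind the ceiling-camera ground truth described above is given below, assuming a pinhole camera model with a known focal length, a known camera height above the floor and a known physical separation between the two markers; all numeric values and names are illustrative assumptions.

import math

F_PX = 700.0            # camera focal length in pixels (illustrative)
CX, CY = 320.0, 240.0   # principal point (illustrative)
H_CAM = 3.0             # height of the ceiling camera above the floor (m, illustrative)
MARKER_SEP = 0.30       # real distance between the two coloured markers (m, illustrative)

def ground_truth_pose(m1, m2):
    """m1, m2: pixel coordinates (u, v) of the two markers; returns (x, y, z, yaw_deg)."""
    du, dv = m2[0] - m1[0], m2[1] - m1[1]
    pix_sep = math.hypot(du, dv)
    # Pinhole model: the pixel separation of the markers gives the camera-to-drone distance,
    # and therefore the drone height (the drone is assumed to stay in a horizontal plane).
    dist = F_PX * MARKER_SEP / pix_sep
    z = H_CAM - dist
    # The midpoint of the markers, back-projected at that distance, gives x and y.
    u_mid, v_mid = 0.5 * (m1[0] + m2[0]), 0.5 * (m1[1] + m2[1])
    x = (u_mid - CX) * dist / F_PX
    y = (v_mid - CY) * dist / F_PX
    # The orientation of the marker pair in the image gives the yaw angle.
    yaw = math.degrees(math.atan2(dv, du))
    return x, y, z, yaw

print(ground_truth_pose((300.0, 200.0), (380.0, 260.0)))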
Results with the Minimal Sensor Configuration: VSLAM, IMU and Altimeter In this section we include some experimental results obtained with the Bebop drone in the ground-truthed test environment. In this case we use the minimal sensor configuration of the proposed SLAM system, which includes the horizontal camera, the IMU and the altimeter. The objective of these experiments is to compare the ability of LSD-SLAM and ORB-SLAM when they are applied to a MAV, by solving the scale ambiguity and improving the state estimation by fusing the IMU and altimeter data in the EKF. In these tests, the PID controller receives a series of points that form a rectangle of 120 cm × 60 cm as the reference path, keeping the reference orientation of the drone constant along this path. Firstly, we show the results of using all the stages of the EKF with the two different VSLAM methods in order to compare them. Figure 10 shows the experiment with LSD-SLAM. On the left, Figure 10a displays the view of the frontal camera with the obtained depth map and, below it, the estimated trajectory embedded in the environment's point cloud map. On the right, Figure 10b shows the desired track projected onto the XY plane (magenta colour), the ground truth track obtained with the external camera (blue colour) and the path estimated with the SLAM system (red colour). It can be seen that, as expected in a system with dynamics as fast as those of a MAV, there are some deviations between the reference trajectory and the actual one that could be mitigated with more sophisticated control laws. However, the position estimated by the proposed SLAM system is good enough to control the MAV. Figure 11 shows the same experiment using ORB-SLAM. Figure 11a displays the obtained cloud of marks that forms the map and the estimated trajectory, while Figure 11b shows the projection onto the XY plane of the desired (magenta), actual (red) and estimated (blue) trajectories. Figure 11. Results with the minimal sensor configuration using ORB-SLAM.
In order to compare both VSLAM methods with our own sensor and ground truth data, we have calculated the same translational error measures that we obtained with the external benchmark and dataset used in Section 3.1.3. Five tests were made using each VSLAM algorithm, and the median results are shown in Table 2. In addition to the results obtained in Section 3.1.3, these new tests confirm that ORB-SLAM is more accurate than LSD-SLAM in well-lit environments with sufficient visual features. Besides, the errors are smaller than those obtained when using only the VSLAM algorithm with scale correction, thanks to the prediction and correction stages of the EKF. Table 3 presents the error between the estimated yaw angle and the one obtained from the ground truth system using ORB-SLAM. It can be seen that this error is kept within close tolerances with this method. On the contrary, LSD-SLAM presents tracking problems with fast rotational movements, which leads us to choose ORB-SLAM as the first choice in sufficiently illuminated and featured environments. On the other hand, the main contribution of the proposed SLAM system is multi-sensor fusion, exploiting the typical onboard sensors of commercial drones. The usage of IMU and altimeter measurements, all integrated into an EKF that also takes into account the commanded velocities through its prediction stage, greatly improves the estimation results of either of the VSLAM techniques. In Figure 12, red crosses show the actual trajectory of the drone when following the previously established reference points, obtained with the ground truth system. The light blue trajectory is the estimated position of the MAV using only the proprioceptive information of the drone, namely the Navdata measurements and the prediction model of the EKF. Since in this case no external reference is used to correct the estimation, a large deviation with respect to the actual trajectory is observed. In dark blue and green we show the results of both VSLAM techniques, LSD-SLAM and ORB-SLAM respectively, using the altimeter sensor only for scale correction. In these cases, only the visual correction model is used in the EKF and we can see that the estimation results are very poor. The black line shows the estimated trajectory when fusing all the information in the EKF. It is easily observable that the estimation results are clearly improved. Table 4 shows a comparison of the translational errors, demonstrating that they are drastically reduced, roughly by half, yielding an RMSE of around 5 cm.
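The translational errors reported in Tables 2 and 4 are summarised as RMSE values. A minimal version of such a computation, assuming the estimated and ground-truth trajectories are already expressed in the same frame and sampled at matching instants, is sketched below.

```python
import numpy as np

def translational_rmse(est_xy, gt_xy):
    """RMSE of the Euclidean position error between an estimated trajectory
    and the ground truth, both given as (N, 2) or (N, 3) arrays of matched
    samples in the same reference frame."""
    est = np.asarray(est_xy, float)
    gt = np.asarray(gt_xy, float)
    err = np.linalg.norm(est - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Example with a 2D trajectory (values in metres).
gt = np.array([[0.0, 0.0], [1.2, 0.0], [1.2, 0.6], [0.0, 0.6]])
est = np.array([[0.02, 0.03], [1.15, 0.04], [1.24, 0.55], [0.05, 0.63]])
print(round(translational_rmse(est, gt), 3))
```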
Results Including the Laser Sensor In this section we show the experimental results obtained with the Erle-Copter drone using the complete multi-sensorial SLAM system (laser, vision, IMU and altimeter). As stated above, these experiments have been performed in confined test environments in which walls can be detected by our URG-04LX sensor (Hokuyo, Osaka, Japan), whose detection range is 4 m. In these environments we do not have a ground truth system, so we use some metrics of the followed trajectory on the XY plane to validate the estimations. Firstly, the results obtained in the storage room test environment are shown in Figure 13. The blue line represents the metrics of the actual path followed by the drone. Three different estimations are shown in the same figure. The magenta trajectory corresponds to the estimation of the VSLAM module using ORB-SLAM. As previously noted, the poor lighting in this area leads to a wrong estimation of the length of the second and third segments of the path. The green trajectory shows the estimation performed by the scan matcher module. As it is a confined space with easy features such as corners, this estimation is better than the visual one, but, even so, a shortening effect is observed. Finally, the red trajectory shows the estimation obtained as the output of the EKF with our complete SLAM system. In this case, laser and visual estimations are fused with the measurements of the IMU and altimeter, and combined with the prediction stage of the EKF. This results in a much more accurate pose estimation, as shown in Table 5, which displays the translational errors of the three estimations.
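The fusion step just described combines the scan matcher and (scale-corrected) VSLAM observations with the EKF prediction. A generic correction step of the kind involved is sketched below with a linear observation model for simplicity; the state layout and the noise values are illustrative assumptions, not the matrices actually used in the paper, but they show how a per-sensor observation covariance lets the filter weigh laser and vision differently.

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard Kalman correction: fuse observation z (covariance R) with the
    prior state x (covariance P) through the observation model H."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy example: planar state [x, y, yaw]; both the scan matcher and the
# scale-corrected VSLAM observe the full state, with different confidence.
x = np.zeros(3)
P = np.eye(3) * 0.5
H = np.eye(3)
R_laser = np.diag([0.02, 0.02, 0.01])   # laser trusted more in confined rooms
R_vision = np.diag([0.10, 0.10, 0.05])  # vision noisier in poor lighting

x, P = ekf_update(x, P, np.array([0.48, 0.02, 0.00]), H, R_laser)
x, P = ekf_update(x, P, np.array([0.55, 0.05, 0.01]), H, R_vision)
```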
Finally, we show in Figure 14 the results obtained in the corridors test environment. In this case, the followed path starts in a well-lit room and continues along two L-shaped corridors in which nearly all doors were closed, hindering the detection of laser features. Again, the blue line is the actual followed path, while the magenta and green lines are the estimations of the VSLAM module and the scan matcher module respectively. The red path is the estimation of our complete SLAM system. The three estimations provide good results in the initial room, because it is a well-lit confined area. However, it is observed that both laser and vision produce a shortening effect in the estimation of the length of the corridors. This effect is also reflected in the laser map that is constructed by the scan matcher module using only laser measurements. Although the estimation of the laser module is much more accurate, it cannot compensate for the harmful effect of VSLAM in the corridor. So, translational errors become quite large in this environment, as can be verified in Table 6. In any case, the scan matcher module allows a clear improvement in the results of the estimation. This experiment proves that pose estimation along corridors continues to be a challenge in the indoor navigation of aerial robots that do not have classical odometric systems based on direct contact with the environment. Taking these results into account, the authors of this paper are currently working on a module that automatically detects and classifies the environment using different visual and structural descriptors ("corridor", "hall", "well-lit", "poorly-lit", etc.). This classifier will use the onboard sensors of the MAV (laser and vision) as source information. The result of the classification will be used to automatically adapt the SLAM system parameters to obtain the best results in each situation. For example, in corridor environments it could be useful to compensate the shortening effect of both main sensors (laser and vision) by overestimating the movement of the MAV in the prediction stage of the EKF, and/or by increasing the covariance matrices of the laser and vision observation models. Indeed, it may be desirable to completely discard VSLAM in poorly lit environments, or the scan matcher estimation in wide environments beyond the detection range of the laser sensor. Discussion Estimating the movement of a MAV relative to its environment is a hard challenge. In indoor and GPS-denied environments, systems that achieve results similar to those of classical odometric methods for ground robots have not yet been developed. The SLAM system proposed in this paper has to be evaluated in the context of the application for which it is intended. As we stated in the introduction, a small low-cost MAV with very limited onboard computational capacity moves near a ground robot with a powerful remote unit which executes the SLAM algorithms. So, execution time is not a constraint in this case. However, only typical onboard sensors can be used and communication delays must be taken into account. One of the requirements imposed on our system is sensor configurability, which we achieve using an EKF for sensor fusion. A minimal sensor configuration that adds the monocular camera to the typical proprioceptive sensors (IMU and altimeter) has been assumed.
We have demonstrated that state-of-the-art monocular VSLAM methods can be applied to aerial robots, taking advantage of onboard sensors to solve the scale ambiguity and to improve the pose estimation through the EKF. So far, PTAM has been the most popular choice for implementing VSLAM in aerial robots. In this work, we have compared two more recent and precise methods, ORB-SLAM and LSD-SLAM, proposing a scale correction method based on altimeter and laser (if available) measurements. ORB-SLAM has proven to be a more accurate method in visually favourable environments, such as well-lit and sufficiently featured ones. It also provides better results with rotational movements, while LSD-SLAM could be a better choice in featureless environments. In any case, our SLAM proposal fuses the estimate of the VSLAM with the onboard sensor measurements and a prediction model, and this is proven to provide a substantial improvement in the pose estimation. As a future work line, the VSLAM method could be automatically chosen as a function of the characteristics of the environment and the trajectory. When available, a 2D laser rangefinder is a highly suitable sensor for MAVs in indoor environments, due to its fast and direct range detection, so a scan matcher module, based on the Hector Mapping algorithm, has been added to our EKF-based SLAM framework. It has been demonstrated that in most environments, the fusion of visual and laser information with onboard sensors and prediction models provides a better estimation result. However, some environments, such as corridors, are still a major challenge because both sensors (vision and laser) tend to underestimate their length. Although this is compensated with IMU measurements and the prediction model, estimation results are still poorer than in more featured environments. As future research, we are working on detecting and classifying the environment into several types in order to adjust the covariance matrices of the EKF and minimize these undesired effects. It is important to note that we use SLAM as a framework for robust pose estimation and tracking. Obtaining a global and consistent map is not within the objectives of this work. So, depth maps or point clouds are used only as local maps for localization, and are discarded once the MAV obtains a new keyframe. Furthermore, loop detection and closure methods have not been added to our system. As a future research line, we are working on integrating visual 3D maps with laser 2.5D maps to obtain wide, global and consistent maps. Therefore, the importance and contributions of this paper are focused on the fusion of several sensors for solving the SLAM problem of a complex robot platform such as a MAV. We have applied successful state-of-the-art algorithms in order to advance the research in the fusion and control topics. We have developed specific models for fusing the estimations of all the sensors of the MAV (vision, laser, altimeter and IMU) and a prediction model to improve the estimates with respect to those made independently by any of the sensors. We have demonstrated that the proposed SLAM architecture can be used in real time for controlling the movements of the MAV. Conclusions This paper presents a multi-sensorial SLAM system that successfully tracks the 6-DoF pose of low-cost MAVs in GPS-denied environments using a remote control station. To do this, several state-of-the-art SLAM algorithms have been applied and compared.
The fusion of monocular vision with laser measurements (when available) and other typical onboard sensors (IMU and altimeter) by means of an EKF framework simplifies the configurability of the system. Using two commercial platforms, it has been demonstrated with experimental results that the proposed SLAM system improves on the results of the baseline techniques when estimating the trajectory of the MAV under different environmental conditions.
17,297
2017-04-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Applications of node-based resilience graph theoretic framework to clustering autism spectrum disorders phenotypes With the growing ubiquity of data in network form, clustering in the context of a network, represented as a graph, has become increasingly important. Clustering is a very useful data exploratory machine learning tool that allows us to make better sense of heterogeneous data by grouping data with similar attributes based on some criteria. This paper investigates the application of a novel graph theoretic clustering method, Node-Based Resilience clustering (NBR-Clust), to address the heterogeneity of Autism Spectrum Disorder (ASD) and identify meaningful subgroups. The hypothesis is that analysis of these subgroups would reveal relevant biomarkers that would provide a better understanding of ASD phenotypic heterogeneity useful for further ASD studies. We address appropriate graph constructions suited for representing the ASD phenotype data. The sample population is drawn from a very large rigorous dataset: the Simons Simplex Collection (SSC). Analysis of the results, performed using graph quality measures, internal cluster validation measures, and clinical analysis outcomes, demonstrates the potential usefulness of resilience measure clustering for biomedical datasets. We also conduct feature extraction analysis to characterize relevant biomarkers that delineate the resulting subgroups. The optimal results obtained favored predominantly a 5-cluster configuration. Electronic supplementary material The online version of this article (10.1007/s41109-018-0093-0) contains supplementary material, which is available to authorized users. Introduction Clustering comprises a prolific research area for data exploration and knowledge discovery applications with a great variety of approaches. With the growing ubiquity of data in network form, clustering in the context of a network represented as a graph has become increasingly important. In graph theory contexts, clustering involves finding a k-partitioning of the vertices of a graph. The concepts and properties of graph theory make it very convenient to describe clustering problems by means of graphs (Xu and Wunsch II 2009). Nodes V = {v_i, i = 1, . . ., N} of a weighted graph G correspond to N data points in the pattern space, and edges E = {e_ij, i, j ∈ V, i ≠ j} reflect the proximities between each pair of data points. Use of graph theoretic clustering techniques is not restricted to cases where the data is inherently graph-based. They have also been shown to be effective on other types of data by transforming the data to a graph form using an appropriate graph representation (Alpert et al. 1999). Brugere et al. (2018) provide an in-depth overview of creating networks from data as well as examples of network structure inference in diverse fields such as computational biology, neuroscience, epidemiology, ecology, and mobile device technology. There are many benefits to converting data to a network representation, as networks are an excellent way of representing complex relationships. The following benefits are highlighted and discussed in detail in Ref. (Brugere et al. 2018). Networks aid in uncovering the higher-order structure emerging from dyadic relationships. They are also useful in exploring the heterogeneity that exists among individual entities. Diverse measures can be applied in interpreting and/or evaluating network representations, such as density, degree distribution, clustering coefficient, centralities, etc.
Networks are interpretable models for further analysis and hypothesis generation. Many useful tools also exist for network analysis that can be used across domains. Thus, networks provide a common language through which biological researchers can communicate with computer scientists. Graph based methods aid ease of visualization of analysis, a natural co-occurrence of network representation. Given a dataset, the main challenge usually lies in determining which particular network will be the most useful representation to provide meaningful inference. There are various successful examples of the use of graphs in analyzing biological and health-related data. Pan et al. (2018) converted gene expression data to an appropriate graph representation and computed betweenness centrality (a graph-theoretic measure) to find important regulator genes in tumors. Their study is a useful motivation for the current work, which also uses betweenness centrality in a heuristic to find important data points. Dale et al. (2018) employed graph clustering techniques to gene expression data to identify genes potentially related to powdery mildew disease resistance in grapevines. Alves et al. (2018) applied graph clustering and graph theoretic measures (degree distribution, average clustering coefficient, and average short path length) to evaluate the effects of an antibody on chick embryos. The specific application of classification of human traits and diseases in patient networks using graph analysis is conducted for a variety of medical applications including pathological narcissism in (Pierro et al. 2018), dark personality traits in (Marcus et al. 2018), post-traumatic stress disorder in (Akiki et al. 2018), and inflammatory bowel diseases in (Abbas et al. 2018). This paper investigates the application of graph theoretic clustering on analysis of clinical data relating to Autism Spectrum Disorder (ASD) phenotypes. Clinical data, such as in ASD, is commonly characterized by significant heterogeneity, high dimensionality, complexity in structure and mixture of variables, disparate data sources, and missing data. There is a critical need to identify and validate more homogeneous subgroups as well as learn the distinct features (biomarkers) associated with the subgroups. This work significantly extends preliminary results presented in ) on clustering ASD phenotype data using our node-based resilience clustering framework (NBR-Clust) (Matta et al. 2016;Borwey et al. 2015). NBR-Clust is unique in its focus on critical attack sets of nodes S ⊂ V whose removal disconnects the network into multiple components that form the basis of resultant clusters. Due to natural properties of sparse node-cuts, the NBR-Clust approach is useful not only for traditional clustering scenarios where the number of clusters may be unknown a priori, but also for clustering in the presence of outliers or noise, and/or overlapping nodes (Matta et al. 2016;Borwey et al. 2015). In (Matta et al. 2016), we generalized the usefulness of node-based resilience measures for clustering, particularly when the number of clusters is not known a priori. We conducted an in-depth comparative analysis using existing known resilience measures such as integrity, toughness, tenacity, and scattering number as well as a parametrized version of vertex attack tolerance (VAT). The results obtained demonstrated the effectiveness of VAT and integrity over the other methods in clustering the datasets with high accuracy. 
Additionally, integrity was likely to cluster datasets in one step, and tenacity was useful for giving an upper bound to cluster number determination. In this work, we conduct a systematic exploration of application of NBR measures to delineate heterogeneous ASD data into more meaningful subgroups using a sample population drawn from the Simons Simplex Collection (Fischbach and Lord 2010). We investigate three NBR measures (VAT, Integrity and Tenacity) along with multiple graph constructions to determine appropriate representations for the ASD phenotype data. We also employ feature extraction techniques to determine a potential set of ASD phenotype biomarkers that discriminate the resulting subgroups. A varied set of statistical methods is applied to validate and interpret the clinical significance of the results. Autism spectrum disorders ASDs are childhood neurodevelopmental disorders diagnosed on the basis of behavioral assessments of social, communicative, and repetitive symptoms (Association et al. 2013). Although ASD is behaviorally distinctive and reliably identified by experienced clinicians, it is clinically and genetically extremely heterogeneous (Miles 2011). Children with ASD exhibit a wide diversity in type, number, and severity of social deficits, behaviors, and communicative and cognitive difficulties, which are assumed to reflect multiple etiologic origins (Eaves et al. 1994). Given the increase in ASD prevalence (Autism and Developmental Disabilities Monitoring Network Surveillance Year 2010 Principal Investigators 2014) and the corresponding increasing associated economic burden (Lavelle et al. 2014), there is a need for automated approaches to detect more homogeneous subgroups of patients, and more importantly for biomarkers (biologically based phenotypes) to inform tailored intervention and improved outcomes. Biomarkers are useful to index diagnostic status or risk, demonstrate engagement of specific biological systems, and provide more rapid assessment of change than traditional measures based on clinical observation and caregiver report (McPartland 2016). In the unsupervised learning context, biomarkers can be regarded as significant features that characterize a subgroup (or cluster). Thus, the problem of inferring meaningful biomarkers translates to unsupervised learning of discriminant features. A better understanding of heterogeneity in autism itself, based on scientifically rigorous approaches centered on systematic evaluation of the clinical and research utility of the phenotypic and genotypic markers ), would generate useful information for the study of etiology, diagnosis, treatment and prognosis of the disorder. There have been varied cluster analysis approaches on ASD phenotype/clinical data over the past two decades. Prior to DSM-5 (Association et al. 2013), some of these approaches (Stevens et al. 2000;Ingram et al. 2008;Cuccaro et al. 2012) focused on exploring empirical subgroups that aligned with pre-defined subgroups (such as ASD DSM-IV subtypes) or illuminated some knowledge on etiologically distinct subgroups i.e. which behavioral and physical phenotypes will most likely subdivide ASD. Since the introduction of the DSM-5, emphasis is placed on the spectrum of autism i.e. on a severity gradient under the diagnostic umbrella of Autism Spectrum Disorder. According to , the task of categorizing the clinical heterogeneity in children with autism is still of critical importance, regardless of how the DSM changes its definition. 
Hence, there have been even more studies (Ousley and Cermak 2014; Veatch et al. 2014; Al-Jabery et al. 2016) that attempt to better classify the ASD heterogeneity under DSM-5 using a varied set of ASD phenotype data. Some ASD studies (Chaste et al. 2015) suggest that attempts to stratify children based on phenotype will not increase the power of ASD genetic discovery studies. This is possibly true when the methods are limited by a very restricted set of phenotyping variables (diagnosis, IQ, age at first words, ASD severity, insistence on sameness, and symptom profiles) and do not account for possible outliers in the dataset. Spencer et al. (2018) demonstrated that ASD phenotype subgroups could aid discovery of novel ASD genes. It is important to employ clustering methods that simultaneously identify and remove possible outliers that could be skewing the results and add pertinent and relevant phenotype ingredients that may uncover meaningful subtypes. Ultimately, the validity of any subgrouping paradigm depends on whether the ASD subgroups actually uncover/expose some biologic or genetic variation, which can be used to predict prognosis, recurrence risks or treatment responses. Hence, in this work, we also apply rigorous statistical analysis to validate the significance of the results as well as guide the optimal clustering configuration selection. NBR-Clust Algorithm Node-based resilience measures compute a critical attack set of nodes S ⊂ V whose removal disconnects the network with relative severity. Given a node-based resilience measure, NBR-Clust conducts robust clustering by using the set of components that result from the removal of the computed critical attack set as a basis for the set of clusters. We explore the following three node-based resilience measures in this work: vertex attack tolerance (VAT), integrity, and tenacity. The VAT of an undirected, connected graph G = (V, E), denoted τ(G) (Ercal 2014), is defined in terms of an attack set S and C_max(V − S), the largest connected component in V − S. Normalized integrity (Barefoot et al. 1987) is defined in terms of the same quantities, and tenacity (Cozzens et al. 1995) additionally involves ω(V − S), the number of connected components in V − S; the corresponding standard expressions are reproduced at the end of this passage. Traditional clustering usually ensures assignment of all nodes to a specific cluster. In complex datasets, some nodes could be outliers (nodes that do not really belong to a specific cluster) or overlapping nodes (i.e. nodes that could be assigned to more than one cluster). In these scenarios, the critical attack set may be used to determine outliers or overlap data points (Matta et al. 2016; Borwey et al. 2015). In this work, we consider both the traditional complete clustering scenario, where all critical attack nodes are reassigned to cluster-components, and the non-traditional situation where the critical attack set is removed from the base clusters (i.e. without node reassignment). Given that we are clustering phenotype data that could involve some errors from the data collection process, outliers would imply potential erroneous data points. Removal of these outliers may result in better defined clusters. Overlap nodes could also be a pertinent feature, as in biological networks where proteins are assigned to different clusters to reflect their multiple functions. However, the concept of overlap nodes is not clearly defined for medical data. We plan to explore this concept further in future work.
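For reference, the standard formulations of the three measures found in the cited literature can be written as follows; the notation mirrors the text (attack set S, largest remaining component C_max(V − S), number of remaining components ω(V − S)), although the exact normalisation adopted by the authors may differ slightly.

```latex
\tau(G) = \min_{S \subset V} \frac{|S|}{\,|V - S - C_{\max}(V - S)| + 1\,},
\qquad
I(G) = \min_{S \subset V} \frac{|S| + C_{\max}(V - S)}{|V|},
\qquad
T(G) = \min_{S \subset V} \frac{|S| + C_{\max}(V - S)}{\omega(V - S)}.
```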
The NBR-Clust algorithm consists of four main phases: i) Transform point data into a graph G; ii) Approximate the resilience measure of the graph, R(G), with acceptable accuracy, and return the candidate attack set S whose removal results in some number of candidate groupings (components C); iii) Perform a node-assignment strategy that assigns each node of S to a component C from step ii; iv) If more clusters are desired, choose the component with the lowest resilience measure and divide it into additional components using steps ii and iii. If fewer clusters are desired, join components with the greatest number of adjacent edges. The dividing and combining can continue until a desired number of clusters is obtained. The VAT-Clust, Integrity-Clust, and Tenacity-Clust algorithms (Borwey et al. 2015; Matta et al. 2016) utilize a heuristic known as Greedy-betweenness centrality (Greedy-BC). The betweenness centrality of a node is the ratio of shortest paths that include that node to the total number of shortest paths. High betweenness centrality is a measure of the importance of a node, as it implies that the node is more likely to be part of a path used when traversing the graph. The Greedy-BC heuristic estimates candidate attack sets by repeatedly taking the highest-betweenness node, removing it from the network, taking the next highest-betweenness node, removing it from the network, and so on. Matta (2017) demonstrated that Greedy-BC approximates VAT, integrity and tenacity with acceptable accuracy. We implemented the NBR-Clust framework using weighted betweenness centrality computations (Brandes 2001). In the NBR-Clust method, if there is a desired number of clusters k for the output clustering configuration, a regrouping or hierarchical (Borwey et al. 2015) algorithm can be applied to attain this. None of the three clustering algorithms is guaranteed to output exactly k clusters. When more clusters are produced than desired, we regroup clusters by finding the pair of current components C1 and C2 that maximizes the normalized cut quantity E(C1,C2)/(C1*C2), where E(C1,C2) is the number of edges between C1 and C2 and C1*C2 is the product of the number of nodes in C1 and the number of nodes in C2. C1 and C2 are combined into one cluster. Regrouping of clusters is repeated until the desired number of clusters is obtained. If the algorithm outputs fewer clusters than desired, then the hierarchical approach (Borwey et al. 2015) is applied to split the clusters until the specified number of clusters is achieved.
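A compact, illustrative reimplementation of the Greedy-BC heuristic described above is sketched below using networkx; it returns the best attack set found and the resulting components, while node reassignment and the regrouping/hierarchical steps are omitted. It is not the authors' released code.

```python
import networkx as nx

def greedy_bc_attack_set(G, resilience, max_removals=None):
    """Greedy betweenness-centrality heuristic for node-based resilience
    measures. `resilience(attack_set, component_sizes, n_total)` scores a
    candidate attack set; the set with the smallest score is returned."""
    H = G.copy()
    attack, best_set, best_val = [], [], float("inf")
    limit = max_removals or (G.number_of_nodes() - 1)
    for _ in range(limit):
        bc = nx.betweenness_centrality(H, weight="weight")
        v = max(bc, key=bc.get)             # most central remaining node
        H.remove_node(v)                    # betweenness recomputed each step
        attack.append(v)
        comps = [len(c) for c in nx.connected_components(H)]
        val = resilience(attack, comps, G.number_of_nodes())
        if val < best_val:
            best_val, best_set = val, list(attack)
    components = list(nx.connected_components(
        G.subgraph(set(G) - set(best_set))))
    return best_set, components

# Integrity-style score: (|S| + largest remaining component) / |V|.
def integrity_score(S, comps, n):
    return (len(S) + max(comps, default=0)) / n
```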
Data preprocessing Given that the sample was drawn from a rigorous data collection (the Simons Simplex Collection (Fischbach and Lord 2010)), it contained very few missing values, approximately 0.1%. The majority of the missing values were localized in two features, out of a total of 36 features. To impute missing values for these two attributes we used a standard regression, computed in Matlab, on the remaining 34 attributes to determine likely values. For other features that had very few missing values (0.002%), the mean of the remaining values for the specific feature was used. Feature selection is commonly used for selecting a small subset of features for building a learning model with good generalization performance (Guyon and Elisseeff 2003). Usually, the task of a feature selection algorithm is to prune the feature space by eliminating as many irrelevant and redundant features as possible and thus reducing the dimensionality of the dataset. In the dataset used, the number of features is relatively small compared to the number of examples. We apply the correlation filter algorithm introduced in (Obafemi-Ajayi et al. 2017) to exclude highly correlated features from the subsequent analysis. The filter algorithm automatically identifies and filters highly correlated features using the pairwise Pearson correlation function based on a user-defined threshold value. In this work, we investigate the effect of applying the correlation filter prior to clustering vs. simply using the entire set of features. Graph representations To apply the NBR-Clust framework on our dataset, we first convert the data into a k-nearest neighbor (kNN) graph G. In a kNN graph G_k, vertices u and v have an edge between them if v is amongst the k closest vertices to u with respect to the distance metric considered. While any distance metric may be used to determine nearness of neighbors, we use the n-dimensional Euclidean distance following normalization of the feature space, where n is the number of features considered. In both (Matta et al. 2016; Cukierski and Foran 2008), evidence is presented in favor of the minimal connectivity (min-conn) parameter k in the construction of the kNN graph G_k. The min-conn parameter is the minimal k such that, for all k′ ≥ k and all u, v ∈ V, there exists a u–v path in G_k′; in other words, it is the smallest k for which the kNN graph is connected. Additional information may be revealed at different levels of connectivity. A graph where the parameter k is above minimum connectivity contains more information in the form of additional edges. If nodes that should be clustered together are near to each other, edges are more likely to be added within potential clusters than between them. This will make it easier to identify clusters, and may give better clustering results than graphs where k is at minimum connectivity. The cost of using additional information is increased time and complexity. We consider three different connectivity settings for k in the kNN graph construction: min-conn, min-conn+1, and min-conn+2.
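The two preprocessing steps just described, the pairwise-correlation filter and the minimally connected kNN graph, can be sketched roughly as follows; threshold values and function names are illustrative.

```python
import numpy as np
import networkx as nx

def correlation_filter(X, threshold=0.8):
    """Drop one feature from every pair whose absolute Pearson correlation
    exceeds the threshold. X is an (n_samples, n_features) array."""
    corr = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

def knn_graph(X, k):
    """kNN graph on the rows of X with Euclidean distances as edge weights."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    order = np.argsort(d, axis=1)              # column 0 is the point itself
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i in range(len(X)):
        for j in order[i, 1:k + 1]:
            G.add_edge(i, int(j), weight=float(d[i, j]))
    return G

def min_conn_knn_graph(X, extra=0):
    """Find the smallest k for which the kNN graph is connected (min-conn),
    then build the graph with k + extra neighbours (min-conn+1, +2, ...)."""
    for k in range(1, len(X)):
        if nx.is_connected(knn_graph(X, k)):
            return knn_graph(X, k + extra), k
    raise ValueError("could not connect the graph")
```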
Determining optimal clustering configuration We applied a holistic approach in determining the most optimal set of results per resilience measure (VAT, Integrity, and Tenacity) using three main criteria: internal cluster validation indices (ICVIs), graph quality measures, and the distribution of resulting clusters. Clustering configurations that resulted in clusters with very few nodes (i.e. fewer than 10) were discarded, given that we had a total of 2680 nodes to cluster. Highly skewed (unbalanced) clustering configurations tend to bias the cluster validation indices. An internal cluster validation index determines the optimal clustering solution most appropriate for the input dataset based on two measurement criteria: compactness and separateness (Kovács et al. 2005). Compactness measures how close the members of each cluster are to each other. Separateness measures how separated the clusters are from each other. The optimal cluster configuration should yield clusters that are compact and well separated. We explored the application of nine commonly used ICVIs (Liu et al. 2010, 2013; Aggarwal and Reddy 2013) (Silhouette index (SI), Davies-Bouldin (DB) index, Dunn's index, Xie-Beni index (XB), Calinski-Harabasz (CH) index, I index (I), SD validity index (SD), S_Dbw validity index (S_Dbw), and Clustering Validation index based on Nearest Neighbors (CVNN)) on the clustering results to measure the goodness of the clusters. The metrics are described fully in (Liu et al. 2010, 2013) and were implemented following their guidelines. We applied a large number of ICVIs to attain a more robust decision, given multiple studies (Brun et al. 2007; Vendramin et al. 2010; Arbelaitz et al. 2013; Liu et al. 2013) that demonstrate the diversity in the range of results chosen by different indices. The optimal number of clusters is determined based on the majority vote of the validation indices along with the graph validation measures. A summary of the internal validation metrics utilized in this work for selecting the optimal clustering configuration is presented in Table 1. The notations and definitions employed are similar to those presented in (Liu et al. 2013). Since the clustering is done on graph representations of the data, we also utilized specific graph quality measures to evaluate the quality of the resulting graphs: modularity (Newman 2006) and conductance (Arora et al. 2009). 1. Modularity: This quantifies the strength of modules (analogous to clusters) created when clustering a graph. A graph with high modularity has more than expected edges internal to its modules, and fewer than expected edges between modules. We applied modularity to evaluate the "clusterability" of a graph based on a minimal threshold of 0.6. 2. Conductance: The conductance of a cluster is the fraction of all edges in the graph that point outside the cluster (Yang and Leskovec 2012). A low conductance implies a "better" cluster, because a higher proportion of a graph's edges are internal to that cluster. For our experiments, clustering configurations were acceptable conductance-wise if they had a conductance value of 0.07 or less.
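The two graph-quality criteria above (partition modularity of at least 0.6 and per-cluster conductance of at most 0.07) can be checked directly with networkx, as in this rough sketch.

```python
import networkx as nx
from networkx.algorithms.community import modularity

def partition_quality(G, clusters, mod_min=0.6, cond_max=0.07):
    """clusters: list of node sets covering G. Returns the modularity of the
    partition, the conductance of each cluster, and a pass/fail flag."""
    mod = modularity(G, clusters)
    conds = [nx.conductance(G, c) for c in clusters]
    ok = mod >= mod_min and all(c <= cond_max for c in conds)
    return mod, conds, ok

# Tiny example: two dense blocks joined by a single edge.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
print(partition_quality(G, [{0, 1, 2}, {3, 4, 5}]))
```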
Feature Extraction Phase The objective of this phase is to obtain a set of features that discriminate among the clusters, as these features could be potential biomarkers for delineating the ASD subgroups. We employed the BestFirst search method (Eibe et al. 2016), implemented in Weka (Hall et al. 2009). The BestFirst search method traverses the attribute (feature) space to find a good subset. The quality of the subset found is measured by an attribute subset evaluator. It performs a greedy hill climbing, i.e. searching forward from the empty set of attributes, toward the goal of finding the most locally predictive attributes. The CFS (Correlation-based Feature Selection) subset evaluator was used to determine the merit of each subset. The CFS subset evaluator (Frank et al. 2016) assesses the predictive ability of each attribute individually and the degree of redundancy among them, preferring sets of attributes that are highly correlated with the class but with low inter-correlation. Description of phenotype features The ASD sample analyzed in this work is drawn from the Simons Simplex Collection (SSC) (Fischbach and Lord 2010) population, a comprehensive, rigorous, reliable and consistent dataset supported by the Simons Foundation for Autism Research Initiatives (SFARI). (Simplex indicates that only one child in the family is affected with ASD while both parents and at least one sibling are unaffected.) To ensure reliability of the clustering results, individuals missing any Autism Diagnostic Interview-Revised (ADI-R) (Lord et al. 1994) or Autism Diagnostic Observation Schedule (ADOS) (Lord et al. 1989) scores were excluded. The final dataset consisted of 2680 subjects, 2316 males (86.4%) and 364 females (13.6%), between the ages of 4 and 17 years old. In cluster analysis, the quality of the input features has a significant impact on the outcome. Hence, having a robust and diverse set of features is key to meaningful results. In contrast to previous work (Al-Jabery et al. 2016), we included some new sets of features: ADOS social affect score, word delay, ADI-R Q86 abnormality evident score, and ADI-R Q30 language total score. A total of 36 features (Table 2) were used in this work, spanning core diagnostic scores (ADI-R and ADOS), ASD-specific symptoms, and cognitive and adaptive functioning. Statistical analysis of ASD outcome measures Additional features, not used in clustering, were selected as outcome measures to assess the clinical relevance of the resulting cluster configuration. These include overall (total) scores for ABC, RBS, IQ, and the Vineland II composite standard score, as well as the ADOS calculated severity score (ADOS CSS), a history of non-febrile seizures (i.e. diagnosis of epilepsy), and the Peabody Picture Vocabulary Test (PPVT-4A) standard score. Note that these outcome measures are not completely independent of the input features used for clustering. We included the total scores of each of the aggregate features (ABC, RBS, IQ, Vineland) applied in the cluster analysis, as these scores tend to provide an overall picture of the ASD severity level of the proband. For example, the Vineland composite score provides an overall picture of adaptive functioning skills. The ADOS CSS is a quantitative variable calculated from the sum of the ADOS social communication and RRB scores. It provides a continuous measure of overall ASD symptom severity that is less influenced by child characteristics, such as age and language skills, than raw totals (Hus et al. 2014). It can be used to compare ASD symptom severity across individuals of different developmental levels. As such, it provides a "purer" metric of overall ASD severity. A higher level implies higher severity, with 10 as the highest level of severity. The PPVT-4A score quantifies language skill. A higher score implies fewer deficits and better developed skills. The epilepsy data was only available for 99.85% of the sample. To validate the significance of the differences (quantified by mean and standard deviation) in these outcome measures by cluster, we employed the univariate one-way analysis of variance (ANOVA) test along with the Tukey HSD test (pairwise comparisons) for continuous variables (all except epilepsy). The ANOVA p-value reported for each ASD measure generalizes the Student's t test to comparisons between multiple groups. The Tukey test informs us on which pairs of clusters are actually statistically different, since the ANOVA p-value only indicates that at least one cluster is statistically different from another. The eta squared test (η²) was conducted to determine the overall effect size for each clustering configuration per feature. The effect size conveys the practical significance of the ANOVA results. The Cohen's d test was also applied to quantify the effect sizes for each pairwise comparison.
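A minimal version of the per-measure statistics described above, a one-way ANOVA across clusters together with an eta-squared effect size, could look like the following; it uses scipy and plain numpy rather than SPSS, so numerical details may differ from the authors' pipeline.

```python
import numpy as np
from scipy import stats

def anova_with_eta_squared(groups):
    """groups: list of 1-D arrays, one per cluster, holding an outcome
    measure (e.g. Vineland composite score) for the members of that cluster."""
    f_val, p_val = stats.f_oneway(*groups)
    grand_mean = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = sum(((g - grand_mean) ** 2).sum() for g in groups)
    eta_sq = ss_between / ss_total
    return f_val, p_val, eta_sq

# Example with three synthetic clusters of an outcome measure.
rng = np.random.default_rng(0)
clusters = [rng.normal(mu, 5, 40) for mu in (60, 70, 85)]
print(anova_with_eta_squared(clusters))
```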
Experimental setup In the evaluation of our model, we investigate the effect of the following parameters: • NBR measure: the VAT, Integrity and Tenacity algorithms were employed with the NBR-Clust framework. • Critical attack set (S): we compared the performance of reassignment of all nodes belonging to S (i.e. complete clustering) to no node reassignment of S. • Connectivity level of the kNN graph representation: from minimum connectivity (kNN2) to two above minimum connectivity (kNN4). • Use of the correlation filter algorithm: the threshold value was set at 0.8. This resulted in the removal of three features (ADOS Social Affect, Verbal IQ, and SRS T score). We compared the performance using the entire set of 36 features to clustering with only these 33 features (tagged as "corr" in the results). Each feature was normalized between 0 and 1 using known standard score ranges for the phenotype feature. The source code of the NBR-Clust algorithm is publicly available at (Node-Based Resilience Measure Clustering Project Website 2018) while the cluster validation platform suite is accessible at . The statistical analyses were implemented using IBM SPSS software while the feature extraction experiments were carried out in WEKA (Frank et al. 2016). The combinations of the different levels of connectivity (kNN2, kNN3, kNN4) and of using all features (36) versus the correlation-filtered set (33) resulted in a total of six base graphs. These six graphs were clustered using VAT-Clust, Integrity-Clust, and Tenacity-Clust to yield results with k = 2, 3, 4 and 5 clusters, with and without attack set node reassignment, for a total of 144 different clustering output configurations. Results The critical attack set node reassignment results (traditional clustering) are analyzed separately from the without node reassignment (NR) configurations. The set of 7 optimal clustering configurations, selected based on a majority voting scheme over the nine ICVIs and the graph quality measures per NBR measure algorithm, is presented in Table 3. The instances where the clustering output attained the best score for the specified ICVI or graph quality measure are highlighted in bold. All optimal configurations, except for Tenacity-Clust with node reassignment, were obtained from the kNN2 graphs, implying the usefulness of min-connectivity graphs, as expected. Four out of the seven groupings examined in Table 3 favored a 5-cluster configuration as optimal. In general, the filtered data set did not seem to demonstrate an impact on the clustering outcomes, except in the case of the kNN3 graphs. The visualizations of the set of 7 optimal clustering configurations, using the ForceAtlas layout algorithm in Gephi (Bastian et al. 2009), are illustrated in Figs. 1 and 2. The demographics (mean age at ADOS, ethnicity as quantified by percentage Caucasian, and gender) of each cluster of the optimal clustering configurations are shown in Tables 4 and 5. We observe that there are no significant differences in the demographics across clusters for age and gender distribution. However, the distribution of percentage Caucasian varied across clusters. Statistical analyses of the optimal clustering configurations for each ASD outcome measure are presented in Tables 6 and 7. Note that for the no node reassignment results (Table 7), though the mean and standard deviation values for S are reported for each outcome measure, S is excluded from the ANOVA, Tukey and eta-squared analyses. We can observe that the overall effect sizes, as quantified by the η² values, are consistently high for the kNN2 Tenacity 5-Cluster result in Table 6. Cluster C4 appears to be the most severe ASD subgrouping in terms of low overall IQ, relatively high occurrence of epilepsy (non-febrile seizures), low functioning skills (as quantified by the Vineland composite scores), and high ADOS CSS scores.
However, their ABC and RBS-R scores are not the most severe scores, and are slightly better compared to cluster C0. Cluster C0 has very high mean IQ scores (not the highest - that is C2), but the ABC and RBS-R scores for that subgroup are the lowest. For the no node reassignment analysis (Table 7), the 2-cluster VAT-Clust result does not seem to convey much practical significance, based on the relatively low η² values across all ASD outcome measures evaluated. Figure 3 illustrates the visualization of the graph of the optimal clustering result for the kNN2 Tenacity 5-Cluster configuration in terms of the distribution of high overall IQ (≥ 70) vs. lower IQ (< 70). Large circles denote high IQ while small circles denote low IQ. Only the green cluster (C4) shows a high concentration of low IQ nodes (small circles). We can observe the complexity of the variation in the 5-cluster result given by Tenacity kNN2 with node reassignment. This demonstrates that the resulting clustering obtained is a combination of various factors, not just IQ scores. The outcome of the feature extraction phase is summarized in Tables 8 and 9 for each of the seven clustering configurations. Overall, 20 different features were uncovered as discriminant for at least one of the 7 optimal clusterings. The regression feature was consistently selected for all seven results. Overall level of language (ADI-R Q30) was selected six times, while both the BAPQ Mother overall average score and word delay were selected five times. Discussion Regarding appropriate graph representation, the results confirmed the advantageous aspects of the min-conn setting, as the kNN2 graph exhibited optimal clusterings that were not sensitive to preprocessing parametric changes compared to the kNN4 graph. This implies robustness of min-connectivity graphs. As expected, there were no significant differences in age and gender distribution across the various cluster configurations. This suggests that the variations in ASD severity are unrelated to age or gender. However, interestingly, the distribution of percentage Caucasian varied across clusters. We had hypothesized that the results obtained by excluding the critical attack set (i.e. no node reassignment) would result in more clearly defined clusters. This is based on the assumption that the critical attack set contains possible outlier and/or overlapping nodes. As mentioned earlier, outliers in the context of this application could denote patients that may have some errors in their phenotype data from the data collection process. However, the results obtained for the configurations without node reassignment (NR) are not conclusive. The removal of the nodes, though relatively few, impacts the resulting configuration, especially for VAT-Clust, which has the largest critical attack set of 108 nodes. When we compare the visualizations (Fig. 1) of the NR results to the traditional clustering results, in which every node is assigned to a cluster, the differences are subtle. This is probably due to the relatively small sizes of the critical attack sets (Table 7) obtained in this work based on the grouping algorithm applied to attain the desired number of clusters. From the statistical analysis (Table 7), no node reassignment appears beneficial for Tenacity-Clust and Integrity-Clust. The clinical outcomes analyses (Tables 6 and 7) demonstrate the significance and usefulness of the varied cluster configurations. Cluster attributes are consistent in the kNN2 integrity k = 3 clustering (Table 6).
Cluster C1 has the most severe symptoms by all measures, such as the lowest Overall IQ and the highest incidence of epilepsy. Cluster C0 has the lowest overall scores for ABC, RBS-R and ADOS CSS, as well as the highest Vineland Composite Score, Overall IQ and Learning Vocabulary Score (PPVTA), and the lowest incidence of epilepsy. For all measures, Cluster C2 lies between clusters C0 and C1. It is also interesting to note the cluster sizes. For this dataset, the subjects with the most severe symptoms account for approximately 7% of the sample. The group with the least severe symptoms is 71% of the sample, and the middle group accounts for 22%. However, the η² values are very low, which conveys a relatively low confidence in the results. The clustering obtained by the VAT 4-clustering is in many ways similar to the integrity 3-clustering, as can be observed visually by comparing Figs. 1a and c, along with the statistical results from Table 6. The most severe cluster has the smallest size while the least severe is the largest cluster. As mentioned in the previous section, the overall effect sizes are consistently high for the kNN2 Tenacity 5-Cluster result in Table 6, which conveys a strong confidence in the results. The variation observed in the levels of ASD severity across clusters is interesting, for example between clusters C0 and C4. Cluster C4 is characterized by the largest ASD severity level in terms of low overall IQ, relatively high occurrence of epilepsy (non-febrile seizures), low functioning skills (as quantified by the Vineland composite scores), and high ADOS CSS scores. However, their aberrant behavior checklist and stereotyped behavior scores are not the most severe scores; they are slightly better compared to cluster C0. Cluster C0 has very high mean IQ scores (not the highest - that is C2), but the aberrant behavior checklist and stereotyped behavior scores for that subgroup are the lowest. This provides further evidence that there is an ASD subgroup with relatively high IQ scores but very severe behavioral problems. Four of the seven optimal clusterings consisted of 5 clusters. Two of the clusterings were obtained using integrity (both with and without reassignment) and two of the clusterings were obtained using tenacity (again both with and without reassignment). These clusterings can be compared visually in Figs. 1 and 2. We can observe that they all share some similarities in their configuration. According to Table 6, the kNN2 Tenacity 5-clustering configuration obtained using the filtered set of 33 features had consistently high eta-squared values across all the outcome measures. Figure 4 summarizes the trends across the outcome measures for its five clusters using box plot charts. These charts (Fig. 4) were generated using the values of the outcome measures normalized between 0 and 100 to aid ease of comparison across the diverse ranges of each measure. The outcome measures for which higher values imply higher ASD severity (ABC, RBS-R and ADOS-CSS) are illustrated in Fig. 4a, while the measures for which higher values imply lower ASD severity are illustrated in Fig. 4b.
Cluster C2 (the red cluster in Fig. 1e) denotes the subgroup with the lowest ASD severity (i.e. the high functioning group) across all six measures. It is also the largest subgroup. Cluster C4 (the green cluster in Fig. 1e) denotes the subgroup with the highest ASD severity (i.e. the low functioning group) across all six measures. It is also the smallest subgroup. Cluster C0 (the gold cluster in Fig. 1e) is characterized by high IQ and PPVTA (vocabulary) scores as well as a low ADOS-CSS score, but severe Vineland composite, ABC and RBS-R scores. The RBS-R and ABC scores are the lowest among all the clusters. This suggests that there is a subgroup with high IQ and vocabulary skills but very severe behavioral problems. Cluster C3 (the blue cluster in Fig. 1e) is a subgroup that consistently lies in between the C2 (least severe, red) and C4 (most severe, green) subgroups in all measures. In contrast, C1 (the purple cluster in Fig. 1e) is consistently in between C0 (gold) and C2 (red), except for its ADOS-CSS scores, which are slightly higher than both. When comparing the C1 and C3 subgroups with each other, we can observe that C1 (purple) is less severe than C3 (blue) across all six outcome measures. The feature extraction results seem to suggest that the following phenotypes could be useful biomarkers in delineating ASD subgroups: Regression, Word Delay, ADI-R Q30 (Overall Level of Language), ADI-R Q86 (Abnormality evidence), RBS-R aggregate score (Ritualistic Behavior), ABC aggregate scores (Irritability, Inappropriate Speech), CBCL Externalizing T Score, Verbal score (ADI-R B), RBS-R Stereotyped Behavior, BAPQ Avg (Mother), ADI-R C (Repetitive Behavior), Social (ADI-R A), and SRS aggregate scores (Mannerisms, Cognition, overall T Score). These results support evidence that language delay, regression and social scores are useful biomarkers for delineating meaningful subgroups. Conclusion This paper investigated the application of the NBR-Clust graph-based method to cluster analysis of the ASD phenotypes of 2680 simplex ASD probands using different node-based resilience measures. To determine the optimal clustering configuration, we applied a holistic approach using three main criteria: internal cluster validation indices, graph quality measures, and the distribution of resulting clusters. We presented a rigorous clinical/behavioral analysis of the highly ranked results by graph type and resilience measure. The results obtained demonstrate the potential and usefulness of NBR-Clust. The results favored a 5-cluster ASD sub-grouping configuration and identified a set of potentially useful phenotype biomarkers. Future work will include refinement of the critical attack set to identify specifically the outlier nodes for enhanced biomarker detection. Further studies are also needed to verify the potential ASD biomarkers identified in this work with respect to their application in the management of ASD. Additional file Additional file 1: Cohen's d test values for Tables 6 and 7 to evaluate the effect sizes for each pairwise comparison. (XLSX 32 kb)
8,939
2018-08-29T00:00:00.000
[ "Computer Science" ]
Business Development Analysis Of Online Travel Enterprises Under The New Trend - Using Ctrip As An Example. The COVID-19 pandemic has had a systemic negative impact on China's economy and social development. The tourism industry has been hit hard. The capital chain of many small and medium-sized enterprises has been broken, and some have even gone bankrupt. Of course, some companies have successfully transformed themselves in the process. On the other hand, this public health emergency has provided new opportunities for major online travel companies to solve the problems of stagnation in the tourism industry and open up new areas under the new trend of the epidemic. Therefore, taking Ctrip as an example, this paper first analyzes Ctrip's online tourism business, and then analyzes Ctrip's competitive advantages and existing problems in the market. Finally, this paper puts forward some suggestions for the existing problems of the company, hoping that Ctrip can establish a more diversified enterprise system in the future and become more competitive among enterprises of the same type. Research Background Ctrip is now one of the largest travel websites in China, based in Shanghai. Currently, Ctrip has set up branches in 17 cities in mainland China, a service contact center in Nantong, and subsidiaries in Hong Kong and Taiwan, accounting for more than half of China's online travel market share and making it one of the world's largest online travel agencies [1]. Before the pandemic, the contribution of tourism to GDP increased every year, even exceeding 10 trillion yuan in 2019 [2]. When China launched a nationwide epidemic prevention and control campaign after 2020, the mainland's air, rail and online travel companies, travel agencies and hotels experienced a wave of ticket refunds, and the tourism industry was almost completely shut down. During the National Day holiday in 2021, the average travel distance of urban and rural residents dropped by 33.66 per cent, according to the Analysis of Tourism Economy Operation in 2021 and Development Forecast for 2022 released by the China Tourism Academy. The "short-distance travel and low consumption" pattern represented by local tourism has become the basic market of the tourism industry [3]. Literature Review With the rapid development of online tourism, the competition among online tourism platforms in China has become increasingly fierce. Many domestic scholars have conducted research and analysis on online tourism. Chen Yin Jiang (2014), writing in a business accounting journal, proposed using the Harvard analysis framework to study Ctrip's financial statements and predict the company's future development prospects. The study mainly applied the strategy analysis and financial analysis components of the Harvard framework to Ctrip, arriving at a financial statement analysis framework for the company. By applying this framework to Ctrip's revenue model and evaluating its asset scale and cash flow, it concluded that Ctrip has a strong tourism market share and a large amount of capital income, and predicted that Ctrip has a strong ability to cope with future market changes [4]. Ji Yu (2017) studied the platform model of Ctrip.
By comparing Ctrip with typical domestic internet companies that use a platform model, she concluded that Ctrip should not only secure the profit of its own products and perfect its after-sales service, but also build a diversified platform that meets the multiple needs of consumers in order to remain competitive in the era of traffic [5]. Zhou Yan (2018) used the 4C theory to analyze Ctrip's marketing methods from the four perspectives of customer, cost, convenience and communication, and pointed out that Ctrip's marketing strategy positioning is not diversified enough [6]. Based on the above research, this paper analyzes and compares the advantages of Ctrip and its competitors and explores Ctrip's development strategy in the post-epidemic era, mainly from the aspects of customer consumption behavior, new-media live broadcasting, the layout of the sinking market and cross-platform cooperation. Research Purpose The prolonged epidemic has left the tourism market in a slump. This paper aims to provide a comprehensive understanding of China's online tourism industry and its existing problems, analyze the development of online tourism, guide the operation of online travel agencies, and provide a basis for decision-making. The tourism industry has been constrained by the pandemic since 2020, with China's international tourism revenue declining by $53.4 billion, roughly 40% below the 2019 level [3]. In such a severe economic environment, the online tourism industry faces cash-flow and turnover difficulties and an urgent need for transformation. One example is Ctrip, a leader in the online travel industry that has been overtaken by rivals during the pandemic. This paper provides constructive development suggestions for Ctrip through various analyses, which is conducive to the revitalization of China's cultural and tourism economy after the epidemic. Epidemic Policy Limits Offline Tourism According to Ctrip's financial reports for 2019 and 2020, Ctrip's total gross merchandise volume (GMV) in 2020 was 395 billion RMB, compared with 865 billion RMB before the epidemic in 2019 [7,8]. A comparison of total transaction volume in these two years shows that the epidemic had a significant impact on Ctrip's transaction volume. The domestic epidemic control policy is quite strict: once new confirmed cases are discovered inside or outside a province, tourist attractions, hotels and traffic in that area are suspended. The outbound tourism policy discourages non-essential travel and has not been fully reopened, preventing Ctrip's overseas business from progressing. China's epidemic policy thus slows the pace of Ctrip's online travel transactions. Meituan During the outbreak, Meituan recognized the importance of home delivery to users. It established an emergency contactless-distribution team immediately after the pandemic hit, and the "contactless delivery" service was officially launched on the first day of 2020. Within a week it covered 184 cities, and it then spread across the country. During the Wuhan lockdown, Meituan delivered 3.96 million orders and more than 90,000 meals to medical teams in Hubei. During the epidemic period, 56.22 million orders were sent to hospitals across the country, nearly 800,000 new riders were recruited, more than 600,000 businesses were supported, and more than 4 million masks were distributed in 20 provinces and cities.
At the same time, Meituan is also working with local government departments to launch intelligent takeout counters in batches across the country, making it more convenient for riders to deliver meals [9]. The COVID-19 outbreak has changed consumer lifestyles considerably. Many users who preferred offline shopping before the pandemic began buying daily necessities through online channels. This change has accelerated the growth of online shopping demand and of business traffic, and logistics volumes continue to rise, which has brought vitality to the development of the instant-delivery industry. Fliggy During the epidemic, Fliggy made great efforts to develop a variety of business activities such as e-commerce live broadcasts, short-distance family tours and group-building trips, so as to grasp the core needs of customers during the epidemic and adjust its market layout in a timely manner. Among these, e-commerce live streaming was of tremendous help to Fliggy during the epidemic. Thanks to Alibaba's first-mover advantage in e-commerce content since 2015, Fliggy was the first to introduce live streaming into the tourism industry. According to Fliggy's data, the number of users watching e-commerce live broadcasts in the second quarter of 2021 increased by roughly 200% compared with the first quarter of 2021 [10]. About 20% of the merchants with regular live broadcasts on the platform broadcast at least once a week, and from the second quarter of 2021 more and more merchants adopted the "daily live" mode. Relying on its marine animal resources, Haichang Ocean Park carried out 17 consecutive "online tours". From the perspective of public education, the two parties simultaneously created a variety of live activities, such as "online feeding" and "online adoption", to reinforce interaction with fans and customers. Since its launch, the number of subscribers to Haichang's daily live store has grown by 50%, which adequately reflects the appeal of innovative content to users. To sum up, Ctrip's reaction to the epidemic was not as prompt as Meituan's or Fliggy's: Meituan became a giant in the delivery industry in a short period of time, while Fliggy was a pioneer in live streaming and content platforms. Ctrip's insufficient response directly led to a loss of users, and Ctrip faced cash-flow pressure after the epidemic. Ctrip should not only safeguard the rights and interests of consumers, but also promptly find new ways to strengthen its market position so as to protect the profit and cash flow of Ctrip Group. Provide A Variety Of Customized Travel Services To Customers Ctrip's core audience consists of middle-to-high-end business users. From 2014 to 2019, the compound annual growth rate of users whose annual expenditure exceeds RMB 5,000 was 29% [11]. One of Ctrip's major actions in personalized travel for middle- and high-end consumers has been to develop customized travel and promote its standardization. On April 26th, 2017, Ctrip's customized travel platform 2.0 was released in Beijing with the theme "Customize as you please, accompany with your service" [12]. On May 10th, the industry's first "customized tourism service standard" was officially released, and the "customized tourism supplier rating and elimination mechanism" was fully implemented.
On August 17th, it broke the industry's unspoken rules once more: Ctrip's customized platform and over 1,000 customized service providers fully implemented "transparent quotation," clearly separating the costs and service fees of customized tours for users. Tourists benefit from a transparent and standardized service list and quotation, as well as a packaged overall quotation. A major issue in the development of the customized tourism market is that prices and services are neither standardized nor transparent; Ctrip has once again cleaned up the market by establishing service standards and raising the threshold for competition. Compared with its competitor Fliggy, when booking the same hotel for the same period on both platforms at the same time, Ctrip's checkout interface usually offers personalized convenience services such as "check in and get two masks" or "free nucleic acid test", which makes it more convenient for users to travel under epidemic conditions. In such specifics, the company considers the needs of travellers and reflects the diversity of Ctrip's services. Content Marketing Training Innovation Although the traditional OTA platform landscape has become increasingly concentrated as a result of years of fierce competition and mergers and acquisitions, in recent years the rise of short-video and social platforms (such as TikTok and Little Red Book) has created new competitors for Ctrip in travel through the mode of "influencer marketing - fan traffic conversion - consumption check-in and sharing." Marketing is therefore crucial in Ctrip's response to these new competitors, and Ctrip devotes significant resources to livestream selling and content community management. In March 2021, Ctrip released the "Tourism Marketing Hub Strategy," proposing that the "Star" flagship store be used as a carrier to aggregate the three major elements of traffic, content and commodities, superimpose travel scenarios, and build a circulating marketing ecosystem. Sun Jie described it as taking the Planet flagship store as a position, connecting merchant content, private domain and transactions in series to form a complete link and realize a closed loop from content to transaction. According to Ctrip data, the overall fan base of the Ctrip Planet flagship store increased by 34% in the fourth quarter of 2021 [13]. At the same time, content is expanding rapidly: in the fourth quarter, the daily average number of interactive users of the Ctrip Community Content Channel doubled year on year, and the number of Ctrip KOLs and the daily average number of posts by all creators both increased significantly. Evaluation Of Ctrip's Competitive Strategy Ctrip mainly adopts a strategy of differentiated competition, competing by differentiating products, channels, services and target groups. This strategy has laid a good foundation for Ctrip in the current market. Market share data for online travel enterprises in 2021 show that Ctrip had a market share of 36.3%, ranking first. Meituan ranked second with a market share of 20.6 percent, Tongcheng (Same-trip) Travel was third with 14.8%, and Qunar Travel and Fliggy took 13.9 percent and 7.3 percent of the market respectively, ranking fourth and fifth. Ctrip is currently in the leading position in the domestic online travel industry.
At present, Ctrip focuses most of its business on middle- and high-end consumer groups through its differentiated competition strategy, and has gained a number of loyal customers by adhering to the principle of providing customers with the best after-sales service and travel experience. However, despite Ctrip's development, the disadvantages of this strategy are undeniable. If competitors copy Ctrip's model and attack the middle- and high-end consumer groups that Ctrip's business model targets, Ctrip's advantages will no longer exist. Indeed, Ctrip is already in a difficult situation: Alibaba's travel platform Fliggy, relying on the traffic and resources of the Alibaba ecosystem, is trying to compete for travel users by means of 10-billion-yuan subsidies and low-cost hotels. For future development, Ctrip needs to strengthen the platform's service content. For example, Ctrip could cooperate with Amap or Baidu Map so that customers do not get lost during travel and can use a map app co-branded with Ctrip directly as a navigation tool. Ctrip should also consolidate its work on big-data analysis. As can be seen from its competitor Fliggy, Fliggy's traffic acquisition is more precisely targeted than Ctrip's; with the help of Alibaba, Fliggy's data analysis is much better than Ctrip's. Ctrip should likewise use big-data analysis to recommend hotels and attraction tickets that its users may consider. Big-data analysis not only enables Ctrip to better understand its user profiles but also builds a better foundation for future expansion. Developing a content platform is also very necessary for Ctrip. Although Ctrip has set up "XingQiuHao", a content platform created by Ctrip, few businesses write and shoot content exclusively for Ctrip; they publish on Little Red Book, Douyin and WeChat video accounts first, and only then post to XingQiuHao. A sound content platform can enhance the differentiation between Ctrip and its competitors, and more high-quality content creators will become Ctrip's natural advertisers and bring Ctrip more revenue. All in all, Ctrip should maintain its current differentiated competitiveness and intensify efforts to explore additional market areas, and, through cooperation with other Internet companies, big-data analysis and the creation of content platforms, comprehensively improve its platform services. Live-streaming sales Today, all industries are digitally connected and the tourism industry is witnessing the rise of digitalization. For Ctrip, live broadcasting is more intuitive, interactive and timely, which shortens the distance between products and users and gives people the impression that they are shopping in an online store. The anchor's presentation of what can be seen, heard and touched increases the sense of trust. At the same time, promotions are strong and affordable, and low prices are always a sales magic weapon. The overall supply of domestic products exceeds demand; aside from physical stores and online stores, live selling through a smartphone offers another low-cost marketing method, particularly for remote, economically underdeveloped areas, which is unquestionably advantageous. At the same time, Ctrip can make its after-sales service in the live broadcast room more considerate by offering refunds, without service fees, to passengers who need to cancel temporarily because of the epidemic. This kind of flexibility will increase passengers' trust in Ctrip and raise the customer repurchase rate.
Indeed, since the launch of Ctrip Live, it has focused on the integrated marketing of destinations jointly promoted by government and enterprises, helping high-star hotels affected by the epidemic to recover their cash flow more quickly, and Ctrip has driven the recovery of hundreds of cities. At the same time, despite the ongoing impact of the black-swan epidemic, Ctrip's achievements in establishing itself in the domestic tourism live-broadcast market are also noteworthy. To summarize, live broadcasting is for Ctrip not only a way to increase transaction volume, but also a necessary and important premise for its transition from a platform that carries goods in the travel industry to a platform of content creation. Collaborate with influencers In the Internet age, the growing information technology revolution has led to disruptions and changes that are hard to foresee, reaching far beyond the content or information industry itself - individual media outlets that produce video content through digital platforms often have a wider audience than professional stars. Therefore, in many vertical fields such as beauty, technology, fashion and travel, Internet influencers have exerted their own influence and helped brand marketing perform more vigorously. At the same time, a variety of content-creation tools give ordinary netizens more possibilities to express themselves fully amid today's information explosion, and short videos, which anyone can create, have become the most important content carriers. Taking travel photos, checking in at popular attractions and recording fleeting beauty with short videos are habits of most young people, and they have laid the foundation for the continuous output of UGC content on short-video platforms. On the other hand, travel is a low-frequency product that requires a longer decision-making cycle, so the user conversion cycle is often longer. Ctrip, as the industry leader, has built a high-end brand image and popularity among consumers. In the long run, if Ctrip continues to carry out site-wide travel-season marketing through the joint efforts of various Internet influencers and seizes on netizens' enthusiasm for recording their lives with short videos, this will bring long-term benefits to Ctrip. For example, Ctrip can invite influencers to take beautiful photos or short videos at tourist attractions, attach a link to the Ctrip app in the comment area, and announce that fans can receive exclusive flight-and-hotel package offers. In this way, in addition to influencing users' travel decisions through travel content, it will not only deepen travel users' and video viewers' memory of the brand, but also generate broad word-of-mouth and interactive discussion of the brand, and moreover seed demand among mainstream users in peak travel seasons such as the National Day holiday. Expand the Market by Partnering with Other Platforms At present, both e-commerce giants and tourism giants are paying more and more attention to content, and content monetization has gradually become a major trend. On December 20th, Ctrip announced the launch of the Travel Photography Channel. It is reported that the channel encourages users to post interesting things about their travels in the form of pictures, text or short videos. Furthermore, the fact that the platform has placed the channel in the middle of the bottom navigation bar on its home page shows that it attaches great importance to the development of content creation.
Through cooperation with major Internet platforms, Ctrip can drive its users to monetize and convert tourism consumption. Users can click on the scenic-spot and hotel information attached to a short video and jump to the Ctrip reservation page, encouraging them to purchase products. For example, the "FUN Travel Shake" activity jointly held by Ctrip and Douyin brought a great deal of attention to both sides; it is reported that the videos of this challenge reached 100 million views in the shortest time in Douyin's history. In the future, if Ctrip cooperates more with other platforms, it can not only increase its traffic but ultimately achieve a broader advertising effect and increase its order conversion rate. 4.2.1 Attempt to develop the sinking consumer market In past decades, the sinking market was generally characterized by low consumption levels, low prices and affordable goods. In recent years, more and more internet platforms have realized the importance of the sinking market. For Pinduoduo, for example, the main consumer groups in cities below the third tier, in counties, towns and rural areas are mostly young people. These young people have a good understanding of the trends of first- and second-tier cities, and with the spread of short-video platforms the information gap is getting smaller and smaller. Small-town youth also yearn to see the wider world. Ctrip can use the app search keywords of young people in rural areas to recommend tourism products to them and thereby promote the sale of tourism commodities. Explore business travel retail formats Ctrip's overall business model mainly covers three blocks: air tickets, railway tickets and hotel booking. Meituan and Fliggy have a variety of crossover business forms: Meituan, for example, is involved in catering and entertainment, and Fliggy's travel consumers can use the Taobao platform directly. As the economy continues to recover, business and leisure travel are gradually rising; on the other hand, coffee culture is becoming more and more widely accepted. Therefore, Ctrip can also explore business-travel retail formats, for example by setting up coffee retail stores at popular airports and high-speed rail stations. In this way it not only keeps clients spending on Ctrip's commodities but also enhances its ability to monetize. Increase Ctrip consumers' live-streaming watching time When consumers search for a commodity's keywords, Ctrip can push the relevant product live broadcast, or a replay of the product's key content, to consumers; at the same time, Ctrip can require that a certain duration of live-broadcast viewing be reached before consumers can receive product coupons for use in purchases. This method keeps more people watching the live broadcast room, which not only improves its popularity but also pushes the live broadcast into a larger traffic pool, expanding its reach to more consumers. In this way, the ability to monetize commodities can be improved. Ctrip platform increases welfare distribution to consumers The various vouchers issued by Meituan and Fliggy to users greatly stimulate their desire to consume.
Ctrip can adjust the functions of the user's homepage so that consumers can easily find the benefits available to them, and can redeem a certain proportion of cash after signing in for a certain accumulated number of days. In addition, a product welfare community, red envelopes with a variety of benefits, and coupons that can be used universally across all products would stimulate consumption and genuinely meet the psychological needs of consumers. Key Findings To sum up, the impact of the pandemic on tourism has been huge. This study analyzed the impact of tourism policies on Ctrip in the context of the epidemic. In addition, through multiple comparisons with competitors, the study found that Ctrip did not respond sensitively enough to the epidemic. By prioritizing expansion into the sinking market, Meituan captured sinking-market consumers in advance; Fliggy, on the other hand, is expanding its market through new-media live broadcasting. Facing competition from these two similar platforms, Ctrip therefore suffers declining performance and the loss of consumers. However, Ctrip provides personalized travel services and thorough after-sales service for consumers in emergency situations, which are not available on the other two platforms. The live-broadcast channel is also a sector in which Ctrip has been working hard on its product marketing strategy. Hence, under the new trend, Ctrip should cooperate with other platforms or internet celebrities in live broadcasts to expand the exposure of its products and thereby increase the proportion of commodity transactions. Expectations For Ctrip's Future Given the problems and suggestions discussed for the epidemic era, the online travel industry will become even more competitive in the future. Online travel agencies will continue their industrial transformation and upgrading; it is hoped that in the next several years Ctrip will take the initiative to seek cooperation with retail, education, new-media and social-networking platforms. At the same time, the industry should improve its business monetization ability in many respects.
5,716.2
2022-12-14T00:00:00.000
[ "Business", "Economics" ]
Search for new phenomena in events with a photon and missing transverse momentum in pp collisions at √s = 13 TeV with the ATLAS detector: Results of a search for new phenomena in events with an energetic photon and large missing transverse momentum with the ATLAS experiment at the Large Hadron Collider are reported. The data were collected in proton-proton collisions at a centre-of-mass energy of 13 TeV and correspond to an integrated luminosity of 3.2 fb^-1. The observed data are in agreement with the Standard Model expectations. Exclusion limits are presented in models of new phenomena including pair production of dark matter candidates or large extra spatial dimensions. In a simplified model of dark matter and an axial-vector mediator, the search excludes mediator masses below 710 GeV for dark matter candidate masses below 150 GeV. In an effective theory of dark matter production, values of the suppression scale M* up to 570 GeV are excluded and the effect of truncation for various coupling values is reported. For the ADD large extra spatial dimension model the search places more stringent limits than earlier searches in the same event topology, excluding M_D up to about 2.3 (2.8) TeV for two (six) additional spatial dimensions; the limits are reduced by 20-40% depending on the number of additional spatial dimensions when applying a truncation procedure. Introduction Theories of dark matter (DM) or large extra spatial dimensions (LED) predict the production of events that contain a high transverse momentum (p_T) photon and large missing transverse momentum (referred to as γ + E_T^miss events) in pp collisions at a higher rate than is expected in the Standard Model (SM). A sample of γ + E_T^miss events with a low expected contribution from SM processes provides powerful sensitivity to models of new phenomena [1-5]. The ATLAS [6,7] and CMS [8,9] collaborations have reported limits on various models based on searches for an excess in γ + E_T^miss events using pp collisions at centre-of-mass energies of √s = 7 and 8 TeV (LHC Run 1). This paper reports the results of a search for new phenomena in γ + E_T^miss events in pp collisions at √s = 13 TeV. Although the existence of DM is well established [10], it is not explained by current theories. One candidate is a weakly interacting massive particle (WIMP, also denoted by χ), which has an interaction strength with SM particles near the level of the weak interaction. If WIMPs interact with quarks via a mediator particle, they could be pair-produced in pp collisions at sufficiently high energy. The χχ pair would be invisible to the detector, but γ + E_T^miss events can be produced via radiation of an initial-state photon in q q → χ χ interactions [11]. A model-independent approach to dark matter production in pp collisions is through effective field theories (EFT) with various forms of interaction between the WIMPs and the SM particles [11]. However, as the typical momentum transfer in pp collisions at the LHC could reach the cut-off scale required for the EFT approximation to be valid, it is crucial to present the results of the search in terms of models that involve the explicit production of the intermediate state, as shown in Fig.
1 (left). This paper focuses on simplified models assuming Dirac fermion DM candidates produced via an s-channel mediator with axial-vector interactions [12][13][14]. In this case, the interaction is effectively described by five parameters: the WIMP mass m_χ, the mediator mass m_med, the width of the mediator Γ_med, the coupling of the mediator to quarks g_q, and the coupling of the mediator to the dark matter particle g_χ. In the limit of large mediator mass, these simplified models map onto the EFT operators, with the suppression scale M* linked to m_med by the relation M* = m_med / √(g_q g_χ) [15]. The paper also considers a specific EFT benchmark, for which neither a simplified model completion nor simplified models yielding similar kinematic distributions are implemented in an event generator [16]. A dimension-7 EFT operator with direct couplings between DM and electroweak (EW) bosons, describing a contact interaction of type γγχχ, is used [14]. The effective coupling to photons is parameterized by the coupling strengths k_1 and k_2, which control the strength of the coupling to the U(1) and SU(2) gauge sectors of the SM, respectively. In this model, dark matter production proceeds via q q → γ → γχχ, without requiring initial-state radiation. The process is shown in Fig. 1 (right). There are four free parameters in this model: the EW coupling strengths k_1 and k_2, m_χ, and the suppression scale Λ. The ADD model of LED [17] aims to solve the hierarchy problem by hypothesizing the existence of n additional spatial dimensions of size R, leading to a new fundamental scale M_D related to the Planck mass, M_Planck, through M_Planck² ≈ M_D^(2+n) R^n. If these dimensions are compactified, a series of massive graviton (G) modes results. Stable gravitons would be invisible to the ATLAS detector, but if the graviton couples to photons and is produced in association with a photon, the detector signature is a γ + E_T^miss event. Examples of graviton production are illustrated in Fig. 2. The search follows a strategy similar to the search performed using the 8 TeV data collected during LHC Run 1 [7]. Due to the increased centre-of-mass energy, the search presented here achieves better sensitivity for the ADD model, the case where direct comparison with the 8 TeV search result is possible, as is shown later. Different DM models, proposed in Ref. [14], are also considered. The paper is organized as follows. A brief description of the ATLAS detector is given in Section 2. The signal and background Monte Carlo (MC) simulation samples used are described in Section 3. The reconstruction of physics objects is explained in Section 4, and the event selection is described in Section 5. Estimation of the SM backgrounds is outlined in Section 6. The results are described in Section 7 and the systematic uncertainties are given in Section 8. The interpretation of results in terms of models of new phenomena, including pair production of dark matter candidates or large extra spatial dimensions, is described in Section 9. A summary is given in Section 10. The ATLAS detector The ATLAS detector [18] is a multi-purpose particle physics apparatus with a forward-backward symmetric cylindrical geometry and near 4π coverage in solid angle.
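The two mass relations quoted above lend themselves to a quick numerical check. The sketch below evaluates the large-mediator-mass EFT mapping M* = m_med/√(g_q g_χ) and the compactification radius implied by M_Planck² ≈ M_D^(2+n) R^n; it assumes natural units, an illustrative Planck-mass value, and example inputs that are not analysis results.

```python
import math

def eft_suppression_scale(m_med, g_q=0.25, g_chi=1.0):
    """Large-mediator-mass limit: M* = m_med / sqrt(g_q * g_chi), masses in GeV."""
    return m_med / math.sqrt(g_q * g_chi)

def add_radius(m_d, n, m_planck=1.22e19):
    """Size R (in GeV^-1, natural units) of n extra dimensions from M_Planck^2 ~ M_D^(n+2) * R^n."""
    return (m_planck**2 / m_d**(n + 2)) ** (1.0 / n)

print(eft_suppression_scale(1000.0))   # 1 TeV mediator with g_q = 0.25, g_chi = 1 -> M* = 2000 GeV
print(add_radius(2000.0, 2))           # M_D = 2 TeV, n = 2 -> R in GeV^-1
```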
The inner tracking detector (ID) covers the pseudorapidity range |η| < 2.5, and consists of a silicon pixel detector, a silicon microstrip detector, and, for |η| < 2.0, a straw-tube transition radiation tracker (TRT). During the LHC shutdown in 2013-14, an additional inner pixel layer, known as the insertable B-layer [19], was added around a new, smaller-radius beam pipe. The ID is surrounded by a thin superconducting solenoid providing a 2 T magnetic field. A high-granularity lead/liquid-argon sampling electromagnetic calorimeter covers the region |η| < 3.2 and is segmented longitudinally in shower depth. The first layer, with high granularity in the η direction, is designed to allow efficient discrimination between single-photon showers and two overlapping photons originating from a π0 decay. The second layer collects most of the energy deposited in the calorimeter in electromagnetic showers initiated by electrons or photons. Very high energy showers can leave significant energy deposits in the third layer, which can also be used to correct for energy leakage beyond the EM calorimeter. A steel/scintillator-tile calorimeter provides hadronic coverage in the range |η| < 1.7. The liquid-argon technology is also used for the hadronic calorimeters in the end-cap region 1.5 < |η| < 3.2 and for electromagnetic and hadronic measurements in the forward region up to |η| = 4.9. The muon spectrometer (MS) surrounds the calorimeters. It consists of three large air-core superconducting toroidal magnet systems, precision tracking chambers providing accurate muon tracking out to |η| = 2.7, and fast detectors for triggering in the region |η| < 2.4. A two-level trigger system is used to select events for offline analysis [20]. Monte Carlo simulation samples Several MC simulated samples are used to estimate the signal acceptance and the detector efficiency, and to help in the estimation of the SM background contributions. For all the DM samples considered here, the values of the free parameters and the event generation settings were chosen following the recommendations given in Ref. [14]. Samples of DM production in simplified models are generated via an s-channel mediator with axial-vector interactions. The g_q coupling is set to be universal in quark flavour and equal to 0.25, g_χ is set to 1.0, and Γ_med is computed as the minimum width allowed given the couplings and masses. A grid of points in the m_χ-m_med plane is generated. The parton distribution function (PDF) set used is NNPDF30_lo_as_0130 [21]. The program MG5_aMC@NLO v2.2.3 [22] is used to generate the events, in conjunction with Pythia 8.186 [23] with the NNPDF2.3 LO PDF set [24,25] and the A14 set of tuned parameters (tune) [26]. A photon with at least 130 GeV of transverse momentum is required in the event generation. For a fixed m_χ, higher m_med leads to harder p_T and E_T^miss spectra. For a very heavy mediator (≥ 10 TeV), EFT conditions are recovered. For DM samples from an EFT model involving dimension-7 operators with a contact interaction of type γγχχ, the parameters which only influence the cross section are set to k_1 = k_2 = 1.0 and Λ = 3.0 TeV. A scan over a range of values of m_χ is performed. The settings of the generators, PDFs, underlying-event tune and generator-level requirements are the same as for the simplified model DM sample generation described above.
Signal samples for ADD models are simulated with the Pythia 8.186 generator, using the NNPDF2.3LOPDF with the A14 tune.A requirement of pTmin > 100 GeV, where pTmin defines the lowest transverse momentum used for the generation, is applied to the leading-order (LO) matrix elements for the 2 → 2 process to increase the efficiency of event generation.Simulations are run for two values of the scale parameter M D (2.0 and 3.0 TeV) and with the number of extra dimensions, n, varied from two to six. For W/Zγ backgrounds, events containing a charged lepton and neutrino or a lepton pair (lepton is an e, µ or τ), together with a photon and associated jets are simulated using the Sherpa 2.1.1 generator [27].The matrix elements including all diagrams with three electroweak couplings are calculated with up to three partons at LO and merged with Sherpa parton shower [28] using the ME+PS@LO prescription [29].The CT10 PDF set [30] is used in conjunction with a dedicated parton shower tuning developed by the Sherpa authors.For γ * /Z events with the Z decaying to charged particles a requirement on the dilepton invariant mass of m > 10 GeV is applied at generator level. Events containing a photon with associated jets are also simulated using Sherpa 2.1.1,generated in several bins of photon p T from 35 GeV up to larger than 4 TeV.The matrix elements are calculated at LO with up to three partons (lowest p T slice) or four partons and merged with Sherpa parton shower using the ME+PS@LO prescription.The CT10 PDF set is used in conjunction with the dedicated parton shower tuning. For W/Z+jets backgrounds, events containing W or Z bosons with associated jets are again simulated using Sherpa 2.1.1.The matrix elements are calculated for up to two partons at NLO and four partons at LO using the Comix [31] and OpenLoops [32] matrix element generators and merged with Sherpa parton shower using the ME+PS@NLO prescription [33].As in the case of the γ+jets samples, the CT10 PDF set is used together with the dedicated parton shower tuning.The W/Z+jets events are normalized to NNLO cross sections [34].These samples are also generated in several p T bins. Multi-jet processes are simulated using the Pythia 8.186 generator.The A14 tune is used together with the NNPDF2.3LOPDF set.The EvtGen v1.2.0 program [35] is used to simulate the bottom and charm hadron decays. Diboson processes with four charged leptons, three charged leptons and one neutrino or two charged leptons and two neutrinos are simulated using the Sherpa 2.1.1 generator.The matrix elements contain all diagrams with four electroweak vertices.They are calculated for up to one parton (for either four charged leptons or two charged leptons and two neutrinos) or zero partons (for three charged leptons and one neutrino) at NLO, and up to three partons at LO using the Comix and OpenLoops matrix element generators and merged with Sherpa parton shower using the ME+PS@NLO prescription.The CT10 PDF set is used in conjunction with the dedicated parton shower tuning.The generator cross sections are used in this case, which are at NLO. 
For the generation of t t and single top quarks in the Wt and s-channel, the Powheg-Box v2 [36,37] generator is used, with the CT10 PDF set used in the matrix element calculations.For all top processes, top-quark spin correlations are preserved.For t-channel production, top quarks are decayed using MadSpin [38].The parton shower, fragmentation, and the underlying event are simulated using Pythia 6.428 [39] with the CTEQ6L1 [40] PDF sets and the corresponding Perugia 2012 tune [41].The top mass is set to 172.5 GeV.The EvtGen v1.2.0 program is used for properties of the bottom and charm hadron decays. Multiple pp interactions in the same or neighbouring bunch crossings superimposed on the hard physics process (referred to as pile-up) are simulated with the soft QCD processes of Pythia 8.186 using the A2 tune [42] and the MSTW2008LO PDF set [43].The events are reweighted to accurately reproduce the average number of interactions per bunch crossing in data. All simulated samples are processed with a full ATLAS detector simulation [44] based on Geant4 [45].The simulated events are reconstructed and analysed with the same analysis chain as for the data, using the same trigger and event selection criteria discussed in Section 5. Event reconstruction Photons are reconstructed from clusters of energy deposits in the electromagnetic calorimeter measured in projective towers.Clusters without matching tracks are classified as unconverted photon candidates.A photon is considered as a converted photon candidate if it is matched to a pair of tracks that pass a requirement on TRT-hits [46] and form a vertex in the ID which is consistent with originating from a massless particle, or if it is matched to a single track passing a TRT-hits requirement and has a first hit after the innermost layer of the pixel detector.The photon energy is corrected by applying the energy scales measured with Z → e + e − decays [47].The trajectory of the photon is reconstructed using the longitudinal (shower depth) segmentation of the calorimeters and a constraint from the average collision point of the proton beams.For converted photons, the position of the conversion vertex is also used if tracks from the conversion have hits in the silicon detectors.Identification requirements are applied in order to reduce the contamination from π 0 or other neutral hadrons decaying to two photons.The photon identification is based on the profile of the energy deposits in the first and second layers of the electromagnetic calorimeter.Candidate photons are required to have p T > 10 GeV, to satisfy the "loose" identification criteria defined in Ref. [48] and to be within |η| < 2.37.Photons used in the event selection must additionally satisfy the "tight" identification criteria [48] and be isolated as follows.The energy in the calorimeters in a cone of size ∆R = (∆η) 2 + (∆φ) 2 = 0.4 around the cluster barycentre excluding the energy associated with the photon cluster is required to be less than 2.45 GeV + 0.022p γ T , where p γ T is the p T of the photon candidate.This cone energy is corrected for the leakage of the photon energy from the central core and for the effects of pile-up [47]. Electrons are reconstructed from clusters in the electromagnetic calorimeter matched to a track in the ID.The criteria for their identification, and the calibration steps, are similar to those used for photons.Electron candidates must satisfy the "medium" identification requirement of Ref. 
[47].Muons are identified either as a combined track in the MS and ID systems, or as an ID track that, once extrapolated to the MS, is associated with at least one track segment in the MS.Muon candidates must satisfy the "medium" identification requirement [49].The significance of the transverse impact parameter, defined as the transverse impact parameter d 0 divided by its estimated uncertainty, σ d 0 , of tracks with respect to the primary vertex3 is required to satisfy |d 0 |/σ d 0 < 5.0 for electrons and |d 0 |/σ d 0 < 3.0 for muons.The longitudinal impact parameter z 0 must be |z 0 | sin θ < 0.5 mm for both electrons and muons.Electrons are required to have p T > 7 GeV and |η| < 2.47, while muons are required to have p T > 6 GeV and |η| < 2.7. If any selected electron shares its inner detector track with a selected muon, the electron is removed and the muon is kept, in order to remove electron candidates coming from muon bremsstrahlung followed by photon conversion. Jets are reconstructed using the anti-k t algorithm [50,51] with a radius parameter R = 0.4 from clusters of energy deposits at the electromagnetic scale in the calorimeters.A correction used to calibrate the jet energy to the scale of its constituent particles [52,53] is then applied.In addition, jets are corrected for contributions from pile-up interactions [52].Candidate jets are required to have p T > 20 GeV.To suppress pile-up jets, which are mainly at low p T , a jet vertex tagger [54], based on tracking and vertexing information, is applied in jets with p T < 50 GeV and |η| < 2.4.Jets used in the event selection are required to have p T > 30 GeV and |η| < 4.5.Hadronically decaying τ leptons are considered as jets as in the Run 1 analysis [7]. To resolve ambiguities which can happen in object reconstruction, an overlap removal procedure is performed in the following order.If an electron lies within ∆R < 0.2 of a candidate jet, the jet is removed from the event, while if an electron lies within 0.2 < ∆R < 0.4 of a jet, the electron is removed.Muons lying within ∆R < 0.4 with respect to the remaining candidate jets are removed, except if the number of tracks with p T > 0.5 GeV associated with the jet is less than three.In the latter case, the jet is discarded and the muon kept.Finally if a candidate photon lies within ∆R < 0.4 of a jet, the jet is removed. The momentum imbalance in the transverse plane is obtained from the negative vector sum of the reconstructed and calibrated physics objects, selected as described above, and is referred to as missing transverse momentum, E miss T .The symbol E miss T is used to denote its magnitude.Calorimeter energy deposits and tracks are associated with a reconstructed and identified high-p T object in a specific order: electrons with p T > 7 GeV, photons with p T > 10 GeV, and jets with p T > 20 GeV [55].Tracks from the primary vertex not associated with any such objects ("soft term") are also taken into account in the E miss T reconstruction [56].This track-based soft term is more robust against pile-up and provides a better E miss T measurement in terms of resolution and scale than the calorimeter-based soft term used in Ref. [7]. Corrections are applied to the objects in the simulated samples to account for differences compared to data in object reconstruction, identification and isolation efficiencies for both the selected leptons and photons and for the vetoed leptons. 
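The overlap-removal sequence described above can be summarised in code. The following sketch assumes a simple event representation in which each object is a dict with eta, phi and, for jets, an associated-track count; it illustrates the ordering of the steps and is not the experiment's software.

```python
import math

def delta_r(a, b):
    """Angular separation dR = sqrt(d_eta^2 + d_phi^2) with phi wrapped to [-pi, pi]."""
    d_eta = a["eta"] - b["eta"]
    d_phi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(d_eta, d_phi)

def overlap_removal(electrons, muons, photons, jets):
    # 1) electron-jet: dR < 0.2 removes the jet; 0.2 < dR < 0.4 removes the electron
    jets = [j for j in jets if all(delta_r(e, j) >= 0.2 for e in electrons)]
    electrons = [e for e in electrons if all(not (0.2 < delta_r(e, j) < 0.4) for j in jets)]
    # 2) muon-jet: remove the muon, unless the jet has fewer than three associated tracks,
    #    in which case the jet is discarded and the muon is kept
    kept_jets, kept_muons = [], list(muons)
    for j in jets:
        close = [m for m in kept_muons if delta_r(m, j) < 0.4]
        if close and j["n_tracks"] < 3:
            continue                                      # drop the low-track-multiplicity jet
        kept_muons = [m for m in kept_muons if m not in close]
        kept_jets.append(j)
    # 3) photon-jet: dR < 0.4 removes the jet
    kept_jets = [j for j in kept_jets if all(delta_r(g, j) >= 0.4 for g in photons)]
    return electrons, kept_muons, photons, kept_jets
```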
Event selection The data were collected in pp collisions at √s = 13 TeV during 2015. The events for the analysis are recorded using a trigger requiring at least one photon candidate with an online p_T threshold of 120 GeV passing "loose" identification requirements based on the shower shapes in the EM calorimeter as well as on the energy leaking into the hadronic calorimeter from the EM calorimeter [57]. Only data satisfying beam, detector and data quality criteria are considered. The data used for the analysis correspond to an integrated luminosity of 3.2 fb^-1. The uncertainty in the integrated luminosity is ±5%. It is derived following a methodology similar to that detailed in Ref. [58], from a preliminary calibration of the luminosity scale using x-y beam-separation scans performed in August 2015. Quality requirements are applied to photon candidates in order to reject events containing photons arising from instrumental problems or from non-collision background [46]. Beam-induced background is highly suppressed by applying the criteria described in Section 6.5. In addition, quality requirements are applied to remove events containing candidate jets arising from detector noise and out-of-time energy deposits in the calorimeter from cosmic rays or other non-collision sources [59]. Events are required to have a reconstructed primary vertex. The criteria for selecting events in the signal region (SR) are optimized considering the discovery potential for the simplified dark matter model. This SR also provides good sensitivity to the other models described in Section 1. Events in the SR are required to have E_T^miss > 150 GeV, and the leading photon has to satisfy the "tight" identification criteria, to have p_T^γ > 150 GeV and |η| < 2.37, excluding the calorimeter barrel/end-cap transition region 1.37 < |η| < 1.52, and to be isolated. With respect to the Run 1 analysis, a re-optimization was performed that led to the following changes: a higher threshold for p_T^γ (150 GeV instead of 125 GeV) and a larger |η| region (|η| < 2.37 instead of 1.37) are used for the leading photon. It is required that the photon and E_T^miss do not overlap in azimuth: ∆φ(γ, E_T^miss) > 0.4. Events with more than one jet or with a jet with ∆φ(jet, E_T^miss) < 0.4 are rejected. The remaining events with one jet are retained to increase the signal acceptance and reduce systematic uncertainties related to the modelling of initial-state radiation. Events are required to have no electrons or muons passing the requirements described in Section 4. The lepton veto mainly rejects W/Z events with charged leptons in the final state. For events satisfying these criteria, the efficiency of the trigger used in the analysis is 0.997 +0.003 -0.008, as determined using a control sample of events selected with an E_T^miss trigger with a threshold of 70 GeV. The final data sample contains 264 events, of which 80 have a converted photon, and 170 and 94 events have zero and one jet, respectively. The total number of events observed in the SR in data is compared with the estimated total number of events in the SR from SM backgrounds. The latter is obtained from a simultaneous fit to various control regions (CR) defined in the following. Single-bin SR and CRs are considered in the fit: no shape information within these regions is used.
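For orientation, the signal-region requirements listed above can be collected into a single selection function. This is a schematic sketch assuming a hypothetical per-event dictionary of calibrated objects (all momenta in GeV); it is not the analysis code.

```python
import math

def delta_phi(phi1, phi2):
    return abs((phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi)

def passes_signal_region(ev):
    if ev["met"] <= 150.0:                                   # E_T^miss > 150 GeV
        return False
    g = ev["leading_photon"]
    in_crack = 1.37 < abs(g["eta"]) < 1.52
    if not (g["is_tight"] and g["isolated"] and g["pt"] > 150.0
            and abs(g["eta"]) < 2.37 and not in_crack):
        return False
    if delta_phi(g["phi"], ev["met_phi"]) <= 0.4:            # photon-MET separation
        return False
    if len(ev["jets"]) > 1:                                  # at most one jet
        return False
    if any(delta_phi(j["phi"], ev["met_phi"]) < 0.4 for j in ev["jets"]):
        return False
    if ev["electrons"] or ev["muons"]:                       # lepton veto
        return False
    return True
```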
Background estimation The SM background to the γ + E miss T final state is dominated by the Z(→ νν)γ process, where the photon is due to initial-state radiation.Secondary contributions come from Wγ and Zγ production with unidentified electrons, muons or with hadronically decaying τ leptons.There is also a contribution from W/Z production where a lepton or an associated radiated jet is misidentified as a photon.In addition, there are smaller contributions from top-quark pair, diboson, γ+jets and multi-jet production. All background estimations are extrapolated from orthogonal data samples.Control regions, built to be enriched in a specific background, are used to constrain the normalization of W/Zγ and γ+jets backgrounds.The normalization is obtained via a simultaneous likelihood fit [60] to the observed yields in all single-bin CRs.Poisson likelihood functions are used to model the expected event yields in all regions.The systematic uncertainties described in Section 8 are treated as Gaussian-distributed nuisance parameters in the likelihood function.The fit in the CRs is performed to obtain the normalization factors for the Wγ, Zγ and γ+jets processes, which are then used to constrain background estimates in the SR.The same normalization factor is used for both Z(→ νν)γ and Z decaying to charged leptons in SR events. The backgrounds due to fake photons from the misidentification of electrons or jets in W/Z+jets, top, diboson and multi-jet events are estimated using data-driven techniques based on studies of electrons and jets faking photons (see Sections 6.3 and 6.4). Zγ and Wγ backgrounds For the estimation of the W/Zγ background, three control regions are defined by selecting events with the same criteria used for the SR but inverting the lepton vetoes.In the first control region (1muCR) the Wγ contribution is enhanced by requiring the presence of a muon.The second and third control regions enhance the Zγ background by requiring the presence of a pair of muons (2muCR) or electrons (2eleCR).In both 1muCR and 2muCR, to ensure that the E miss T spectrum is similar to the one in the SR, muons are treated as non-interacting particles in the E miss T reconstruction.The same procedure is followed for electrons in the 2eleCR.In each case, the CR lepton selection follows the same requirements as the SR lepton veto, with the addition that the leptons must be isolated with "loose" criteria [49].In both the Zγ-enriched control regions, the dilepton invariant mass m is required to be greater than 20 GeV.The normalization of the dominant Zγ background process is largely constrained by the event yields in the 2muCR and the 2eleCR.The signal contamination in all CRs is negligible.The expected fraction of signal events in the 1muCR is at the level of 0.15%.In the 2muCR and 2eleCR the contamination is zero due to the requirement of two leptons. 
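The normalization fit can be illustrated with a toy version of the simultaneous Poisson likelihood over the single-bin control regions. The yields below are invented placeholders and the sketch omits the systematic nuisance parameters of the real fit; it only shows how the Wγ, Zγ and γ+jets normalization factors would be extracted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Toy single-bin CR inputs in the order (1muCR, 2muCR, 2eleCR, PhJetCR); all numbers are invented.
observed  = np.array([120.0, 45.0, 40.0, 300.0])
mc_wgamma = np.array([70.0, 1.0, 1.0, 5.0])
mc_zgamma = np.array([10.0, 35.0, 32.0, 4.0])
mc_gjets  = np.array([3.0, 0.5, 0.5, 280.0])
fakes     = np.array([8.0, 2.0, 2.0, 15.0])    # data-driven fake-photon estimates

def nll(k):
    k_w, k_z, k_g = k
    mu = k_w * mc_wgamma + k_z * mc_zgamma + k_g * mc_gjets + fakes
    # Poisson log-likelihood summed over the single-bin control regions
    return -np.sum(observed * np.log(mu) - mu - gammaln(observed + 1.0))

fit = minimize(nll, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print(dict(zip(["k_Wgamma", "k_Zgamma", "k_gamma+jets"], fit.x)))
```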
γ+jets background The γ+jets background in the signal region consists of events where the jet is poorly reconstructed and partially lost, creating fake E_T^miss. This background is suppressed by the large E_T^miss and the large jet-E_T^miss azimuthal separation requirements. It is estimated from simulated γ+jets events corrected with a normalization factor that is determined in a specific control region (PhJetCR) enriched in γ+jets events. This CR is defined with the same criteria as used for the SR, but requiring 85 GeV < E_T^miss < 110 GeV and the azimuthal separation between the photon and E_T^miss, ∆φ(γ, E_T^miss), to be smaller than 3, to minimize the contamination from signal events. The upper limit on the expected fraction of signal events in the PhJetCR has been estimated to be at the level of 3%. The extrapolation in E_T^miss of the γ+jets background from the CR to the SR was checked in a validation region defined with higher E_T^miss (125 < E_T^miss < 250 GeV) and requiring ∆φ(γ, E_T^miss) < 3.0; no evidence of mismodelling was found. Fake photons from misidentified electrons Contributions from processes in which an electron is misidentified as a photon are estimated by scaling yields from a sample of e + E_T^miss events by an electron-to-photon misidentification factor. This factor is measured with mutually exclusive samples of e+e− and γ+e events in data. To establish a pure sample of electrons, m_ee and m_eγ are both required to be consistent with the Z boson mass to within 10 GeV, and E_T^miss is required to be smaller than 40 GeV. The misidentification factor, calculated as the ratio of the number of γ+e events to the number of e+e− events, is parameterized as a function of p_T and pseudorapidity and varies between 0.8% and 2.6%. Systematic uncertainties from three different sources are added in quadrature: the difference between misidentification factors measured in data in two different windows around the Z mass (5 GeV and 10 GeV), the difference when measured in Z(→ ee) MC events with the same method as used in data compared to using generator-level information, and the difference when measured in Z(→ ee) and W(→ eν) MC events using generator-level information. Similar estimates are made for the three control regions with leptons, by applying the misidentification factor to events selected using the same criteria as used for these control regions but requiring an electron instead of a photon. The estimated contribution of this background in the SR and the associated error are reported in Section 7.
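A stripped-down version of the electron-to-photon fake-factor estimate is sketched below: the factor is the ratio of γ+e to e+e yields measured in (p_T, η) bins and is then applied to the e + E_T^miss sample. All yields are hypothetical placeholders chosen to give factors in the 1-2% range quoted above.

```python
import numpy as np

# Hypothetical yields per (pT, eta) bin after the Z-mass-window and MET < 40 GeV selection.
n_egamma = np.array([[120.0, 80.0], [60.0, 45.0]])         # gamma + e events
n_ee     = np.array([[9000.0, 4000.0], [4500.0, 2400.0]])  # e + e events

fake_factor = n_egamma / n_ee                  # per-bin electron-to-photon misID factor

# Scale the e + MET control yields bin by bin to estimate the fake-photon background.
e_met_yields = np.array([[300.0, 150.0], [120.0, 60.0]])
fake_photon_background = float(np.sum(fake_factor * e_met_yields))
print(fake_factor, fake_photon_background)
```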
Fake photons from misidentified jets Background contributions from events in which a jet is misidentified as a photon are estimated using a sideband counting method [61].This method relies on counting photon candidates in four regions of a two-dimensional space, defined by the transverse isolation energy and by the quality of the identification criteria.A signal region (region A) is defined by photon candidates that are isolated with tight identification.Three background regions are defined, consisting of photon candidates which are either tight and non-isolated (region B), non-tight and isolated (region C) or non-tight and non-isolated (region D).The method relies on the fact that signal contamination in the three background regions is small and that the isolation profile in the non-tight region is the same as that of the background in the tight region.The number of background candidates in the signal region (N A ) is calculated by taking the ratio of the two non-tight regions (N C /N D ) multiplied by the number of candidates in the tight, non-isolated region (N B ).This method is applied in all analysis regions: the SR and the four CRs.The systematic uncertainty of the method is evaluated by varying the criteria of tightness and isolation used to define the four regions.This estimate also accounts for the contribution from multi-jet events, which can mimic the γ + E miss T signature if one jet is misreconstructed as a photon and one or more of the other jets are poorly reconstructed, resulting in large E miss T .The estimated contribution of this background in the SR and the associated error are reported in Section 7. Beam-induced background Muons from beam background can leave significant energy deposits in the calorimeters, mainly in the region at large |η|, and hence can lead to reconstructed fake photons.These beam-induced fakes do not point back to the primary vertex, and the photon trajectory provides a powerful rejection criterion.The |z| position of the intersection of the extrapolated photon trajectory with the beam axis is required to be smaller than 0.25 m, which rejects 98.5% of these fake photons.The residual beam background after the final event selection is found to be negligible, about 0.02%. Final background estimation Background estimates in the SR are derived from a simultaneous fit to the four single-bin control regions (1muCR, 2muCR, 2eleCR and PhJetCR) in order to assess whether the observed SR yield is consistent with the background model.For each CR, the inputs to the fit are: the number of events seen in the data, the number of events expected from MC simulation for the W/Zγ and γ+jets backgrounds, whose normalizations are free parameters, and the number of fake-photon events obtained from the data-driven techniques.The fitted values of the normalization factors for Wγ and Zγ are k Wγ = 1.50 ± 0.26 and k Zγ = 1.19 ± 0.21, while the normalization factor for the γ+jets background is k γ+jets = 0.98 ± 0.28.The uncertainties include those from the various sources described in Section 8.The factor k Wγ is large owing to the data-MC normalization difference in the 1muCR, which can potentially be reduced using higher-order corrections for the Vγ cross sections [62], which are not available for the selection criteria used here. Post-fit distributions of E miss T in the three lepton CRs and in the PhJetCR are shown in Fig. 3 and Fig. 
4.These distributions illustrate the kinematics of the selected events.Their shape is not used in the simultaneous fit, which is performed on the single-bin CRs. Results Table 1 presents the observed number of events and the SM background predictions in the SR, obtained from the simultaneous fit to the single-bin CRs.The same numbers are also shown in the three lepton CRs and in the PhJetCR.The contribution from W/Zγ with W/Z decaying to τ includes both the leptonic and the hadronic τ decays, considered in this search as jets.The fraction of W(→ τν) and Z(→ ττ) with respect to the total background corresponds to about 12% and 0.8%, respectively.The post-fit E miss T distribution and the photon p T distribution in the SR are shown in Fig. 5. Table 1: Observed event yields in 3.2 fb −1 compared to expected yields from SM backgrounds in the signal region (SR) and in the four control regions (CRs), as predicted from the simultaneous fit to all single-bin CRs.The MC yields before the fit are also shown.The uncertainty includes both the statistical and systematic uncertainties described in Section 8.The individual uncertainties can be correlated and do not necessarily add in quadrature to equal the total background uncertainty. Systematic uncertainties Systematic uncertainties in the background predictions in the SR are presented as percentages of the total background prediction.This prediction is obtained from the simultaneous fit to all single-bin CRs, which provides constraints on many sources of systematic uncertainty, as the normalizations of the dominant background processes are fitted parameters.The dominant systematic uncertainties are summarised in Table 2. The total background prediction uncertainty, including systematic and statistical contributions, is approximately 11%, dominated by the statistical uncertainty in the control regions, which amounts to approximately 9%.The largest relative systematic uncertainty of 5.8% is due to the electron fake rate.This is mainly driven by the small number of events available for the estimation of the electron-to-photon misidentification factor yielding a precision of 30-100%, depending on p T and η.PDF uncertainties have an impact on the Vγ samples in each region but the effect on normalization is largely absorbed in the fit.They are evaluated following the prescriptions of the PDF group recommendations [63] and using a reweighting procedure implemented in the LHAPDF Tool [64].These uncertainties contribute 2.8% to the background prediction uncertainty affecting mainly the Z(→ νν)γ background.The uncertainty on the jet fake rate contributes a relative uncertainty of 2.4% and affects mainly the normalization of W(→ ν)γ background, while the uncertainty on the muon reconstruction and isolation efficiency gives a relative uncertainty of 1.5% and mainly affects the Z(→ )γ background.Finally the uncertainty on the jet energy resolution accounts for 1.2% of the uncertainty and the most affected background is γ + jets.After the fit, the uncertainty on the luminosity [58] is found to have a negligible impact on the background estimation. 
For the signal-related systematics, the PDF uncertainties are evaluated in the same way described above for the background samples, while QCD scale uncertainties are evaluated by varying the renormalization and factorization scales by factors 2.0 and 0.5 with respect to the nominal values used in the MC generation.The uncertainties due to the choice of underlying-event tune used with Pythia 8.186 are computed by generating MC samples with the alternative underlying-event tunes described in Ref. [26].[65] 1.2% Photon energy scale 0.6% E miss T soft term scale and resolution 0.4% Photon energy resolution 0.2% Jet energy scale [53] 0.1% Interpretation of results The 264 events observed in data are consistent with the prediction of 295 ± 34 events from SM backgrounds.The results are therefore interpreted in terms of exclusion limits in models that would produce an excess of γ + E miss T events.Upper bounds are calculated using a one-sided profile likelihood ratio and the CL S technique [66,67], evaluated using the asymptotic approximation [68].The likelihood fit includes both the SR and the CRs. Limits on the fiducial cross section of a potential signal beyond the SM, defined as the product of the cross section times the fiducial acceptance A, are provided.These limits can be extrapolated within some approximations to models producing γ + E miss T events once A is known.The value of A for a particular model is computed by applying the same selection criteria as in the SR but at the particle level; in this computation E miss T is given by the vector sum of the transverse momenta of all invisible particles.The value of A is 0.43-0.56(0.4) for the DM (ADD) samples generated for this search following the specifications given in Section 3. The limit is computed by dividing the limit on the visible cross section σ × A × by the fiducial reconstruction efficiency .The latter is conservatively taken to be 78%, corresponding to the lowest efficiency found in the ADD and DM models studied here, for which the efficiency ranges from 78% to 91%.The observed (expected) upper limits on the fiducial cross section σ × A for the production of γ + E miss T events are 17.8 (25.5) fb at 95% confidence level (CL) and 14.6 (21.7) fb at 90% CL.The observed upper limit at 95% CL would be 15.3 fb using the largest efficiency value of 91%. When placing limits on specific models, the signal-related systematic uncertainties calculated as described in Section 8 affecting A × (PDF, scales, initial-and final-state radiation) are included in the statistical analysis, while the uncertainties affecting the cross section (PDF, scales) are indicated as bands around the observed limits and written as σ theo . Simplified models with explicit mediators are robust for all values of the momentum transfer Q tr [14].For the simplified model with an axial-vector mediator, Fig. 6 shows the observed and expected contours corresponding to a 95% CL exclusion as a function of m med and m χ for g q = 0.25 and g χ =1.The region of the plane under the limit curves is excluded.The region not allowed due to perturbative unitarity violation is to the left of the line defined by m χ = √ π/2m med [69].The line corresponding to the DM thermal relic abundance [70] is also indicated in the figure.The search excludes mediator masses below 710 GeV for χ masses below 150 GeV. Figure 7 shows the contour corresponding to a 90% CL exclusion translated to the χ-proton scattering cross section vs. 
m χ plane.Bounds on the χ-proton cross section are obtained following the procedure described in Ref. [71], assuming that the axial-vector mediator with couplings g q = 0.25 and g χ = 1.0 is solely responsible for both collider χ pair production and for χ-nucleon scattering.In this plane a comparison with the result from direct DM searches [72][73][74] is possible.The search provides stringent limits on the scattering cross section at the order of 10 −41 cm 2 up to m χ masses of about 150 GeV.The limit placed in this search extends to arbitrarily low values of m χ , as the acceptance at lower mass values is the same as the one at the lowest m χ value shown here. In the case of the model of γγχ χ interactions, lower limits are placed on the effective mass scale M * as a function of m χ , as shown in Fig. 8.The EFT is not always valid, so a truncation procedure is applied [75].In this procedure, the scale at which the EFT description becomes invalid (M cut ) is assumed to be related to M * through M cut = g * M * , where g * is the EFT coupling.Events having a centre-of-mass energy larger than M cut are removed and the limit is recomputed.The effect of the truncation for various representative The observed and expected 95% CL exclusion limit for a simplified model of dark matter production involving an axial-vector operator, Dirac DM and couplings g q = 0.25 and g χ = 1 as a function of the dark matter mass m χ and the axial-mediator mass m med .The plane under the limit curves is excluded.The region on the left is excluded by the perturbative limit.The relic density curve [70] is also shown.Figure 7: The 90% CL exclusion limit on the χ-proton scattering cross section in a simplifed model of dark matter production involving an axial-vector operator, Dirac DM and couplings g q = 0.25 and g χ = 1 as a function of the dark matter mass m χ .Also shown are results from three direct dark matter search experiments [72][73][74].values of g * is shown in Fig. 8: for the maximal coupling value of 4π, the truncation has almost no effect; for lower coupling values, the exclusion limits are confined to a smaller area of the parameter space, and no limit can be set for a coupling value of unity.In the ADD model of LED, the observed and expected 95% CL lower limits on the fundamental Planck mass M D for various values of n are shown in Fig. 9.The values of M D excluded at 95% CL are larger for larger n values: this is explained by the increase of the cross section at the centre-of-mass energy of 13 TeV with increasing n, which is an expected behaviour for values of M D which are not large with respect to the centre-of-mass energy.Results incorporating truncation in the phase-space region where the model implementation is not valid are also shown.This consists in suppressing the graviton production cross section by a factor M 4 D /s 2 in events with centre-of-mass energy √ s > M D .The procedure is repeated iteratively with the new truncated limit until it converges, i.e., until the difference between the new truncated limit and the one obtained in the previous iteration differ by less than 0.1σ.It results in a decrease of the 95% CL limit on M D .The search sets limits that are more stringent than those from LHC Run 1, excluding M D up to about 2.3 TeV for n = 2 and up to 2.8 TeV for n = 6; the limit values are reduced by 20 to 40% depending on n when applying a truncation procedure. 
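The iterative truncation used for the ADD limits can be made concrete with a toy sketch. The event sample, the cross-section model, the excluded visible cross section, and the convergence tolerance below are all invented for illustration, and the reading of the iteration (truncate at the previously obtained limit, then recompute) is our interpretation of the procedure described above, not the collaboration's code.

    import numpy as np

    rng = np.random.default_rng(1)
    sqrt_shat = rng.uniform(0.6, 6.0, size=20_000)   # TeV, hypothetical signal events
    sigma_excluded = 10.0                             # fb, hypothetical visible-cross-section limit
    n_extra = 4                                       # assumed number of extra dimensions

    def visible_xsec(m_d, m_d_trunc=None):
        """Toy visible cross section (fb) at scale m_d, optionally truncated."""
        sigma = 2.0e4 / m_d ** (n_extra + 2)          # toy sigma(M_D) falling as 1/M_D^(n+2)
        w = np.ones_like(sqrt_shat)
        if m_d_trunc is not None:
            hard = sqrt_shat > m_d_trunc
            w[hard] = m_d_trunc ** 4 / sqrt_shat[hard] ** 4   # suppression M_D^4 / s^2
        return sigma * w.mean()

    def md_limit(m_d_trunc=None):
        """Boundary of the excluded M_D region: scan upward until the predicted
        visible cross section drops below the excluded value."""
        for m_d in np.linspace(1.0, 6.0, 501):
            if visible_xsec(m_d, m_d_trunc) < sigma_excluded:
                return m_d
        return 6.0

    limit = md_limit()                     # untruncated limit
    for _ in range(50):                    # iterate truncation at the current limit
        new = md_limit(m_d_trunc=limit)
        if abs(new - limit) < 0.02:        # stand-in for the 0.1 sigma criterion
            limit = new
            break
        limit = new
    print(f"toy truncated limit: M_D > {limit:.2f} TeV")

Because the truncation only removes or suppresses events, the truncated limit is always at or below the untruncated one, which reproduces qualitatively the 20-40% reduction quoted above.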
Conclusion Results are reported on a search for new phenomena in events with a high-p T photon and large missing transverse momentum in pp collisions at √ s = 13 TeV at the LHC, using data collected by the ATLAS experiment corresponding to an integrated luminosity of 3.2 fb −1 .The observed data are consistent with the Standard Model expectations.The observed (expected) upper limits on the fiducial cross section for the production of events with a photon and large missing transverse momentum are 17.8 (25.5) fb at 95% CL and 14.6 (21.7) fb at 90% CL.For the simplified DM model considered, the search excludes mediator masses below 710 GeV for χ masses below 150 GeV.For the EW-EFT model values of M * up to 570 GeV are excluded and the effect of truncation for various coupling values is reported.For the ADD model the search sets limits that are more stringent than in the Run 1 data search, excluding M D up to about 2.3 TeV for n = 2 and up to 2.8 TeV for n = 6; the limit values are reduced by 20-40% depending on n when applying a truncation procedure.Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America.In addition, individual groups and members have received support from BCKDF, the Canada Council, CANARIE, CRC, Compute Canada, FQRNT, and the Ontario Innovation Trust, Canada; EPLANET, ERC, FP7, Horizon 2020 and Marie Skłodowska-Curie Actions, European Union; Investissements d'Avenir Labex and Idex, ANR, Région Auvergne and Fondation Partager le Savoir, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF; BSF, GIF and Minerva, Israel; BRF, Norway; Generalitat de Catalunya, Generalitat Valenciana, Spain; the Royal Society and Leverhulme Trust, United Kingdom. Figure 1 : Figure 1: Production of pairs of dark matter particles (χ χ) via an explicit s-channel mediator, med (left) and production of pairs of dark matter particles (χ χ) via an effective γγχ χ vertex (right). Figure 2 : Figure 2: Graviton (G) production in models of large extra dimensions. Figure 3 :Figure 4 : Figure 3: Distribution of E miss T , reconstructed treating muons as non-interacting particles, in the data and for the background in the 1muCR (left) and in the 2muCR (right).The total background expectation is normalized to the post-fit result in each control region.Overflows are included in the final bin.The error bars are statistical, and the dashed band includes statistical and systematic uncertainties determined by a bin-by-bin fit.The lower panel shows the ratio of data to expected background event yields. Figure 5 : Figure 5: Distribution of E miss T (left) and photon p T (right) in the signal region for data and for the background predicted from the fit in the CRs.Overflows are included in the final bin.The error bars are statistical, and the dashed band includes statistical and systematic uncertainties determined by a bin-by-bin fit.The expected yield of events from the simplified model with m χ = 150 GeV and m med = 500 GeV is stacked on top of the background prediction.The lower panel shows the ratio of data to expected background event yields. 
Figure 6: The observed and expected 95% CL exclusion limit for a simplified model of dark matter production involving an axial-vector operator, Dirac DM and couplings g q = 0.25 and g χ = 1 as a function of the dark matter mass m χ and the axial-mediator mass m med. The plane under the limit curves is excluded. The region on the left is excluded by the perturbative limit. The relic density curve [70] is also shown.

Figure 8: The observed and expected 95% CL limits on M * for a dimension-7 operator EFT model with a contact interaction of type γγχχ as a function of the dark matter mass m χ. Results where EFT truncation is applied are also shown, assuming representative coupling values of 2, 4, 8 and 4π. For very low values of M *, most events would fail the centre-of-mass energy truncation requirement; therefore, the truncated limits are not able to exclude very low M * values. The search excludes model values of M * up to 570 GeV, and the effects of truncation for various coupling values are shown in the figure.

Figure 9: The observed and expected 95% CL lower limits on the mass scale M D in the ADD models of large extra dimensions, for several values of the number of extra dimensions. The untruncated limits from the search of 8 TeV ATLAS data [7] are shown for comparison. The limit with truncation is also shown.

Table 2: Breakdown of the dominant systematic uncertainties in the background estimates. The uncertainties are given relative to the expected total background yield. The individual uncertainties can be correlated and do not necessarily add in quadrature to equal the total background uncertainty.
On quadruples of Griffiths points Tabov (Math Mag 68:61–64, 1995) has proved the following theorem: if points A1, A2, A3, A4 are on a circle and a line l passes through the centre of the circle, then four Griffiths points G1, G2, G3, G4 corresponding to pairs (Δi,l) are on a line (Δi denotes the triangle AjAkAl, j,k,l ≠ i). In this paper we present a strong generalisation of the result of Tabov. An analogous property for four arbitrary points A1, A2, A3, A4, is proved, with the help of the computer program “Mathematica”. Introduction Tabov [1] has proved the following theorem: if points A 1 , A 2 , A 3 , A 4 are on a circle and a line l passes through the centre of the circle, then four Griffiths points G 1 , G 2 , G 3 , G 4 corresponding to pairs (Δ i , l) are on a line (Δ i denotes the triangle A j A k A l , j, k, l = i). Explanation. When a point P moves along a line through the circumcenter of a given triangle Δ, the circumcircle of the pedal triangle of P with respect to Δ passes through a fixed point, called Griffiths point, on the nine-point circle of Δ. The pedal triangle of P with respect to Δ is the triangle the vertices of which are feet of the perpendiculars from P to the sides of Δ. A very simple construction of the Griffiths point for a pair (Δ, l) is given in [2]. Namely, we project orthogonally the intersection points of l and the circumcircle of Δ onto the sides of Δ. The projections of each of these points are collinear and the common point of the two lines is the Griffiths point associated with (Δ, l). Main results In this paper we present a much stronger generalisation of the result of Tabov. We consider four arbitrary points A 1 , A 2 , A 3 , A 4 , no three of them collinear. By Δ i is denoted the triangle A j A k A l , j, k, l = i. l i is a line passing through the circumcenter of Δ i i = 1, 2, 3, 4. Finally, G i is the Griffiths point corresponding to (Δ i , l i ) i = 1, 2, 3, 4. Theorem. If lines l 1 , l 2 , l 3 , l 4 have a common point at infinity (every two of them are parallel), then points G 1 , G 2 , G 3 , G 4 are collinear. Proof. As in [1] points A 1 , A 2 , A 3 , A 4 are represented by complex numbers a, b, c, d, respectively. Without loss of generality, we may assume that points A 1 , A 2 , A 3 are on the circle of centre 0 and radius 1, i.e. |a| = |b| = |c| = 1. Similarly, we may assume that lines l 1 , l 2 , l 3 , l 4 are parallel to the real axis. are on the circle of centre 0 and radius R instead of 1, then After short calculations we find the number . Now we introduce a new coordinate system by the formula: z = z + c 3 . In the new system, according to (2.1), Then in the former coordinate system we have In an analogous way we obtain Points G 2 , G 3 , G 4 are collinear iff [1] the equality holds. In order to prove it, we use the computer program "Mathematica". The consecutive steps are as follows: First we write the complex numbers a, b, c, d in the form a = cos x + i sin x, b = cos y + i sin y, c = cos z + i sin z, d = R(cos u + i sin u). Beginning from now, all formulae are obtained with the help of "Mathematica". On quadruples of Griffiths points 397 Similarly, c 2 = e 0.5i(x+y) (1 − R 2 ) −2 cos x−y 2 + 2 cos u − x 2 − y 2 (c 2 represents the circumcenter (cos x + cos y + cos z − cos(x + y + z) In an analogous way we obtain Since the above expression is real, the equality (2.2) holds. Obviously, in an identical way we prove that points G 1 , G 2 , G 4 colline and so on. This ends the proof. Remark. 
As we can observe, using of a computer program to obtain so complicated formulae, was necessary. It should be noticed that the results obtained by transforming symbolic expressions with the help of the program "Mathematica" are quite exact. Open Access. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
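Since the proof rests on symbolic computation, a quick numerical spot-check of the generalised theorem is easy to set up. The sketch below (NumPy rather than "Mathematica", and not the authors' code) builds the Griffiths point of each triangle Δ_i by the Simson-line construction recalled in the Introduction, using one common direction for the four parallel lines l_i, and checks that G_1, …, G_4 are collinear up to round-off. The random configuration, seeds, and function names are ours.

    import numpy as np

    def circumcenter(a, b, c):
        """Circumcenter of triangle abc (vertices as complex numbers)."""
        ax, ay, bx, by, cx, cy = a.real, a.imag, b.real, b.imag, c.real, c.imag
        d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
              + (cx * cx + cy * cy) * (ay - by)) / d
        uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
              + (cx * cx + cy * cy) * (bx - ax)) / d
        return complex(ux, uy)

    def foot(p, a, b):
        """Orthogonal projection of p onto the line through a and b."""
        t = ((p - a) / (b - a)).real
        return a + t * (b - a)

    def cross(w, z):
        """2D cross product of vectors represented as complex numbers."""
        return (w.conjugate() * z).imag

    def intersect(p1, p2, q1, q2):
        """Intersection point of lines p1p2 and q1q2."""
        d1, d2 = p2 - p1, q2 - q1
        s = cross(q1 - p1, d2) / cross(d1, d2)
        return p1 + s * d1

    def griffiths_point(a, b, c, direction):
        """Griffiths point of triangle abc for the line through its circumcenter
        with the given direction, via the Simson-line construction of [2]."""
        o = circumcenter(a, b, c)
        r = abs(a - o)
        u = direction / abs(direction)
        p_plus, p_minus = o + r * u, o - r * u          # line meets circumcircle here
        simson = lambda p: (foot(p, a, b), foot(p, b, c))  # two feet fix the Simson line
        s1, s2 = simson(p_plus), simson(p_minus)
        return intersect(s1[0], s1[1], s2[0], s2[1])

    rng = np.random.default_rng(7)
    pts = [complex(x, y) for x, y in rng.uniform(-1, 1, size=(4, 2))]
    direction = complex(*rng.uniform(-1, 1, size=2))     # common direction of l1..l4

    g = [griffiths_point(*[pts[j] for j in range(4) if j != i], direction)
         for i in range(4)]
    # Collinearity: both cross products should vanish up to round-off (~1e-15).
    print(cross(g[1] - g[0], g[2] - g[0]), cross(g[1] - g[0], g[3] - g[0]))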
Artificial neural network potentials for mechanics and fracture dynamics of two-dimensional crystals Understanding the mechanics and failure of materials at the nanoscale is critical for their engineering and applications. The accurate atomistic modeling of brittle failure with crack propagation in covalent crystals requires a quantum mechanics-based description of individual bond-breaking events. Artificial neural network potentials (NNPs) have emerged to overcome the traditional, physics-based modeling tradeoff between accuracy and accessible time and length scales. Previous studies have shown successful applications of NNPs for describing the structure and dynamics of molecular systems and amorphous or liquid phases of materials. However, their application to deformation and failure processes in materials is still uncommon. In this study, we discuss the apparent limitations of NNPs for the description of deformation and fracture under loadings and propose a way to generate and select training data for their employment in simulations of deformation and fracture simulations of crystals. We applied the proposed approach to 2D crystalline graphene, utilizing the density-functional tight-binding method for more efficient and extensive data generation in place of density functional theory. Then, we explored how the data selection affects the accuracy of the developed artificial NNPs. It revealed that NNP’s reliability should not only be measured based on the total energy and atomic force comparisons for reference structures but also utilize comparisons for physical properties, e.g. stress–strain curves and geometric deformation. In sharp contrast to popular reactive bond order potentials, our optimized NNP predicts straight crack propagation in graphene along both armchair and zigzag (ZZ) lattice directions, as well as higher fracture toughness of ZZ edge direction. Our study provides significant insight into crack propagation mechanisms on atomic scales and highlights strategies for NNP developments of broader materials. Introduction Understanding fracture mechanics and crack propagation is key to predicting and controlling mechanical behaviors for materials processing and subsequent materials applications. In many materials, crack propagation under loading is an overriding failure and, therefore, one of the critical problems in materials science. Accurate computational modeling of crack propagation, thus, becomes an essential tool for The size was selected to avoid self-interaction when we consider the radius cutoff of local descriptors of NNP (c) Energy comparison from the training and validation sets. direction does not occur in a straight line, even though a previous high-resolution transmission electron microscope (TEM) study reported both straight AC-and ZZ-torn graphene [32]. Two-dimensional materials are ideal for validating atomistic models of materials failure under the effects of vacancies, bilayer, crack directions, and folding by comparing the fracture patterns observed through advanced TEM techniques [33][34][35][36]. A recent in-situ TEM with reactive molecular simulation has shown that lattice distortion at the crack tip in 2D materials, WS 2 with a hexagonal lattice, can drastically change the entire path of crack propagation [35]. Also, theoretical comparisons between DFT and AIREBO show that the lattice distortion under tensile loading is significantly different [37]. 
Therefore, we hypothesized that developing an NNP of graphene for lattice deformation and fracture under various loadings could provide a more faithful prediction of crack propagation in graphene, especially along the AC direction. Furthermore, establishing a data preparation process can provide a foundational workflow to develop NNPs for mechanics and fractures of other materials. A schematic of the current study is shown in figure 1(a). In a previous study [37], we showed that DFTB could correctly predict both mechanical behaviors and lattice deformation of graphene under various loading conditions, in good agreement with DFT calculations. Therefore, we here utilized DFTB to generate extensive data for possible fracture scenarios of graphene under various loadings by mixing two tensile and one shear deformation. The data are reduced and selected based on deformation and energy differences to improve the generalization during the training. Then, we explored how the data selection affects the accuracy of the trained NNPs and the reliability evaluated based on physical properties such as deformation and stress-strain curves. In the end, we performed simulations for the crack propagation of graphene with a sharp crack using the trained NNP. We compared the results with an approach based on a popular empirical bond order potential, AIREBO. The results show better agreement with previous experiments regarding the resulting edge structures and the frequency of the type of torn edges. Generation of data The MDs simulations with DFTB for data generation were performed via the large-scale atomic/molecular massively parallel simulator (LAMMPS) package [38]. DFTB calculations were performed at each time step through the DFTB+ package [39], utilizing a previously developed interface for LAMMPS [40]. We employed the 3OB [41] C-C parameters with the DFTB3 scheme because the stress-strain behaviors are well-matched with those from DFT calculations with the PBE functional [40]. We generated 10 000 data points through the NVT ensemble at 400 K with the time step of 0.5 fs to obtain reference RMSE of the relative energy from the canonical data generation. Then, we obtained data points for the deformation and fracture under various loading. Static loading or quasi-static loading with full energy minimization has a limitation for the training data set because all atoms are located in the energy minimized positions. We need slightly perturbated coordinates to train the NNP to distinguish the contribution of a single atomic energy. Therefore, we utilized dynamics loading for the data generation. We consider different loading directions by mixing the loadings along x, y (pure tensile), and xy (pure shearing) directions as (v x , v y , v xy ). We prepared 361 directions with constant velocity 400 m s −1 with 0.5 fs time step. Each direction has 2000 steps, so the total data number is 722 000. Here, the loading speed is much faster than what is desired to provide reliable behaviors, which is under 20 m s −1 [40]. The effect of the loading speed becomes critical when the speed is too fast for the system to have enough time to relax the structures under the deformation. So, after every 20 steps, we included a small step of energy minimization to overcome the delay. The number of steps was tuned to match the stress-strain curve under shear loading to the results from the quasi-static loading. 
At each step, we deformed the simulation box by about 0.002 Å, resulting in a total deformation for each loading of about 4 Å. Selection of data From the 722 000 data points, we built neighbor lists between data points based on the deformation of the simulation box (dx, dy, dxy). We consider the distance (δr) between data as an indicator of the deformation similarity. Then, we sequentially deleted the data but saved it if the energy difference in the list is larger than δE. We utilized vales of δr (0.01 Å-0.2 Å) and δE (1 kcal mol −1 -50 kcal mol −1 ) from the original 722 000 data points, and the number of reduced data from (δr, δE) is listed in table 1. Training We utilized TorchANI library and its setting for training the NNs. For training, 80% of data was used, while 20% of data was utilized for validation with a small mini-batch size of 64. We note that we did not explicitly prepare a specific test set in this study because we utilized most hyperparameters suggested in the previous study of TorchANI [42]. Instead, we evaluated the performance of each model by the same reference data (δr = 0.01 Å, δE = 1 kcal mol −1 , ∼450 k). Therefore, our selected model from the data set (e.g. δr = 0.1 Å, δE = 1 kcal mol −1 , ∼250 k) was trained without 200 k data in both training and validation for their final performance. The loss function is defined as where α is a parameter to determine the contribution of forces, and we used 0.1. The Adam optimizer was utilized with weight decay for the weights [43,44] (weight decays for two hidden layers are set 1 × 10 −5 and 1 × 10 −6 , respectively, others are default values) and stochastic gradient descent (SGD) [45] for the biases (learning rate = 0.001, others are default values), as suggested in the previous study. The weights were initialized by Kaiming initialization [46] with the normal distribution, and the initial values of biases were zeros. We utilized a learning rate scheduler (a function 'reduced lr on plateau' in Pytorch) for both Adam and SGD with factor = 0.5, patience = 100, and threshold = 0. Others are default values). We took the best model for the validation set during the entire epochs. MDs simulations for crack propagation We prepared the pristine graphene of 10 nm by 20 nm with a 2 nm sharp crack along the y direction with periodic boundary conditions. To avoid the interaction between image cells, 20 nm space along the y direction and 3.35 nm space along the z direction were inserted. Instead of full dynamic loading, we applied a 0.01 strain at each iteration to stretch to a 0.04 strain along the x direction with structural relaxations. Then, we applied dynamic tensile loading along the x direction at low temperature, 10 Kelvin. The loading speed was 2.0 m s −1 , and the time step was set to 1 fs. PyTorch interface with LAMMPS We utilize the python functions in LAMMPS (v. 29OCT20) to utilize the python code in the python environment. Through the python environment, we can easily call python library. Utilizing TorchANI, we calculate the atomic environmental vectors (AEV) for NNs. Forces and stress are calculated with the given coordinates and simulation box through the autograd engine in PyTorch [47] and updated at each time step. Results and discussions One of the most critical parts of NNP development is the training data set. The data for training should cover the essential features of the problem-specific configurations. 
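For reference, the δr/δE screening introduced in "Selection of data" above can be written out as a short sketch. The exact bookkeeping (ordering of the sequential deletions, neighbour-list construction) is our reading of the procedure, and the stand-in data and variable names are hypothetical; for the full ~722 000-point set a cell list over the deformation space would replace the brute-force distance computation.

    import numpy as np

    def select_data(deformations, energies, delta_r, delta_e):
        """Greedy reduction of (deformation, energy) samples.

        deformations : (N, 3) array of box deformations (dx, dy, dxy), Angstrom
        energies     : (N,) array of total energies, kcal/mol
        A sample is discarded when an already-kept sample lies within delta_r in
        deformation space and differs in energy by no more than delta_e."""
        kept_idx, kept_def, kept_en = [], [], []
        for i, (d, e) in enumerate(zip(deformations, energies)):
            if kept_def:
                dist = np.linalg.norm(np.asarray(kept_def) - d, axis=1)
                close = dist < delta_r
                if np.any(close & (np.abs(np.asarray(kept_en) - e) <= delta_e)):
                    continue          # redundant: a similar sample is already kept
            kept_idx.append(i)
            kept_def.append(d)
            kept_en.append(e)
        return np.array(kept_idx)

    # Hypothetical usage with random stand-in data (the real set has ~722,000 points):
    rng = np.random.default_rng(0)
    defs = rng.uniform(0.0, 4.0, size=(5000, 3))
    ens = rng.normal(0.0, 30.0, size=5000)
    keep = select_data(defs, ens, delta_r=0.1, delta_e=1.0)
    print(f"kept {keep.size} of {ens.size} samples")

Larger δr (or smaller δE) removes more near-duplicate configurations, which is exactly the knob scanned in table 1.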
Previous NNPs have been trained from the data usually generated from first principles DFT-based MDs simulations and conformal searches based on the normal mode analysis [15,16,48]. The initial training set is generally not sufficient for the desired accuracy. Therefore, adaptive and active learning approaches have been proposed and applied [49,50]. The basic principle of such methods is to detect new data not contained in the initial training data set by analyzing the data using configuration fingerprints or comparing values from multiple models of NNPs, an ensemble, or a so-called committee. Iteratively searching the new data, training, and sampling provide better data sets in the end. However, these approaches are not sufficiently good for fracture dynamics if the initial data set is not accurate enough to describe the dynamics during the failure. We first tested previously trained models of ANI-1x [16], ANI-1cxx [51], and ANI-2x [52] as provided in the TorchANI library [42], where the data sets include the deformed geometries of small organic molecules from normal mode sampling. We examined the reliability with a single-layer graphene system consisting of 24 atoms under three different loadings (shown in figure 1(b)) by evaluating stress-strain curves and the deformation of bond length (l 1 , l 2 , and l 3 ) and angles (θ 1 , θ 2 , and θ 3 ). Since we utilized local descriptors with a radius cutoff ∼5 Å in TorchANI, the unit cell of 24 atoms with system size (∼7.4 Å × 8.5 Å) was selected as a compromise to avoid self-interaction effects on one hand and allow sufficiently large data creation with DFTB for training the NNP. A newly developed PyTorch interface in LAMMPS was utilized for communicating energy, forces, and stress with the TorchANI python library. Figures S1-S3 show the results of stress-strain curves and the deformation through ANI models. We note that the data sets do not explicitly include graphene information, but interestingly, the NNPs from ANIs can well describe graphene's behaviors under small deformation. As expected, however, they clearly fail for fracture behaviors and large deformation. Therefore, we designed data generation by mixing three loading directions along the x, y, and xy directions and generated more than 700 000 data points. Then, we trained new NNPs from scratch using the TorchANI [42] tool. The structure of the utilized NNP in the current study is shown in figure 2 and the comparison between other ANI models is in table S1. From the actual coordinates (q), AEV (also known as a symmetry function, G) are utilized as input to be invariant under translation and rotational transformation and the permutation of the same atom types [15]. There are two parts in the AEV: radial and angular terms from two atoms (i and j with distance R ij ) and three atoms (i, j, and k with two distances R ij , R ik , and one θ ijk ), respectively [42]: where η controls the width of Gaussian function with multiple R s for probing specified radial environments (m is an index for R s ); ζ controls the width of probing as η; θ s decides the specific region in the angular environments as R s . f C is a cutoff function to change values to zero at R C smoothly, defined as for R ⩽ R C and 0 for R > R C . AEV from each atom is an input for the NN to estimate the single atomic energy. We followed the parameters of AEV and the structures of nodes and layers from ANI-2x: the radial term has 16 different radii, and the angular term has 4 angles and 8 radii [52] shown in figure 2(b). 
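The radial and angular AEV terms referred to above are the standard ANI symmetry functions of Ref. [42]; since the corresponding equations did not survive extraction, the usual forms are reproduced here as a reconstruction, with the parameter meanings as described in the text:

G_i^{\mathrm{R}}(m) = \sum_{j \ne i} e^{-\eta \left(R_{ij} - R_s^{(m)}\right)^2} f_C(R_{ij}),

G_i^{\mathrm{A}} = 2^{1-\zeta} \sum_{j,k \ne i} \bigl(1 + \cos(\theta_{ijk} - \theta_s)\bigr)^{\zeta}
\exp\!\left[-\eta \left(\frac{R_{ij} + R_{ik}}{2} - R_s\right)^{2}\right] f_C(R_{ij})\, f_C(R_{ik}),

f_C(R) = \tfrac{1}{2}\cos\!\left(\frac{\pi R}{R_C}\right) + \tfrac{1}{2} \ \ \text{for } R \le R_C, \qquad f_C(R) = 0 \ \ \text{for } R > R_C.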
The NN has three hidden layers with Gaussian error linear unit activation function [53] to add non-linearity. Next, we checked the performance of the NN structures and training processes from the data generated from graphene's MD simulation at equilibrium states at 400 Kelvin. We utilized a root mean square error (RMSE) for the evaluation of the accuracy for both energy and force components as a kcal mol −1 energy unit. Because some previous works utilized a mean absolute error (MAE) with meV atom −1 energy unit as metrics, we also added the MAE values with meV atom −1 . A RMSE in the relative energy lower than 0.8 kcal mol −1 (RMSE = 1.5 meV atom −1 ∼ 0.034 kcal mol −1 atom −1 , MAE = 0.6 meV atom −1 ∼ 0.014 kcal mol −1 atom −1 ) is achieved around 200 epochs, which we consider a very high accuracy from the perspective of computational chemistry, where a threshold 1 kcal mol −1 (for small molecules with ∼20 atoms) in accuracy is commonly considered a 'gold standard' . However, our result clearly shows that an agreement between NNP and ground truth data in terms of relative energy does not guarantee correct physical properties. As shown in figure S4, the failure behaviors are not correctly described. Figure 3(a) shows a naïve way to save data under loading, recording data based on constant deformation (δr) or at a constant time interval. There are two apparent problems with this approach. First, it is likely to miss essential data during the failure process, where the configurations drastically change in a very short time. Second, the many similar data are close to each other near the non-deformed structures, which probably hinders the training due to the data imbalance [54]. Therefore, we utilized a constant energy difference (δE) to select data, as shown in figure 3(b). It would have better chances to capture the key data during the failure process even with the same number of data points. Figure 3(c) shows the schematic to represent the data distributions with the two main directions of the loadings: x and xy. From the total data, we built neighbor lists of each data point with a defined cutoff in the deformation space, δr. Then, the data in the neighbor list is sequentially removed if the energy difference is not bigger than the predefined criterion, δE. Table 1 lists the parameters: δr and δE with the screened data numbers. We checked the stress-strain curves of the trained model from all data without any reduction, as shown in figure S5. As expected, it shows much better behaviors than ANIs or the trained model from NVT ensemble trajectories because the current data set explicitly includes various fracture scenarios. However, it fails to describe the fracture patterns and stress-strain after fracture along the x direction loading in figure S5(a). The energy minimization during the quasi-static loading should result in complete bond breaking between broken edges. Figures 4(a) and (b) show the RMSE of energy and force components from training, validation, and total data from each data set selected from the above-mentioned approach as varying δr with fixed δE = 1 kcal mol −1 . We note that the data from larger δr is selected from the data set of smaller δr, which means the smaller data sets always belong to the larger data sets. Therefore, the difference of RMSE, e.g. better accuracy of δr = 0.1 Å than that of δr = 0.05 Å, does not come from the data difference but from better generalization with reducing the overfitting (See SI discussion). 
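The model comparison described above (re-evaluating every candidate NNP on one common reference set and ranking by RMSE) can be sketched in a few lines. The predictions, noise levels, atom count, and naming below are hypothetical placeholders, and the ranking by force-component RMSE mirrors the selection criterion used later in the text.

    import numpy as np

    def rmse(pred, ref):
        return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(ref)) ** 2)))

    def evaluate(models, e_ref, f_ref, n_atoms=24):
        """Energy RMSE (per atom) and force-component RMSE on the common
        reference set; the model with the lowest force RMSE is flagged."""
        scores = {name: (rmse(e_pred, e_ref) / n_atoms, rmse(f_pred, f_ref))
                  for name, (e_pred, f_pred) in models.items()}
        best = min(scores, key=lambda k: scores[k][1])
        return scores, best

    # Hypothetical predictions for two candidate data-selection settings:
    rng = np.random.default_rng(0)
    e_ref = rng.normal(size=2000)
    f_ref = rng.normal(size=(2000, 24, 3))
    models = {
        "dr=0.01, dE=1": (e_ref + rng.normal(scale=0.6, size=e_ref.shape),
                          f_ref + rng.normal(scale=1.4, size=f_ref.shape)),
        "dr=0.10, dE=1": (e_ref + rng.normal(scale=0.8, size=e_ref.shape),
                          f_ref + rng.normal(scale=1.1, size=f_ref.shape)),
    }
    scores, best = evaluate(models, e_ref, f_ref)
    print(scores, "-> selected:", best)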
We assume the original data set has too many similar data to prevent generalization. So, we evaluate the accuracy of all trained models based on the first selected data set from δr = 0.01 Å and δE = 1 kcal mol −1 as shown in figures 4(c) and (d). . RMSE of relative energy per atom (a) and force components (b) from training, validation, and total data from each data set selected from δr with fixed δE = 1 kcal mol −1 . δr = 0 indicates the entire data points without reduction. Reevaluation RMSE of relative energy (c) and force components (d) with the same data set (δr = 0.01 Å, δE = 1 kcal mol −1 ). RMSEs depend on data set, and even smaller numbers of data can have higher accuracy. Also, RMSE does not guarantee the physical properties of the models. The RMSE of relative energy from the model (δr = 0.01 Å, δE = 1 kcal mol −1 ) shows the lowest value, but RMSE of force components from the model (δr = 0.1 Å, δE = 1 kcal mol −1 ) show the lowest value. Also, we investigated the stress-strain curves and deformation for the reliability of the trained NNPs. The models trained from δr = 0.01 Å to δr = 0.05 Å data have problems near the fracture point, as indicated with arrows in figures S6-S8 (See figures S9-S11 for other conditions). In terms of the stress-strain curves and deformation, the trained model from δr = 0.2 Å looks better than that from δr = 0.1 Å, but the model from δr = 0.1 Å was selected for the next stage because the RMSE of force components exhibits the lowest value. This is a reasonable choice because the RMSE of atomic force is a good indicator for overfitting (see SI discussion). We also tested how the choice of δE affects the accuracy of models with δr lower than 0.1, as shown in figure S12. Since the number of data decreases as the value of δE increases, it is expected to lose accuracy. However, this also does not monotonically decrease, and data with δr = 0.1 Å and δE = 5 kcal mol −1 is reasonably optimal with the number of data points, 78 000 (= 78 k). Considering the fact that the reference data (∼450 k) is six times larger, the loss of accuracy of energy and force components are only 0.02 kcal mol −1 atom −1 (RMSE = 0.9 meV atom −1 ) and 0.12 kcal mol −1 Å −1 (5 meV Å −1 ), respectively. Also, the NNP describes the stress-strain and deformation under three loadings very well, as shown in figure S13. This kind of data reduction without losing the essential data is important for active learning. However, such data augmentation is out of the current scope, and we selected the model from the data (δr = 0.1 Å, δE = 1 kcal mol −1 ) for the next simulations. Figure 5 shows the stress-strain curves and deformation of the selected model under the loading along the x direction. A previous microscopy study reported both straight AC and ZZ torn graphene edges through the high-resolution TEM [32]. Also, torn lines along the AC edge are twice more frequently observed than the torn lines along the ZZ edge [55]. In previous theoretical studies, ReaxFF and AIREBO have been utilized to describe the mechanics and crack propagation from atomistic modeling. However, ReaxFF has some limitations in describing mechanics and stress-strain curves near failure compared to DFT calculations [56]. Especially, brittle crack propagation is hindered by the stiffening effect near the point of failure. Instead, AIREBO is preferred because the stress-strain curves of pristine graphene are well-matched with DFT calculations once its bond order switching function is controlled [31]. 
MD simulations of pristine graphene with AIREBO consistently show that the fracture toughness along the ZZ edge is lower than the AC edge [57][58][59]. Also, once the crack propagates along the AC direction, the fracture pattern from the AIREBO shows the ZZ torn edge preference. Although nanopore formation through the electron beam prefer to form the ZZ edge [60], the configuration does not come from the mechanics or crack preference but from the kinetic stability during the reconstructions [61]. This shows that empirical forcefields are limited for predicting pristine graphene's torn edge configuration and the dynamics of crack propagation. Finally, we performed MD simulations using both selected NNP and AIREBO to test our hypothesis for the crack propagation along the AC direction with a rectangular system with 10 nm × 20 nm with a crack of 2 nm, as shown in figure 6. We performed the crack propagation simulations by combining quasi-static loading and dynamic loading to reduce the computational cost. The NNP results in a straight and clean torn edge in both AC and ZZ crack direction in figure 6 (see movies 1 and 2). As described above, AIREBO predicts that the straight propagation along the AC direction is less likely to occur. Instead, it shows meandering crack paths in figure 6 (see movies 3 and 4) with ZZ crack edges. The obtained stress-strain curves from NNP and AIREBO are shown in figure 6. The notable difference here is the fracture toughness. We estimated the critical energy release rate and fracture toughness in both AC and ZZ crack directions. Considering the brittle behaviors of the stress-strain curve, we can directly estimate the external work as the critical energy release rate (G c ) from the critical stress (σ c , GPa) and the critical elongation (∆l, nm) as where l y (= 20 nm) is the length of the system in the y direction, and ∆a (nm) is the total length of the crack propagation. Fracture toughness is then can be obtained by where E is Young's modulus, and it is assumed as 1 TPa. The obtained values are listed in table 2. The obtained fracture toughness values from AIREBO for AC-edge (zz loading direction, 3.54 MPa m 1/2 ) and ZZ-edge (ac loading direction, 3.29 MPa m 1/2 ) agree with those from the previous studies with AIREBO [59]. The NNP predicts the fracture toughness along the AC-edge (zz loading direction, 3.10 MPa m 1/2 ) is lower than the ZZ-edge (ac loading direction, 3.41 MPa m 1/2 ), which shows an opposite trend to the results of the AIREBO. We note that our focus is on the difference in fracture toughness between ZZ and AC directions. The main factor of our estimations comes from the stored elastic energy before the crack propagation. Therefore, a correction of Young's modulus (calculated elastic constants are listed in table 3 and table 4) estimation without assuming 1 TPa does not affect the relative fracture toughness, e.g. ratio. The frequency of torn edges in the suspended polycrystalline graphene monolayer depends on the fracture toughness of pristine graphene. The prediction from the NNP agrees well with those previous observations, while AIREBO predicts opposite behaviors in terms of the frequency of torn edge observation and torn AC edge configuration. The limitation of AIREBO comes from the softened angle stiffness under tensile loading, which also has been compared with DFT and DFTB calculations in the previous study [40]. 
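The fracture-toughness relation quoted above did not survive extraction; it is presumably the standard plane-stress conversion between the critical energy release rate G_c and Young's modulus E (our reconstruction),

K_{IC} = \sqrt{G_c\, E},

which gives K_IC in Pa·m^{1/2} when G_c is expressed in J m^{-2} and E in Pa.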
Figure 7 shows graphene's bond lengths and angles at the crack tip just after the first bond breaking during the crack propagation. The angular deformation of NNP shows a lower angle (∼124 • ) than that of AIREBO (133 • ), which results in the elongation of the inner side bond length, l 2 (∼1.7 Å) than the outer side bond length, l 3 (∼1.6 Å). The relative bond lengths between l 2 and l 3 determine the crack path, and AIREBO prefers ZZ crack paths because l 3 (1.72 Å) is longer than l 2 (1.67 Å). In another 2D material, WS 2 , these lattice distortions can result in anisotropic crack dynamics even with the same surface energy [35]. Bond and angle at the crack tip after the first bond breaking along the AC edge direction from NNP (a) and AIREBO (b). Dotted red arrows represent the crack propagation. The angular deformation of trained NNP shows a lower angle than that of AIREBO, which results in the elongation of the inner side bond length, l2, than the outer side bond length, l3, from NNP. AIREBO shows propagation along the zigzag edge because l3 is longer than l2. Conclusion In summary, we proposed generating and selecting the training data for the deformation and fracture of crystals and applied it to 2D crystal graphene through DFTB calculations. We utilized the previously developed PyTorch library, TorchANI, to train the models of the Behler-Parrinello type's NNP. For MD simulation, we developed a PyTorch interface with LAMMPS, which can be expanded to other ML potential libraries through PyTorch. The proposed data reduction process improves the generalization of NNP training by mitigating overfitting. We show that the low RMSEs of energy and force do not automatically guarantee the reliable behaviors of the trained models 3 . The selected model considering physical properties can describe the torn edge configuration observed in the previous studies and explains well the higher frequency of torn AC edge occurrence, which is not possible with other reactive potentials. The proposed work frame can be applied to understand fracture dynamics of 2D and bulk crystals using NNPs. We wish to emphasize that the current NNPs are limited as they should only be used for the simulation of stress-induced fracture and failure of pristine graphene. The training data set does not explicitly have failure dynamics of bilayer graphene, diamond, amorphous carbon network, carbon nanotube, graphyne, grain boundary, vacancies, folding, etc. Therefore, new data should be generated, or transfer learning/active learning is required to provide a quick path for developing NNPs of other applications. However, NNP is more useful for a large system in terms of computational speed than first principles-based electronic structure approaches. Also, the NNP is very flexible in capturing non-linear deformation-stress behaviors well (figure 5), which is challenging with the fixed functional form, such as harmonic equations in classical forcefields. In the previous study, we observed that it is challenging to match both deformation and non-linearity of stress-strain behaviors of graphene simultaneously with those from DFT even through reactive forcefields [37]. Crack dynamics is one of the exciting applications for NNPs due to its intrinsic multiscale feature. In this study, we only focus on the data generation and selection from the mixed loading and data reduction using DFTB as the reference method. However, the selected data can be utilized for high throughput calculation with more accurate methods. 
Also, active learning and transfer learning from the selected data would be interesting topics in the future. Data availability statement The data that support the findings of this study are openly available at the following URL/DOI: https:// github.com/gsjung0419/LammpsANI.
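The kind of quick test reported in the Results section for the pretrained ANI models (evaluating a TorchANI potential on a strained periodic graphene cell) can be reproduced with a few lines through TorchANI's ASE interface. The sketch below uses the published ANI-2x model as a stand-in for the custom NNP of this work; the unit cell, vacuum, strain value, and variable names are ours.

    import numpy as np
    import torchani
    from ase import Atoms

    # Two-atom graphene unit cell (bond length a/sqrt(3) ~ 1.42 Angstrom), vacuum along z.
    a = 2.46
    cell = np.array([[a, 0.0, 0.0],
                     [-a / 2.0, a * np.sqrt(3.0) / 2.0, 0.0],
                     [0.0, 0.0, 20.0]])
    atoms = Atoms("C2",
                  positions=[[0.0, 0.0, 10.0], [0.0, a / np.sqrt(3.0), 10.0]],
                  cell=cell, pbc=True)

    # Pretrained ANI-2x model wrapped as an ASE calculator (stand-in for the custom NNP).
    atoms.calc = torchani.models.ANI2x().ase()
    e0 = atoms.get_potential_energy()

    # Apply a small in-plane strain and re-evaluate energy and forces.
    strained = atoms.copy()
    strained.calc = atoms.calc
    new_cell = cell.copy()
    new_cell[:2, :2] *= 1.01                  # 1% biaxial in-plane strain
    strained.set_cell(new_cell, scale_atoms=True)
    print(strained.get_potential_energy() - e0)
    print(strained.get_forces())

As discussed above, such pretrained molecular models behave reasonably for small deformations of graphene but fail for large strains and fracture, which is what motivated the purpose-built training set.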
Political turmoil and banks ’ stock returns : Evidence from Turkey ’ s 2016 coup attempt Article history: Received May 15 2020 Received in revised format May 16 2020 Accepted June 29 2020 Available online July 4 2020 Turkey experienced an extreme political event on Friday July 15 2016 in the form of an attempted coup. This paper examines the impact of this event on the components of the Banks Index of the Istanbul Stock Exchange using event study methodology. Results show that the banks’ abnormal returns (ARs) were a statistically significant negative from +2 to +6 days with the peak on day +3 when the government declared a state of emergency. Furthermore, when the banks’ stocks are compared with the overall market, they display lower volatility during the study period. These findings evidence the importance attached to political factors and particularly political instability in shaping investment decision making. by the authors; licensee Growing Science, Canada 20 © 20 Introduction Political events are known to be a source of stock market volatility due to the uncertainty they bring (e.g. Bash & Alsaifi 2019;Günay, 2019). Turkey is one country that has experienced political turbulence and has been the subject of studies of stock market impacts. These studies have generally used the banking institutions listed in the Istanbul Stock Exchange (ISE) that constitute the Banks Index. While political instability is associated with reduced investment, there is a lack of evidence regarding the specific impact of the Turkish attempted coup of 2016 on the ISE. Therefore, providing evidence to fill this research gap would contribute significantly to the literature. Furthermore, banks are often excluded from stock return studies on the basis they are different in important respects from non-financial stocks, reducing the available literature on this important sector (European Central Bank, 2006). With this motivation, we analyze the listed stocks on the Banks Index of the ISE to examine the effect of the coup on banks' returns. Focusing on the banking sector is justified by the commanding role they play in providing business investment in the country (Kartal et al., 2018) and their success in attracting foreign direct capital flows which has contributed to the rising value of bank stocks (Acar & Temiz, 2017). On Friday July 15, 2016, a military coup was attempted to overthrow the Turkish government. It started after the stock market had closed and had been quashed within just a few hours (BBC News, 2016a). The government declared a state of emergency on Thursday July 21 2016 (BBC News, 2016b). A military coup is an extreme political event and can be expected to lead to rapid and highly negative investor reaction. Section 2 presents a review of the literature on political crises and stock returns. Section 3 presents the research design, followed by Section 4, which shows the research results and discussion. Section 5 concludes. 1162 Literature Review Political risk is a component of financial risk alongside market risk, credit risk and operational risk and is particularly significant in emerging economies (Günay, 2016(Günay, , 2019. The political domain has long been understood to be an important part of the external environment in which businesses operate (Aguilar, 1967). Most of the time, this domain affects a business when a government interacts with the economy, such as when it regulates or when it determines the balance between public and private sectors. 
In addition to the ongoing significance of politics, there are periods when the political domain has a more acute effect on businesses. These political events can be extreme in their impact. An example of an extreme political event is an attempted coup such as the one experienced by Turkey on Friday July 15 2016. An unsuccessful coup is an indicator of political instability, alongside others including terrorist attacks, the detention of opposition leaders, riots, protests, and media censorship (Jadallah & Bhatti, 2019). Changes of government as a result of a democratic process effect economic growth positively while undemocratic regime change has a negative effect (Feng, 1997). The literature has examined a broad range of political events for their effects on stock market returns, including both multi-country and single country effects. Amihud and Wohl (2004) found a strong effect on stock prices from the anticipation of Saddam Hussein's fall from power in 2003, suggesting that it was associated with ending a costly and destabilizing war. Chen and Siems (2004) examined the historical effect of terrorism on stock markets, finding a significant negative effect on returns. The Iraq war was also associated with increased risks and causing a fall in equity prices and U.S. treasury yields (Rigobon & Sack, 2005). The September 11, 2001, terrorist attacks on the U.S. have also been investigated for their effect on risk perception and tail dependence (Straetmans et al., 2008). Using a worldwide dataset of 447 political crises, Berkman et al. (2011) report that the beginning of a crisis is associated with increased stock market volatility and conversely the ending of a crisis sees reduced market volatility. The Brexit referendum, held in the U.K., is another political event that had a significantly negative effect on stock returns in many countries (Arora, 2017). The Brexit effect on stock market returns across a large number of indices was also confirmed by Burdekin et al. (2017). There are also single country studies on political crises and stock market returns. On March 1, 2003, Turkey's parliament effectively blocked the U.S. from stationing military forces on its soil, a highly significant political event. Aktas and Oncu (2006) found that the investor reaction to this event supported the efficient market hypothesis as the market responded appropriately to new information resulting in neither an overreaction nor an underreaction. The Arab Spring, starting in January 2011, was a period of political instability in the Arab world. It was to have a sustained negative effect on stock returns in some countries. For example, in Egypt, from January 2011 to January 2012, half the value of the benchmark index, the EGX 30, had been wiped out (Lehkonen & Heimonen, 2015). Ayub (2017) examined the effect of Pakistan president Benazir Bhutto's December 2007 assassination on the Karachi Stock Exchange. He found that while this unexpected political event caused a large drop in stock values on the first trading day, over the longer term there was no significant underreaction or overrection indicated. The uncertainty surrounding the disappearance of Jamal Khashoggi on October 2, 2018, was found to have a strongly negative effect on stock returns on firms listed on the Saudi Stock Exchange (Bash & Alsaifi, 2019). However, not all political events lead to such a sustained reaction. 
The negative effect of Thailand's military coup on September 19, 2006, on the SET index lasted only minutes, and by the end of the first trading day, the index was up 3.1% (Lehkonen & Heimonen, 2015). The event had brought an extended period of political crisis to an end (Hewison, 2008), suggesting investors viewed the coup as a return to stability and, therefore, a positive influence on investment. The banking sector is unsurprisingly one of substantial research interest due to its strategic role and potential for knock-on effects across the whole economy (Mirzaei et al., 2014; Motamedi, 2013). Banking sector returns are significantly and positively associated with future economic growth in both developed and emerging economies (Cole et al., 2008). As a highly regulated sector, banking is particularly sensitive to some aspects of the political dimension; however, studies of bank returns in volatile markets have tended to focus on financial crises rather than political ones (Cornett et al., 2009; Peni & Vähämaa, 2012; Weigand, 2016). The extent to which banks' stock returns are affected by market volatility could depend on their size, as some of the larger banks may be considered too big to fail (Chira et al., 2013). In summary, political crises and political events of varying kinds are known to be associated with abnormal stock returns and market volatility. The precise nature of investor reaction to political events appears from the evidence to depend on the type of event, the context, and the availability of new information. Short-term reaction, particularly on the first trading day, and potentially based on herding behaviour (Sinha, 2015), may be misaligned with informed medium- and longer-term investor reaction.

Research Design

Market efficiency assumptions dictate that event effects are immediately reflected in firms' stock prices. Thus, the banks' returns following Turkey's coup attempt are estimated using the event study method. Under this approach, means and medians are used for estimating banks' returns related to specific events (Alsaifi et al., 2020). The first task for an event study is determining the appropriate study period; this period will be used for estimating abnormal returns (ARs). The coup date of July 15, 2016 would normally be designated as day 0; however, as the coup commenced after the market had closed, day 0 becomes the next trading day, which was Monday July 18, 2016. All other trading days are designated in relation to day 0. Hence the immediately preceding trading day is day -1 and the immediately subsequent trading day is day +1. The estimation period extended from day -200 to day -21. The 21-trading-day gap between the end of the estimation period and day 0 is to eliminate any chance of contamination from the event and to mitigate any stationarity concerns. Data are obtained from Thomson Reuters for the 12 banks that constitute the Banks Index on the ISE during the examined period. We calculate the daily returns as simple arithmetic returns based on closing stock prices. Following Brown and Warner (1985), we apply the standard mean-adjusted returns model when calculating the AR for stock i at day t: AR_{i,t} = R_{i,t} − R̄_i, where R_{i,t} is the return of stock i at day t, and R̄_i is the average of stock i's daily returns during the estimation period (−200, −21). After this, we use the T-test for parametric testing, determining the statistical significance of the mean of ARs.
Secondly, for non-parametric, we control for the effect of outliers with the Wilcoxon signed-rank test (W-test), determining the statistical significance of the median of ARs. Results and Discussion Descriptive statistics of the research sample use day -1 data, the day immediately before the day of the attempted coup, as shown in Table 1. In Turkish Lira (TL), the mean market capitalization and total assets were TL160.3 and TL156.7 billion respectively, suggesting our sample consists of large banks. Table 1 also indicates a wide variation in the banks' characteristics; however, the sample is weighted towards banks with a high market capitalization. Table 2 presents the key statistical indicators for banks' ARs from days -10 to +20. With the market closed because of the weekend, investors first opportunity to react to the coup came on Monday July 18 2016 by which time the coup had ended more than 48 hours ago. The Turkish government had moved quickly to reassure the markets offering, through the Central Bank of Turkey, unlimited liquidity (Koc et al., 2016). We find the banks' ARs were statistically insignificant on the first two days of trading after the market had reopened. This may be because of the weekend effect which shock absorbed and because the coup was so short-lived. On day +3, a state of emergency was announced, and a major suppression of government opponents began including mass arrests and purges (BBC News, 2016b). The announcement on day +3 coincides with the peak of negative and highly statistically significant ARs in both tests (i.e T-test and W-test) within the period +2 and +6 that saw a major sell-off and falling prices reflecting investors fear and uncertainty. Beyond this period, confidence returned to the markets including the currency markets where the Turkish Lira had now stabilized. After day +6, while the ARs were still negative, they were not statistically significantly so in both tests. The other statistical indications presented in Table 2 shows the ARs distribution starts to be negatively skewed all days after the attempted coup, indicating the presence of negative outliers. It can also be seen that ARs distribution jumped to be leptokurtic for most days after the attempted coup, confirming extreme values of negative ARs. Fig. 1 shows the banks' ARs based on mean and median during the examined period. The dramatic drop in their value starting from day 0, and their curves stayed in the negative area continuously till day +15. ARs, there is an obvious difference between the sloping curves of banks' ARs and the other two samples. The decline in the ARs of banks' looks relatively smooth compared to the sharp declines seen in the ARs of the BIST100 and BIST30 components, specifically in the period from day 0 to +6. The same outcome can be derived from Fig. 3, which shows that the ARs based on median value of banks' are less volatile compared to the components of BIST100 and BIST30. An additional analysis presented in Table 3 shows the raw trading data of the Banks Index during the examined period. The results are confirmed, through the trading volume in TL and the negative percentage change in the Banks Index on days 0 and +3 (the first trading day and the day of declared the state of emergency) as these characteristics are the highest on those two days. Conclusion The political domain is an important part of the environment in which firms operate. It is also a potential source of instability and uncertainty that affects investor confidence. 
The forcible removal of a government through a coup is an extreme political event, whether the attempt succeeds or fails. Such a failed attempt occurred in Turkey on 15 July 2016. In this paper, we applied event study methodology using mean-adjusted returns. Our results indicate that the banks' ARs were statistically insignificant on the first two days of trading after the market had reopened. This may be partly attributed to the speed with which the coup was quashed and the fact that the markets had been closed in the immediate aftermath, in addition to the weekend effect, which softened the shock. In line with the efficient market hypothesis, investor reaction could also be explained by the ready availability of new information, as the coup and its defeat played out in real time through the media. On the third trading day, when a state of emergency was declared, the ARs became statistically significantly negative, potentially because this declaration was unexpected and was perceived as a source of political instability. However, this negative trend was also short-lived. These findings evidence the importance attached to political factors, and particularly to political instability, in shaping investment decision making. Further analysis compared the banks' mean and median returns to the overall market, represented by the BIST100 and BIST30 components. Banks' returns were found to exhibit lower levels of volatility than the market as a whole. This may be attributable to the Central Bank of Turkey's stated assurance that it would make unlimited liquidity available in the aftermath of the failed coup. Further research could examine the coup's impact on other market indices such as the BIST100 and BIST30, on sectors other than banking, or, rather than stock returns, on the performance of the Turkish Lira.
3,529.4
2020-01-01T00:00:00.000
[ "Political Science", "Economics", "Business" ]
de Finetti Lattices and Magog Triangles Let $B_{n,2}$ denote the order ideal of the boolean lattice $B_n$ consisting of all subsets of size at most $2$. Let $F_{n,2}$ denote the poset extension of $B_{n,2}$ induced by the rule: $i<j$ implies $\{i \} \prec \{ j \}$ and $\{i,k \} \prec \{j,k\}$. We give an elementary bijection from the set $\mathcal{F}_{n,2}$ of linear extensions of $F_{n,2}$ to the set of shifted standard Young tableaux of shape $(n, n-1, \ldots, 1)$, which are counted by the strict-sense ballot numbers. We find a more surprising result when considering the set $\mathcal{F}_{n,2}^{(1)}$ of poset extensions so that each singleton is comparable with all of the doubletons. We show that $\mathcal{F}_{n,2}^{(1)}$ is in bijection with magog triangles, and therefore is equinumerous with alternating sign matrices. We adapt our proof techniques to show that row reversal of an alternating sign matrix corresponds to a natural involution on gog triangles. Introduction Such linear extensions appear under various names, including comparative probability orders, boolean term orders and completely separable preferences; see OEIS A005806 [27]. The number of such linear extensions for 1 ≤ n ≤ 7 is 1, 1, 2, 14, 546, 169444, 560043206, but there is still no known general formula. Herein, we turn our attention to the order ideal B n,2 ⊂ B n of subsets of size at most 2, and count poset extensions of B n,2 that adhere to de Finetti's axiom. We begin with the following definition; its conditions (F1) and (F2) come from de Finetti's axioms [11] for a comparative probability order on the power set P([n]), relaxed here to allow for incomparable sets. We make a few observations about de Finetti lattices. First, any poset extension of B n,m adhering to (F1) and (F2) contains F n,m as a sublattice. Second, it is evident that F n = F n,n and that F n,n−1 ≅ F n . Third, when m = 2, condition (F2) is equivalent to the simpler statement that i < j implies {i, k} ≺ {j, k} for k ≠ i, j. Finally, we note that F n,m is not a total order when 3 ≤ n and 2 ≤ m ≤ n, as certified by the incomparable sets {1, 2} and {3}. Definition 1.2. A de Finetti extension (E, ⪯ E ) of the lattice F n,m is a poset extension that adheres to de Finetti's condition (F2) for all sets X, Y ⊂ [n] that are comparable in E. For 1 ≤ k ≤ m ≤ n, let F (k) n,m denote the collection of de Finetti extensions of F n,m such that each set S ∈ F n,k is comparable with every set T ∈ F n,m . That is, if |S| ≤ k, then S is comparable with every set in F n,m . For convenience, we denote F n,m = F (m) n,m . Figure 1.3 shows how the de Finetti extensions build upon one another to produce the 14 total orders in F 4 . Enumerations of these total orders can also be found in [16,5,10]. When n > m ≥ 3, there are poset extensions of F n,m that do not adhere to (F2). For example, we can extend F n,3 by adding the single comparison {4} ≺ {3, 1} without adding the comparisons that (F2) would force. The m = 2 case is large enough to exhibit some complexity. Herein, we characterize F n,2 = F (2) n,2 and F (1) n,2 . We give a simple bijection between the total orders in F n,2 and shifted standard Young tableaux (shifted SYT) of shape (n, n − 1, . . . , 1); see OEIS A003121 [27]. In these shifted SYT of staircase shape, the first box in row i > 1 is located below the second box of row i − 1. The integers 1, 2, . . . , n(n + 1)/2 are arranged in the boxes so that the rows and the columns are both increasing.
These are equinumerous with the number of strict-sense ballots with n candidates, where candidate k gets k votes and candidate k never trails candidate ℓ for n ≥ k > ℓ ≥ 1; see [2]. For 1 ≤ n ≤ 7, the strict-sense ballot numbers are 1, 1, 2, 12, 286, 33592, 23178480, and there is a general product formula for the nth strict-sense ballot number. Proposition 1.3. The set F n,2 is in bijection with shifted standard Young tableaux of shape (n, n − 1, . . . , 1). Therefore F n,2 is enumerated by the strict-sense ballot numbers. Determining |F (1) n,2 | uncovers a remarkable connection to alternating sign matrices (ASMs); see OEIS A005130 [27]. An ASM is an n × n matrix of 0's, 1's and −1's such that each row or column sums to 1, and the nonzero entries in each row or column alternate in sign. The seven 3 × 3 ASMs are listed in equation (1). Zeilberger famously proved that alternating sign matrices are equinumerous with totally symmetric self-complementary plane partitions (TSSCPPs) [33]. The first seven numbers in this sequence are 1, 2, 7, 42, 429, 7436, 218348, and the general formula is ∏_{k=0}^{n−1} (3k+1)!/(n+k)! (equation (2)). Zeilberger's proof converts ASMs and TSSCPPs into gog and magog triangles, respectively, and then shows that these families of triangles are equinumerous. We prove that F (1) n,2 is in bijection with magog triangles (and hence also in bijection with TSSCPPs), and is therefore equinumerous with ASMs. We use M n to denote the set of magog triangles of size n. Theorem 1.5. The set F (1) n,2 is in bijection with the set of magog triangles M n−1 . This theorem follows immediately from two lemmas. First we prove that de Finetti extensions are in bijection with the (newly defined) family of kagog triangles. Then we give a bijection between kagog triangles and magog triangles. Definition 1.6. A kagog triangle K is an array of nonnegative integers K(i, j) such that (K1) 1 ≤ j ≤ i ≤ n − 1, so the array is triangular; (K2) 0 ≤ K(i, j) ≤ j, so entries in column j are at most j; (K3) K(i, j) ≥ K(i + 1, j), so columns are weakly decreasing; and (K4) if K(i, j) > 0 then K(i, j + 1) > K(i, j), so rows can start with multiple zeros, but then the positive values are strictly increasing. We use K n to denote the set of kagog triangles of size n. Note that a triangle in K n only has n − 1 rows and columns. The elements of K 3 are listed in equation (4); they are ordered so that they biject to the magog triangles listed in equation (3). Lemma 1.7. The set F (1) n,2 is in bijection with the set of kagog triangles K n−1 . Lemma 1.8. The set of magog triangles M n is in bijection with the set of kagog triangles K n . The key to Lemma 1.8 is to convert each of these triangles into a pyramid of stacked cubes, colored gray or white, so that white cubes cannot appear above gray cubes. We offer a generic definition for pyramid construction, which applies to any family T n of triangular arrays that form a distributive lattice using the natural partial ordering T 1 ≺ T 2 whenever T 1 (i, j) ≤ T 2 (i, j) for 1 ≤ j ≤ i ≤ n. This includes magog triangles M n and kagog triangles K n , as well as gog triangles G n (defined below). Definition 1.9. Let T n be a finite distributive lattice of triangular arrays of positive integers T = T (i, j) where 1 ≤ j ≤ i ≤ n, with minimal triangle T min and maximal triangle T max . Define △T to be the two-color pyramid of cubes where the tower of cubes at (i, j) consists of T (i, j) white cubes below T max (i, j) − T (i, j) gray cubes. Define △T n = {△T : T ∈ T n } to be the collection of two-color pyramids.
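Definitions 1.6 and 1.9 are concrete enough to check mechanically. The short sketch below is illustrative code written for this review rather than material from the paper: it tests the kagog conditions (K1)-(K4), counts kagog triangles of small size by brute force (the counts should match the ASM numbers 1, 2, 7, 42, 429, ... by Theorem 1.5 and Lemma 1.8), and builds the two-color pyramid of Definition 1.9.

```python
# Brute-force sanity checks of Definitions 1.6 and 1.9 (illustrative, not from the paper).
from itertools import product

def is_kagog(K):
    """K is a list of rows; row i (1-indexed) holds K(i, 1), ..., K(i, i)."""
    for i, row in enumerate(K):                      # i, j below are 0-indexed
        for j, v in enumerate(row):
            if not 0 <= v <= j + 1:                  # (K2): entries in column j+1 are at most j+1
                return False
            if i + 1 < len(K) and v < K[i + 1][j]:   # (K3): columns are weakly decreasing
                return False
            if j < i and v > 0 and row[j + 1] <= v:  # (K4): positive row entries strictly increase
                return False
    return True

def count_kagog(n):
    """Count the kagog triangles of size n (they have n - 1 rows)."""
    rows = [list(product(*[range(j + 2) for j in range(i + 1)])) for i in range(n - 1)]
    return sum(is_kagog(K) for K in product(*rows))

def two_color_pyramid(T, T_max):
    """Definition 1.9: the tower at (i, j) has T(i, j) white cubes below T_max(i, j) - T(i, j) gray cubes."""
    pyramid = {}
    for i, row in enumerate(T, start=1):
        for j, t in enumerate(row, start=1):
            for k in range(1, T_max[i - 1][j - 1] + 1):
                pyramid[(i, j, k)] = "white" if k <= t else "gray"
    return pyramid

print([count_kagog(n) for n in range(2, 6)])   # expected: [2, 7, 42, 429]
```

The two_color_pyramid helper realizes exactly the construction of Definition 1.9, with the maximal triangle of the family supplied explicitly.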
This two-color pyramid mapping is a variation of the standard interpretation of a triangular array T as a stack of cubes where the tower at (i, j) has height T (i, j). Indeed, we can view the white cubes as present and the gray cubes as absent. In our proof, tracking the absent cubes is essential, so the two-color pyramids are more illuminating. Intuitively, the bijection from magog triangles to kagog triangles corresponds to removing the bottom layer of the magog pyramid, then swapping the colors of the cubes and finally performing an appropriate affine transformation. Given the success of the two-color pyramid view of magog triangles, we conclude the paper by investigating two-color pyramids of gog triangles, which are also known as monotone triangles. Definition 1.10. A gog triangle G is a triangular array of positive integers G(i, j) such that (G1) 1 ≤ j ≤ i ≤ n, so the array is triangular; (G2) G(i, j) ≤ n, so entries are at most n; (G3) G(i, j) < G(i, j + 1), so rows are strictly increasing; (G4) G(i, j) ≥ G(i + 1, j), so columns are weakly decreasing; and (G5) G(i, j) ≤ G(i + 1, j + 1), so diagonals are weakly increasing. We use G n to denote the set of gog triangles of size n. The gog triangles of G 3 are listed in equation (5). Gog triangles are in bijection with alternating sign matrices; we have listed these seven triangles in the same order as the 3 × 3 ASMs in equation (1). The jth row of the gog triangle records the locations of the 1's in the vector obtained by adding the first j rows of the corresponding ASM. Theorem 1.11. There is a natural gog triangle involution f : G n → G n that corresponds to both (1) an affine transformation of two-color pyramids, and (2) reversing the order of the rows of the corresponding ASM. This theorem is a satisfying observation for those interested in ASMs. Background A partially ordered set (or poset for short) consists of a set P and a binary relation ⪯ that is reflexive (x ⪯ x), antisymmetric (if x ⪯ y and y ⪯ x then x = y) and transitive (if x ⪯ y and y ⪯ z then x ⪯ z). A lattice is a poset such that every pair of elements has a least upper bound and a greatest lower bound. A totally ordered set is a poset where every pair of elements is comparable. A linear extension of a partial order is a totally ordered set that is compatible with the partial order. For an introduction to posets and lattices, see Chapter 3 of Stanley [28]. For a poset P , let L(P ) denote the set of linear extensions of P . Brightwell and Tetali [8] determined an accurate asymptotic formula for |L(B n )|, improving on work of Sha and Kleitman [25]. The value of |L(B n )| is known for 1 ≤ n ≤ 7; see OEIS A046873 [27]. The n = 7 case was recently determined by Brouwer and Christensen [9] using machinery developed to study the game of Chomp played on the boolean lattice. Pruesse and Ruskey [24] introduced the linear extension graph G(B n ) whose vertex set is L(B n ) and whose edge set consists of pairs of linear extensions that differ by a single adjacent transposition. Felsner and Massow [12] determined the diameter of G(B n ). Researchers have also studied linear extensions of subposets of B n , including the order ideal B n,m of subsets of size at most m. Fink and Gregor [14] determined the linear extension diameter of the subposet of B n that is induced by levels 1 and k. Brouwer and Christensen [9] determined a formula for |L(B n,2 )| and computed |L(B n,3 )| for n ≤ 7. Comparing this formula with our Proposition 1.3 shows that n! · |F n,2 | is o(|L(B n,2 )|).
It is no surprise that de Finetti extensions of B n,2 are exceptionally rare among linear extensions of B n,2 . The de Finetti orders F n are orderings of P([n]) that satisfy both (F1) and (F2). The value of |F n | is known for 1 ≤ n ≤ 7; see OEIS A005806 [27]. These orderings appear in a variety of settings with names that reflect the application at hand [13,22,5,10]. To emphasize the common poset context, we opt for the generic name "de Finetti order," which also pays homage to de Finetti's axiom [11]. It is pleasing that (F1) and (F2) lead to common extensions of the boolean lattice B n and the order on [n] induced by the standard ordering on the integers. Indeed, when x i ≤ y i for 1 ≤ i ≤ k, then (F1) and (F2) lead to the (intuitive) conclusion that {x 1 , x 2 , . . . , x k } ⪯ {y 1 , y 2 , . . . , y k }. In probability theory, the orderings in F n are known as comparative probability orders, and they enjoy applications in decision theory and economics [20,13,15,26]. A comparative probability order is additively representable when there is a probability measure p : [n] → [0, 1] that induces the order, namely p(X) ≤ p(Y ) if and only if X ⪯ Y . In a more algebraic context, Maclagan [22] referred to orderings in F n as boolean term orders and studied their combinatorial properties. A single adjacent transposition results in a total order that violates (F2), so the linear extension graph on F n is totally disconnected. Instead, Maclagan introduced a flip operation between boolean term orders, which consists of multiple (related) adjacent transpositions so that (F2) still holds. It is an open question whether the flip graph is connected for n ≥ 9. Christian et al. [10] further studied flippable pairs of orders and their relation to the polytope of an additively representable order. In social choice theory, these orderings are called completely separable preferences [17,5]. In this setting, de Finetti's condition ensures that a voter's preference for the outcomes on a subset S ⊂ [n] of proposals is independent of the outcomes of the proposals outside of S. Hodge and TerHaar [19] showed that the number of de Finetti extensions satisfies n! · |F n | = o(|L(B n )|). In fact, they proved the stronger condition that linear extensions with at least one pair X, Y of proper nontrivial subsets satisfying condition (F2) are vanishingly rare. Other research on separable preferences focuses on the admissibility problem: which collections of subsets can occur as the collection of separable sets S, meaning that (F2) holds for any subsets X, Y ⊂ S and any Z disjoint from S; see [19,18,3]. Theorem 1.5 establishes a bijection between the de Finetti extensions F (1) n,2 and magog triangles M n−1 . This connects our poset extension problem to the illustrious family of alternating sign matrices. See [6,7], respectively, for a brief or an extended recounting of the history of the famous alternating sign matrix conjecture. Magog triangles of M n are in bijection with the totally symmetric self-complementary plane partitions (TSSCPPs) in a 2n × 2n × 2n box. Andrews [1] proved that the number of such TSSCPPs is given by equation (2). Meanwhile, gog triangles G n are in bijection with n × n alternating sign matrices (ASMs). Zeilberger [33] proved that |M n | = |G n |, which confirmed that TSSCPPs and ASMs are equinumerous. Kuperberg [21] later gave a more streamlined proof using the 6-vertex model from statistical mechanics. There are many combinatorial manifestations of the ASM sequence (2); see [7,23].
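Since the correspondence between gog triangles and ASMs recurs throughout the rest of the paper, it may help to make it concrete: as noted in the previous section, row j of the gog triangle lists the positions of the 1's in the sum of the first j rows of the ASM. The following lines are an illustrative sketch written for this review, not code from the paper.

```python
def asm_to_gog(A):
    """Map an n x n alternating sign matrix to its gog (monotone) triangle:
    row j of the triangle lists the positions of the 1's in the sum of the first j rows of A."""
    n = len(A)
    partial = [0] * n
    gog = []
    for row in A:
        partial = [p + a for p, a in zip(partial, row)]   # after each step this is a 0/1 vector
        gog.append([k + 1 for k, v in enumerate(partial) if v == 1])
    return gog

# The unique 3 x 3 ASM containing a -1 entry:
A = [[0, 1, 0],
     [1, -1, 1],
     [0, 1, 0]]
print(asm_to_gog(A))   # [[2], [1, 3], [1, 2, 3]]
```

Applying the same map to all seven 3 × 3 ASMs reproduces the seven gog triangles of G 3 .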
A natural bijective proof between TSSCPPs and ASMs (or equivalently, between magog and gog triangles) remains elusive, though progress on subfamilies has been achieved [4,31]. Triangular arrays of numbers (such as gog, magog and kagog triangles) continue to play an essential role in ASM and TSSCPP research. For example, Striker [30] defined a tetrahedral poset T n whose subposets trace connections between TSSCPPs, ASMs and other combinatorial sequences. In particular, T n has one subposet whose order ideals can be described via families of triangular arrays. The order ideals of one such subposet is in bijection with gog triangles (and hence with ASMs). There are six distinct subposets whose order ideals (with associated triangular families) are in bijection with magog triangles (and hence with TSSCPPs). We note that our kagog triangles are not among the triangular families described in [30], so the family of TSSCPP triangles continues to grow. Proof of Proposition 1.3 This brief section offers a simple bijection between F n,2 and shifted standard Young tableaux (shifted SYT) of shape (n, n − 1, . . . , 1). Figure 3.1 exemplifies the mapping for n = 4. To ease exposition, we identify the singleton {i} with the doubleton {i, 0}. Ignore the set ∅ and lay out the lattice F n,2 in a shifted staircase grid so that row k contains the sets {i, k − 1} for k ≤ i ≤ n in increasing order. This grid induces a shifted staircase Ferrers diagram (n, n − 1, . . . , 1) whose boxes are indexed the n(n + 1)/2 nontrivial members of F n,2 . Consider a total ordering E ∈ F 4,2 . Place the integer ℓ in the box corresponding to the ℓth set in total ordering E. The result is a shifted SYT of staircase shape: the rows and columns of the resulting tableau are both increasing because the total ordering satisfies properties (F1) and (F2) of Definition 1.1. This mapping is surjective: starting from a shifted SYT, we can reverse the process to find a total ordering E ∈ F 4,2 that maps to it. This completes the proof of Proposition 1.3. ✷ 4 The bijection from F (1) n,2 to K n−1 In this section, we prove Lemma 1.7. Figure 4.1 shows the de Finetti lattice F n,2 for n = 3, 4, 5 and also indicates the sublattice of doubletons that are incomparable with singleton {k}. The lattice F n,2 induced by 1 ≺ 2 ≺ · · · ≺ n and de Finetti's condition for n = 3, 4, 5. The set I n contains the doubletons whose comparison with the singleton n is not determined by de Finetti's condition. For k ≥ 3, let Φ(I k ) be the collection of de Finetti extensions of I k ∪ {k} for which the singleton {k} is comparable with every doubleton of I k (and no additional extraneous relations). When we restrict a poset extension E ∈ F (1) n,2 to the set I k ∪ {k}, we obtain some E k ∈ Φ(I k ). Similarly, we can induce a unique poset extension E of F n,2 from a list (E 3 , E 4 , . . . , E n ) where E k ∈ Φ(I k ). We will have E ∈ F (1) n,2 provided that the union of these orderings does not violate de Finetti's condition (F2). n,2 and its collection of E k ∈ Φ(I k ). Our bijection from the poset extensions of F n,2 in F (1) n,2 to the kagog triangles in K n−1 proceeds as follows. Given a de Finetti extension E ∈ F (1) n,2 , we create the corresponding list (E 3 , . . . E n ) where E k ∈ Φ(I k ). We then map extension E to a kagog triangle K ∈ K n−1 so that extension E j maps to row j − 2 of triangle K for 3 ≤ j ≤ n. The row constraint (K3) of the kagog triangle will correspond to the internal structure of each E k . 
The column constraint (K2) Next, we consider extensions in Φ(I 7 ). Specifying the comparisons of singleton {7} with the doubletons in I 7 is equivalent to placing a dot in each row of the integer partition (6,5,4,3,2). Suppose that we place a 7 in the third box of the first row, corresponding to 62 ≺ 7 ≺ 63. Now de Finetti's condition leads to 21 ≺ 32 ≺ 42 ≺ 52 ≺ 62 ≺ 7, which yields the partially filled diagram • • • which contains a shifted copy of partition (3, 2) whose rows must each be assigned a dot. This can be done in four ways, and counting the boxes to the right of the dots gives the lists (0, 0, 1, 2, 3), (0, 0, 0, 2, 3), (0, 0, 0, 1, 3) and (0, 0, 0, 0, 3) from L( [5]). We now prove Lemma 4.2 by strong induction. We can now prove that the set of de Finetti extensions F (1) n,2 is in bijection with the set of kagog triangles K n−1 . Proof of Lemma 1.7. Let E ∈ F (1) n,2 be a de Finetti extension of F n,2 so that every singleton is universally comparable in E. Consider (E 3 , E 4 , . . . , E n ) where E k ∈ Φ(I k ) is the poset extension of I k ∪{k} induced by E. Create a triangular array T = T (i, j) for 1 ≤ j ≤ i ≤ n−2 by applying the mapping f from Lemma 4.2 to each element in this list of extensions, using the indexing convention By Lemma 4.2, each row satisfies the kagog row constraint. Meanwhile, the extension E satisfies de Finetti's condition (F2). In particular, for any 1 ≤ i < j < k ≤ n, if {k} ≺ {j, i} then {k − 1} ≺ {j, i}. In terms of triangle T , this means that T (k − 2, n − j) ≥ T (k − 3, n − j). For 5 The bijection from M n to K n We now prove Lemma 1.8. Along with Lemma 1.7, this completes the proof of Theorem 1.5. Recall that each triangle family T n forms a distributive lattice and that Definition 1.9 constructs two-color pyramids in relation to the maximum and minimum triangle of T n . The minimum magog triangle has M min (i, j) = 1 for every entry (i, j) and the maximum magog triangle has M max (i, j) = j for every entry (i, j). Our first transformation is to subtract M min from each magog triangle. The rightmost column becomes all-zero, so we omit it and reindex. This leads to the family of omagog triangles (short for "zeroed-magog" triangles). We use M • n to denote the set of all omagog triangles with n − 1 rows. The set M • 3 appears in Figure 5.1, with elements ordered so that they biject to the magog triangles of equation (3). The minimum omagog triangle satisfies M • min (i, j) = 0 and the maximum omagog triangle satisfies M • max (i, j) = j for all entries (i, j). Proof of Lemma 1.8. We create a bijection ψ from omagog pyramids △M • n to kagog pyramids △K n via a sequence of elementary transformations. Recall that a two-color pyramid △T is a collection of cubes (i, j, k) that are colored white or gray. Renaming these colors as color 1 and color 0, respectively, then the two-color pyramid becomes a binary function on the set of admissible coordinates, that is △M • : (i, j, k) → {0, 1}. Viewing △M • as a function allows us to describe the collection △M • n of two-color pyramids with a system of inequalities. We have • △M • (i, j, k) ≤ △M • (i + 1, j, k): the columns of the magog triangle are nondecreasing, • △M • (i, j, k) ≤ △M • (i, j + 1, k): the rows of the magog triangle are nondecreasing, and • △M • (i, j, k + 1) ≤ △M • (i, j, k): color 1 (white, present) cubes are below color 0 (gray, absent) cubes, so the cubes that are present obey "gravity." We now perform our four step transformation ψ. 
• Step 1: Invert the colors, or exchange color 0 for color 1 and vice versa. This reverses the inequalities. • Step 2: Push all cubes north in their respective column so that row 1 has length n − 1. This is equivalent to moving the cube (i, j, k) to (i − (j − 1), j, k). • Step 3: Tip the entire stack over the y-axis via a clockwise rotation by π/2. This is equivalent to moving the cube (i, j, k) to (n − k, j, i). • Step 4: Reflect the stack through the plane y = (n + 1)/2. This is equivalent to moving the cube (i, j, k) to (i, n − j, k). After composing these four steps, cube (i, j, k) switches color and moves to (n−k, n−j, i−j +1). Updating the omagog pyramid inequalities at every step leads to the following algebraic constraints for some pyramid △P : These pyramid inequalities correspond to the kagog triangle constraints of Definition 1.6, where we must recall that color 1 (white) cubes are present and color 0 (gray) cubes are absent. Condition (P1) ensures that the domain for admissible cubes (i, j, k) is correct and that the height of tower (i, j) is at most j, so (K1) and (K2) hold. Condition (P4) states that the cubes adhere to gravity: color 1 blocks must appear below color 0 blocks. Conditions (P2) and (P4) ensure that the columns are weakly decreasing, so (K3) holds. Conditions (P3) and (P4) ensure that the rows are strictly increasing after the first nonzero entry, so (K4) holds. Indeed, if cube (i, j − 1, k − 1) is color 1, then (i, j, k) is color 1, so the tower at (i, j) must be taller than the tower at (i, j − 1). A Catalan Submapping In this brief section, we show that the mapping ψ : △M • n → △K n induces a natural bijection between Catalan subfamilies of these pyramids. We start by describing two known Catalan families [29]. Let S n denote the set of nondecreasing sequences (s 0 , s 1 , . . . , s n−1 ) where 0 ≤ s i ≤ i for 0 ≤ i ≤ n − 1 and s i ≤ s i+1 for 0 ≤ i ≤ n − 2. Let C n denote the set of coin pyramids whose bottom row contains n consecutive coins. Next, we define our associated pyramid families. Let S ′ n ⊂ M • n be the set of omagog triangles whose first n − 2 rows are all zero. Let C ′ n ⊂ K n be the set of kagog triangles such that every entry in column j is either j − 1 or j. Proposition 5.2. Let S n , C n , S ′ n and C ′ n be the families defined above. (a) There is an elementary bijection σ : S n → C n . (b) There is an elementary bijection ρ : S n → S ′ n . (c) There is an elementary bijection τ : C n → C ′ n . (d) Restricting the bijection ψ : △M • n → △K n from Lemma 1.8 to △S ′ n gives a bijection to △C ′ n . Furthermore, this bijection has a natural interpretation in terms of monotone sequences and coin pyramids. Namely, σ = τ −1 • ψ • ρ. Proof. Figure 5.3 shows the families S 3 , C 3 , S ′ 3 , C ′ 3 . It also shows two families H 3 , H ′ 3 of hybrid configurations that are essential in multiple stages of the proof. Proof of (a). Our bijection joins S n with C n via the set P n of lattice paths from (0, 0) to (n, n) that never travel above the diagonal y = x, composing mappings described in [29]. First, we map sequence s = (s 0 , s 1 , . . . , s n−1 ) ∈ S n to the lattice path p ∈ P n whose kth horizontal step is at height s k . Next, we place gray (missing) coins in each square below p, and place white coins in each square above the path p, up to and including the squares along the diagonal y = x. Let H n denote the hybrid family of configurations of paths and coins, where missing coins are gray. 
To complete the mapping σ, reflect the white coins in the hybrid configuration through θ = π/8 to obtain the corresponding coin pyramid. Proof of (b). The monotone sequences S n map quite simply to S ′ n . The sequence s ∈ S n maps to the omagog triangle in M • n whose final row is (s 1 , . . . , s n−1 ), and whose other rows are all-zero. This mapping is clearly a bijection. Proof of (c). We map coin pyramids C n to the triangles in C ′ n via the hybrid configurations in H n . After mapping a coin pyramid to its hybrid configuration in H n , we ignore the white coins on the diagonal (which correspond to the fixed base of the coin pyramid), and reflect the remaining coins across the vertical axis to get a triangular array of the appropriate shape. (Figure 5.3 caption: the subfamily S ′ 3 of omagog pyramids whose first row is zero; the family C 3 of coin pyramids is in bijection with S 3 via the hybrid family H 3 consisting of lattice paths and coins, where s k is the height of the horizontal step starting at x = k; we map C 3 to the subfamily C ′ 3 of kagog pyramids via the family H ′ 3 , the mirror image of the non-diagonal coins of H 3 .) Let H ′ n denote the resulting family of triangular arrays of two-colored coins. Replace each white coin with a 1 and each gray coin with a 0. Finally, add j − 1 to the entries in column j for 1 ≤ j ≤ n − 1. The result is a kagog triangle in C ′ n . This invertible mapping is a bijection. Proof of (d). First, we show that the bijection ψ : △M • n → △K n maps △S ′ n to △C ′ n . All of the white (present) blocks of △S ∈ △S ′ n are in row n − 1. Let △K = ψ(△S), where ψ is the mapping in the proof of Lemma 1.8. Recall that in this mapping, the block △S(i, j, k) flips colors and moves to △K(n − k, n − j, i − j + 1). In particular, the gray block △S(n − 2, j, k) maps to the white block △K(n − k, n − j, n − j − 1). This proves that every tower in column ℓ = n − j has height at least ℓ − 1 = n − j − 1; in other words, ψ bijects △S ′ n to △C ′ n . It remains to show that the mapping ψ corresponds to the mapping σ : S n → C n . The key is to take a bird's eye view of a kagog pyramid △K ∈ △C ′ n . This view only shows the topmost blocks; this is sufficient, since the blocks in the lower layers are all white. We will see that the coin colors of h ′ ∈ H ′ n correspond to the block colors of the top layer of a unique △K = ψ(△S). Keeping this intuition in mind, we conclude the proof. After mapping, the block △S(n − 1, j, k) flips color and maps to the top-layer block △K(n − k, n − j, n − j). Suppose that △S(n − 1, j, k) is white for 1 ≤ k ≤ ℓ and gray for ℓ + 1 ≤ k ≤ j. This means that △K(n − k, n − j, n − j) is gray for 1 ≤ k ≤ ℓ and white for ℓ + 1 ≤ k ≤ j. In other words, △K(k ′ , j ′ , j ′ ) is gray for n − ℓ ≤ k ′ ≤ n − 1 and white for j ′ ≤ k ′ ≤ n − ℓ − 1. The bird's eye view of the pyramids of △K n bijects to the hybrid configurations of H ′ n , where we replace the blocks with coins. The involution of G n In this section, we prove Theorem 1.11. Analogous to the previous section, we start by defining ogog triangles. The minimum gog triangle has G min (i, j) = j for all entries (i, j), while the maximum gog triangle has G max (i, j) = n − i + j. For every gog triangle, we construct its ogog counterpart by subtracting the minimum gog triangle. The last row in every gog triangle is always [1 2 · · · n] since it has length n and is strictly increasing. As such, every ogog triangle has a final row of zeros, which we omit from the ogog triangle.
(OG3) G • (i, j) ≤ G • (i, j + 1), so rows are weakly increasing; (OG4) G • (i, j) ≥ G • (i + 1, j), so columns are weakly decreasing; and (OG5) G • (i, j) ≤ G • (i + 1, j + 1) + 1, so diagonals cannot decrease by more than 1. We use G • n to denote the set of all ogog triangles with n − 1 rows; the elements of G • 3 are ordered so that they biject to the gog triangles in equation (5). As constructed, the color 1 (white) cubes are present in the ogog triangle, while the color 0 (gray) cubes are absent. Our next lemma states that the gray cubes also represent a gog triangle. Lemma 6.2. Let G • be an ogog triangle and let △G • be its two-color cube pyramid representation. The color 0 cubes of △G • are an affine transformation of another ogog triangle. Note that this correspondence is an involution on the set of ogog triangles: swapping the colors twice leads us back to the original two-coloring of the cube pyramid. Proof. Similar to the proof of Lemma 1.8, we describe ogog pyramids via a set of inequalities, perform a multistep transformation and then check that the resulting inequalities also describe the set of ogog pyramids. The inequalities for ogog pyramids are the pyramid analogues of (OG3)-(OG5), together with gravity: rows are weakly increasing, columns are weakly decreasing, diagonals cannot decrease by more than 1, and the present cubes obey gravity. The three-step mapping ϕ is: • Step One: Invert the colors, or exchange color 1 for color 0 and vice versa. This reverses the inequalities. • Step Two: Perform a quarter rotation of R 3 about the x-axis. This moves the cube (i, j, k) to position (i, −k, j). This tips the two-color cube pyramid onto its side. • Step Three: Rotate by π around the z-axis and then translate by (n, 0, 0). This moves cube (i, j, k) to (n − i, −j, k). After composing these three steps, cube (i, j, k) switches color and moves to (n − i, k, j). Figure 6.1 exemplifies the mapping ϕ for an ogog pyramid from △G • 4 . Careful algebra shows that the resulting constraints are a permutation of the algebraic inequalities for an ogog cube pyramid. As such, this mapping takes one gog triangle to another gog triangle. This affine mapping is an involution, so it is bijective. The ogog pyramids are in bijection with gog triangles, and hence also in bijection with alternating sign matrices. Our next corollary shows that the involution ϕ of Lemma 6.2 reverses the rows of the associated ASM. Proof. Starting with the alternating sign matrix A, we obtain the ogog triangle G • as follows. First, we create the matrix A ′ whose ith row is the sum of the first i rows of A. This is a 0-1 matrix whose ith row contains exactly i ones. We convert A ′ into a gog triangle G by reporting the indices of the ones in each row. We then set G • = G − G min , which corresponds to subtracting [1, 2, . . . , i] from row i of G for 1 ≤ i ≤ n and then deleting the final row (which is all-zero). Let A i denote the ith row of A and let A ′ i = A 1 + A 2 + · · · + A i denote the ith row of the partial sum matrix A ′ . Let 1 ≤ a ′ 1 < a ′ 2 < · · · < a ′ i ≤ n denote the locations of the ones in row A ′ i . Then G(i, j) = a ′ j , or equivalently [a ′ 1 , a ′ 2 , · · · , a ′ i ] is the ith row of the gog triangle G. The entries of the ogog triangle are then G • (i, j) = a ′ j − j (equation (7)). We start with row n − 1 of our triangle, as it is the simplest row to comprehend. Row n − 1 of gog triangle G is [a ′ 1 , a ′ 2 , . . . , a ′ n−1 ], which is missing a single number ℓ ∈ [n], namely the location ℓ of the unique one in row n of A. By equation (7), the corresponding ogog row consists of ℓ − 1 zeros followed by n − ℓ ones.
Consider this row in the context of the two-color ogog pyramid △G • and its image △H • = ϕ(△G • ). Row n − 1 of pyramid △G • has height 1. It contains ℓ − 1 cubes of color 0, followed by n − ℓ cubes of color 1. After transformation ϕ, the cube (n − 1, j, 1) switches color and moves to (1, 1, j). So △H • has a tower of blocks at (1, 1) of height n − 1, with ℓ − 1 cubes of color 1 below n − ℓ cubes of color 0. It follows that ogog triangle H • has H • (1, 1) = ℓ − 1, and thus the corresponding gog triangle H has H(1, 1) = ℓ. This confirms that the first row of gog triangle H corresponds to the last row of matrix A, as desired. We now handle a generic row i of ogog triangle G • ; Figure 6.3 shows an example. The entries of row i are a weakly increasing list of length i, drawn from {0, 1, . . . , n − i}. Let 0 ≤ s m ≤ i be the number of consecutive m's in this list, so that s 0 + s 1 + · · · + s n−i = i. In the corresponding gog triangle G, row i is missing the integers described below. Let us pause to make some key observations. The missing integers in row i of G are precisely the locations of the zeros in the partial sum A ′ i = A 1 + · · · + A i . Since the sum of all the rows yields the all-ones vector, these are also the locations of the ones in the partial sum A i+1 + · · · + A n . Of course, summing the last n − i rows of A is the same as summing the first n − i rows of the row reversal of A. Next, we translate our observations into statements about two-color pyramids. When we convert ogog triangle G • into pyramid △G • , row i of G • maps to the i × (n − i) wall of cubes △G • i = {(i, j, k) : 1 ≤ j ≤ i and 1 ≤ k ≤ n − i}. The layer of wall △G • i at height k consists of s 0 + · · · + s k−1 cubes of color 0 followed by s k + · · · + s n−i cubes of color 1. The transformation ϕ : △G • → △H • maps △G • i to the (n − i) × i wall △H • n−i = {(n − i, k, j) : 1 ≤ k ≤ n − i and 1 ≤ j ≤ i}. We have inverted the colors and exchanged vertical and horizontal, so the tower of wall △H • n−i at (n − i, k) consists of s 0 + · · · + s k−1 cubes of color 1, stacked below s k + · · · + s n−i cubes of color 0. We now translate the structure of pyramid △H • into the triangle setting. Ogog triangle H • has H • (n − i, k) = s 0 + · · · + s k−1 for 1 ≤ k ≤ n − i, so its corresponding gog triangle H has H(n − i, k) = k + s 0 + · · · + s k−1 (equation (9)). The formulas in equations (8) and (9) are equivalent (taking k = p + 1). Therefore, row n − i of gog triangle H contains the locations of the ones in the partial sum A i+1 + · · · + A n . In other words, gog triangle H is constructed by considering the rows of alternating sign matrix A in reverse order. Conclusion Poset extensions of the de Finetti lattice F n,2 have interesting combinatorial connections. We have shown that F (1) n,2 is enumerated by the ASM/TSSCPP sequence and that F n,2 = F (2) n,2 is enumerated by the strict-sense ballot numbers. We have also shown that there is a very natural involution on gog triangles that corresponds to reversing the rows of the associated alternating sign matrices. We conclude this work with some open research questions relating to both poset extensions and ASM/TSSCPP. One natural continuation of this work is to consider the de Finetti extensions of F n,3 , namely F n,3 = F (3) n,3 and its subfamilies. Are these families enumerated by known combinatorial sequences? If so, can we find a natural bijection to the appropriate combinatorial family? An understanding of these smaller families could provide valuable insight into the family F n of de Finetti total orders. Any new perspective could have ramifications for comparative probability orders and completely separable preferences.
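As a quick check on the enumeration just summarized, the linear extensions of F n,2 can be counted directly for small n. The sketch below is illustrative code written for this review, not part of the paper: it encodes the generating relations of F n,2 ((F1) on subsets and (F2) on singletons and on doubletons sharing an element) and counts linear extensions recursively; the output matches the strict-sense ballot numbers 1, 1, 2, 12, 286 quoted in the introduction.

```python
# Brute-force count of the linear extensions of F_{n,2} (illustrative, not from the paper).
from itertools import combinations
from functools import lru_cache

def count_extensions(n):
    singles = [(i,) for i in range(1, n + 1)]
    doubles = [tuple(sorted(p)) for p in combinations(range(1, n + 1), 2)]
    elems = singles + doubles
    below = {e: set() for e in elems}                       # generating relations b < e
    for i, j in combinations(range(1, n + 1), 2):           # i < j
        below[(j,)].add((i,))                               # {i} < {j}
        below[(i, j)].update({(i,), (j,)})                  # subsets come first (F1)
        for k in range(1, n + 1):                           # {i, k} < {j, k} for k != i, j
            if k not in (i, j):
                below[tuple(sorted((j, k)))].add(tuple(sorted((i, k))))

    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        rem = set(remaining)
        return sum(count(tuple(x for x in remaining if x != e))
                   for e in remaining if below[e].isdisjoint(rem))

    return count(tuple(elems))

print([count_extensions(n) for n in range(1, 6)])   # expected: [1, 1, 2, 12, 286]
```

For larger n this brute force grows quickly, which is where Proposition 1.3 and the shifted SYT bijection do the real work.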
One could further investigate the subfamilies of de Finetti extensions F (k) n,m by defining a graph where we connect extensions via an appropriate atomic operation, such as transpositions [24] for L(B n ) or flips [22] for members of F n . Also, can the dimension [32] of a de Finetti extension of F n,m be achieved by restricting ourselves to de Finetti extensions? This paper brings two novel families into the fold of ASM and TSSCPP combinatorial structures: the poset extensions F (1) n,2 and the kagog triangles K n . Some recent efforts have focussed on statistic-preserving bijections between subfamilies of ASM and TSSCPP structures [31,4]. Perhaps the properties of F (1) n,2 and K n might reveal connections that help traverse the gap between ASMs and TSSCPPs. In particular, our two-color cube pyramid representation for triangular arrays revealed a natural bijection between magog triangles and kagog triangles, as well as a nice involution on gog triangles. We are optimistic that this point of view could aid in the investigation of the other known triangular families.
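Finally, the coordinate bookkeeping behind the pyramid maps ψ (the magog-to-kagog bijection) and ϕ (the gog involution) can be double-checked mechanically. The snippet below is an illustrative verification written for this review, not part of the paper: it composes the elementary steps listed in the proofs of Lemmas 1.8 and 6.2 and confirms the closed forms (i, j, k) → (n − k, n − j, i − j + 1) and (i, j, k) → (n − i, k, j) quoted there.

```python
# Sanity check of the composed coordinate maps for psi (Lemma 1.8) and phi (Lemma 6.2).
def psi_steps(c, n):
    i, j, k = c
    i, j, k = i - (j - 1), j, k    # Step 2: push cubes north within their column
    i, j, k = n - k, j, i          # Step 3: tip the stack over the y-axis
    i, j, k = i, n - j, k          # Step 4: reflect through the plane y = (n + 1) / 2
    return (i, j, k)               # Step 1 only flips colors, so it is omitted here

def phi_steps(c, n):
    i, j, k = c
    i, j, k = i, -k, j             # Step Two: quarter rotation about the x-axis
    i, j, k = n - i, -j, k         # Step Three: rotate by pi about the z-axis, translate by (n, 0, 0)
    return (i, j, k)               # Step One only flips colors

n = 7
cubes = [(i, j, k) for i in range(1, n) for j in range(1, i + 1) for k in range(1, n)]
assert all(psi_steps(c, n) == (n - c[2], n - c[1], c[0] - c[1] + 1) for c in cubes)
assert all(phi_steps(c, n) == (n - c[0], c[2], c[1]) for c in cubes)
print("coordinate maps verified")
```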
10,126.2
2019-12-27T00:00:00.000
[ "Mathematics" ]
American Philosophical Society FIELD TRIP PURPOSE The aim of this research was to examine and sample sites with different characteristics (e.g., silica content) from the El Tatio geyser area, where the precipitation of amorphous silica affords cyanobacteria an effective screen against UV radiation, and to study their (cellular) ultrastructures in order to better understand their potential as biosignatures and their fossilization (see the submitted project). This study was expected to expand the recognition of the actual origin of bacteriomorphs and other alleged microbial morphologies through the reconstruction of their shape and composition with a comparative investigation of modern and fossil examples from a range of environmental settings such as El Tatio, Chile. New flooring in the basement and sub-basement has increased linear shelf space by well over a mile. Renovations also included the installation of a large, well-equipped, and secure area for the storage and care of the Society's physical artifacts, which will soon be moved from a cramped storage space in Richardson Hall. Because short-term Library fellows have not been able to use the Library for the past two years, the Reading Room is now filled with scholars making up for lost time. No sooner had the Library renovation been completed than we began a substantial renovation in Benjamin Franklin Hall, which was closed until mid-September. For many years, the Philadelphia Chamber Music Society (PCMS) has held a few of its concerts in Franklin Hall, and in 2020-2021 all of them were held there, either virtually or with no more than 25 patrons in the audience. The APS has partnered with PCMS to fund the renovation with substantial support from the Pew Center for Arts & Heritage and the Presser Foundation. The improvements will include a six-foot extension of the stage to a depth of 14 feet, replacement of the technical booth above the balcony with a larger facility, improvements in sound recording, and creation of a combination "green room" and meeting space on the third floor above the stage. Once the renovations are complete, PCMS will rent the hall for some 25 concerts each year, and, of course, the Society will benefit from the improvements for its events. The management training firm Kaleel Jamison Consulting Group (KJCG) met with staff this year for diversity, equity, and inclusion training. We have made several changes, some of which are simply restorations of pre-pandemic practices, in response to subgroup reports that were created by staff in the course of the KJCG training. These include more information about staff benefits, regular all-staff meetings, improved personnel and onboarding practices, and regular online training in anti-harassment and pro-diversity practices. (Caption: This item has been adopted by Jay Stiefel through the Society's Adopt-a-Book program. These tax-deductible donations allow the APS to continue to build the collection and preserve it for future generations. For more information, see https://www.amphilsoc.org/adopt-a-book.) This year saw three APS staff retirements. In August 2021, Marilyn Vignola retired from the Society. She had served as Special Executive Assistant to numerous Executive Officers for the past 20 years. Her successor is Sally Warren, who comes to the APS with training in art history, historic preservation, and legal assistance and experience in private industry. In December 2021, Mary McDonald retired after serving as Director of Publications for 20 years.
Peter Dougherty of Princeton University Press has now joined us in an advisory role to counsel the Society as it looks toward a new phase of the APS Press. And in May 2022, with 18 years at the Society, Charles Greifenstein retired as Associate Librarian and Curator of Manuscripts. Despite the difficulties of the past two years, the APS staff has continued to work splendidly on behalf of the Society, especially in finding new ways to pursue the Society's mission by virtual communication. APS staff and fellows have contributed to the wonderfully sociable and intellectually stimulating culture of the Society. As a participatory organization, many Members and others have also contributed to the Society through service on the Council and our many governing committees. My hearty thanks go to all. The "Meanings of Independence" conference drew together scholars, public historians, leaders of cultural institutions, and members of the public to discuss the themes that should be explored as part of the upcoming 250th commemoration of 1776, or the Semiquincentennial, in 2026. It also marked the public launch of the David Center for the American Revolution at the APS, a collaboration between the APS and the David Library of the American Revolution. At this conference, the APS also announced the beta version of one of its contributions to 2026: therevolutionarycity.org. This site will host all of the digitized manuscripts that relate to Philadelphia and the American Revolution. The initial site is a partnership among the APS, Historical Society of Pennsylvania, and Library Company of Philadelphia, and was supported by grants from the Institute of Museum and Library Services and the NEH. In addition to contributing our manuscripts, the APS will support the site's infrastructure. We have built it so that it can expand to include contributions from other repositories around the world, so we expect it to continue to grow in the years ahead. I hope you will check it out! In the spring, the APS followed up on "Meanings of Independence" with a symposium on open data. "Open Data: Reuse, Redistribution, and Risk" highlighted the various ways digital humanists and library professionals have used new technology to make materials more accessible or illuminate what they tell us. The Society organized the conference to highlight our own Center for Digital Scholarship (CDS) and its Open Data Initiative. The center's commitment to making its digital data as freely available as possible has been the backbone of its recent projects, namely the digitization and transcription of both Benjamin Franklin's postal books and Eastern State Penitentiary's ledgers and intake records. CDS's most recent project is a partnership with the University of Virginia's Center for Digital Editing and the Thomas Jefferson Papers at Princeton University. Together, these institutions will create an open-source platform for organizations to digitize and transcribe historic weather data. The Center for Native American and Indigenous Research's Native American Scholars Initiative also grew considerably. The Mellon Foundation renewed and expanded its support for the program to $1.6 million. This grant allowed the Society to hire an engagement coordinator, who will enhance CNAIR's ability to work in collaboration with Native communities. We will also be able to launch a new Career Pathways fellowship, which will provide a recent PhD recipient with hands-on experience in a museum or library setting.
Of course, the collection continued to grow in a myriad of ways. We acquired the papers of Beatrice Mintz (APS 1982), a pioneering scientist at Fox Chase Cancer Center, a wonderful addition in advance of our Women in Science exhibition. We also acquired two significant early American manuscripts. One is a list of plants owned by William Bartram (APS 1768), who supervised the oldest botanical garden in the United States. Originally built by his father and APS founder John Bartram, the garden expanded and improved dramatically under William's supervision. This document contains vital information on Bartram's work and holdings. We also purchased a cache of Franklin's personal financial receipts from the last year of his life. Several APS Members appear in these records, and they add to the corpus of financial accounts we have from Franklin. Our collecting continues to evolve to reflect larger changes in society, and in the past few years we've seen a surge in born-digital materials. Thanks to our Martine A. and Bina Aspen Rothblatt Digital Archivist, we can now accept these materials (including our first donation of iPods!) and preserve them for posterity, just as we do with more traditional paper materials. As we look ahead to the next year, we are very excited about our upcoming exhibition on Women in Science. It will showcase both the depth and richness of our collection and the many contributions of APS Members and others. We plan to host a two-part conference series on "Women in Science: Barriers, Achievements, and Opportunities," launch an oral history project, and produce a fascinating digital network analysis based on a sample of the correspondence of women scientists in our collection. If you are interested in learning more about any of these initiatives, please don't hesitate to contact us. And we hope that we might see you in person at the APS in the coming year! Patrick Spero, Librarian and Director of the Library & Museum. These scholars set up shop in a newly designed fellows' suite on the first floor of Richardson Hall. The program will expand further this fall when our first National Endowment for the Humanities (NEH) sabbatical fellow, the recipient of an award given to a senior scholar, arrives on campus. Our summer was also filled with interns and special projects. We had three dynamic undergraduate interns-in-residence as part of our Native American Scholars Initiative (NASI) undergraduate internship program. We also held two Digital Knowledge Sharing Workshops, one of which made up for the cohort whose workshop was canceled due to the pandemic. These workshops bring together scholars, many of whom are based in Native communities, who are working on community-based digital archival projects. The workshops allow them to present their work, share best practices, and build connections with each other. In October, during a moment when the pandemic seemed to recede, we hosted our first hybrid conference, "Meanings of Independence." The response was overwhelming. Scholars who had been hungry to access our materials streamed through our doors, especially during the summer, when we often had a full room. This reminded all of us that, even though we had been virtual for two years and digitized approximately 60,000 pages a year to meet researcher demand, the original materials remain vital.
With over 14 million pages of manuscripts in our collection, even our robust digitization program can never replace the real thing or replicate the transformative experience of sitting in a quiet room holding an original letter, creating a concrete connection between the researcher and their subject. The reopening of the Reading Room was just one of the many "returns" we experienced, even as we continued to navigate, and were sometimes battered by, the uncertainties of a pandemic. We welcomed a full slate of year-long fellows focused on a number of different specialties, including Mayan iconography, the history of statistics and medicine, slavery and the American Revolution, and the creation and operation of night watches in colonial seaports, an early form of policing. Librarian Although this past year had many highlights, perhaps the most exciting news to come out of the library was the reopening of the Reading Room to researchers. ...a Chicago lawyer, author, and art collector who was the first person to buy radically modern paintings by Marcel Duchamp and Francis Picabia at the 1913 Armory Show, the first American collector to purchase works by Vasily Kandinsky and Paul Klee, and arguably the first person to write a book about modern art in the United States. From the APS Press James Logan's "The Duties of Man Deduced from Nature": An Analysis of the Unpublished Manuscript (Transactions, vol. 111, part 3) by Norman Fiering explores an unpublished work by James Logan (1674-1751), a Philadelphia statesman and scholar whose passion for learning is exemplified in the scholar's library of nearly 3,000 titles that he amassed. Fiering analyzes the treatise on moral philosophy that Logan wrote in 1734, but which now survives only as a manuscript that until about 1969 was assumed to be dispersed in the archives of the Historical Society of Pennsylvania or altogether lost. Inspired by the APS's digitization of Benjamin Franklin's postal records... Conservation By the end of Jenni's first week in the lab, she and Anne had picked the treatment projects for the remaining seven weeks. One work that caught Jenni's eye early on was from the Duhamel du Monceau/Fougeroux de Bondaroy papers: an architectural drawing, "Plan du Pavé du sanctuarie de l'Église de Vrigny" (1787), rendered in brown pen and ink on a beautiful medium-blue antique laid paper. This item had a large loss to the paper that needed insertion with a new paper fill. Also from the same collection, Jenni chose a small gouache botanical painting on parchment. The paint was very fragile and loose in areas from the underlying support. Because this is a recent favorite of Patrick Spero's for inclusion in Treasures Tours, it was important to secure all loose paint to avoid further damage from handling and storage. Her most time-consuming project was a large lithographic print, "The Washington Family" (post-1805, after the painting by Edward Savage), from the David Library of the American Revolution. This last one, or "George and Company" as she liked to call it, presented one bugaboo of a conservation treatment. That poor print had been much loved over the years prior to our receiving it: it had been torn, taped, torn again, taped again, and overpainted, in addition to having large parts of the paper and image missing. One form of love was the mending of tears with pressure-sensitive tape (you may know this as the kind of tape your grandmother used in copious amounts to wrap your birthday presents with when you were a kid).
One kind person took it upon themselves to remove the older tape, which caused dark stains that seeped through to the front, with newer, paper-based tape that is purportedly "archival." This tape in itself, despite clever marketing claims, is almost never appropriate to use. If you truly need to mend a piece of paper, call a conservator to get some advice on what you may do safely at home.* Many, many hours of conservation treatment later, the print was completed on Jenni's last day at the APS. Although Anne wasn't able to share in her triumph in person (after contracting Covid-19, Anne spent the day checking in via Zoom), the end result was masterful. On reflection, Anne noticed a small and subtle shift over those eight weeks in the teacher-student dynamic. As they worked together, and became more familiar with one another's styles, Anne began to drink in all of the conservation information Jenni had come armed with-not only from her past years' experience at Buffalo, but also from her prior work at the Mobile Botanical Gardens. Jenni had much to impart and, in the process, made the lab a better place and Anne a better conservator and teacher of conservation. Anne Downey, Head of Conservation It's been a while since we've had a "Buff State" intern in the Conservation Department, and Anne was curious to work with an emerging conservator from the program-which turned out to be an excellent decision. Over the course of the summer, it became increasingly clear that Jenni brought an unexpected level of background knowledge, connoisseurship, visual acuity, skill, and sensitivity along with her to the lab. When the department plans potential summer treatment projects, we try to help round out an intern's "tool kit" of skills, as well as enhance exposure to varied media and types of collection materials. With Jenni, Anne initiated some of this decisionmaking a month before she arrived. She started by looking at a document compiled by former intern Jessica Silverman that lists all basic treatment competencies that a paper conservator should be familiar with early on in their career. The list includes the most basic (dry surface cleaning-what most folks think of as erasing) to the more esoteric (chemically altering darkened lead white paint so that it appears white again). All in all, 82 skills-including a variety of examination, documentation, and testing techniques-are important to experience as a student works toward becoming a professional paper conservator. From this document, Jenni selected 14 potential skills. With our rich collection materials, it was certain that we would be able to meet her needs while serving the needs of the Library & Museum. Returning to in-Person internships: spotlight on Jenni Krchak *the american institute for conservation has a "Find a Professional" tool that will help you locate a conservator who may be able to provide advice for basic home care, in addition to conservation services: https://www.culturalheritage.org/aboutconservation/find-a-conservator We try to help round out an intern's "tool kit" of skills, as well as enhance exposure to varied media and types of collection materials. A History of Climate Science in America Now on view at the APS Museum, this exhibition explores the questions and methods that have driven the study of weather and climate in America from the mid-18th century through today. "Some are weatherwise, some are otherwise." 
-Poor Richard's Almanac for 1735 benJamin franklin squeezed this proverb into the bottom corner of a page of his Almanac full of calendrical, astronomical, and climatic information. We viewed it as an invitation to show that everyone can become weatherwise and share how that process unfolded in America. Americans have long been curious about the weather. Europeans arriving on this continent had many questions about their new home. Temperature, precipitation, wind, and other weather phenomena drew their attention. Through observation, documentation, and collaboration-often with knowledge acquired from Indigenous peoples-they began to understand the climate. Becoming Weatherwise draws upon the Library & Museum's extensive collections, including the weather journal of James Madison (APS 1785), a 15-foot map of a tornado's path, portraits of Thomas Jefferson (APS 1780) and Herman Goldstine (APS 1979), and various weather visualizations. The materials in the exhibition highlight the importance of work by amateurs and professionals who have worked collaboratively to study weather and climate in the interest of agriculture, human health and comfort, military dominance, and simple curiosity. In addition, the exhibition considers how ideas about climate and weather have changed over time. desire to understand the world and use this knowledge to improve society remained a guiding principle. Scientists also embraced new technology that produced more data with greater precision. Scientific organizations and the federal government began to compile data and to standardize data and methods. As data collections grew, accurate forecasting became possible, and climate scientists developed more compelling ways to display their findings. l Methods and Motivations l in the 19th anD 20th centuries, the study of climate and weather became a more professional discipline. In addition, scientific organizations and the U.S. government, especially the military, were motivated by national priorities to study the weather and climate. As a result, scientists embarked on new research to better predict destructive storms, find ways to increase agricultural productivity, improve public health, and address various military interests. These were just some of the motivating factors that led to new methods of climate scientific practice. Throughout, the studies enabled scientists and the U.S. government to think about how storms formed, how they moved, and how to warn citizens. Featured in the exhibition is a chart produced by the government about storm tracking. In 1900, hurricane prediction and forewarning technologies were still in their early stages. This chart maps the movement of a hurricane that unexpectedly hit Galveston, Texas. It remains the deadliest natural disaster in U.S. history, having claimed nearly 8,000 lives. l Motivation: l Forecasting and Control as technology increaseD the quantity, speed, and accuracy of data, scientists began deciphering weather patterns and climatic trends. These new and improved methods led to a long-held goal of meteorology: precise forecasting. Advances in science also deepened understanding of weather phenomena, such as how precipitation occurs. In the mid-1900s, scientists were exploring the possibility of cloud seeding. In the 1930s, America's Southern plains entered a dramatic drought caused by farming practices that removed too much topsoil. The drought led to huge dust storms that earned the region the nickname "The Dust Bowl." 
Motivation: Storms Professional and amateur scientists have studied storms in a variety of ways. Some observers recorded data and published written descriptions. Others created dramatic graphic representations of momentous storms. At their core, these Climate Enlightened During the Enlightenment, many Europeans and North American colonists understood that climate had significant effects on life. However, Prussian scientist Alexander von Humboldt (APS 1804) took it a step further. He theorized that the world's environment was interconnected and that man-made changes in an ecosystem could have ripple effects. Humboldt gained this perspective by traveling the world, including South and North America, and taking careful and comprehensive measurements and observations. His unified theory of the environment laid the foundations for modern climate science. American climate scientists drew upon European theories and practices while adding their own observations. For example, Thomas Jefferson believed Americans could and should change their environment to better suit their needs, an opinion directly opposed by Humboldt. For Jefferson, the future of the United States was agrarian. He wanted the country to settle new territory, eliminate Native resistance, cultivate farmland, and take advantage of the climate to maintain a vibrant republic. Jefferson's plans for expansion required knowledge of new lands, climates, and peoples. Explorers and others in his networks ventured throughout North America, recording weather and other data from Indigenous peoples. Method: Visualizing Data Scientists published their observations in charts, graphs, and datasets so their findings could be utilized in daily life or repurposed for other studies. One of the main ways that scientists displayed their findings was through mapping. In the late 19th century, the invention of the telegraph allowed weather observers to send their data with greater speed and ease. As more information flowed in, scientists were better able to analyze weather patterns and draw connections between distant places. This also prompted scientists to create more up-to-date maps of national weather phenomena. Motivation: Human Health Before germ theory was accepted in the late 19th century, Americans believed that the climate could cause certain diseases. Climate and health were so connected that during the War of 1812, the U.S. government had military surgeons collect weather data in hopes that the information could be used to improve the Army's health. This marked the first organized effort of systematic weather data collection and forecasting by the federal government. A later computing project featured contributions from meteorologist Jule Charney and the computer scientists John von Neumann (APS 1938) and Herman Goldstine (APS 1979), among others. As a result, ENIAC produced the first computer-based weather forecast in 1950. This emerging field of computing opened new horizons for climate science and forecasting. For example, climate scientists could create larger-scale and global models that mathematically demonstrated the connectedness of the Earth's climate, proving Humboldt's theories correct. One World, Many Voices Climate science is a rapidly changing discipline, and scientists continue to develop new methods and technologies that advance our knowledge. Humanity's impact on the climate has been studied and debated for centuries. Historical data helps to provide more accurate predictions of the Earth's climate now and into the future.
Since the mid-20th century, scientists have overwhelmingly agreed with the Humboldtian view that man-made change has damaged the Earth. New computing technology in the 1980s allowed scientists to quantify the harm. Community science offers another way for scientists to collect data and broaden their collaborative networks. People of all ages and backgrounds make observations and share them with scientists. Science doesn't only come from scientists. From 1938 to 1942, the APS coordinated several Philadelphia-based community science projects. In one project, interested residents studied the rings of some of the area's oldest trees to identify historical weather patterns. An amateur participant, Elma Holmes, developed the method for collecting tree ring patterns on paper. Scientists and others were sounding alarms about the current climate crisis long before the 21st century. Previously ignored voices are now being brought to the fore. Today, global communities share resources and knowledge about the climate. Scientists, historians, and community participants are working to better understand what our future world will look like. The Becoming Weatherwise exhibition ends as it began, with visitors facing the Franklin quote, "some are weatherwise, some are otherwise," and being asked, "Which will you be?" This year, we were also able to welcome school groups back to the APS. The first group we welcomed in March was Springside Chestnut Hill Academy. In a poignant twist, this school's visit was the last one we had before the onset of the pandemic. Another highlight of school visits to the APS this year was with the LaSalle College High School Robotics Team in July. The group, through the efforts of Museum Manager Craig Fox, transcribed data from one of Matthew Fontaine Maury's Storm and Rain Charts, which is on display in the exhibition. The energy and enthusiasm of teachers and students visiting (and collaborating with) the APS has been a great boost throughout this year as in-person programming has been able to resume. We have not lost sight of the work done in the virtual realm last year as we celebrated in-person happenings! With the release of Ken Burns's Benjamin Franklin documentary, Head of Education Programs Mike Madeja was able to boost and share virtual resources from the Dr. Franklin, Citizen Scientist exhibition. In collaboration with local broadcasting companies, we were able to share links, classroom materials, and more with those educators and learners interested in Franklin and his legacy. From podcast interviews to conversations with PBS Books, the breadth of the Society's strengths and efforts was on full display for many new audiences through this renewed and sustained interest in our work on the founder of particular interest to us here at the APS. While introducing the APS to new audiences, we often mention that collaboration is a deeply ingrained part of the Society's DNA (and history). Collaborations this year have allowed educational programs to broadcast that message a bit wider. Of equal significance, those collaborations have allowed educational programs to reconnect and strengthen relationships both inside and outside of the APS.
Michael Madeja, Head of Education Programs This is especially true for one of the exhibition's displays: a weather-on-this-day digital display derived from transcribed weather data gathered by early scientist Ann Haines in 1838. Through a series of programs that took place January-March 2022, the public helped APS educators transcribe a year's worth of data in Ann's weather journal. The program attendees worked in groups to decipher, transcribe, and input the data they encountered in the journal. The end result? Any time a visitor enters the Becoming Weatherwise exhibition, they can see what it was like in Germantown in 1838. It might surprise you how similar (and at times dissimilar) the temperatures are. The past year has focused very much on collaboration. Whether internal, with other departments like the Center for Digital Scholarship, or external, with schools and local organizations, collaborations have resulted in high levels of learning and engagement for audiences (plus plenty of fun along the way). Museum Education Coordinator Ali Rospond spearheaded the Community Science Weather Data project and a program series to transcribe a historic weather journal. Both speak to this theme of collaboration. Fall 2021 was the kick-off of the second year of the Community Science Weather Data Project with William W. Bodine High School for International Affairs and Newtown Middle School. Inspired by the historic weather journals of Thomas Jefferson, David Rittenhouse, James Madison, and Ann Haines found in the APS archives, this program continues a tradition of citizen science at the Society. Ali Rospond organized and facilitated a weather data collection project with Bodine's AP Environmental Science class and Newtown's eighth-grade classes. In all, we engaged with about 79 students in the 2021-2022 school year. These students learned about the APS, the long history of weather data collection, how to collect weather data, and that everyone can participate in science. In the morning and afternoon, students collected basic weather data: temperature, air pressure, wind speed, general weather, and general observations. Students learned how to use meteorological instruments, and how to work collaboratively in small groups. Throughout the project they had to communicate with their group members, just as scientists communicate and work together today. Also, as part of this project we were able to enlist both schools' students and have them be part of the Becoming Weatherwise exhibition. One of the weather data notebooks from Bodine High School, a picture of Newtown Middle School students collecting data, and quotes from both Newtown and Bodine students talking about their thoughts on climate and climate change are featured prominently in the exhibition. Along with these moments, the theme of collaboration is shown throughout the current exhibition. Method: Creating Standards Standardization is essential for scientists to share and analyze data in useful ways. As the study of climate became a scientific discipline, inventive individuals, leading scientists, organizations, and government bodies worked together to create standards for collecting and conveying weather information. Such standards included units of measurement, symbols, and visuals used to capture, summarize, and present large amounts of data. Their efforts created a way for diverse groups to share their data and collaborate. Method: Collaborations American scientific institutions systematically collected and stored weather recordings in central databases. These collections provided the material scientists needed to study and analyze weather on larger scales.
As a leading national organization, the APS took an active and early role in this field. Climate scientists also created their own collaborative projects. For example, they often brought in experts from different or emerging fields, like mathematics and computer science, to develop new methods for studying the climate. Please keep an eye out for upcoming events in the monthly e-newsletter and on the APS website. We'd love to see you!
The Potential of a Thick Present through Undefined Causality and Non-Locality This paper elaborates on the interpretation of time and entanglement, offering insights into the possible ontological nature of information in the emergence of spacetime, towards a quantum description of gravity. We first investigate different perspectives on time and identify in the idea of a “thick present” the only element of reality needed to describe evolution, differences, and relations. The thick present is connected to a spacetime information “sampling rate”, and it is intended as a time symmetric potential bounded between a causal past of irreversible events and a still open future. From this potential, spacetime emerges in each instant as a space-like foliation (in a description based on imaginary paths). In the second part, we analyze undefined causal orders to understand how their potential could persist along the thick present instants. Thanks to a C-NOT logic and the concept of an imaginary time, we derive a description of entanglement as the potential of a logically consistent open choice among imaginary paths. We then conceptually map the imaginary paths identified in the entanglement of the undefined orders to Closed Time-like Curves (CTC) in the thick present. Considering a universe described through information, CTC are interpreted as “memory loops”, elementary structures encoding the information potential related to the entanglement in both time and space, manifested as undefined causality and non-locality in the emerging foliation. We conclude by suggesting a possible extension of the introduced concepts in a holographic perspective. Introduction The nature of Time is often at the root of the debate in physics and possibly sits at the core of the General Relativity (GR) and Quantum Mechanics (QM) incompatibility. In recent years, the search for a theory of Quantum Gravity (QG), able to include the success of both GR and QM, revived the study of time as a key ingredient for the understanding of a quantum description of spacetime. After an investigation on multiple perspectives on the subject, this paper suggests the interpretation of time through the concept of a time symmetric "thick present". Within each thick present instant, intended as the only element of reality along an emerging axis of a thermodynamic and causal time, a quantum information potential T k is considered, from which spacetime emerges in a sequence of space-like foliations. Beside time, the concept of entanglement has puzzled the physics community for decades, stimulating the discussion around causality and locality. In an evolution occurring in discrete instants, we investigate how indefinite causal orders (as entanglement in time) could be intended. We first consider undefined causality through a parallel with a C-NOT quantum gateway. Following a path integral approach, we then describe the information in the undefined order through entangled imaginary paths in the C-NOT circuit, which develop as superposed imaginary times in each space-like foliation. The superposition of imaginary times in a time-symmetric potential is finally interpreted as a closed path (CTC) in the thick present. there are several QM symmetric approaches. The idea of an emerging reality connected to the superposition of both a forward and a backward propagating wave goes back to the fifties, introduced by Watanabe in Ref. [6] as Two-State-Vector Formalism (TSVF). 
In recent years, in the context of a time-symmetric approach, the concept of irreversibility has been more clearly connected (in Refs. [7,8]) to the idea of irretrodictability from a logically consistent Bayesian perspective. The emergence of a causal arrow of time from a more fundamental requirement of logical consistency has also been investigated in [9]. Additional insights on a time-symmetric description with elements of Energetic Causal Sets has been developed by Cohen et al. in Refs. [10,11], further smoothing the tension between a causal and irreversible perspective irremediably opposed to time symmetry. Identifying a Quantum of Evolution If the future is open and yet to come, the past is irreversible (might not even be known beyond certain limits) and time shows a level of symmetry in its evolution, we should then consider a time-symmetric thick present as the elementary quantum in the passage of time. We can define a thick present as the information T k related to a thick space-like foliation, bounded by −T and +T and derived from a time-symmetric superposition of perspectives (from a near past and a near future) on the emerging spacetime. Within a thick present we can consider both the quantum information potential (in a time-symmetric description) as well as the information of the last events (intended as causal points beyond the past boundary of the thick present, from which the present emerges and the future opens), efficiently discarding (for Occam's sake) all the information that is not "currently needed" to describe the evolution of the Universe. A thick present is the actual realization of a discreteness of time and, from an ontological perspective, shall be intended as the only element of reality in a logically consistent, causally and thermodynamically oriented emerging axis of an extended classical time. It is worth to note that, as highlighted by Tallant and Ingram (Ref. [12]), a well-defined philosophical framework of Presentism is missing, as several (and sometimes contradicting) descriptions are proposed in the literature. Among them, we will consider in this contribution the definition stating that "Only the present time exists (No non-present time exists)". Following a philosophical perspective grounded on the physics of irreversible events, open future and indeterminate present, a Presentism interpretation of time has also been recently promoted by Mariani and Torrengo in Ref. [13]. In the search for a quantum description of space and time, a thick present has been considered by Gisin in Ref. [14] (via an intuitionist mathematical language), and by Smolin in Ref. [15] (from an ontological perspective in QM). The concept of an everchanging becoming between a fixed past and a probabilistic future has also been investigated by Schlatter. Starting from a principle of synchronization, the gravitational potential is connected in [16] to a foliation of spacetime in space-like surfaces and, sequencing the flow of reality in time intervals, the established relations between energy, entropy, and geometry are recovered. In Refs. [17,18], events are interpreted through synched light clocks (introduced in Ref. [19]) in an emerging thermal time, and evolution is intended through a realm of probability amplitudes (with a symmetric time structure) and an emerging empirical spacetime (as events break the unitary symmetry). A thick present can be interpreted in QIS as a discrete elaboration of the global information potential in the space-like foliation. 
There are several theories that consider evolution in discrete steps. To mention a few, Finite State Classical Mechanics (described by Margolus in Ref. [20]) is based on Lattice Dynamics, where evolution rules are often referred to as cellular automata models. Signal-State Quantum Mechanics, developed in a theory of Quantized Detector Networks (presented in [21]), is a realization of Heisenberg's "instrumentalist approach" to quantum physics. Following the insights of QIS and "it from bit" (that considers spacetime and QM as emerging from a quantum information processing), Operational Probabilistic Theories (OPT), developed by Hardy, D'Ariano, et al. (Refs. [22][23][24]), describe the evolution of quantum systems as logical-physical circuits that can be foliated in hyper-surfaces elaborated in atomic steps. OPT have been considered in a time-symmetric perspective (Refs. [25,26]) and in terms of a difference between known and unknown, rather than an emerging past and future (as in Ref. [27]). Even if the idea of a thick present has not been considered yet in the context of an OPT description of spacetime, we can identify in the atomic processing of OPT the realization of an atomic thick present and then map the space foliation emerging in each cycle to an equivalent circuit-foliation. To describe a time-symmetric thick present in the context of a discrete evolution of the information (phased on atomic cycles), we will consider a minimum time interval T (analogous to a π rotation) and 2T for a full cycle in a time-symmetric description. We will interpret these discrete 2T steps, from (2k − 1)T to (2k + 1)T, as the duration of the atomic elaboration of the present potential Tk from which spacetime emerges as a space-like foliation at 2kT. It is worth clarifying that the concept of "spacetime from information" is not promoting the idea that "we live in a simulation", which is an unneeded speculation. Moreover, the "present" is not intended as a "global perceived now". The passage of time for local systems follows relativity, and time intervals measured by quantum clocks (even through events) vary according to GR, as there is no absolute perspective for any local observer within the emerging spacetime, but only relative ones. The "perceived now" of quantum systems (from particles to complex clocks) shall be intended as a "proper evolution cycle" of the system, measured against past experienced cycles and of greater duration than the thick present extension, just as the spatially non-local "here" spreads in the wave function. The thick present Tk represents the potential of the kth space of events and possibilities in a space-like foliation of our universe bounded within (2k − 1)T and (2k + 1)T. In a QIS picture, its duration 2T is intended as a spacetime information "sampling rate". The idea of a maximum rate of change connected to the inverse of the Planck time has been elaborated in Ref. [28], where it has been proven to be compatible with Relativity. We will consider the present cycle duration as a global reference for time intervals to allow a relative comparison of quantum clocks with one another in a discrete passage of time. Ultimately, even a "Relational time" (defined in [29] as the "counting of happenings") needs an "elementary event" to allow independent clocks to compare their "counts" in a coherent and consistent way, as absolute references are always required for uniformity in comparisons.
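As a concrete illustration of the indexing just described, the following minimal Python sketch (an illustration added here, not part of the original text; T is left as a free parameter) maps an instant t to its elaboration cycle k, to the bounds (2k − 1)T and (2k + 1)T of the potential Tk, and to the foliation instant 2kT.

```python
# Minimal sketch of the thick-present indexing described above: each cycle k
# spans (2k - 1)T to (2k + 1)T and produces a space-like foliation at 2kT.
# T is a free parameter here (the text relates it to a Planck-scale interval).

def thick_present_cycle(t: float, T: float) -> dict:
    k = round(t / (2.0 * T))              # cycle whose potential T_k contains t
    return {
        "k": k,
        "lower_bound": (2 * k - 1) * T,   # past boundary of the thick present
        "foliation": 2 * k * T,           # instant at which the foliation emerges
        "upper_bound": (2 * k + 1) * T,   # still-open future boundary
    }

print(thick_present_cycle(t=7.2, T=1.0))
# {'k': 4, 'lower_bound': 7.0, 'foliation': 8.0, 'upper_bound': 9.0}
```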
In this sense, the present atomic processing cycle can represent a quantum of elementary action (the "fastest event" to evaluate differences) available at every point in spacetime as a reference on which any relational and relativistic perspective can rely. Observers, events, or potential events all coexist in a thick present able to account for a superposition of perspectives on the information from a near past and a near future, being atomic and time-symmetric within its thickness, and assuring consistency between "what it was" (causally happened) and "what it could be" in the current cycle. The information persists and evolves as a potential if no specific events occur in the present elaboration cycle, while events of decoherence shall be considered irreversible, in line with a QIS description and the causal set perspective. From the irreversibility of events, a thermodynamically oriented arrow of time in line with causality can also be considered as emerging in the succession of the thick present instants. Conclusion on Presentism and Open Challenges A Presentism perspective on time has ancient origins: it is rooted in several Western and Eastern philosophies and is coherent with the latest interpretations in neuroscience. Relativity, allowing for a description of reality focused on local observers, has then taught us that every local description could and should be relative, dissolving an absolute passage of time in a relativistic spacetime fabric. The multiple relative perspectives on the same information in terms of events have undermined the concepts of before and after, leaving to the absolute speed of propagation of causal information the role of precluding paradoxes. Ultimately, in a relativistic description, time appears as an extended classical reality of past events and a deterministic future, seemingly excluding a Presentism description from every interpretation in physics. Recently, in the context of relativistic time intervals but beyond the limits of a classical description, part of the research community has tried to reconcile the interpretation of time with the ancient philosophical understandings, realizing that in the indeterminacy of the quantum information lives the potential of the present instant: it is the quintessence of any process, it causally depends on the set of past irreversible events and it is the door to an open future. In the first part of this contribution, we have presented the main insights towards a description of time coherent with relativity and QM and with a Presentism perspective. In the context of QIS, we have proposed an interpretation of the present as the information in the kth evolution cycle of the space of events and possibilities, emerging as a space-like foliation, bounded between (2k − 1)T and (2k + 1)T in a time-symmetric description. The temporal extension of the thick present, the actual realization of a discreteness of time, has been interpreted as a spacetime information "sampling rate". The atomic elaboration of the information potential in each instant has been proposed as a quantum of elementary action, the absolute reference needed among independent observers or clocks to consistently define and compare any relativistic perspective on the happenings in the evolution of information. In the framework of a thick present, the classical is then what we remember or causally predict: what has already happened, or what would happen if there were no quantum features.
Complex observers can encode in the complexity of their internal dynamics the information of the past events and derive a corresponding thermodynamic and causal orientation of a classical time. Classical reality emerges from the information encoded in the memory of complex observers but should not be intended as "currently real": in an ontological sense, it does not exist. From the proposed perspective, the Universe exists in the thick present only. Figure 1 graphically illustrates the introduced proposal. In the given interpretation, the information Tk plays a crucial role in the emergence of spacetime, and a better description of how this potential is encoded in the thickness of the present is needed. Moreover, recent experiments have shown that undefined causality (entanglement in time order) is possible. In the following chapter we will investigate the relation between causality and logical consistency and propose an explanation as to how the information potential of entanglement in time orders could be intended and persist along the succession of the thick present instants. In the final part of this paper, we will conjecture a possible description of the information potential as undefined locality and entanglement in space from a holographic perspective. Figure 1. Identification of the thick present as the current thick space-like foliation and corresponding kth elaboration cycle of the quantum information Tk, from which spacetime is considered to be emerging. Causality and Logical Consistency Recent investigations in the physics of time highlighted the possible existence of Undefined Causal Orders (UCO) and derived the equivalent Bell's inequalities in terms of temporal orders (as illustrated by Brukner et al. in [30]). The entanglement of temporal orders and the experimental verification of UCO have been considered as well in [31][32][33]. In an evolution described as occurring in thick present instants (given as the only element of reality in time), it is worth understanding how UCO could occur and how their potential, in the entanglement of temporal orders, could be intended. The authors of Ref. [32] identify a "quantum SWITCH" circuit able to selectively choose the route of a particle, so that Alice (A) is encountered along the path before Bob (B) or vice-versa, depending on a controller qubit C. The quantum SWITCH circuit can be described, from a logical perspective, as a device able to investigate the alternative scenarios "A happens before B" or "B happens before A", equivalent to "A(B) is first in time and B(A) is not first", through a controller qubit which is in a superposition of states. In circuit logic, the same behavior can be described through an XOR function A⊕B, given that "A(B) is true" when "A(B) is met first on the path". Given A and B as any pair of points along the path of a particle entering the circuit, the XOR gate superposes the two statements "A is first, and B is not first" and "B is first, and A is not first".
The resulting information of the XOR function is true if one and only one of the two assumptions is true, excluding the under-determined scenario (both false, as if there were no "first") or, on the other hand, an over-determined solution (both true, as if both were "first"). The XOR gate assures the logical consistency of the global information in the context of an "open choice", limiting the possible outcomes to the only ones which imply a difference, and excluding the logical paradoxes of over/under-determined solutions. In quantum logic, the XOR function can be described as a Controlled-NOT (C-NOT) quantum gate, which operates on a quantum register consisting of 2 qubits, C and S (Controller C and Target S), and flips the qubit S if and only if |C⟩ = |1⟩. The C-NOT gate correlates the information potential of the controller qubit C with the information potential of the Target S (the particle entering the circuit), and it is a common system used to create entangled pairs. For instance, an experiment in which a particle changes one of its quantum properties depending on the direction of travel in the circuit would create an entanglement between the observable in the particle and in the controller qubit, as if the potential information of the path of the particle were locally "stored" in C. A similar description of the quantum SWITCH is given in [34] as a quantum time flip device. The authors elaborate the proposal in the context of OPT and unitary operations, implementing it through entangled photons. It is crucial to highlight that, given the quantum superposition of the 2 circuit paths along the opposite directions, the "choice" could not happen immediately, at the time of the first C-S interaction. We need a measurement event to have a definite and irreversible outcome of the choice and, as long as the particle or the controller is not observed (and contextually defined), both results are possible: the outcome of the choice is a quantum information potential encoded in the relation between C and S, as illustrated in Figure 2. Figure 2. Controlled quantum SWITCH reproducing a UCO described as an XOR function, implemented as a C-NOT quantum gate. The entanglement of the Controller qubit C with the Target S at the point of entrance allows the superposition of paths in which "A(B) is met first and B(A) is not", and consequently the UCO. The entanglement in the C-NOT gate can be seen as the information potential of a choice instantiated in the instant of the interaction and of which the answer is undetermined.
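To make the role of the C-NOT gate concrete, here is a minimal NumPy sketch (an illustrative example with assumed variable names, not taken from the paper): the controller C is prepared in an equal superposition, the target S in |0⟩, and after the gate the pair is in a Bell state; tracing out C shows that no local observation of S alone reveals which "choice" was taken.

```python
import numpy as np

# Minimal sketch of the C-NOT "open choice": a controller qubit C in an equal
# superposition decides which of two paths the target S takes.  Until either
# qubit is measured, the outcome is only a correlated potential (a Bell state).

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)          # controller C: superposed "choice"

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)  # flips S iff C = |1>

psi_in = np.kron(plus, ket0)               # |C> tensor |S> = |+>|0>
psi_out = CNOT @ psi_in                    # (|00> + |11>)/sqrt(2): entangled pair

# Reduced density matrix of S: tracing out C leaves a maximally mixed state,
# i.e. no local observation of S alone reveals which path was "chosen".
rho = np.outer(psi_out, psi_out.conj()).reshape(2, 2, 2, 2)
rho_S = np.einsum('iaib->ab', rho)         # partial trace over the controller
print(np.round(psi_out, 3))                # [0.707 0.    0.    0.707]
print(np.round(rho_S, 3))                  # [[0.5 0. ] [0.  0.5]]
```

The maximally mixed reduced state of S is the numerical counterpart of the "open choice" discussed above: the outcome exists only as a correlation between C and S until one of them is measured.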
Imaginary Closed Paths To picture an undefined order within a thick present and the corresponding foliation, we can describe the path of the particle inside the circuit as a function of the propagation velocity v and of an imaginary time of motion needed to traverse the circuit (as a dimension of possible freedom), in a resulting imaginary path ivτ|C along the circuit. We will consider the point C as the position of the controller qubit closing the circuit in the space-like foliation. The point A is defined as the imaginary point reached at the imaginary time iτ|C given a propagation along the circuit in the anticlockwise direction (iτ|C=1), while the point B is reached after the same time when moving in the clockwise direction (iτ|C=0). The point of entrance of the particle in the circuit (as the instant of entanglement with the controller C) becomes the point in the past of the particle and of the controller in which an information potential related to a logically consistent "open choice" was established. The concept of an imaginary time has been popularized by Hawking in Ref. [35] and can be interpreted as a Wick rotation (able to offer a Euclidean description of the Minkowski metric) which is common in the Path Integral (PI) formulation of QM. In each instant, a space-like foliation can be described from the information in the causal set of events F ("fixed past choices" already defined at the past boundary of the present), in the points of quantum interactions generating potential O (as new "open choices"), and through an imaginary time of motion (as imaginary paths) emerging from them.
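For reference, the Wick rotation mentioned above is the standard substitution t → −iτ; written out (a textbook relation recalled here for convenience, not a derivation specific to this proposal), it turns the Minkowski interval into a Euclidean one and the oscillatory path-integral weight into a damped one:

```latex
% Wick rotation t -> -i tau: the Minkowski interval becomes Euclidean,
% and the path-integral weight becomes a real, damped exponential.
\begin{aligned}
ds^2 &= -c^2\,dt^2 + dx^2 + dy^2 + dz^2
  \;\xrightarrow{\; t \,=\, -i\tau \;}\;
  c^2\,d\tau^2 + dx^2 + dy^2 + dz^2,\\[4pt]
\int \mathcal{D}x\; e^{\,i S[x]/\hbar}
  \;\xrightarrow{\; t \,=\, -i\tau \;}\;
  \int \mathcal{D}x\; e^{-S_E[x]/\hbar}.
\end{aligned}
```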
Considering an imaginary time of motion (it_F,O) needed at the speed of light from any quanta of space, the imaginary paths traced along (ict_F,O) (or ict, for short) define and trace an imaginary Minkowski space within the thick present. A possible description of spacetime as space-like foliations in a PI context can rely on the definition of a new Hilbert space H built upon the tensor product of copies of the conventional Hilbert space H, one for each elaboration cycle (H := ⊗_k H_k), and then on the application of the related unitary time translation operator along the successive slices, as elaborated in Ref. [36]. The description of a relativistic particle in a PI formalism has been discussed in [37], where the Feynman propagator has been connected to the imaginary action in the space-like paths, to be accounted for in the sum of all possible trajectories together with the orthochronous paths (at the speed of light). Even if far from a proper derivation in a PI formalism, we could still consider the imaginary path ivτ|C (in entanglement with the controller C) defined after the time of traversal of the C-NOT circuit as the representation of the quantum information connected to the open choice established in the undefined causality. Modeling the entanglement in the undefined orders through a C-NOT quantum logic and superposed imaginary paths developing in an imaginary time of motion in opposite directions along the circuit allows a novel interpretation of the phenomena. From the perspective of an imaginary time developing in the time-symmetric thick present, the superposition of the paths ivτ|C=1 ⊕ ivτ|C=0 can be intended as the logically consistent superposition of a forward- and a backward-evolving wave persisting in the circuit, as well as an imaginary Closed Time-Like Curve (CTC) connecting the points A, B, and C (Figure 3). The suggested relation between CTC in the thick present and entanglement is open to several interpretations and needs further clarifications, left to the coming chapters.
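Before moving to the next section, a toy parametrization (an assumption of this rewrite, not the authors' construction) of the two superposed branches on a circular circuit may help visualize the closed imaginary path: starting from the entrance point, one branch advances anticlockwise and the other clockwise by the same amount vτ, until they rejoin.

```python
import numpy as np

# Toy parametrization (illustrative only) of the two superposed imaginary paths
# on a circular circuit of circumference L: starting from the entrance point
# s = 0, the C=1 branch runs anticlockwise and the C=0 branch clockwise, each
# covering a distance v*tau after an imaginary time tau.

L = 1.0          # circuit circumference (arbitrary units)
v = 0.1          # propagation speed along the circuit
tau = np.linspace(0.0, L / (2 * v), 6)   # up to the point where the branches meet

s_acw = (v * tau) % L          # branch entangled with |C=1>: points "A"
s_cw = (-v * tau) % L          # branch entangled with |C=0>: points "B"

for t_i, a, b in zip(tau, s_acw, s_cw):
    print(f"tau = {t_i:4.1f}   A(tau) = {a:4.2f}   B(tau) = {b:4.2f}")
# The two branches start together, separate symmetrically, and rejoin at
# tau = L / (2 v), closing the loop that the text reads as an imaginary CTC.
```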
The Potential Hidden in a Choice CTC have been studied since the early days of GR as possible solutions to Einstein's field equations (Gödel in [38]), and their existence often raised concern (as in [39]). Following the interest for a quantum description of GR, the nature of time gained momentum in the physics research of recent years, together with the CTC puzzles. It has been shown that CTCs are incompatible with a causal and thermodynamic progression of time and that events cannot happen along their path (further elaborated in Refs. [40][41][42]). Nevertheless, if time (as a progression related to causal events) cannot be considered on a CTC, the idea of a CTC developing along superposed imaginary times (within the space-like foliation) can represent the information potential of the system in the thick present. In the proposed interpretation, the superposed possibilities in the entanglement are considered as the information of a logically consistent "open choice", discarding under/over-determined solutions in the XOR that encodes the choice. This information is encoded in the CTC@Tk as a potential of superposed values of the outcome of the choice and eventually a state with no identification of any event. In this sense, a CTC developing in the imaginary time represents an undefined causality in the thick present and an entanglement in the time order of the potential events along the closed path. Without posing restrictions to the thickness 2T of the present, we should consider a very fast "sampling rate" of the information (from which the difference between "something happened" or "nothing happened" is evaluated), and then, given a time of traversal ∆τ >> 2T, we could investigate what is happening while the particle is traversing the circuit.
The description of the space foliation emerging in the thick present through an imaginary time of motion and an information potential related to "open choices" between imaginary points on a CTC may promote an idea of non-locality in the emerging imaginary space through its "thickness in time". The particle, while traversing the circuit, can be considered as propagating in both arms (being in both a forward- and backward-propagating wave), and is potentially on all the points already traversed along the curve in each thick instant. The path is then closed through a CTC developing in the thickness of the present (orthogonal to the space distance axis ict), representing a non-local correlation between the entangled possible imaginary locations of the particle within the imaginary space (as in Figure 4). The relation between the causal non-separability and the cyclicity of the causal structure has also been highlighted in Ref. [43], and a connection between UCO, non-locality, and closed curves in the context of logic games has been investigated in Ref. [44]. Still, a description of the information potential as CTC developing in a time-symmetric thick present seems novel among the interpretations of entanglement.
In a QIS perspective, we can consider these CTC@Tk as spacetime "memory-loops", able to encode the information potential of a logically consistent choice in the current space of events and possibilities, of which the outcome is open at the most fundamental level. Chasing Non-Local Information Non-local information shall not allow faster than light communications (intended as the transmission of a message with non-random information). Moreover, considering a finite speed for the causal propagation of any information and that two experimenters (separated in a space-like way) can make choices of measurement independently of each other, even in the context of entanglement, the Free Will theorem already introduced concludes that the result of any quantum observation cannot be fully determined by anything previous to the experiment. Given the Free Will theorem and considering the Bell and Kochen-Specker theorems (Refs. [45,46]), we shall remember as well that QM interpretations based on non-locality must be contextual: the value of a variable is determined considering the interaction with the local system involved in the measurement process. We should consider that it is in the fundamental randomness of the quantum observation that a "faster than light communication" finds its impossibility (even in the case of non-local correlations), and that a logically consistent causality is preserved. The state that is "instantaneously updated" at the distant location C by a measurement on S is coherent with the state of S but, to a local observer at C, appears as determined by a random process, and so unable to carry meaningful information. Still, from a global and logically consistent perspective in Tk, the identification of a choice represents an information potential encoded and persisting in the superposition of the outcomes. In the thick present potential Tk, an entanglement in the time order (as UCO) as well as among particles in different spatial locations (EPR pairs) can be equivalent to a CTC, described as a logically consistent "memory-loop" able to encode the information potential of an open choice (offered in the entanglement) that precludes under- or over-determined solutions. Loops are, actually, the most basic circuits for information storage, and CTC@Tk can be considered as a spacetime "virtual memory" to encode the potential in Tk. The proposed interpretation of an imaginary space in which non-locality is assured thanks to a thickness in time could offer insights into the "measurement problem" or the "collapse of the wave function". In the instants in which the outcome of the choice is still open, the information potential propagates as a CTC orthogonal to the imaginary space. The observation selects one branch of the CTC and defines a causal path in the instant of observation. In the case of an EPR pair sent to Alice and Bob, the information propagates superposed in the CTC. When Alice chooses to observe her particle, she contextually defines a measurement event, which defines a determined orientation in the former CTC that opens in a causal path, ensuring the logical consistency with Bob's measure (Figure 5). Figure 5. Successive snapshots of the imaginary space foliation, from the EPR pair generation to the spin measurement at Alice's location. The information potential persists along the successive instants through the entanglement/CTC as long as it is undetermined. When A defines a contextual outcome in her measurement (blue arrow), the state of the particle directed towards B is coherently defined so that the information keeps a global logical consistency within the thick present.
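To illustrate the no-signalling point made in this section, the following NumPy check (an illustrative sketch, not the authors' calculation) shows that Bob's reduced state of a shared Bell pair, and hence his local statistics, is the maximally mixed state regardless of whether, or in which basis, Alice measures her half.

```python
import numpy as np

# Bell state shared by Alice (first qubit) and Bob (second qubit).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_AB = np.outer(bell, bell)

def bob_reduced_state(rho, alice_projectors=None):
    """Bob's reduced density matrix, optionally after a (non-selective)
    projective measurement on Alice's side."""
    if alice_projectors is not None:
        rho = sum(np.kron(P, np.eye(2)) @ rho @ np.kron(P, np.eye(2))
                  for P in alice_projectors)
    r = rho.reshape(2, 2, 2, 2)
    return np.einsum('abac->bc', r)   # trace out Alice

# Alice measures either in the Z basis or in the X basis.
z_proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
x_proj = [plus, np.eye(2) - plus]

print(np.round(bob_reduced_state(rho_AB), 3))           # [[0.5 0. ] [0.  0.5]]
print(np.round(bob_reduced_state(rho_AB, z_proj), 3))   # same
print(np.round(bob_reduced_state(rho_AB, x_proj), 3))   # same: no signalling
```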
The given description seems to put on a similar footing a spatial superposition of paths and an entanglement in time expressed as UCO, but this needs further clarification. Chiribella et al. showed in Ref. [47] that the combination of entanglement-breaking channels in indefinite order can become a perfect quantum channel, while this is not true if the channels are in parallel (spatial superposition), highlighting a possible fundamental difference between space and time. To understand the results reported in Ref. [47] in the context of a thick present, we should note that a channel from S to R is entanglement-breaking "along the path from S to R". When the channels are in series but with an undefined order thanks to a controller qubit, we are implicitly assuming that the coherence with the controller has persisted along the time of traversal of both channels (as well as in the connections to go back to S before entering the next channel), defining coherent imaginary paths, persisting at R before any measurement on the controller.
The resulting CTC in the imaginary time (coherent along the thick present instants even after the transmission over both channels) defines the UCO through the controller qubit and represents the additional quantum resource where the information is encoded identically along the communication (as for the channel C + in [47]). When the channels are in parallel, after the time of traversal there are no CTCs at S or R that could be used as an available quantum resource to encode the transmitted information, given that the entanglement on the "which way" would only define a causal branch, eventually selecting a single channel which would still be entanglement-breaking. Towards a Holographic Perspective We have proposed a description of spacetime as a space-like foliation emerging from a time-symmetric thick present information potential T k along an imaginary time of motion. This potential represents the information of entanglement, manifested as undefined causality and non-locality and encoded in the thick present as CTC or "memory loops". Even if not always in a Presentism perspective, several research groups are actively investigating the relations between entanglement, information, and QG. Holographic theories (introduced by Susskind in Ref. [48]) describe spacetime as emerging from the entanglement among distant quanta of space, encoded in a bulk region of which spacetime is the boundary. Elaborating these concepts in a Presentism perspective, we could consider the potential of the present T k as equivalent to a pair of symmetric bulk regions, in a time-symmetric description from (2k − 1)T to 2kT and from (2k + 1)T to 2kT, respectively, of which the space foliation is the common boundary at 2kT. Following the idea of a spacetime emerging from the information of entanglement among the quanta of space, if particles as well can be described as emerging from information, we should consider non-locality in the emerging foliation as the chances of being in multiple points in the imaginary space seemingly simultaneously, as if there were an imaginary ER bridge entangling the distant quanta of space. In this sense, quantum tunneling in a space-like foliation should be described as an entanglement in space and a connection through the real fourth dimension of spacetime: the thickness of the current instant, in which everything could be interconnected. Being time-symmetric, this connection and non-local potential is encoded in the present as the superposition of forward and backward waves, from (2k − 1)T and (2k + 1)T, respectively, in a resulting CTC which intersects the foliation in the distant entangled quanta of space. In a thick present potential, the non-local information of a particle expressed in the wave function could be encoded in the superposition of a probabilistic "bundle of CTC" connecting different locations in the imaginary space. This probabilistic ensemble of memory loops encodes the local information potential and embroiders the otherwise flat fabric emerging along the imaginary ict. In a spacetime curved by the information of entanglement among the quanta of space, particles could be intended as a "volume of space entangled on a common mode", encoded in each cycle as a "spacetime local phase" in respect to the reference of action of the present. The Presentism interpretation of spacetime in a holographic perspective introduced in this contribution is proposed as a conjecture. 
Additional research is needed for a proper mathematical description of Tk in terms of the entanglement among the quanta of space, towards a full comprehension of its encoding in the symmetric bulks and of its relation with the gravitational potential. Figure 6 graphically illustrates the holographic description of the thick present proposed, leaving further investigations on this path to a dedicated contribution. Figure 6. Graphics illustrating the space foliation at 2kT in the thick present as the boundary between two symmetric bulk regions extending from (2k − 1)T to 2kT and from (2k + 1)T to 2kT. The bulks encode the information Tk (intended in a holographic perspective as an entanglement among quanta of space) from which spacetime emerges. CTCs in the thickness of the present represent the non-local potential; CTC developing in the imaginary time are equivalent to undefined orders of traversed points.
Concluding, extending the famous equation of holographic theories, to represent the possible connection between time (existing as a real thick present potential T k ), space (emerging in each present instant along an imaginary time), and entanglement (as causally undefined or spatially non-local information of correlation among imaginary quanta of space encoded in the consistent superposition of outcomes of an open choice), we could maybe dare to conjecture, as limited Flatlanders, the following synthesis: ER = EPR = CTC@T k . Synthesis and Outlook In this paper we have discussed an interpretation of the nature of time and a proposal on the relation of entanglement with information, undefined causality, and non-locality. We have investigated several descriptions of time, connecting elements from the different perspectives in the search for a common intersection, and eventually concluded that a thick present is the only element of reality in an emerging axis of time. A thick present exists between a causal past of irreversible events and an open future. Within a thick present, we have considered a global information potential T k encoding the kth space of events and possibilities through a time-symmetric description, from (2k − 1)T to (2k + 1)T. The potential T k has been pictured as logically consistent information evolving along the present instants, coherent with what happened and what could happen. In each instant, spacetime has been described as a space-like foliation emerging from the information in T k . Following a QIS perspective, we have connected the thick present temporal extension to a spacetime "sampling rate" and a discrete elaboration of the information potential, towards the realization of a discrete time, connected to absolute references (such as the Planck units) and not in contrast with a relativistic description of the physics of time as experienced and measured by local observers. The atomic elaboration of the information in T k has been proposed as an elementary quantum of action and the "fastest event" to evaluate differences, acting as a reference to consistently compare any relativistic perspective on spacetime among observers. We have concluded the first part of the paper by clarifying how a Presentism perspective could reconcile philosophy, neuroscience, and physics, and which could be the open challenges towards a quantum description of spacetime based on a thick present potential. In the second part of this contribution, we have investigated indefinite causal orders, to understand how their information potential could be described and persist along an evolution occurring in thick present instants (given as the only element of reality in time). Thanks to a parallel with a C-NOT quantum gate, we have interpreted entanglement as the coherent superposition of the possible outcomes of an "open choice", in a logically consistent potential that discards under/over-determined solutions. Following a path integral approach on the circuit implementing the undefined orders, we have described the evolution of the system as superposed imaginary paths developing in opposite directions in the circuit along an imaginary time of motion. Given a time-symmetric description of the potential in the thick present, we have considered these paths equivalent to a forward and a backward wave in the imaginary time of motion, converging at the controller qubit in an imaginary closed path and eventually a CTC.
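As a purely illustrative aside on the C-NOT parallel invoked above, the following minimal Python sketch (standard textbook quantum information, not part of the formalism proposed in this contribution) shows how a controlled-NOT acting on a superposed control qubit and a definite target yields a maximally entangled state, whose individual outcomes remain undetermined while their correlation is fixed; this is the sense in which entanglement is described here as the coherent superposition of the outcomes of an "open choice".

import numpy as np

# Computational basis ordering: |00>, |01>, |10>, |11> (control qubit first).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard on the control qubit
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi0 = np.kron(np.array([1, 0]), np.array([1, 0]))  # |00>
psi1 = np.kron(H, I2) @ psi0                        # (|00> + |10>)/sqrt(2): the "open choice" on the control
bell = CNOT @ psi1                                  # (|00> + |11>)/sqrt(2): correlated, individually undetermined outcomes

print(np.round(bell, 3))   # [0.707 0.    0.    0.707]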
CTCs are introduced to represent entanglement in both time and space, manifested in the emerging spacetime as the information of undefined causality and non-locality. In a QIS description, CTCs in the thick present have been considered equivalent to "memory-loops" able to encode the information of an open choice of which the outcome is logically consistent and undetermined at the most fundamental level (until observed and contextually defined). When developing along the imaginary time of motion in space, these CTCs represent indefinite orders (of imaginary quanta of traversed space) and an undefined causality. If their path develops through the thickness of the present instant, they encode the non-local correlations in the imaginary space (spatial entanglement). In the final part of the contribution, we have investigated a possible interpretation of the thick present in a holographic perspective. In the context of a spacetime emerging as flat along an imaginary ict and curved by the logically consistent information of entanglement among quanta of space, the description of the potential through CTC in a time-symmetric thick present T k has been conceptually extended to the information expressed in the wave function of an elementary particle. Massive particles have been interpreted as the potential encoded in a probabilistic bundle of CTCs, that embroiders and deforms the fabric of spacetime, entangling in each present evolution cycle the imaginary quanta of the emerging space foliation. Following these ideas, we have concluded by conjecturing a possible extension of the famous equation of holographic theories to ER = EPR = CTC@T k . The additional needed research on this path has been left to a dedicated contribution. The descriptions of time and of entanglement given in this paper lack explicit mathematical derivations and could be viewed as conjectures inspired by logic and QIS. Nevertheless, the concept of a time-symmetric thick present potential T k that encodes through CTCs the information of entanglement (as undefined causality and non-locality in the emerging space foliation) seems a promising starting framework for the interpretation of our universe in terms of information. The hope is that future works on this path may offer additional insights into the possible ontological nature of information in the emergence of spacetime, towards a proper quantum description of gravity and a more profound understanding of our universe. Conflicts of Interest: The author declares no conflict of interest.
12,089
2021-04-27T00:00:00.000
[ "Philosophy" ]
Dissociative electron attachment to the highly reactive difluoromethylene molecule–importance of CF2 for negative ion formation in fluorocarbon plasmas Dissociative electron attachment to the highly reactive difluoromethylene molecule, CF2, produced in a C3F6/He microwave plasma and stepwise via the fast atom reaction CF3I+H→CF3+HI and CF3+H→CF2+HF, has been investigated. The upper limit for the cross section of formation of F- via dissociative electron attachment to CF2 is estimated to be 5×10−4 Å2. This value is four orders of magnitude smaller than the cross section previously predicted from scattering calculations. It is concluded that difluoromethylene plays a negligible role in negative ion formation in fluorocarbon plasmas. Introduction Fluorocarbon gases are widely used in plasma etching. The etching characteristics depend on highly reactive radicals and molecules, such as CF and CF 2 , and positive and negative ions present in the discharge. Difluoromethylene, CF 2 , is the most abundant molecular radical species present in many industrially relevant fluorocarbon plasmas [1,2], where it is known to play an important role in film deposition and etching [3] and polymerization reactions leading to the formation of undesired macromolecules [2]. The negative ion density in such a discharge can be orders of magnitude larger than the electron density [4]. Therefore, negative ions play a significant role in changing the distribution and concentration of charged species in a plasma and thereby considerably influence the ion chemistry taking place. The importance of the highly reactive CF 2 molecule for the formation of negative ions in fluorocarbon discharges is still unknown, perhaps due to the difficulty of producing and investigating this short-lived, unstable molecule. CF 2 is stable as an isolated molecule, but is highly reactive and must be generated in situ for experimental investigations. If the highly reactive CF 2 molecule possesses an attachment resonance or resonances at electron energies below 10 eV, attachment of plasma electrons with typical energies of several electronvolts could lead to the formation of F − and possibly also CF − , C − and F − 2 anions. For the formation of the stable parent anion, CF − 2 , an efficient collision mechanism for de-excitation of the transient anion CF − * 2 has to be available; the lifetime of the transient anion is otherwise expected to be in the picosecond range or lower. Electron scattering calculations performed by Rozum et al predicted the formation of F − to proceed through a 2 B 1 resonance state with a maximum at 0.95 eV and a width of 0.18 eV. The attachment cross section for CF − * 2 formation was estimated to be 25.76 Å 2 ; it was predicted that about 5% of the formed transient parent CF − * 2 anions subsequently dissociate to form F − + CF [5]. Lee et al [6] found evidence for a 2 B 1 shape resonance at a slightly higher electron energy of 1.5 eV. A study by Francis-Staite et al [7], however, places this resonance considerably lower at less than 0.1 eV. Francis-Staite et al found that polarization has a critical effect in 3 the calculated resonance position; they suggest that the higher energies predicted by Rozum et al and Lee et al could be explained if less polarization had been taken into account in their calculations. A recent paper presenting calculations assessing the importance of electron attachment to CF 2 in CF 4 plasmas called for an experimental investigation of electron attachment to CF 2 [8]. 
There are few previous experimental investigations of low-energy electron collisions with CF 2 because of its high reactivity. Maddern et al [9] and Francis-Staite et al [7] have observed low-energy elastic electron scattering by CF 2 . Experiment The experimental setup, the electron radical interaction chamber (ERIC), has been described previously [10]. Briefly, low-energy electrons from a trochoidal electron monochromator collide with sample molecules in a differentially pumped interaction region. The electron beam is pulsed; when all electrons have left the interaction region, any ions formed are extracted into a time-of-flight (TOF) mass spectrometer. Both positive and negative ions can be observed by reversing the electric fields in the TOF spectrometer. The electron energy scale for the positive spectra was determined using the ionization thresholds of He (24.6 eV) and HF (16.0 eV) [11]. The energy scale for the negative ions formed by dissociative electron attachment to the parent gas was determined using the SF * − 6 peak at 0 eV from electron attachment to SF 6 and with the S − peak from CS at 5.43 eV (see [12]), and the CN − peak from CF 3 CN at ∼1.3 eV [13]. The uncertainty in the electron energy scale is estimated to be ±0.2 eV. The electron energy resolution is ∼200 meV, measured from the full width half maximum (FWHM) of the SF * − 6 peak at 0 eV. CF 2 was produced by passing a mixture of C 3 F 6 and He through a microwave discharge and by reaction of hydrogen atoms with CF 3 I. The plasma region where reactive species are produced is separated by about 25 cm of glass tube from the interaction region. Therefore, the number of vibrationally and electronically excited and very short-lived radical states will be greatly reduced in the interaction region compared to the plasma volume. Frequently in measurements with the C 3 F 6 + He plasma, it appeared that the electron current below 300 meV close to 0 eV was reduced, perhaps due to reactive species affecting surfaces in the monochromator. Therefore, peaks close to 0 eV electron energy may be cut off or show distorted shapes. The energy scale for the negative ions formed by dissociative electron attachment with the C 3 F 6 + He plasma running was calibrated with CS and CF 3 CN because of this distortion at 0 eV. CF 2 was also produced in the stepwise fast atom reaction CF 3 I + H → CF 3 + HI and CF 3 + H → CF 2 + HF. Atomic hydrogen was produced in a H 2 /He microwave discharge again located 25 cm from the interaction region. The H/H 2 /He mixture was mixed with CF 3 I 4-8 cm from the interaction region. Alternative methods can be used for the generation of CF 2 . For example, a clean sample of CF 2 can also be produced via pyrolysis of C 2 F 4 [7,9]. Dissociative electron attachment to the parent gas molecule, C 3 F 6 Dissociative electron attachment to C 3 F 6 has been investigated for comparison with attachment to the gas sample produced in the microwave discharge of C 3 F 6 + He. Anions detected were F − , CF − 3 , C 2 F − 3 and C 3 F − 5 , with F − and C 3 F − 5 being the most abundant. Integrated signals of the strongest F − and C 3 F − 5 anions are shown in figures 1(a) and (b). F − is formed at three positions with maxima at 2.9 ± 0.2, 6.4 ± 0.2 and ∼11.6 ± 0.3 eV. C 3 F − 5 has its peak maxima at ∼3.2 ± 0.2 and 6.3 ± 0.2 eV and C 2 F − 3 around 3.5 and 6.0 eV. Furthermore, CF − 3 is formed at an electron energy of ∼6.5 eV.
The positions of the peak maxima and the relative peak ratios of the different anions observed here are in good agreement with literature values [14,15] within experimental uncertainties. C 3 F 6 /He plasma composition In figure 2(a), a positive mass spectrum of the C 3 F 6 /He parent gas at 15 eV electron energy with the plasma off is shown. C 3 F + 6 , the parent molecular ion, is the most intense signal and the fragments C 2 F + 4 and C 2 F + 5 are also visible. In figures 2(b) and (c), positive mass spectra recorded at 15 eV with the plasma on and at two different pressure conditions are shown. Here, (b) was taken at a lower pressure than (c). In the case of low pressure (b), the plasma etches the Pyrex glass tube at the position of the microwave cavity and Si + dominates the positive mass spectrum together with the CF + 2 and CF + 3 ions. The Cl + signal visible in figure 2(b) originates from Cl atoms that are formed in the discharge from a residue of CCl 4 in the chamber. The C 3 F + 6 signal was found to be weak under all pressure conditions, which implies that the C 3 F 6 parent gas is efficiently converted into other species in the discharge. At higher pressure in (c), the Si + signal is weaker than at low pressure (b) and CF + 2 dominates the positive spectrum. Weak signals of larger ions with masses up to ∼300 amu are also visible in positive spectra (b) and (c). To alter the plasma composition further, C 3 F 6 /He was mixed with SF 6 . In figure 2(d), a positive mass spectrum obtained from a plasma produced with this gas mixture is shown. CS + and CS + 2 are present together with several other sulphur, fluorine and/or carbon-containing ions. CS was subsequently used as a reference molecule in section 3.3 to calculate the CF 2 attachment cross section as its attachment peaks and absolute cross sections had been measured previously [12]. HF + from HF is found in all positive spectra and its formation was enhanced by the addition of SF 6 . HF is probably formed in plasma reactions of plasma species with residual water molecules. The ionization threshold of HF was used to calibrate the positive electron energy scale. HF + is not visible in the mass spectra presented in figures 2(b)-(d) taken at 15 eV as the ionization threshold of HF is at 16 eV [11]. In order to confirm that the CF + 2 signal observed in the positive mass spectra is caused by ionization of the CF 2 molecule, the appearance potential of the CF + 2 signal was measured. The integrated CF + 2 signal as a function of electron energy is shown in figure 3. The CF + and CF + 3 ion curves are shown in the same figure for comparison. CF + 2 can unambiguously be identified to originate from electron impact ionization of the CF 2 molecule as its curve shows a clear onset at the known ionization energy of the CF 2 molecule, 11.44 eV [11]. This implies that CF 2 is indeed present in the gas stream. By contrast, CF + and CF + 3 originate mainly from the fragmentation of larger molecules as the positive ion yield is small below ∼16 eV while their ionization thresholds are low at 8.9-9.4 eV (CF) and 8.6-9.8 eV (CF 3 ) [11]. The positive ion onset curve can also reveal the presence of excited molecules as the ionization thresholds of electronically or vibrationally excited states are, of course, lower than ground state thresholds [16]. Figure 3. Typical ionization curves of CF + , CF + 2 and CF + 3 obtained experimentally. The CF 2 molecule can be identified clearly from the onset of the curve at its ionization threshold, 11.44 eV [11]. By contrast, the detected CF + and CF + 3 ions mainly originate from the fragmentation of larger molecules and not from ionization of CF and CF 3 . There is no significant contribution from vibrationally or electronically excited states in either onset. Excited states may therefore only be present as traces (see text). The ionization onsets of CF 2 and HF recorded experimentally are shown in more detail in figure 4. It can be concluded that at most a trace of electronically excited CF 2 molecules may have been present, as no CF + 2 signal is observed below the ionization threshold of the ground state CF 2 molecule. The electron energy resolution of the present experiment is not sufficient to detect moderate vibrational excitation, but it is clear that there is no significant contribution of CF 2 molecules to the sample with >200 meV of vibrational excitation energy; the energies of the CF 2 vibrational normal modes are ν 1 (symmetric stretch) 152 meV, ν 2 (bend) 82 meV and ν 3 (antisymmetric stretch) 138 meV [17]. Vibrational excitation can lead to considerable shifts in dissociative electron attachment peak maxima positions and enhancements of cross sections [18]. Figure 5. Negative ions observed in dissociative electron attachment to species formed in a C 3 F 6 /He plasma at two different pressure conditions. In (a) the C 3 F 6 inlet pressure is low while in (b) the pressure is higher. Negative ion mass spectra-assignment of attachment peaks to CF 2 Dissociative electron attachment to C 3 F 6 /He plasma species was investigated under a number of different pressure and discharge conditions. Two exemplar data sets are shown as two-dimensional plots (2D) in figures 5(a) and (b). In figure 6 a 2D plot of the negative ions formed in the C 3 F 6 /He/SF 6 gas mixture is shown. The S − and C − bands from dissociative electron attachment to CS are clearly visible between 5 and 7 eV [12]. Apart from the 35 Cl − and 37 Cl − anions, which were observed in some measurements due to residual CCl 4 in the chamber, F − was the most intense anion in all data sets recorded shown in figures 5 and 6. Furthermore, CF − 3 formation takes place at electron energies of ∼3 eV and ∼7 eV, and many heavier anions appear mainly close to 0 eV. It is clear from the data presented so far that many different species are produced in the discharge that give negative ions upon electron attachment. The present discussion of the data will concentrate on CF 2 . As mentioned above, dissociative electron attachment to CF 2 may lead to the formation of C − , F − , F − 2 and CF − . The thermodynamic thresholds for the formation of CF − and F − from ground state CF 2 can be calculated using the dissociation energy of the CF-F bond, 5.20 eV [19], and the electron affinities of CF (>3.30 ± 0.30 eV [11]) and F (3.40 eV [11]) as AE(CF − ) ∼1.90 eV and AE(F − ) 1.80 eV. Calculation of the CF-F bond energy using the heats of formation of CF 2 (−182 kJ mol −1 [11]), F (79.39 kJ mol −1 [11]) and CF (255.22 kJ mol −1 [11]) yields a value of 5.36 eV, which leads to similar results, AE(CF − ) > 2.06 eV and AE(F − ) = 1.96 eV. The experimental literature result for the electron affinity of CF is significantly larger than the values predicted by theoretical calculations of between ∼0.5 and 1.2 eV [20]. Using the theoretical CF electron affinity increases the threshold for the formation of CF − by at least 2 eV.
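As a worked check of the threshold arithmetic quoted above, the short Python sketch below reproduces the appearance energies AE(F−) and AE(CF−) both from the CF-F bond dissociation energy and from the tabulated heats of formation. The numerical inputs are the values quoted in the text; the conversion 1 eV ≈ 96.485 kJ mol−1 is an assumption of the sketch, and the last digits can differ slightly from the quoted values because of rounding of the tabulated inputs.

# Thermochemical inputs quoted in the text
D_CF_F = 5.20          # CF-F bond dissociation energy, eV [19]
EA_F   = 3.40          # electron affinity of F, eV [11]
EA_CF  = 3.30          # electron affinity of CF (lower bound), eV [11]

KJ_PER_EV = 96.485     # assumed conversion factor, kJ/mol per eV
dHf = {"CF2": -182.0, "F": 79.39, "CF": 255.22}   # heats of formation, kJ/mol [11]

# Route 1: directly from the bond energy
print(D_CF_F - EA_F)    # AE(F-)  -> 1.80 eV
print(D_CF_F - EA_CF)   # AE(CF-) -> 1.90 eV

# Route 2: CF-F bond energy from heats of formation, then the same subtraction
D_alt = (dHf["CF"] + dHf["F"] - dHf["CF2"]) / KJ_PER_EV
print(round(D_alt, 2))           # ~5.35 eV here (5.36 eV quoted in the text)
print(round(D_alt - EA_F, 2))    # AE(F-)  ~1.95 eV (1.96 eV quoted)
print(round(D_alt - EA_CF, 2))   # AE(CF-) ~2.05 eV (compare with the ~2.06 eV quoted)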
The thresholds for the formation of C − + F 2 , 7.73 eV, and F − 2 + C, ∼5.87 eV, are considerably higher than those for F − and CF − . These thresholds have been calculated from the electron affinities of C (1.26 eV [11]) and F 2 (∼3.12 eV [11]), the bond energy of F 2 (1.41 eV [21]) and the assumption that breaking the two C-F bonds of CF 2 requires twice the CF-F bond energy. Integrated anion signals of F − and CF − 3 at four different pressure and discharge conditions are shown in figures 7(a) and (b). F − is observed with maxima at ∼0 eV, 2.45 eV, ∼3.5 eV and ∼7 eV. Any of the F − peaks above ∼2 eV could in principle originate from dissociative electron attachment to CF 2 . CF − 3 is observed with maxima at 0 eV, ∼3.6 eV and ∼7 eV. Note that most of the CF − 3 signal close to 0 eV originates from the overlapping band of noise probably produced by metastable dissociation events and also partly from overlapping Cl − 2 . It is interesting to note that no traces of C − or CF − are observed in the negative mass spectra (see figure 5(b)). This means that these negative ions are not formed in dissociative electron attachment to CF 2 or the cross section for their formation is so small that they cannot be detected in this experiment. F − 2 is observed with a maximum at ∼2.8 eV (see figure 6). This is more than 2 eV below the calculated appearance energy of F − 2 from CF 2 and practically excludes CF 2 as a possible candidate for the formation of the detected F − 2 . This leaves only the F − peaks, which could be formed by dissociative electron attachment to CF 2 . F − appears with a maximum close to 0 eV and there is also sometimes a shoulder in the peak at ∼0.7 eV (see figure 7(a)). The interpretation of this signal is difficult. The F − signal below 1 eV consists of two or more overlapping peaks from different parent molecules, the concentration of which may change from measurement to measurement, thus influencing the F − peak, as visible in figure 7(a). F 2 is one possible candidate for the formation of F − at electron energies near 0 eV, despite being observed only very weakly in the positive mass spectra. F 2 has a very small ionization cross section for the formation of F + 2 at 20 eV electron energy of 0.047 Å 2 [22]. By comparison, the ionization cross section of CF 2 is one order of magnitude larger, 0.529 Å 2 [23], at 20 eV. F − formation from F 2 could be detectable in the negative spectra because the cross section for the formation of F − in dissociative electron attachment to F 2 close to 0 eV is very large, 80 Å 2 [24]. Saturated fluorocarbons with up to six carbon atoms are known to have a thermodynamic threshold of at least 1.2 eV for F − formation but form long-lived parent anions at 0 eV [25]. There is little data about dissociative electron attachment to unsaturated species available, but it may be possible that dissociative electron attachment to these larger molecules leads to the formation of F − if the C-F bond strength is lower than in the saturated compounds. The intensities of negative ion peaks in the dissociative electron attachment spectra have been compared between the data sets of the different measurements made. If the relative intensities of two dissociative electron attachment peaks are constant under several different pressure and discharge conditions, then it is likely that these peaks are correlated and the negative ions are formed in dissociative electron attachment to the same parent molecule. 
Similarly, signals from positive and negative ion spectra recorded under identical conditions are compared to identify the parent molecules responsible for dissociative electron attachment processes. A change in the intensity of a positive parent ion of each molecule should be accompanied by a similar change in the intensity of the dissociative electron attachment peaks that originate from the same molecule. The change in the ratio of the intensities of two parent positive ions, I A + to I B + , between two different conditions p 1 and p 2 should be equal to the change in ratios of the intensities of the negative ions formed by the same molecules, I a − to I b − , between p 1 and p 2 in the negative ion spectrum. This relationship can be represented by [10,12] Calculations are made of I A + /I B + from the experimental data to compare the intensity of each parent ion A + in the positive ion mass spectra with the parent B + ion of a 'reference' molecule B. A reference molecule is a molecule present in the gas sample with known electron attachment bands. For each new dissociative electron attachment peak considered, calculations are made of the ratio I a − /I b − , where I a − is the unidentified electron attachment peak intensity and I b − denotes the peak intensity of the electron attachment peak of the reference molecule. Comparison of dissociative electron attachment peaks showed that the F − peak visible at 2.45 eV at low pressures is correlated with the F − 2 peak at 2.8 eV and an SiF − 3 peak close to 0 eV. Integrated signals of these three anions are shown in figure 8. These peaks probably originate from dissociative electron attachment to Si 2 F 6 , which may be formed in plasma etching of the Pyrex glass tube. Using the heats of formation of Si 2 F 6 (−2383.29 kJ mol −1 ) [26] and SiF 3 (−1085.33 kJ mol −1 ) [11], the Si-Si bond energy is calculated to be 2.2 eV. As the electron affinity of SiF 3 is ∼2.4 eV [11], the dissociation channel involving the formation of SiF − 3 is exothermic. A recent calculation yields an Si 2 F 5 -F bond dissociation energy of 6.53 eV [27]. As the electron affinity of F is 3.40 eV [11], this leads to a thermodynamic threshold of F − formation of 3.13 eV, which is ∼0.6 eV above the observed peak maximum. Further evaluation of peak intensities showed that at higher pressures the F − peak at 3.5 eV and the CF − 3 peak at 3.6 eV are roughly correlated in intensity. Those peaks are fairly broad and may originate from dissociative electron attachment to several species, most likely longer fluorocarbon molecules formed by polymerization reactions in the discharge. The F − and CF − bands observed here correspond to bands observed at a similar energetic position in a C 4 F 8 ECR plasma [28]. Stoffels et al [29] observed F − at ∼3 eV in a CF 4 plasma. They concluded that this peak is likely to originate from dissociative electron attachment to C 2 F 6 and C 3 F 8 [29]. A number of further molecules are known to have attachment bands leading to the formation of F − and CF − 3 between 3 and 4 eV, among them C 2 F 4 [30], C 2 F 6 [25], C 3 F 8 [25], C 4 F 8 [28] and n-C 4 F 10 [25]. The F − peak at 6.8 eV and the CF − 3 peak at 7.5 eV were also found to be correlated in intensity. These peaks most likely originate from CF 4 [25,29,31]. An F − peak was also observed in an experiment, where gas was sampled from a CF 4 plasma, at a comparable energetic position [29]. 
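The correlation criterion described above, namely that a common parent molecule should make the positive-ion intensity ratio and the negative-ion intensity ratio change by the same factor between two conditions p1 and p2, i.e. (I A + /I B + ) p1 /(I A + /I B + ) p2 ≈ (I a − /I b − ) p1 /(I a − /I b − ) p2 , can be applied as in the following Python sketch. The intensities used here are hypothetical placeholders, not measured values, and only illustrate the bookkeeping.

def ratio_change(num_p1, den_p1, num_p2, den_p2):
    """Change of an intensity ratio between conditions p1 and p2."""
    return (num_p1 / den_p1) / (num_p2 / den_p2)

# Hypothetical integrated ion intensities (arbitrary units) at two conditions
pos = {"p1": {"A+": 120.0, "B+": 300.0}, "p2": {"A+": 40.0, "B+": 310.0}}
neg = {"p1": {"a-": 55.0,  "b-": 140.0}, "p2": {"a-": 18.0, "b-": 145.0}}

r_pos = ratio_change(pos["p1"]["A+"], pos["p1"]["B+"], pos["p2"]["A+"], pos["p2"]["B+"])
r_neg = ratio_change(neg["p1"]["a-"], neg["p1"]["b-"], neg["p2"]["a-"], neg["p2"]["b-"])

# If r_pos is close to r_neg, the attachment peak a- is consistent with being
# produced by the same parent molecule A that gives the positive ion A+.
print(round(r_pos, 2), round(r_neg, 2))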
This analysis suggests that none of the dissociative electron attachment processes observed is due to CF 2 . Calculation of the maximum dissociative electron attachment cross section of CF 2 A method to calculate dissociative electron attachment cross sections in gas mixtures has been described previously [12]. Briefly, the absolute dissociative electron attachment cross section, σ − A , of a new molecule, A, is estimated by comparison with a reference molecule, B, with known dissociative electron attachment and electron impact ionization cross sections, which is also present in the gas stream, via σ − A = σ − B (n B /n A )(I a − /I b − ), where the relative number density of the neutral reference molecule to the new molecule, n B /n A , is equal to the ratio of their positive ion signals, I B + /I A + , multiplied by the ratio of their absolute ionization cross sections, σ B + /σ A + . The term I a − /I b − is the relative intensity of negative ions a − and b − formed in dissociative electron attachment to A and B, and the known absolute cross section for b − formation is σ − B . Although a peak originating from dissociative electron attachment to CF 2 was not observed in the negative ion spectrum, a maximum dissociative electron attachment cross section can be estimated if it is assumed that all the F − signal at 1.8 eV is from CF 2 , where 1.8 eV is chosen as it is close to the threshold for F − formation from CF 2 . The reference molecule here is CS, which has a known dissociative electron attachment cross section for S − formation at 5.43 eV, 0.025 Å 2 [12], and known electron impact ionization cross sections of 0.7, 1.4 and 2.15 Å 2 at 13, 15 and 17 eV, respectively [32,33]. Ionization cross sections for CF 2 of 0.03, 0.143 and 0.257 Å 2 at 13, 15 and 17 eV [23,34] are also used in the calculation; these CF 2 ionization cross sections were calculated with the BEB model [34] and are available online [23]. The calculated CF 2 ionization cross section values are in excellent agreement with experimental values (see [23,35]). Using the procedure just described and the data obtained in the experiments with the C 3 F 6 /SF 6 plasma, the maximum dissociative electron attachment cross section close to the thermodynamic threshold at 1.8 eV for the dissociation e − + CF 2 → F − + CF has been estimated to be significantly smaller than 5 × 10 −4 Å 2 . The value of the upper limit, 5 × 10 −4 Å 2 at 1.8 eV, does not change significantly if it is, for example, calculated at 2 eV. The upper limit is expected to be correct to within an order of magnitude in the region of the thermodynamic threshold. The limit of 5 × 10 −4 Å 2 is much smaller than the peak value for the dissociation cross section predicted theoretically by Rozum et al [5], which was estimated to be 5% of 25.76 Å 2 at 0.95 eV. This discrepancy may be explained in part by the fact that the thermodynamic threshold for the formation of F − from CF 2 is situated approximately 1-2 eV above the predicted resonance maximum. At 1.4-1.5 eV, however, a dissociative electron attachment cross section of ∼0.04 Å 2 was predicted (5% of 0.8 Å 2 ), which is considerably higher than the experimental upper limit determined here. The experimental results are more consistent with the lower peak resonance energy predicted by Francis-Staite et al [7] to be less than 0.1 eV. If the resonance is located close to 0 eV, the thermodynamic threshold for the formation of F − is nearly 2 eV higher.
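To make the relative-calibration procedure concrete, the Python sketch below estimates an upper limit for the F − cross section from CF 2 using CS as the reference, along the lines described above. The cross sections are the values quoted in the text (σ(S−/CS) = 0.025 Å 2 at 5.43 eV; ionization cross sections at 15 eV of 1.4 Å 2 for CS and 0.143 Å 2 for CF 2 ), whereas the ion intensities are hypothetical placeholders; the sketch also assumes the standard relation that the neutral number density scales as the positive-ion signal divided by the corresponding ionization cross section, which is an assumption of this illustration rather than a statement of the paper.

# Known cross sections quoted in the text (all in Å^2)
sigma_DEA_ref = 0.025    # S- from CS at 5.43 eV [12]
sigma_ion_ref = 1.4      # CS electron impact ionization at 15 eV [32,33]
sigma_ion_CF2 = 0.143    # CF2 electron impact ionization at 15 eV (BEB) [23,34]

# Hypothetical integrated ion intensities (arbitrary units) -- placeholders only
I_pos_ref = 900.0        # CS+ signal at 15 eV
I_pos_CF2 = 450.0        # CF2+ signal at 15 eV
I_neg_ref = 200.0        # S- signal at 5.43 eV
I_neg_F   = 1.0          # F- signal at 1.8 eV attributed entirely to CF2 (upper-limit assumption)

# Relative number density of the reference molecule to CF2, assuming n proportional to I+/sigma+
n_ratio = (I_pos_ref / sigma_ion_ref) / (I_pos_CF2 / sigma_ion_CF2)

# Upper limit: sigma(F-/CF2) = sigma(S-/CS) * (n_CS/n_CF2) * (I_F- / I_S-)
sigma_DEA_CF2_max = sigma_DEA_ref * n_ratio * (I_neg_F / I_neg_ref)
print(f"upper limit ~ {sigma_DEA_CF2_max:.1e} Å^2")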
It seems probable that CF 2 does not form negative fragments upon electron attachment due to this unfavourable energy gap between the position of the resonance and the threshold for F − . Very weak negative ion formation due to the high energy tail of this resonance may not have been observable in this experiment as other molecules present in the gas stream also give F − between 1.8 and 2 eV. Dissociative electron attachment to CF 2 produced in fast atom reactions CF 2 was also produced in the reaction of H atoms with CF 3 I with the formation of HI and HF, as described in section 2. This process is 'cleaner' than the formation of CF 2 from the C 3 F 6 + He plasma reaction as fewer side products are present in the sample. For example, there are no high-mass (CF 2 ) n polymers formed. A positive mass spectrum of the gas sample is shown in figure 9; this mass spectrum is the sum of many spectra taken over the ionization energy range of 14.5-20 eV. The negative ion electron attachment spectrum from this sample is dominated by the I − peak from HI close to zero electron energy. Very little signal was observed from other negative ions. A composite negative mass spectrum, which is the sum of mass spectra taken over the energy range 0-11 eV, is shown in figure 10; the dominant I − signal is clearly visible. All the other ions are orders of magnitude weaker by comparison. The O − peak originates from dissociative electron attachment to residual water vapour in the vacuum chamber and the weak C − signal may originate from the graphite coating of surfaces inside the apparatus. There is also some noise in the spectrum, principally between 20 and 60 mass units. A very weak F − signal is also just visible in figure 10. The variation in intensity of this weak F − signal with electron energy is shown in figure 11 between 1 and 11 eV; there is very little, if any, signal visible above the noise. It is possible that there is some weak signal due to CF 2 above the predicted threshold of 1.8 eV, but it is weaker than the noise in figure 11. HF is known to form F − upon electron attachment at 2.5 eV [18]. The dissociative electron attachment cross section of HF, however, is very small, 2 × 10 −4 Å 2 [18]; this cross section is in the same range as the upper value for CF 2 calculated here from the C 3 F 6 + He plasma data. From these fast atom reaction experiments, it seems very likely that either CF 2 forms no negative ions upon electron attachment or the cross section for negative ion formation is very small. An upper limit for the cross section could not be determined from these data due to the lack of a suitable reference molecule in the gas sample. Conclusions The present experimental investigation has found that F − formation via electron attachment to the CF 2 molecule has a maximum cross section at least four orders of magnitude smaller than predicted from scattering calculations of Rozum et al [5]. This experimental result is consistent with the calculation of Francis-Staite et al [7], which predicts a lower electron attachment resonance energy below 0.1 eV, which is lower than the value of 0.95 eV predicted by Rozum et al. It can be concluded that the importance of CF 2 for the formation of negative species (F − , CF − , F − 2 , C − ) in low-temperature fluorocarbon plasmas is small. As the experiments presented here were carried out under single collision conditions, no information of a possible collisional stabilization of the CF − 2 parent anion was obtained. 
Without stabilisation, the parent anion is expected to be short lived and was not observed in the experiments carried out here. At much higher gas pressures, it may be possible that the CF − 2 formed by electron attachment may be collisionally stabilized and therefore present in fluorocarbon plasmas.
7,215.6
2010-08-17T00:00:00.000
[ "Chemistry", "Physics" ]
Does repeat tibial tubercle osteotomy or intramedullary extension affect the union rate in revision total knee arthroplasty? Background and purpose Tibial tubercle osteotomy (TTO) is an established surgical technique for exposing the stiff knee in revision total knee arthroplasty (RTKA). The osteotomy is usually performed through the anterior metaphyseal cancellous bone of the tibia but it can be extended into the intramedullary canal if tibial stem and cement removal are necessary. Furthermore, repeat osteotomy may be required in another RTKA. We assessed whether intramedullary extension of TTO or repeat osteotomy affected the healing rate in RTKA. Methods We retrospectively evaluated 74 consecutive patients (39 women) with an average age of 60 (29–89) years who underwent 87 TTOs during RTKA. 1 patient had bilateral TTO. 10 patients had repeat TTO and 1 patient received 3 TTOs in the same knee. The osteotomy was extramedullary in 57 knees and intramedullary in 30 knees. Osteotomy repair was performed with bicortical screws and/or wires. Results Bone healing occurred in all the cases. The median time to union was 15 (6–47) weeks. The median healing time for the extramedullary osteotomy group was 12 weeks and for the intramedullary osteotomy group it was 21 weeks (p = 0.002). Repeat osteotomy was not associated with delayed union. Neither intramedullary nor repeat osteotomy was found to increase the complication rate of the procedure. Interpretation Reliable bone healing can be expected with intramedullary extension or repeat TTO in RTKA. However, intramedullary extension of the osteotomy prolongs the union time of the tibial tubercle. The usefulness of tibial tubercle osteotomy (TTO) in revision total knee arthroplasty (RTKA) is well established (Whiteside 1995, Ries and Richman 1996, Bruce et al. 2000, Mendes et al. 2004, van den Broek et al.
2006, Young et al. 2008). A long osteotomy including the tibial tubercle and the proximal part of the anterior tibial crest provides a large bone surface for rigid fixation with multiple wires and/or screws. The osteotomized bone segment is maintained in continuity with the anterior compartment muscles, which preserves the vascularity at the osteotomy site and acts as a distal soft tissue tether resisting the quadriceps pull. Reliable bone healing has been reported with this technique in the majority of primary or revised knee arthroplasties (Whiteside 1995, Ries and Richman 1996, Bruce et al. 2000, Mendes et al. 2004, van den Broek et al. 2006, Young et al. 2008). However, occasional nonunion and proximal migration of the tibial tubercle or fracture of the tibial metaphysis can occur (Whiteside 1995, Mendes et al. 2004, Young et al. 2008). The osteotomy is performed through the cancellous bone anterior to the intramedullary canal, leaving a broad area of bony contact between the osteotomized tibial tubercle and the host tibia. Occasionally, RTKA may require anterior exposure of the tibial canal for cement and stem removal. Intramedullary extension of the osteotomy permits direct access to the tibial prosthesis and surrounding cement but it is also associated with loss of cancellous bone at the osteotomy site. Thus, osteotomy union may be compromised as callus formation can only be achieved at the peripheral cortical bone. For patients who have had a previous RTKA with TTO and require another RTKA, a second osteotomy at the same bone area may be necessary. Surgical dissection during the previous TTO may adversely affect the vascularity and healing potential of the second TTO, predisposing the patient to delayed union or nonunion. We investigated whether intramedullary extension or repeat TTO would affect the union and complication rate in RTKA. Methods We retrospectively evaluated 74 consecutive patients who underwent 87 TTOs during RTKA in a single-surgeon, single-institution setting. All the procedures were performed between November of 1997 and December of 2006. There were 35 men and 39 women with an average age of 60 (29-89) years. 62 patients underwent 1 TTO, 10 patients had repeat TTO, and 1 patient had 3 TTOs in the same knee (Figure 1). 1 patient also had bilateral TTO. Osteoarthritis (54 knees) was the most common indication for the primary total knee arthroplasty (TKA), followed Figure 1. A. Healed primary TTO in a 57-year-old man who underwent RTKA of the left knee. A bicortical screw proximally and 2 wires distally were used for osteotomy fixation. B. 4 years later, implant loosening and instability developed, and another RTKA with a second TTO was performed. Fixation of the osteotomy was achieved with the same osteosynthesis method as the previous TTO. C. Complete consolidation of the osteotomized fragment was seen radiographically 6 months postoperatively. D. 2 years later, knee infection occurred. The implants were removed and an articulated antibiotic-impregnated cement spacer was inserted. A third TTO was performed during removal of the infected TKA and antibiotic cement spacer implantation. The osteotomy was left unfixed to avoid introduction of metallic fixation into a contaminated wound. E. Fixation of the osteotomy with 3 wires was performed during the second stage of RTKA. At the final follow-up 2 years postoperatively, the tibial tubercle was well healed. Preoperative lag of extension and flexion contracture were present in 23 and 30 knees, respectively.
In 8 knees, a rectus snip (6 knees) or a V-Y turndown procedure (2 knees) had been performed earlier during another RTKA. Overall, the average number of RTKAs per patient was 1.5 (1-7). Patients were scheduled to be evaluated clinically and radiographically preoperatively and postoperatively at 6 weeks, 3 months, 6 months, and annually thereafter-or at additional time intervals if residual symptoms and radiographic findings necessitated further examination. The average follow-up was 49 (6-108) months. None of the patients were lost to followup before healing of the osteotomy had occurred. Anteroposterior, lateral, and patellar radiographs were obtained from all the patients for the evaluation of implant position, patellar tracking, and status of the TTO. The measurement of length and width of osteotomy was done electronically with digitized lateral radiographs using imaging software. The osteotomy was classified as extramedullary when the bone cut was through the metaphyseal cancellous bone of the anterior tibia, or intramedullary when it extended more deeply from the inner surface of the tibial tubercle ( Figure 2). The osteotomized bone fragment was considered to be healed to the host tibia when radiographic evidence of bridging callus formation was observed on the lateral radiograph. Surgical technique The procedure was performed through a medial parapatellar arthrotomy and the medial proximal tibia was dissected subperiostally to facilitate tibial external rotation and lateral patellar subluxation. Retropatellar adhesions were released and scar tissue dissected from the medial and lateral gutters. If patellar subluxation was associated with excess tension in the extensor mechanism and there was risk of patellar tendon avulsion, a TTO was carried out. The tibial tubercle together with a segment of anterior tibial crest was elevated in a medial to lateral direction. The bone cut was initiated with a thin oscillating saw and completed with 2 broad osteotomes, leaving a long tibial bone fragment for late repair. In comparison with the previously described technique of Whiteside and Ohl (1990) in which the distal end of the bone cut was made in a transverse manner, the osteotomy was distally tapered to avoid a stress riser in the anterior tibial cortex. Similarly, no step-cut was done proximally and the osteotomy was extended to the knee articular surface. The osteotomized bone segment, along with the attached anterior compartment muscles and patellar tendon, was then hinged laterally to expose the knee. At the completion of the RTKA, the tibial tubercle was reduced in its anatomic position or alternatively displaced up to 1 cm medially and/or proximally to achieve optimal quadriceps tension, knee flexion, and alignment of the extensor mechanism. Lateral release was performed routinely to improve or correct patellar alignment. Osteotomy repair was performed with bicortical screws (9 knees), Luque wires (16 knees), 1 screw and wires (52 knees) ( Figure 1A-C), or 2 screws and wires (7 knees). In infected knee arthroplasty, a 2-stage procedure was followed with a time interval of 6-8 weeks. The TTO was made either during reimplantation of the components (second stage, 33 knees) or at the time of implant removal and introduction of cement-spacer (first stage, 5 knees). In the latter group, the bone fragment with the muscular attachments remained unfixed ( Figure 1D) until the second stage of RTKA ( Figure 1E). 
Postoperatively, no weight bearing or range of motion restrictions were applied and knee flexion exercises were started on the day after surgery. Statistics Statistical evaluation was carried out with the SPSS software package version 16.0. Since data histograms showed skewed distribution of the variables, nonparametric methods of analysis were chosen. Data are presented as median and range. Any differences in union time between extramedullary and intramedullary or first and repeat osteotomy groups were examined using the Mann-Whitney rank-sum test. The Kruskal-Wallis statistic was calculated to test for a dependency between fixation technique and union time. The changes in knee range of motion, extensor lag, and flexion contracture before and after surgery were evaluated with the Wilcoxon signed-rank test. Statistical significance was assumed for a p-value of < 0.05 (or determined with use of a 95% confidence interval). Results The median length and width of osteotomy was 106 (79-153) mm and 13 (8-25) mm, respectively. The tibial tubercle was reattached medially in 16 cases and recessed proximally in another 29 cases. Bone healing occurred in all cases. The median time to union was 15 (6-47) weeks. The osteotomy was extramedullary in 57 knees and intramedullary in 30 knees. The median healing time in the first group was 12 (6-47) weeks and in the second group it was 21 (7-38) weeks (p = 0.002, Mann-Whitney test). The median union time for the first TTO was 15 (6-47) weeks and for the repeat TTO (including the knee with the 3-time osteotomy) it was 21 (7-27) weeks (p = 0.6, Mann-Whitney test). The fixation technique had no detectable effect on osteotomy union (p = 0.2, Kruskal-Wallis test). Avulsion of the proximal part of the tibial tubercle occurred in 3 knees and superior migration of the entire osteotomized fragment was noted in 2 knees. The tibial tubercle fragment displacement was not evident on the immediate postoperative radiographs, but was identified at the time of routine follow-up, either at 6 weeks or 3 months postoperatively. Once displacement of the osteotomy had occurred, knee flexion beyond 100 degrees and quadriceps strengthening exercises were restricted for 3 months. The osteotomy was extramedullary in 3 of the knees mentioned above and intramedullary in the remaining 2. The amount of displacement ranged from 5 to 15 mm and the time to union in the 5 cases was 10, 16, 20, 21, and 47 weeks. 4 of the 5 patients were asymptomatic and had full active extension of the knee. In the last patient with a history of rheumatoid arthritis and steroid use, skin necrosis developed directly over the tibial tubercle, which was associated with 15 mm of superior migration of the proximal portion of the TTO. A medial gastrocnemius muscle flap transposition was performed, and at the 1-year follow-up the patient had an arc of motion of 100 degrees with a 20-degree extensor lag, and walked with a cane. Postoperative manipulation was required in 10 knees. Removal of screw(s) and/or wires due to skin prominence was undertaken in 5 patients after radiographic evidence of osteotomy healing. No other complications were reported. Proper patellar tracking was found in all the operated cases. Discussion Tibial tubercle osteotomy during RTKA has provided good clinical results in most published cases. Van den Broek et al. (2006) reported that successful osteotomy healing was achieved in 37 of 39 RTKAs. An average increase of 12 degrees in ROM was also noted at a mean follow-up of 2.5 years. Young et al. (2008) observed bone union in all but 1 of 41 TTOs during RTKA. Knee flexion and extension were improved by an average of 18 degrees and 5 degrees, respectively. Bruce et al. (2000) found osteotomy healing in all RTKAs (10 cases) and an increase in mean ROM from 60 degrees preoperatively to 78 degrees postoperatively. The authors also reported that 3 knees with preoperative fixed flexion deformity of up to 40 degrees were substantially improved at a mean follow-up of 3 years. Similarly, Mendes et al. (2004) noted union of the tibial tubercle in all but 2 of 67 RTKAs. In addition, 4 of 5 patients who had had extensor lag preoperatively had no extension deficit at the latest follow-up evaluation. In our study, all osteotomies healed and the median knee flexion and range of motion showed an increase of 15 degrees and 35 degrees, respectively. 9 of 23 knees with extensor lag had no deficit after the TTO, while flexion contracture of more than 10 degrees was found only in 5 patients postoperatively compared to 14 patients preoperatively. These observations indicate that favorable clinical outcome can be expected after RTKA with TTO. In our experience, osteotomy has also been effective in more complex or multiple revision knee arthroplasties. We routinely extended the TTO into the intramedullary canal when necessary, to allow direct exposure for removal of well-fixed long cemented tibial stems. We also performed repeat TTO during a later revision of the same knee if adequate exposure could not be achieved with a less extensile approach. Finally, we left the TTO unfixed for a period of 6-8 weeks until the second-stage treatment of infected TKA. Our results illustrate that these techniques can be associated with the same high union rates and low complication rates reported for TTO by other authors (Bruce et al. 2000, Mendes et al. 2004, Ries and Richman 1996, van den Broek et al. 2006, Whiteside 1995, Young et al. 2008). Occasional tibial fracture or migration of the tibial tubercle can, however, occur after TTO. Whiteside (1995) pointed out that all the TTOs during 110 revision arthroplasties that were fixed with 2 or 3 wires had healed, although 3 tibial fractures and 2 cases with proximal migration of the tibial tubercle complicated the surgical procedure. Mendes et al. (2004) reported that the osteotomy had slipped proximally up to 2 cm in 13 of 67 knees, but no extensor lag was identified. The osteotomy was done in a proximal step-cut and distal bevel-cut fashion, and primarily fixed with wires. Furthermore, 2 patients sustained a tibial stress fracture as a result of mechanical weakening of the anterior tibia cortex. Van den Broek et al. (2006) observed also that 4 of 39 proximally step-cut osteotomies migrated superiorly. However, re-fixation of the tibial tubercle was carried out in only 2 of them. In our study, the tibial tubercle migrated proximally in 5 of 87 knees. Apart from 1 case with concomitant wound healing problems requiring muscle flap transposition, no extension lag occurred. These findings suggest that slight superior displacement of the tibial tubercle can occasionally occur after TTO, although this is not necessarily associated with extensor mechanism dysfunction. Favorable results have been reported after fixation of the tibial tubercle by using either wires (Whiteside 1995, Barrack et al. 1998, Mendes et al. 2004, Young et al. 2008), screws (Wolff et al.
1989, Ries and Richman 1996, van den Broek et al. 2006, or both (Bruce et al. 2000). Van den Broek et al. (2006) reported that lag screws led to consolidation of the tibial tubercle within 6 weeks in 14 RTKAs, within 3 months in 16 knees, and within 6 months in 6 knees. Young et al. (2008) noted that the mean union time with use of 3 double-stranded Luque wires was 14 (8-24) weeks. Bruce et al. (2000) found that the average time to union at the proximal and distal ends of the osteotomy was 8 and 24 weeks, respectively, after fixation with wires. We found no difference in union time between the fixation groups, and healing occurred even after 1 or 2 previous osteotomies. Repeat osteotomy was successful without increasing the time to union or the incidence of tibial tubercle migration. We believe that preservation of the musculature sleeve is of primary importance for bone segment viability, and stability and a good result can be achieved after a previously performed TTO osteotomy by using either wire or screw fixation. This is further supported by the relative stability of the unfixed tibial tubercle in the staged treatment of infected TKA and the early callus formation at the osteotomy site, which was often observed during the second-stage reimplantation procedure. The TTO remained stable after insertion of a cement spacer, but in the absence of internal fixation probably because of the distal soft tissue attachment to the anterior compartment muscles. The tibial tubercle segment is required to be of adequate size to facilitate secure hardware fixation and provide a wide contact area between the opposing bone surfaces. Wolf et al. (1989) found a high failure rate in osteotomies of less than 3 cm in length, which could not safely accommodate lag screws. However, more recent studies have not shown any major problems with longer osteotomies of more than 6-7 cm. Similarly, the depth of osteotomy has been considered an important factor for the success of the technique and values of approximately 1-2 cm, at the point of tibial tubercle, have been recommended for prevention of bone fragmentation and comminution (Ries and Richman 1996, van den Broek et al. 2006, Whiteside 1995. We found that intramedullary extension of the bone cut is associated with an increase in the union time. We believe that this delay is related to the relatively small cortical bone-to-bone contact area and lack of cancellous bone apposition at the site of tibial osteotomy. However, bone union occurred in all cases requiring intramedullary extension of the osteotomy without any postoperative restrictions in mobility. The good healing capacity of TTO indicates that it can be safely extended into the intramedullary canal to allow access for cement and tibial stem removal. The inherent limitations of this study include its retrospective design and the small number of patients. In addition, the evaluation of healing time was based on the relatively subjective criterion of radiographically bridging callus formation at the osteotomy site. Thus, the overall sensitivity and specificity of the method could not be clearly defined. Weight bearing activity was not quantitated. However, unrestricted knee motion and weight bearing after surgery did not appear to compromise the rate of osteotomy healing. In conclusion, tibial tubercle osteotomy can be successfully performed more than once in RTKA. Intramedullary extension of the osteotomy provides adequate access to the tibial bone canal, but prolongs the union time.
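As an illustration of the nonparametric comparison used in the Statistics section above (union time in the extramedullary versus intramedullary groups, compared with the Mann-Whitney rank-sum test), the following Python sketch runs the same kind of test on hypothetical healing times in weeks; the numbers are invented for demonstration only and are not the study data.

from scipy.stats import mannwhitneyu

# Hypothetical union times in weeks (NOT the study data)
extramedullary = [8, 10, 11, 12, 12, 14, 15, 16, 20]
intramedullary = [14, 18, 20, 21, 22, 25, 30]

stat, p_value = mannwhitneyu(extramedullary, intramedullary, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")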
4,681.6
2009-01-01T00:00:00.000
[ "Medicine", "Engineering" ]
Dermal Exposure Assessment to Pesticides in Farming Systems in Developing Countries: Comparison of Models In the field of occupational hygiene, researchers have been working on developing appropriate methods to estimate human exposure to pesticides in order to assess the risk and therefore to take the due decisions to improve the pesticide management process and reduce the health risks. This paper evaluates dermal exposure models to find the most appropriate. Eight models (i.e., COSHH, DERM, DREAM, EASE, PHED, RISKOFDERM, STOFFENMANAGER and PFAM) were evaluated according to a multi-criteria analysis and from these results five models (i.e., DERM, DREAM, PHED, RISKOFDERM and PFAM) were selected for the assessment of dermal exposure in the case study of the potato farming system in the Andean highlands of Vereda La Hoya, Colombia. The results show that the models provide different dermal exposure estimations which are not comparable. However, because of the simplicity of the algorithm and the specificity of the determinants, the DERM, DREAM and PFAM models were found to be the most appropriate although their estimations might be more accurate if specific determinants are included for the case studies in developing countries. The Pesticide Issues Pesticides are key elements of pest management programs in modern agriculture to increase the levels of production. Their use is stimulated by the commercialization and intensification of agriculture, the difficulty in expanding cropped acreage, the increased demand for agricultural products as the population increases, and the shift to cash crops for domestic and export sales [1]. It is estimated that annually some 2.5 million tons of pesticide are used worldwide and 220,000 people die because of poisoning from these substances. Most of these poisonings occur in developing countries because of weak safety standards, minimal use of protective equipment, absence of washing facilities, poor labeling, and lack of information programs [2][3][4][5][6]. Public health experts have expressed increasing concern about the use of pesticides because epidemiological studies have found that they are associated with different types of cancers [7][8][9][10], neurologic pathologies [11][12][13], respiratory symptoms [14] and hormonal and reproductive abnormalities [15][16][17][18][19]. Regardless of the risks involved in the use of pesticides, they are considered a key input to agriculture allowing intensive production techniques [20]. Therefore, it is crucial to assess the risk due to pesticide use by improving their management, reducing the exposure and protecting human health. The agricultural sector in Colombia uses 3.8 million hectares of land for permanent and transitory crops. During the last decade, an average of 82,000 tons of pesticides were applied per year (17% insecticides, 47% herbicides and 35% fungicides and bactericides) [21]. This suggests that part of the population and the environment in Colombia are likely to be exposed to negative effects derived from pesticide use. For instance, the potato farming system occupies 128,700 ha with 230,000 production units which had a production of 2.3 million tons in 2012 and used 32.5 kg/ha of pesticide active ingredients [22]. Therefore, the quantification of human exposure to pesticide use in farming systems like potato crops is crucial to provide information about the level of risk faced by farmers and workers and to support the development of proper policy measures. 
Risk Assessment of Pesticide Use in Developing Countries In the agricultural field, there is an increasing concern about the health of farmers, workers and bystanders, since they might be frequently exposed to pesticides for long periods of time. Governments, especially in developed countries, have introduced new environmental policies about the adequate use of pesticides. Meanwhile, in developing countries such as Colombia, a similar attempt has been made, but even though the regulation scheme is already defined, it is not efficiently implemented due to the lack of information about exposure assessment and risk characterization [23,24]. The definition and implementation of these environmental policies is a further step after a risk assessment. Therefore, it is crucial to establish a method for the risk assessment of pesticide application in developing countries focusing on the exposure assessment and the risk characterization. The conclusions drawn from this method will be useful to stakeholders not only for improving the risk assessment scheme by identifying the critical factors that influence the exposure concentrations, but also for developing pedagogical programs about the appropriate use of pesticides. The risk assessment of pesticide application can be divided into two essential parts: exposure assessment (qualitative and quantitative description of the exposure concentrations and related dose for specific pathways) and effects assessment (determination of the intrinsic hazards associated with the agent and quantification of the relationship between the dose at the target tissue and the related harmful outcomes) [25][26][27][28]. The first part is known as the initial portion of the environmental health paradigm: from sources, to environmental concentrations, to exposure, to dose. The effects assessment addresses the latter portion of the events continuum: from dose to adverse health effects. In the occupational hygiene field, attention has shifted to research on exposure in the agricultural workplace to improve pesticide management and to reduce the health risk [28]. This is of special interest in developing countries because pesticide management activities face weak safety standards [3,5,6,29]. Studies in potato farming systems in Vereda La Hoya, Colombia [3,5,23,24,[30][31][32][33], Mojanda, Ecuador [34] and El Angel, Ecuador [35] have shown that pesticide management has no particular theoretical basis and instead is performed by trial and error, finding out what works in practice. Furthermore, farmers do not wear adequate personal protective equipment, apply pesticides which are banned in industrialized countries and modify the standard discharge of nozzles to reduce the application time [31]. Because these issues increase the health risk due to human exposure, a risk assessment of pesticide use in these areas is required in order to determine the risk level. Modeling Dermal Exposure to Pesticide Use Indirect methods to assess human exposure have been used since the early 1990s [36].
Tools for dermal exposure, such as the Control of Substances Hazardous to Health (COSHH) regulations [37], the Dermal Exposure Assessment Method (DREAM) [38], the Estimation and Assessment of Substance Exposure (EASE) model [39], the European Predictive Operator Exposure Model Database (EUROPOEM) [40], the Pesticides Handlers Exposure Database (PHED) [41], the Risk Assessment of Occupational Dermal Exposure to Chemicals (RISKOFDERM) model [42], the Qualitative Assessment of Occupational Health Risks tool (STOFFENMANAGER) [43], and the approaches proposed by the U.S. EPA [44] are targeted at occupational situations encountered in industrial processes in Europe and the USA; they do not consider agricultural processes such as pesticide management, and there might be uncertainties when they are applied in study areas in developing countries. The Dermal Exposure Ranking Method (DERM) [45] is a method focused on occupational activities in pesticide management in developing countries; nonetheless, its semi-quantitative estimations still lack reliability and validity [46,47]. The Pesticide Flow Analysis Model (PFAM) [48] is a model focused on farming systems in developing countries based on the material flow analysis method; however, it has not yet been validated. Because of the lack of studies about the application and further evaluation of these models in farming systems in developing countries, there is no consensus about the best method to evaluate dermal exposure and the health risk in those systems. Therefore, the existing models for dermal exposure (DERM, DREAM, PHED, RISKOFDERM, COSHH, STOFFENMANAGER, EASE and PFAM) were evaluated in order to find the most appropriate one to apply in case studies in developing countries. In the course of this evaluation the following research questions were addressed: 1. Which of the existing models for dermal exposure assessment are feasible to apply in case studies in farming systems in developing countries? 2. According to the parameters and determinants included in the model structure, which model assessment is more complete in terms of the evaluation of dermal exposure? 3. When comparing the model outcomes with the dermal exposure measurements in the study area, which model assesses dermal exposure more accurately? Multi-Criteria Analysis After a literature review, eight available models were considered for the analysis: COSHH [37], DERM [45], DREAM [38], EASE [39], PHED [41], PFAM [48], RISKOFDERM [42], and STOFFENMANAGER [43]. These models were selected because of their availability, clear model description and their potential applicability for the assessment of pesticide use in farming systems in developing countries. They were analyzed according to a group of criteria, namely availability, guidance, knowledge required, reliability, type of outcome, type of substance, target group, dermal exposure descriptor and dermal exposure pathway, which are explained in Table 1. Estimation of Dermal Exposures in the Study Areas From the results of the multi-criteria analysis, and based on the model characteristics, five models (i.e., DERM, DREAM, PFAM, PHED, and RISKOFDERM) were selected to be applied in the case study of potato farming systems in Vereda La Hoya in the highlands of Colombia. The data used as input come from a previous survey conducted in the study area with 197 smallholder potato growers in four communities [3] and from previous studies about dermal exposure in the same study area [24,31]. The input data and the scoring system for each determinant within each model are shown in the annexes.
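To illustrate how such a criteria-based screening can be organized in practice, the sketch below encodes a feasibility filter over the Table 1 criteria; the criterion names follow the list above, but the pass/fail entries in the example data are hypothetical placeholders, not the scores actually assigned in this study.

```python
# Minimal sketch of the multi-criteria screening step (illustrative only).
# The criteria follow Table 1; the pass/fail entries below are hypothetical.
CRITERIA = ["availability", "guidance", "knowledge_required", "reliability",
            "type_of_outcome", "type_of_substance", "target_group",
            "exposure_descriptor", "exposure_pathway"]

# Criteria each model fails for farming systems in developing countries (toy data).
FAILED = {
    "COSHH":          {"target_group", "type_of_outcome", "type_of_substance"},
    "EASE":           {"target_group", "guidance", "type_of_outcome"},
    "STOFFENMANAGER": {"target_group", "guidance", "type_of_outcome"},
    "DERM": set(), "DREAM": set(), "PHED": set(), "RISKOFDERM": set(), "PFAM": set(),
}

def feasible(model, failed=FAILED):
    """A model passes the screening if it fails none of the criteria."""
    return all(criterion not in failed[model] for criterion in CRITERIA)

if __name__ == "__main__":
    selected = [m for m in FAILED if feasible(m)]
    print("Models retained for the case study:", selected)
```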
Because the PFAM model requires a specific pesticide with the total amount applied per hectare, the dermal exposure was estimated for the pesticide methamidophos. Description of the Study Area The study area is located in Vereda La Hoya near Tunja, the capital city of the province of Boyacá, Colombia. This is a rural region devoted mainly to the cultivation of potato in production units of around 3 hectares in size. The crop depends on rainfall; therefore, production is generally organized into two periods, one from March to September and another from October to February, which correspond to the two rainy seasons. Average annual productivity is 18.3 ton/ha [22]. Potato crops in this region are vulnerable to three major pests: the soil-dwelling larvae of the Andean weevil (Premnotrypes vorax), the late blight fungus (Phytophthora infestans) and the Guatemalan potato moth (Tecia solanivora) [22]. These pests, together with the weeds present in the early phases of the crop, are controlled by the application of chlorothalonil, chlorpyrifos, cymoxanil, glyphosate, mancozeb, methamidophos and paraquat [5,32]. In the study area, pesticide management comprises three main activities: the preparation of the pesticide, the application itself, and the cleaning of the spraying equipment. During the whole pesticide management, farmers use work clothing consisting of trousers, short-sleeve shirts and plastic boots. These three activities have the following characteristics: (a) Preparation: This activity includes opening the bottle containing the pure pesticide substance, mixing the solution of (different) pesticides and water, and loading the tank of the knapsack sprayer. Farmers in Vereda La Hoya prepare the pesticides in a 100-L or 200-L capacity container. The pesticide and the water (normally 80 L to obtain four applications of 20 L each) are mixed in this container with the aid of a wooden stick. During the mixing and the filling of the tank there are usually spills from the container affecting different parts of the body, including hands, arms, chest and legs; (b) Application: Once the knapsack sprayer is carried on the back, the pesticide application starts with the spraying process on the field. During this activity the farmers' body is exposed to the droplets emitted by the nozzles. In the study area the spraying is performed with hand pressure sprayers which are, on average, 9 years old [3,24]. They consist of a tank with a 20-L capacity, an injection and pressure system with an external piston pump and a pressure chamber with a capacity of 21 bar, a spraying pressure of 3 ± 0.3 bar and a pressure range between 1 and 14 bar. Farmers use two types of nozzles for pesticide application which differ in the amount of pesticide discharged: a high-discharge (HD) nozzle used during the first crop phases (sowing and emergence) and a low-discharge (LD) nozzle used during the rest of the crop phases (growth, flowering and pre-harvest). The discharges of the HD and LD nozzles measured in the study area were 1.88 ± 0.12 L/min (n = 24) and 1.26 ± 0.08 L/min (n = 24), respectively. Farmers purchase standard-discharge nozzles of 1.05 ± 0.02 L/min (n = 8) and then modify the plastic and metal structures of the nozzles in order to obtain these discharges; (c) Cleaning: Once the application is finished, farmers clean the sprayer and the container by pouring clean water on all the accessories in a procedure repeated three times.
This procedure is included in the booklet "Good Agricultural Practices" [49], which farmers use as a reference for pesticide management. During this activity, there are numerous spills from the equipment and the accessories reaching the farmer's body. Previous studies have measured the dermal exposure and made an attempt to assess the health risk. These results are shown in Table 2. Multi-Criteria Analysis The multi-criteria analysis found that only DERM, DREAM, PHED, RISKOFDERM and PFAM can feasibly be applied in case studies in developing countries (Figure 1, Table 3). COSHH was excluded from the evaluation because it does not meet important criteria relevant for case studies in developing countries: regarding the target group, it is focused on guidance for small and medium enterprises (SMEs); it is only available on a website, with a user's manual for only some specific industries; concerning the outcome, its assessment is qualitative; regarding evaluated substances, it does not evaluate pesticides in farming systems; its dermal exposure descriptor only assesses the potential exposure; and concerning evaluated body parts, it does not make a distinction between body parts. EASE was also excluded from the evaluation: regarding the target group, it is focused on industrialized processes; regarding guidance, there is no user's manual with the model description; it provides a qualitative assessment; its dermal exposure descriptor only evaluates the potential exposure; and as to evaluated body parts, it only considers arms and forearms. STOFFENMANAGER was also excluded from the evaluation because it does not comply with several criteria: regarding the target group, it is focused on industrial processes; regarding guidance, the website does not show the algorithms or model calculations; its outcome assessment is qualitative; and there is no information available regarding evaluated body parts. Estimation of Dermal Exposures in the Study Areas According to the previous results, DERM, DREAM, PHED, RISKOFDERM, and PFAM were selected as the most appropriate models to be applied in the case study of Vereda La Hoya. The determinants included in each model are shown in Table 4 and the input data considered for each model are given in Appendix Tables A1-A5. Even though the evaluated dermal exposure models provide insights into the level of exposure, their outcomes differ because of the model structure and the determinants included in each model (Table 5). Previous direct measurements in Vereda La Hoya found that dermal exposure to pesticides is very high (Table 2) because of the inadequate work clothing, the modification of nozzles to increase the discharge, the inappropriate cleaning of the application equipment, the pesticide application against the wind direction and the use of pesticides with a high level of toxicity [24,31]. Actual dermal exposure values were also found to be higher than the reference values for human exposure for some pesticides, like methamidophos [24,31]. Therefore, from the comparison of the models' estimations and the type of determinants considered by each model, DERM, DREAM, and PFAM were found to be the most appropriate models. However, PHED might give an inaccurate estimation because the model determinants are relevant for farming systems in industrialized countries.
Even though the model includes pesticide application scenarios which might be useful for developing countries, it does not assess processes like pesticide emission and transfer, which are important processes within the mass transport quantification and which, according to Schneider [50], should be included in the conceptual model for dermal exposure assessment. The RISKOFDERM estimation might also be inaccurate because the model evaluates the exposure according to a percentage of the body exposed, and the quantitative estimation cannot be compared with reference values of human exposure, as the pesticides have different levels of toxicity and the model only gives a qualitative rating of "high" based on the quantitative estimation. DERM is an appropriate model because of the specificity of its determinants for case studies in developing countries; however, its estimations might be underestimated because important determinants are not considered, such as washing the equipment, task duration, wearing of gloves and their frequency of replacement, work clothing, personal hygiene and climate conditions. Therefore, this model has the potential to increase the accuracy of its estimations when these determinants are included in the assessment. DREAM was found to be an appropriate model, as its estimation corroborates the dermal exposure assessment made in the location [24,31]; however, the estimation accuracy might be improved if the protection factor is differentiated according to the different body parts and other determinants are considered, such as climate conditions like wind speed and humidity. If these missing determinants are included, the model's scope will widen to cover not only farming systems in industrialized and developing countries but also other industrial processes. Finally, PFAM was found to give a quantitative assessment in terms of potential and actual exposure and of how the protection factor influences the actual exposure. In addition, it can assess the risk for each pesticide separately. However, it needs to be calibrated with direct measurements before it can be implemented in study areas with the same characteristics. Nevertheless, this model has the advantage of complying with all the required criteria for implementation in case studies in developing countries. These results are valid for potato farming systems and many other crop systems with similar characteristics in different regions of Latin America, and might also be valid for other regions worldwide with similar pesticide applications in Africa or Asia. However, the results are not valid for more sophisticated pesticide applications in crops in developing countries such as flowers, banana, coffee, sugar cane and rice. All the models for human exposure such as COSHH [37], DREAM [38], EASE [39], PHED [41], RISKOFDERM [42] and STOFFENMANAGER [43] were developed following the conceptual model proposed by Schneider in 1999 [50,51]. Therefore, they were developed with similarities in the structure of the determinants. However, they were built for case studies in industrialized countries and there are uncertainties about their application in developing countries.
For instance, COSHH is specialized in SMEs in the UK; DREAM, in industrialized countries and farming systems in The Netherlands, where tractors and motorized pesticide applications are used; EASE, in industrialized processes in the UK; PHED, in regulatory agencies and the pesticide industry in the USA and Canada; RISKOFDERM, for operational and technical staff in SMEs; and STOFFENMANAGER, for Dutch companies. Some agricultural case studies in developing countries are characterized by manual pesticide applications with no regulations about adequate pesticide use and no use of personal protection equipment. Only the DREAM model has been applied in study areas in developing countries, but the model has not been validated because of some issues regarding the reproducibility and accuracy of its dermal exposure estimations [54]. Furthermore, this research found that when this model is applied in case studies in developing countries, most of the determinants do not cover the specific characteristics of these study areas. Based on DREAM, Blanco attempted to develop a model for farming systems in developing countries with DERM; however, this model has faced problems in its validation because of inappropriate procedures in the methodology [47]. However, despite these inaccuracies in the estimations of all the evaluated models, their structure has the potential to be redefined to include other determinants, which might be the origin of a brand new model for dermal and human exposure assessment in farming systems in the developing world. Conclusions This research evaluated models for dermal exposure assessment focusing on case studies in developing countries. From the multi-criteria analysis and the type of determinants included in the models, DERM, DREAM, PHED, PFAM and RISKOFDERM were found to be the most appropriate models to assess dermal exposure in developing countries. Regarding the specificity to farming systems in developing countries, DERM, DREAM and PFAM include determinants which are relevant for the system characteristics in the study area. However, all five selected models can be modified in their structure in order to include parameters or determinants which might increase the accuracy of the estimations. The evaluated models have the potential to assess industrial and agricultural processes in industrialized and developing countries. However, DREAM was found to have a number and type of determinants that not only increase the accuracy of the estimation but might also serve as a basis to develop a new model including more determinants with higher specificity to farming systems in developing countries. Previous studies found that, because of the inadequate work clothing, the modification of nozzles to increase the discharge, the inappropriate cleaning of the application equipment, the pesticide application against the wind direction and the use of pesticides with a high level of toxicity, dermal exposure was very high: both the potential and actual exposure for some pesticides were higher than the reference values for human exposure. Therefore, when comparing these results with the model estimations, it was found that DREAM and PFAM gave the most accurate estimations. However, it is important to take into account that DREAM is a semi-quantitative model that is easy to apply in the case studies. By contrast, PFAM gives a quantitative estimation, but the transfer coefficients must be determined in the field in order to calibrate the model.
Acknowledgments This research was funded by the Swiss National Science Foundation. The first phases were developed in cooperation with the University of Zurich, University of Graz, Ludwig Maximilian University of Munich, ETH Zurich, University of Boyaca and National University of Colombia. The final phase, concerning the analysis of results and the final publication, was developed at Saint Thomas University in Tunja, Colombia. Author Contributions The results presented in this paper are part of the doctoral thesis "Human Exposure Assessment of Pesticide Use in Developing Countries" developed by Camilo Lesmes Fabian and supervised by Claudia R. Binder within the project "Life Cycle Human Exposure and Risk Assessment of Pesticide Application on Agricultural Products in Developing Countries". The first author had the original idea of comparing the models, including the PFAM model. The manuscript was drafted and revised by the authors within the final document of the doctoral thesis. [Excerpt from the appendix tables (determinant scoring notes): during the whole pesticide application procedure there is leakage from the sprayer and the upper back is exposed; there is a potential exposure over about 60% of the body surface, partly because the region has strong wind; emission and transfer to clothing and uncovered skin (P T.BP) are scored by task-duration classes (<1% of task duration = 0, <10% = 1, 10-50% = 3, ≥50% = 10), with emission occurring during the whole application process and transfer during 10-50% of the task duration; exposed skin covers ≥50% of a body part (score 10) and deposition on clothing covers 10-50% of a body part (score 3); the system covers the three pesticide management processes; the pesticide solution is mixed with different chemicals in water; 96% of the farmers spray their pesticides (insecticides, fungicides, herbicides) with a backpack sprayer; the sprayers used in the study area are between 8 and 11 years old, so multiple repairs are made (repair = 2); farmers modify the nozzles, and the two types of nozzles (HD and LD) were considered; the protection factor given by work clothing, calculated for the application activity, is high (>90%) for legs, thighs, chest, abdomen and lower back for both nozzle types, but low for the arms (51.8 to 88%) and the upper back (74.8 to 82.6%).]
5,688.2
2015-04-29T00:00:00.000
[ "Economics" ]
An Approach for Reconstruction of Realistic Economic Data Based on Frequency Characteristics between IMFs Reconstruction of realistic economic data is often required when social economists analyze the underlying driving factors in time-series data or study volatility. The intrinsic complexity of time-series data interests and attracts social economists. This paper proposes the bilateral permutation entropy (BPE) index method to solve the problem, based on partly ensemble empirical mode decomposition (PEEMD), which was proposed as a novel data analysis method for nonlinear and nonstationary time series, and compares it with the T-test method. First, PEEMD is extended to the case of gold price analysis in this paper for decomposition into several independent intrinsic mode functions (IMFs), from high to low frequency. Second, the IMFs are combined into three parts, namely a high-frequency part, a low-frequency part, and the overall trend, based on a fine-to-coarse reconstruction by the BPE index method and the T-test method. Then, this paper conducts a correlation analysis on the basis of the reconstructed data and the related macroeconomic factors, including global gold production, world crude oil prices, and world inflation. Finally, the BPE index method proves to be a significant technique for time-series data analysis in terms of reconstructing IMFs to obtain realistic data. Introduction The importance of revealing the underlying characteristics of macroeconomic data has attracted considerable attention from social economists studying its underlying driving mechanism [1,2]. Because they are affected by complex factors, especially noise signals, macroeconomic data are difficult to decompose into more economically meaningful components. As an empirical, intuitive, direct, and self-adaptive data processing method, empirical mode decomposition (EMD) is a novel data analysis method, which is utilized to decompose time-series data into a small number of independent intrinsic modes based on a local characteristic scale, and the IMFs have specific economic meanings [3,4]. Then, the ensemble empirical mode decomposition (EEMD) [5], the complete ensemble empirical mode decomposition (CEEMD) [6], and PEEMD [7], as improvements of the EMD algorithm, are widely applied in the decomposition of time-series data to obtain more accurate IMFs by eliminating the effects of interfering signals [8][9][10][11], and hybrid models based on them are exploited to predict time-series data starting from the IMFs' numerical distribution characteristics [6,12,13]. Furthermore, it is necessary to classify the IMFs according to the potential influencing factors in order to study the influence of the driving factors. There are several methods and concepts for reconstructing the IMFs. Zhang et al. [14] proposed the T-test method to synthesize the IMFs into components of more realistic economic significance, which obtains the reconstructed high- and low-frequency data based on the frequency characteristics of the IMFs. Yu et al. [15] proposed a decomposition-ensemble methodology with data-characteristic-driven reconstruction based on two promising principles: "divide and conquer" and "data-characteristic-driven modeling." Aamir et al. [16] proposed a decomposition-ensemble model with reconstructed IMFs for forecasting crude oil prices based on the well-known autoregressive integrated moving average (ARIMA) model. Gao et al.
[17] proposed using average mutual information (AMI) for the reconstruction of the modes of decomposition. This paper proposes an economically meaningful reconstruction method based on the BPE index [18] to classify high- and low-frequency data, which compares the chaos degree of the synthetic signal and the adjacent IMFs based on the frequency relationship between IMFs; unlike the T-test reconstruction method, it does not rely on the independent frequency distribution within each single IMF. This paper selects gold price data as the application object; gold is not only a crucial foundation of the international monetary system but also plays an important role in national economic security, financial stability, and national defense security, especially in the context of the deterioration of the international financial environment and international political turmoil [19,20]. This paper utilizes the T-test method and the BPE method to divide the IMFs into high-frequency data, low-frequency data, and a trend part based on PEEMD [7]. Then, a correlation analysis between the composed data and related factors is carried out to explain the rationality of the new composition method for gold price analysis. The rest of the paper is organized as follows: Section 2 gives a brief introduction to the PEEMD, T-test, and BPE algorithms; then, a new reconstruction method based on BPE is proposed. Section 3 describes the process of application to gold prices based on PEEMD and BPE. Section 4 presents a detailed analysis based on the composition of intrinsic modes and verifies the rationality of the different composition methods. Section 5 concludes the paper. PEEMD Algorithm. The PEEMD algorithm, as an improvement of the EMD algorithm, is a nonlinear, nonstationary, and self-adaptive data processing method [7,12,21]. Under the assumption that the data may contain several different coexisting modes of oscillation together with some noise, PEEMD can extract the intrinsic modes in the original data without noise signals by utilizing permutation entropy (PE) [22,23] to estimate the effect of the noise signals. The PEEMD is described as follows: (i) S(t) is a given time-series signal. The pair of white-noise series n_i(t) and −n_i(t) are added to S(t), giving r^±_{i1}(t) = S(t) ± a n_i(t), where i indicates the index of the added white-noise pair, i = 1, 2, . . . , Ne, j is the number of iterations for decomposing the IMFs that meet the requirements, and a is the amplitude of the added white noise. (ii) First, EMD decomposes the two signal series r^+_{ij}(t) and r^−_{ij}(t) to obtain two IMF sets I^+_{ij} and I^−_{ij} as well as two residue sets r^+_{i,j+1}(t) and r^−_{i,j+1}(t). (iii) By assembling the final IMF of the jth rank to eliminate the effect of the added pairs of white-noise signals, the following is obtained: I_j(t) = (1/(2Ne)) Σ_{i=1}^{Ne} [I^+_{ij}(t) + I^−_{ij}(t)]. (iv) The PE of I_j(t) is calculated and compared with the threshold θ_0, which is set to reject the intermittency or noise signals in the original data. The PE is calculated at each step, and steps (i)-(iii) are repeated until PE_j is smaller than θ_0. (v) Then, the first j − 1 IMFs are considered intermittency or noise signals, which should be separated from the original signal, and the residue is expressed as r(t) = S(t) − Σ_{p=1}^{j−1} I_p(t). (vi) r(t) contains the different coexisting modes of oscillation without the noise signal, and it is decomposed completely by EMD: r(t) = Σ_k c_k(t) + r_n(t). (vii) The c_k(t) are regarded as the IMFs following the first j − 1 IMFs. The initial signal is then described as S(t) = Σ_{p=1}^{j−1} I_p(t) + Σ_k c_k(t) + r_n(t).
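A compact sketch of this loop is given below; it assumes a generic `emd(signal)` routine that returns the IMFs of a signal (for instance from an EMD library) and a `permutation_entropy` helper such as the one shown later in the BPE sketch. The noise amplitude, ensemble size and threshold in the signature are illustrative defaults, not the settings used in the paper.

```python
import numpy as np

def peemd(signal, emd, permutation_entropy, a=0.2, ne=50, theta0=0.6,
          order=6, delay=1, max_noise_imfs=10):
    """Sketch of partly ensemble EMD (PEEMD): noise-assisted extraction of the
    first, noise-like IMFs, followed by plain EMD of the remaining residue."""
    rng = np.random.default_rng(0)
    residue = np.asarray(signal, dtype=float)
    noise_imfs = []
    for _ in range(max_noise_imfs):
        # (i)-(iii): add +/- white-noise pairs, take the first EMD mode of each
        # noisy copy, and average over the ensemble.
        modes = []
        for _ in range(ne):
            n = a * np.std(residue) * rng.standard_normal(len(residue))
            for s in (residue + n, residue - n):
                modes.append(emd(s)[0])          # first IMF of each noisy copy
        imf_j = np.mean(modes, axis=0)
        # (iv): stop collecting noise-like modes once the PE drops below theta0.
        if permutation_entropy(imf_j, order, delay) < theta0:
            break
        noise_imfs.append(imf_j)
        residue = residue - imf_j                # (v): peel the mode off
    clean_imfs = emd(residue)                    # (vi): plain EMD of the residue
    return noise_imfs, clean_imfs                # (vii)
```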
In the PEEMD algorithm, the reconstruction error (RE) can be kept at a negligible level by adding the pairs of white noise with positive and negative signs, and the PE is utilized to indicate the chaos degree so that the noise signal is eliminated in the algorithm, guaranteeing that the IMFs are closer to the inner intrinsic modes of the original signal than those of EMD. T-Test Algorithm. The principle of the T-test method for splitting the IMFs is that the component whose zero-mean characteristic shows the first significant change marks the demarcation when the IMFs are arranged in descending order of frequency [24], based on a fine-to-coarse reconstruction, i.e., adding fast oscillations (IMFs with smaller index) up to slow ones (IMFs with larger index), so that all IMFs before this component are the high-frequency part and this component and the subsequent ones are the low-frequency part. Additionally, either when the residue r(t) becomes so small that it is below a predetermined value of practical consequence, or when the residue r(t) becomes a monotonic function from which no more IMFs can be extracted, the PEEMD stops, and the residue is considered the trend. Then, the boundaries between the high-frequency, low-frequency, and trend parts of the original data are obtained. The specific IMF reconstruction process is as follows [14]: (1) compute the mean of the sum of IMF_1 to IMF_i for each component (except for the residue); (2) use the T-test to identify for which IMF_i the mean first significantly departs from zero; (3) once IMF_i is identified as the significant change point, take the partial reconstruction with the IMFs from this one to the end as the low-frequency part, and the partial reconstruction with the other IMFs as the high-frequency part. Zhang et al. [14] considered that the IMFs obey a zero-mean normal distribution and then used the T-test to check whether this hypothesis holds. The IMF that does not satisfy the hypothesis is the critical point of the frequency reconstruction. BPE Algorithm. To estimate the degree of mode splitting in adjacent IMFs, the BPE algorithm was proposed by Liu et al. [18] based on PE. The frequency distribution characteristics of adjacent IMFs are analyzed to determine the critical point between high and low frequencies among the IMFs. A method based on BPE is proposed in this paper to evaluate the problem and is utilized to reconstruct the IMFs. The BPE is described as follows. Hypothesis: the targeted time-series signal is composed of weakly correlated signals. The BPE index BPE_ij compares the PE value of the combined signal with that of a single IMF, where PE_i denotes the PE value of the ith IMF component and PE_ij indicates the PE value of the signal comprising the ith and jth IMF components. There are two domains of BPE_ij values. According to the permutation entropy (PE) proposed by Bandt and Pompe [22], the algorithm is illustrated as follows: (i) Given a time series x_k, k = 1, 2, . . . , N, form the m-dimensional delay-embedding vectors x^m_i = (x_i, x_{i+τ}, . . . , x_{i+(m−1)τ}), where x^m_i denotes the m-dimensional delay-embedding vector at time i. (ii) Then, x^m_i has a permutation π = (r_0 r_1 . . . r_{m−1}) if it satisfies x_{i+r_0 τ} ≤ x_{i+r_1 τ} ≤ . . . ≤ x_{i+r_{m−1} τ}, where 0 ≤ r_i ≤ m − 1 and r_i ≠ r_j; m indicates the embedding dimension, and τ is the time delay. (iii) There are m! possible permutations of an m-tuple vector.
For each permutation π, the relative frequency is determined by p(π) = #{i | x^m_i has pattern π} / (N − (m − 1)τ). (iv) The PE of the m-dimensional embedding is then defined as H(m) = −Σ_π p(π) ln p(π). (v) Therefore, the normalized permutation entropy (NPE) can be expressed as H_n(m) = H(m)/ln(m!). Specifically, if BPE_ij ≥ 1, the chaos degree of the synthetic signal is higher than that of the single IMF because no signal compatibility exists between the ith and jth IMF components, so the inner modes are considered to have been decomposed independently into single IMFs. By contrast, BPE_ij < 1 signifies that the chaos degree of the synthetic signal is lower than that of the single IMF component, which is largely attributed to the chaotic signal being offset by the compatibility between the IMFs, because some inner modes are split across adjacent IMFs. According to the definition of BPE proposed in the literature, the IMF reconstruction process based on BPE is proposed as follows: (1) arrange all the IMFs obtained by the decomposition from high frequency to low frequency (IMF_1, IMF_2, IMF_3, . . .); (2) separately calculate the PE values and the BPE values (BPE_12, BPE_23, BPE_34, . . .); (3) the number of maximum points (n) in the BPE sequence should equal the number of required parts (N), i.e., n = N; (4) the IMFs between the BPE maxima constitute the corresponding frequency parts. Process of Data Analysis According to the above analysis, first, the time-series data are decomposed by PEEMD to obtain orthogonal IMFs; then, the short-term and long-term trends are composed via the BPE index and the T-test for comparison. Finally, a correlation analysis is carried out with the long-term and short-term factors to explain the rationality of the reconstructed data. The scheme is shown in Figure 1. Decomposition. According to the discussion of PEEMD-related parameters in the literature [7], the relevant parameters of PEEMD for processing the gold price data are shown in Table 1. In particular, the setting of the PE threshold directly affects the accuracy of the decomposed IMF results and the subsequent combined models. When the PE threshold is zero, the PEEMD decomposition method is the same as the complete ensemble empirical mode decomposition (CEEMD) method. As demonstrated in the literature, the best decomposition results are obtained when the threshold is between 0.5 and 0.6. Based on this, the threshold value is chosen as 0.6, and the decomposition result of the gold price data is shown in Figure 2. As shown in Figure 2, the gold/dollar data are decomposed by PEEMD into IMF_1-IMF_7 and the residual trend signal RS_8. The frequency of the IMFs gradually decreases from IMF_1 to IMF_7, and both the high-frequency part and the low-frequency part of the gold price trend are included. Therefore, the paper takes the high-frequency part to indicate the short-term trend underlying the gold price change, the low-frequency part to indicate the long-term trend, and the remaining signal RS_8 to indicate the overall downward trend. (Note to Table 1: the symbols "NStd," "MaxIter," "Ne," "T," "Mode," and "r" indicate the noise standard deviation, the maximum number of sifting iterations, the number of added white-noise pairs, the delay time of the PE, the order of the PE, and the threshold, respectively.) In this paper, the T-test method and the BPE index are used to measure the decomposed IMFs, to show the relative range of the frequencies, and then to reconstruct the IMFs to obtain the high-frequency and low-frequency parts.
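Before the two reconstruction schemes are applied to the decomposed IMFs below, the following sketch implements the Bandt-Pompe permutation entropy and a BPE-style index for a pair of adjacent IMFs. The PE follows the standard definition given above; the text does not spell out which single IMF's PE enters the denominator of BPE_ij, so the choice made here (the higher-frequency IMF of the pair) is an assumption for illustration.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=6, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n_vectors):
        # Ordinal pattern of the delay-embedding vector starting at index i.
        window = x[i:i + order * delay:delay]
        key = tuple(np.argsort(window, kind="stable"))
        patterns[key] = patterns.get(key, 0) + 1
    probs = np.array(list(patterns.values()), dtype=float) / n_vectors
    h = -np.sum(probs * np.log(probs))
    return h / log(factorial(order)) if normalize else h

def bpe_index(imf_high, imf_low, order=6, delay=1):
    """BPE-style index for two adjacent IMFs: PE of the summed signal divided by
    the PE of the higher-frequency IMF (the denominator choice is an assumption)."""
    pe_combined = permutation_entropy(imf_high + imf_low, order, delay)
    pe_single = permutation_entropy(imf_high, order, delay)
    return pe_combined / pe_single

def bpe_split_point(imfs, order=6, delay=1):
    """Return the index of the first adjacent pair whose BPE reaches 1,
    i.e. the proposed boundary between high- and low-frequency IMFs."""
    for k in range(len(imfs) - 1):
        if bpe_index(imfs[k], imfs[k + 1], order, delay) >= 1.0:
            return k + 1        # imfs[:k+1] are high frequency, imfs[k+1:] low
    return len(imfs)
```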
T-Test Reconstruction. The literature proposes that the method is based on the fact that the higher the data frequency is, the more random the data are and the closer the mean value is to zero, under the hypothesis of a normal distribution. The T-test method is thus applied to the numerical distribution characteristics of each single IMF. The T-test statistic for each of the IMFs is calculated, as shown in Table 2. It can be seen that the P value of IMF_3 (0.0001) is the first to fall below 0.05 at the 95% confidence level. This indicates that the mean of IMF_3 significantly departs from zero. Therefore, the superimposed IMF_1-IMF_2 data form the high-frequency part, as a result of the independence and orthogonality between the IMFs, the superimposed IMF_3-IMF_7 data form the low-frequency part, and RS_8 is the trend. On the other hand, it can also be seen that IMF_5 is significantly different from zero mean, with a P value less than 0.05. The results would then suggest that IMF_1-IMF_4 could be reconstructed as the high-frequency part and IMF_5-IMF_7 as the low-frequency part, which may appear rational to some degree if judged only by the statistical values. Therefore, it is necessary that the IMFs are arranged from high frequency to low frequency and that the IMF showing the first significant departure from zero mean is treated as the demarcation point. Essentially, the T-test method divides the different parts according to statistical characteristics of each single IMF, ignoring the gradation between adjacent IMFs. This paper explores the relationship between adjacent IMFs to classify high- and low-frequency data by BPE.
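A minimal sketch of this T-test split is shown below: each IMF's mean is tested against zero with a one-sample t-test, and the first IMF whose P value falls below the significance level marks the start of the low-frequency part. The 0.05 level and the use of scipy's `ttest_1samp` are assumptions consistent with, but not necessarily identical to, the computation behind Table 2.

```python
import numpy as np
from scipy.stats import ttest_1samp

def ttest_split(imfs, alpha=0.05):
    """Return (high, low, split_index): IMFs before the first component whose
    mean significantly departs from zero form the high-frequency part."""
    split = len(imfs)
    for k, imf in enumerate(imfs):
        _, p_value = ttest_1samp(imf, popmean=0.0)
        if p_value < alpha:      # first significant departure from zero mean
            split = k
            break
    high = np.sum(imfs[:split], axis=0) if split > 0 else np.zeros_like(imfs[0])
    low = np.sum(imfs[split:], axis=0) if split < len(imfs) else np.zeros_like(imfs[0])
    return high, low, split
```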
BPE Reconstruction. In theory, regardless of how the parameters used to calculate the PE value of the IMFs are chosen, as long as consistency is guaranteed, the pattern of change will always show the same relative tendency. To calculate the PE values and BPE values under the same conditions, the PE parameters used in this paper are consistent with those of the PE used in the PEEMD decomposition; the delay time and the PE order (6) are the same. The calculated PE and BPE values of the mode functions are shown in Table 3. IMF_1-IMF_7 are arranged in descending order. Moreover, it can be seen from the table that the calculated PE values range from a maximum of 0.4287 to a minimum of 0.1084, showing a gradually decreasing trend as the frequency decreases. The degree of disorder of the data structure is gradually reduced, and the later IMFs represent the trend information of the gold/dollar data. The table also shows that the BPE values before IMF_5 are less than 1, indicating that there is a certain degree of mode splitting between IMF_1 and IMF_2, IMF_2 and IMF_3, IMF_3 and IMF_4, and IMF_4 and IMF_5, so that the degree of chaos in the reconstructed signal is lower than that of the single IMF; the BPE value between IMF_5 and IMF_6 (BPE_65) exceeds 1 for the first time, at 1.0014, indicating that the degree of chaos in the reconstructed signal combining IMF_5 and IMF_6 is higher than that of IMF_5. The degree of chaos increases once there is no mode splitting between the two signals, which also shows that the frequencies of IMF_1-IMF_7 differ from one IMF to another and that there is a significant difference between IMF_5 and IMF_6; at the same time, the BPE value between IMF_7 and RS_8 (BPE_87) is 1.0198, also more than 1, indicating that there is also a significant frequency-domain division between IMF_7 and the residual signal. Therefore, the paper concludes, based on the calculation of BPE, that IMF_1-IMF_5 form the high-frequency part, IMF_6-IMF_7 form the low-frequency part, and RS_8 is the trend part. Verification. Different reconstructed data can be formed from the high- and low-frequency data obtained by the T-test and by the BPE index, and the two are compared. This paper analyzes the influencing factors of the different reconstructed high- and low-frequency data. Many studies [25][26][27][28] show that the long-term trend of gold prices is mainly affected by global gold production, and the short-term fluctuations are mainly affected by world crude oil prices and world inflation. Long-Term Trend. The low-frequency data reconstructed by the T-test and by the BPE analysis are shown in Figure 3. It can be intuitively seen that the low-frequency data reconstructed by the BPE index are consistent on the whole, while the low-frequency data formed by the T-test contain more local fluctuations. It can be concluded from Table 4 that the low-frequency data and the original gold data have a high Pearson coefficient and variance share [29], indicating that the low-frequency data can explain the main part of the original gold price fluctuation, so the reconstructed low-frequency data must reflect the impact of long-term trends to some extent. The Pearson correlation coefficient and variance share between the low-frequency data reconstructed by the T-test and the original gold price data are 0.9314 and 57.14%, respectively, which are higher than those of the low-frequency data formed by the BPE reconstruction; this shows that more relatively high-frequency data remain in the low-frequency data (long-term trend) reconstructed by the T-test. To further analyze the long-term trend effects of the reconstructed low-frequency data, this paper refers to the world gold mine output published by the World Gold Council (WGC) from 2010 to 2018, as shown in Figure 4. It can be seen intuitively from Figures 3 and 4 that the long-term trend fluctuations in gold prices are negatively correlated with global gold production; the Pearson correlation coefficient between them is indeed negative, and the absolute value of the correlation coefficient between the long-term trend and global gold production is larger for the BPE index, as shown in Table 5. This demonstrates that the low-frequency data formed by this method have an advantage over the traditional T-test method in reflecting the long-term trend in the gold price. Short-Term Trend. The high-frequency data reconstructed by the T-test and by the BPE analysis are shown in Figure 5. The high-frequency data formed by the T-test have a relatively higher frequency than the high-frequency data formed by the BPE.
Table 6 shows that the variance shares of the high-frequency data relative to the original gold data are 1.17% and 11.38%, respectively, and the correlation coefficient between the high-frequency data formed by the T-test and the original gold price data is only 0.1376, approximately half of the correlation coefficient of the high-frequency data formed by the BPE index. This means that the high-frequency data formed by the T-test are more random and can hardly explain the short-term fluctuations of the original gold price data, while the high-frequency data formed by the BPE can explain the short-term fluctuations of the gold price data relatively more accurately. To further explore the rationality of the high-frequency data reconstructed by the different methods, this paper uses the inflation rate of the United States, obtained from InflationData.com, as a proxy for the world inflation rate [27,30] to analyze the correlation. Table 7 shows the monthly inflation rate from 2010 to 2018. Similarly, this paper selects the spot price of crude oil from 2010 to 2018; the data are downloaded from the spot trading software MT4, as shown in Table 8. Table 9 shows that the correlation coefficients between the high-frequency data and the short-term factors are small regardless of the method used, which shows that the calculated correlation coefficients are distorted, and may even flip from a negative (positive) correlation to a positive (negative) one, when the trends of the short-term influencing factors are not eliminated, because these macroeconomic factor series contain many high-frequency noise signals. However, comparing the absolute values of coefficients with the same sign, the correlation between the high-frequency data formed by the BPE division and the crude oil price is stronger than that obtained with the T-test, which shows that the high-frequency data formed by the BPE are closer to the short-term trend of the original gold price data. Conclusion There are data-driven macro-factors behind any economic data. It is important for social economists to analyze their causes and to predict data volatility. This paper selects the world gold price as an example for verifying the reconstruction method based on the BPE index. Adaptive PEEMD is utilized to decompose the gold price data to obtain orthogonal mode functions while eliminating the effect of the noise signal as much as possible, and then the IMFs are classified into high- and low-frequency parts to determine the long-term and short-term trends by the BPE index method, compared with the T-test method. The composed results are subjected to a correlation analysis with global gold production, world crude oil prices, and world inflation to examine the rationality of the long-term and short-term trends composed by the two methods. According to the study results regarding the trends in the gold price data, the main conclusions are as follows: (1) According to the values calculated by the two methods in comparison, the BPE values of the IMFs indicate clearer boundaries than the T-test and are more in line with the actual situation. BPE_87 and BPE_65 are more than 1, which manifests obvious boundaries between RS_8 and IMF_7 and between IMF_6 and IMF_5, while the P value of IMF_3 (0.0001) is the first to be less than 0.05, and those of IMF_5, IMF_6, and IMF_7 are also less than 0.05 by the T-test method.
(2) From the perspective of the composed results, the correlation analysis shows that the Pearson correlation coefficient between the low-frequency data reconstructed by the BPE index method and mine production is −0.5005, larger in absolute value than −0.4421, which is the Pearson correlation coefficient between the low-frequency data reconstructed by the T-test method and mine production. This indicates that the long-term trend reconstructed by the BPE index method is closer to a realistic, economically meaningful trend. Therefore, the corresponding high-frequency data, and the short-term information extracted from them, are also more effective. The paper demonstrates that the trend data reconstructed by the BPE index method can better explain the internal drivers of gold price volatility, in terms of both long-term and short-term trends, and it lays the foundation for further study of the trend of gold price volatility. It provides new methods and ideas for studying the driving factors behind macroeconomic fluctuations in order to better explain the changes in macroeconomic indicators. Data Availability This paper selects the simulation signal for empirical analysis and does not contain specific data. Conflicts of Interest The authors declare that they have no conflicts of interest.
5,668.6
2020-02-22T00:00:00.000
[ "Economics" ]
Massive Klein Tunneling in Topological Photonic Crystals Klein's paradox refers to the transmission of a relativistic particle through a high potential barrier. Although it has a simple resolution in terms of particle-to-antiparticle tunneling (Klein tunneling), debates on its physical meaning seem to persist, partially due to the lack of direct experimental verification. In this article, we point out that honeycomb-type photonic crystals (PhCs) provide an ideal platform to investigate the nature of Klein tunneling, where the effective Dirac mass can be tuned in a relatively easy way from a positive value (trivial PhC) to a negative value (topological PhC) via a zero-mass case (PhC graphene). In particular, we show that analysis of the transmission between domains with opposite Dirac masses -- a case that can hardly be treated within the schemes available so far -- sheds new light on the understanding of Klein tunneling. In relativistic quantum mechanics, a potential barrier can become nearly transparent to an incoming particle if the potential exceeds the particle's mass, in stark contrast to non-relativistic quantum mechanics, where a particle cannot pass through such a high potential barrier. This counterintuitive result has been known as Klein's paradox [1,2]. To be explicit, we consider the case in which a relativistic particle with mass m and energy E > 0 is transmitted from a region without potential (Region I) into a potential barrier V ≥ 0 (Region II), as shown in Fig. 1 (a). The transmission is categorized into three regimes, which we call the small-V regime (E ≥ V), the reflected regime (E < V < E + mc^2), and the large-V regime (V ≥ E + mc^2). In the small-V regime, the particle is transmitted similarly to non-relativistic quantum tunneling (Fig. 1 (b)). In the reflected regime the particle is fully reflected. Most interestingly, in the large-V regime, the particle is transmitted as an antiparticle (Fig. 1 (c)). Although Klein's paradox has a simple resolution as shown in Fig. 1 (c), the physical interpretation of Klein tunneling still seems to be under debate [2][3][4][5][6][7][8][9][10][11]. The large potential energy (V > 2mc^2 ≈ 1 MeV for the electron) has led to theoretical interpretations of the paradox in terms of electron-positron pair creation using quantum field theory or quantum electrodynamics, which meanwhile makes its direct verification challenging in experiments of elementary particle physics. So far, Klein tunneling has been reported experimentally with massless particles in various condensed-matter systems [12][13][14][15][16][17][18][19][20], where there is no strict distinction between particles and antiparticles. In addition, these experiments consider the dispersion near the K and K′ points, with finite momenta, instead of the Γ point, and therefore cannot be considered an ideal platform to clarify the Klein physics in a complete way. Klein tunneling was also studied in (but not limited to) deformed hexagonal lattices [21,22], photonic crystals [19,20,23,24], and magnonic systems [25], but to the best of our knowledge a direct observation of massive Klein tunneling is still missing.
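For quick reference, the snippet below encodes the three regimes exactly as they are delimited above (small-V: E ≥ V; reflected: E < V < E + mc^2; large-V: V ≥ E + mc^2); the function name and the unit convention are illustrative choices, not part of the original text.

```python
def klein_regime(E, V, m, c=1.0):
    """Classify the barrier strength using the regime boundaries quoted above."""
    gap = m * c**2
    if V <= E:
        return "small-V (transmitted like non-relativistic tunneling)"
    elif V < E + gap:
        return "reflected"
    else:
        return "large-V (transmitted as an antiparticle: Klein tunneling)"

# Example with electron-like numbers, m c^2 = 0.511 MeV and E = 1.0 MeV:
print(klein_regime(E=1.0, V=0.3, m=0.511))   # small-V
print(klein_regime(E=1.0, V=1.2, m=0.511))   # reflected
print(klein_regime(E=1.0, V=2.0, m=0.511))   # large-V
```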
In this article, we propose that honeycomb-type photonic crystals (PhCs) are ideal systems for investigating massive Klein tunneling. These systems possess doubly degenerate relativistic dispersions near the Γ point [26][27][28][29][30][31]. Thus, electromagnetic modes in these systems behave as massive Dirac quasiparticles with four-component spinor wavefunctions. Recipes of PhC design giving quasiparticles with positive mass (trivial PhC), zero mass (photonic graphene), and even negative mass (topological PhC) have been established [26][27][28][29][30][31]. The advantage of these PhC systems is that the photonic band gap (mass gap) is on the order of 0.1 eV, which can be realized by semiconductor nanofabrication. We propose that an analog of massive Klein tunneling without potential can appear at the interface of PhCs with positive and negative mass (Fig. 1 (d), (e)). We show that interfaces between PhCs with opposite masses allow us to investigate the essential difference between normal and Klein tunneling. This article is organized as follows. In Section II, we explain how photonic eigenstates can be described as massive Dirac quasiparticles. In Section III, we present our model of the PhC interface and confirm massive Klein tunneling at the trivial-trivial PhC interface. We reveal that, at normal incidence, the transmission coefficient through a trivial-trivial PhC interface with a large/small V is identical to that of a trivial-topological interface with a small/large V. In Section IV, we consider the angle dependence of the transmission and find that transmission with a negative index of refraction is achieved in the large-V regime, both for the trivial-trivial and for the trivial-topological interfaces. In Section V, we investigate whether topological interfacial states [30,32,33] disturb the transmission process. In Section VI, we discuss the implications of our results. II. Trivial and Topological Photonic Crystals Let us consider a PhC with a honeycomb lattice and choose hexagonal unit cells which contain six sites, as shown in Fig. 2 (a). This system can be described by a tight-binding Hamiltonian (Eq. (1)) in which t_0, t_1 > 0 represent the nearest-neighbor hopping integrals inside and between unit cells, respectively, and |i⟩ (i = 1, . . . , 6) represent the position basis inside a unit cell; the photonic eigenstates and eigenvalues satisfying the corresponding eigenvalue equation (Eq. (2)) are well known [26,27]. The photonic eigenstates at the Γ point correspond to two-dimensional irreducible representations spanned by the |p_±⟩ and |d_±⟩ doublets, and the effective Hamiltonian near the Γ point takes a block-diagonal Dirac form (Eq. (3)), where k = (k_x, k_y) describes the wavevector near the band gap, k_± = k_x ± ik_y, M = t_0 − t_1, and A is the coefficient of the terms linear in k. Here, we kept the terms linear in k_x, k_y and ignored higher-order terms. As can be seen from Eq. (3), the pseudospin-up sector and the pseudospin-down sector are decoupled. Therefore, hereafter we consider only the pseudospin-up sector for simplicity. The pseudospin-up Hamiltonian can be written in terms of Pauli matrices as Ĥ_+ = A(k_x σ_x + k_y σ_y) + M σ_z (Eq. (4)), with the Pauli matrices defined in Eq. (5). These matrices satisfy the anti-commutation relations {σ_i, σ_j} = 2δ_ij 1 (Eqs. (6) and (7)), where 1 is the 2 × 2 unit matrix. The photonic eigenstates in the pseudospin-up sector are two-component spinor wavefunctions of the form ψ_+(r) = [ψ_{p+}(r), ψ_{d+}(r)]^T, with r = (x, y), which satisfy the eigenvalue equation Ĥ_+ ψ_+ = E ψ_+ (Eq. (8)). The photonic dispersion is shown in Fig. 2 (c), where the mass gap is equal to 2M. The blue curves are obtained by solving Eq. (2) numerically, and the red curves show the Dirac dispersion E = ±[M^2 + A^2(k_x^2 + k_y^2)]^{1/2}, obtained by solving Eq. (8) with a plane-wave solution. The blue and red curves coincide near the Γ point, which implies that the photonic eigenstates can be described as massive Dirac quasiparticles.
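A small numerical check of this mapping is sketched below: it builds the 2×2 pseudospin-up Hamiltonian in the form written above and verifies that its eigenvalues reproduce the Dirac dispersion ±[M^2 + A^2 k^2]^{1/2}. The parameter values are arbitrary placeholders, not fitted to a real PhC.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def h_plus(kx, ky, M, A):
    """Pseudospin-up effective Hamiltonian near the Gamma point."""
    return A * (kx * SX + ky * SY) + M * SZ

def dirac_bands(kx, ky, M, A):
    """Numerical eigenvalues vs. the analytic Dirac dispersion."""
    numeric = np.linalg.eigvalsh(h_plus(kx, ky, M, A))
    analytic = np.sqrt(M**2 + A**2 * (kx**2 + ky**2))
    return numeric, (-analytic, analytic)

if __name__ == "__main__":
    # Placeholder parameters: M > 0 mimics a trivial PhC, M < 0 a topological one.
    for M in (+0.05, -0.05):
        num, ana = dirac_bands(kx=0.1, ky=0.05, M=M, A=1.0)
        print(M, np.allclose(num, ana))   # True: the gap is 2|M| in both cases
```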
From its definition, it is clear that M can be either positive or negative depending on the values of t_0 and t_1. A trivial PhC is obtained when t_0 > t_1 (i.e., M > 0). On the other hand, a topological PhC is obtained when t_0 < t_1 (i.e., M < 0). The negative mass of the topological PhC leads to band inversion, namely an exchange of the |d_+⟩ and |p_+⟩ eigenstates in the order of energy near the Γ point [26,27]. In other words, the positive- and negative-energy states, which correspond to "particles" and "antiparticles", are inverted in a topological PhC. In the above, we have shown how photonic eigenstates in honeycomb-type PhCs can be described as massive Dirac quasiparticles with positive and negative masses. In what follows, we use such photonic eigenstates to study massive Klein tunneling at PhC interfaces. A. Model We consider the transmission of light through the interface of two PhCs with a potential difference (Figure 1), which can be achieved by changing the effective permittivity of the PhC in Region II. The two PhCs have different M and A values, and the system close to the PhC interface is described by the position-dependent Hamiltonian Ĥ_+(x) = A(x)(k_x σ_x + k_y σ_y) + M(x) σ_z + V(x) 1 (Eqs. (9)-(10)), where k = −i∇ = −i(∂_x, ∂_y), and M(x), A(x) and V(x) take the values M_<, A_< and 0 in Region I (x < 0) and M_>, A_> and V in Region II (x > 0). We use this Hamiltonian to solve the eigenvalue equation Ĥ_+(x) ψ_+(r) = E ψ_+(r) (Eq. (11)), where ψ_+(r) = [ψ_{p+}(r), ψ_{d+}(r)]^T. Equation (11) gives two coupled differential equations. On the other hand, note that Ĥ_+(x) − V(x)1 is a traceless Hermitian operator. In general, any 2 × 2 traceless Hermitian operator Ô can be written using the Pauli matrices (5). Hermiticity implies that the eigenvalues of Ô are real, so the squares of these eigenvalues are always positive. In other words, Hermiticity guarantees that the eigenvalues come in positive and negative pairs, and that the square of a 2 × 2 traceless Hermitian operator is a diagonal matrix with equal elements. This can be checked using Eq. (6) and Eq. (7). Explicitly, we obtain [Ĥ_+(x) − V(x)1]^2 = [M(x)^2 − A(x)^2 ∇^2] 1. Using 1ψ_+(r) = ψ_+(r) and Eq. (11), we arrive at the following decoupled differential equation (Eq. (12)): [M(x)^2 − A(x)^2 ∇^2] ψ_+(r) = [E − V(x)]^2 ψ_+(r). We use Eq. (12) to calculate the eigenvalues and Eq. (11) to calculate the eigenstates. For the transmission problem in Fig. 1, a plane-wave solution is considered (Eqs. (13)-(14)). Here, k^in = (k^in_x, k^in_y), k^r = (k^r_x, k^r_y) and k^t = (k^t_x, k^t_y) are the wavevectors of the incident, reflected and transmitted wavefunctions, respectively. By definition of reflection we have k^r_x = −k^in_x and k^r_y = k^in_y (see Section IV for more detail). We solve Eq. (12) in each of the two regions separately and obtain E^2 = M_<^2 + A_<^2 |k|^2 for the incident and reflected wavefunctions (Eq. (15)) and (E − V)^2 = M_>^2 + A_>^2 |k^t|^2 for the transmitted wavefunction (Eq. (16)). B. The Transmission Coefficient To discuss the transmission and reflection properties quantitatively, we calculate the conserved current j^µ = (j^µ_x, j^µ_y), which is obtained from the continuity equation; here k^2 = k · k with k = k^µ, µ ∈ {in, r, t}, and we do not sum over indices. Using the time-dependent Dirac equation iℏ∂_t ψ_+ = Ĥ(x)ψ_+ together with Eqs. (9), (14) and (20), we obtain the currents carried by the incident, reflected and transmitted waves. The reflection coefficient R and the transmission coefficient T are calculated from the conserved currents as the ratios of the reflected and transmitted currents to the incident current (Eqs. (26) and (27)); it is clear that the condition R + T = 1 is fulfilled. Although expressions similar to Eqs. (26) and (27) have been obtained in the literature [9,14], our results apply for positive and negative masses.
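To illustrate how R and T can be evaluated in practice, the sketch below matches the two-component spinors at the interface for normal incidence (k_y = 0) and computes T from the x-component of the conserved current. The spinor convention, the plain-continuity matching and the propagating-wave assumption (|E| > |M_<| and |E − V| > |M_>|) are our illustrative choices, so this is a numerical sketch of the matching procedure rather than a reproduction of Eqs. (26)-(28).

```python
import numpy as np

def transmission_normal_incidence(E, V, M_in, M_out, A_in=1.0, A_out=1.0):
    """Transmission coefficient through an interface of two Dirac media
    (H = A k sigma_x + M sigma_z + V) at normal incidence, k_y = 0.
    Assumes propagating waves on both sides: |E| > |M_in| and |E - V| > |M_out|.
    Plain continuity of psi at x = 0 is used (exact when A_in = A_out, as below)."""
    # Momenta from the dispersion relations E^2 = M^2 + A^2 k^2 (Eqs. (15)-(16)).
    k_in = np.sqrt(E**2 - M_in**2) / A_in
    q = np.sign(E - V) * np.sqrt((E - V)**2 - M_out**2) / A_out  # right-moving branch
    # Spinors of H = A k sigma_x + M sigma_z for incident, reflected, transmitted waves.
    psi_in = np.array([A_in * k_in,  E - M_in])
    psi_r  = np.array([-A_in * k_in, E - M_in])
    psi_t  = np.array([A_out * q,    (E - V) - M_out])
    # Continuity of the spinor at x = 0: psi_in + r*psi_r = t*psi_t.
    mat = np.column_stack([-psi_r, psi_t])
    r, t = np.linalg.solve(mat, psi_in)
    # Conserved current j_x = A * psi^dagger sigma_x psi = 2 A Re(psi_1* psi_2).
    j = lambda A, psi: 2.0 * A * np.real(np.conj(psi[0]) * psi[1])
    return abs(t)**2 * j(A_out, psi_t) / j(A_in, psi_in)

if __name__ == "__main__":
    # Trivial-trivial (M_out > 0) vs trivial-topological (M_out < 0) interface,
    # in the small-V and large-V regimes; parameters are illustrative placeholders.
    for M_out in (+0.05, -0.05):
        for V in (0.0, 0.5):
            T = transmission_normal_incidence(E=0.2, V=V, M_in=0.05, M_out=M_out)
            print(f"M_out={M_out:+.2f}, V={V:.1f}: T={T:.4f}")
```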
( 27) have been obtained in literature [9,14], our results apply for positive and negative masses.To understand the difference between tunnelings at the trivial-trivial and trivial-topological interfaces, we observe that the kinematic factor η can be written in the following form We emphasize once again that, if M < and M > are both positive, Eq. ( 28) agrees with that in Ref. [2].However, our result applies for positive and negative masses.The trivial-trivial and the trivialtopological interfaces can be compared by introducing the effective mass For the trivial-trivial interface, we have M eff > 0 in the small-V regime and M eff < 0 in the large-V regime.In contrast, for the trivial-topological interface, we have M eff < 0 in the small-V regime and M eff > 0 in the large-V regime.In other words, the effective mass is positive for the normal tunneling and negative for the Klein tunneling.Therefore, we conclude that the mass sign of the transmitted particle interchanges normal tunneling and Klein tunneling at the trivial-trivial interface and at the trivial-topological interface. In the rest of this section, we investigate the tunneling at fixed V values, which is close to real experimental setups.Figure 4 shows the tunneling through the trivial-trivial interface (blue curve) and through the trivial-topological interface (orange curve).Normal tunneling and Klein tunneling are identified as in Fig. 3.At V = 0 (Fig. 4 IV. Negative index of refraction Negative index of refraction has been associated with the massive and massless Klein tunneling [9,14].Here, we investigate whether this association is still valid for the trivial-topological interface.As in the previous sections we focus on transmission with wavevectors near the Γ point.The physical velocity of a photonic quasiparticle is the group velocity which is defined as , and is given from Eq. ( 15) and ( 16) Component-wise we have v g = (v g,x , v g,y ).Here, we assume that E lies outside of the bandgap such that Eq. ( 29) takes real values.By definition, v g,x is positive for the incident and transmitted states. This relation is preserved if k in x is proportional to sgn(E) and k t x is proportional to sgn(E − V), which has been depicted in Fig. 1 (c) and (d).On the other hand, from continuity at the boundary we obtain k in y = k t y .If we choose ϕ in and E to be positive, then k in y and k t y are both positive.If E − V > 0 then v g,y is positive for the incident and transmitted states, so the index of refraction is positive.On the other hand, if E − V < 0, then v g,y is positive for the incident state but negative for the transmitted state, so the index of refraction is negative.This result can be generalized using Eq. ( 29) and we obtain The above equation can be understood as an analog of Snell's law where the index of refraction can be positive or negative depending on the values of E and V (Similar results are obtained in refs. [9, 14]), as shown in Fig. 5.In particular, we obtain a negative index of refraction in the large-V regime which has E > 0 and E − V < 0, i.e. for tunneling from a concave-up band to a concavedown band (Fig. 5 (c)).This situation is analogous to ref. [35] which explains the negative refraction by a concave-down photonic band.Note that Eq. 
( 30) is independent of the sign of M < and M > since the mass enters into the Hamiltonian as M 2 .Therefore, negative refraction appears at the large-V regime of both trivial-trivial and trivial-topological interfaces.On the other hand, we have shown in the previous section that for a trivial-topological interface, Klein tunneling appears in the small-V regime while normal tunneling appears in the large-V regime.This result implies that negative refraction is not directly related to massive Klein tunneling. V. Jackiw-Rebbi soliton It is well known that the Jackiw-Rebbi soliton appears at the center of the bandgap of a positivenegative mass interface [32].Here, we check whether such states reduce the transmission.As in previous studies [33] we split the Hamiltonian into with where ∆ Ĥ+ is taken as a perturbation.We assume a wavefunction of the form [33] The wavefunction must vanish as |x| → ∞ which requires κ < and κ > to be positive.After solving Ĥ+ (x)ψ + (r) = E 0 ψ + (r) with Eq. ( 32) and Eq. ( 34) we obtain where κ < and κ > are real numbers.From the solution of κ < we obtain −M < < E 0 < M < , i.e.E 0 must be inside the bandgap of the trivial PhC.On the other hand, from the solution of κ > we obtain e. E 0 must also be inside the bandgap of the topological PhC.These conditions are satisfied simultaneously only if |V| < M < − M > .From the continuity of the wavefunction at x = 0 we obtain These conditions are satisfied simultaneously if E 0 takes the following form with |V| < M < − M > , which generalizes the zero-energy Jackiw-Rebbi soliton to the case with a potential.Substituting this expression into Eq.( 35) we obtain with Therefore, the interfacial state is described by the following wavefunction where the normalization condition . stable in the small-V and reflected regime with a nonzero common global bandgap.The perturbation Eq. ( 33) gives an additional energy so the energy of the interfacial state is Note that the stability of the soliton is not affected by this perturbation. Figure 6 shows the energy of the interfacial state (black line) at different V values.On top of that, we also plot the bulk band of the trivial PhC (blue curve) and topological PhC (orange curve) as a function of k y with fixed k x values.Since η s , κ ≶ , A ≶ are positive parameters, the group velocity v g,y = ℏ −1 ∂∆E/∂k y is negative, i.e. the soliton (with up spin) propagates in the negative y direction.For the pseudospin-down sector, the soliton propagates in the positive-y direction, which is a manifestation of pseudospin-momentum locking in topological interfaces with time reversal symmetry [26,36,37].We find that the slope of the interfacial state is reduced as V increases.In the limit |V| → ∆ the interfacial state becomes flat, which may have interesting future applications. Since k y is conserved, transition from the incident state to the Jackiw-Rebbi interfacial state is possible only if both states share the same E and k y value.Figure 6 (a)∼(c) shows that the dispersion of the interfacial state is always lower than that of the transmitted state (upper band of trivial PhC).Therefore, we conclude that the Jackiw-Rebbi interfacial state does not affect the transmission. VI. 
Discussion Finally, we summarize our results and discuss their implications.In this article, we point out that honeycomb-type PhCs provide an ideal platform to investigate the nature of Klein tunneling, where the effective Dirac mass can be tuned in a relatively easy way from a positive value (trivial PhC) to a negative value (topological PhC) via a zero-mass case (PhC graphene).We considered two types of interfaces, namely the trivial-trivial interface and the trivial-topological interface. First, by studying the transmission at both types of interfaces, we found that transmission of a particle at normal incidence at the trivial-trivial PhC interface with a large/small V is identical to that of a trivial-topological interface with a small/large V.The reason for this duality is that in the large-V regime, the mass sign of the transmitted particle is effectively reversed at the trivialtrivial interface.Particle-antiparticle tunneling occurs even without high potential at the trivialtopological interface.Therefore, we conclude that the high potential is not necessary for the definition of Klein tunneling. Second, we considered the angle dependence of the transmission and found that transmission with a negative index of refraction is achieved in the large-V regime both for the trivial-trivial and trivial-topological interfaces.In fact, it has been shown that negative refraction can be achieved with a photonic band with concave-down curvature [35], which is what we obtain in the large-V regime considered by Klein.While negative index of refraction has been associated with massive and massless Klein tunneling, here we have shown that massive Klein tunneling appears in the small-V regime of a trivial-topological interface.Therefore, the large potential and massive Klein tunneling should be considered separately. Third, we found that the Jackiw-Rebbi soliton solution at the trivial-topological PhC interface does not disrupt the transmission.Therefore, our results can be tested in PhC interfaces.Our results are not limited to PhC systems but also apply to other Dirac systems. FIG. 1 . FIG. 1.(a) Schematic of the transmission process: A relativistic particle in Region I with mass m and energy E is transmitted into a potential barrier V (Region II).ψ in , ψ r , and ψ t denote the incident, reflected, and transmitted wavefunctions, respectively.The way of transmissions depends on the value of the potential, namely V < 2mc 2 (small) or V > 2mc 2 (large), and the type of PhC in Region II (trivial or topological).(b)Conventional (normal) tunneling at small V.The red curve is the "particle" band and the blue curve is the "antiparticle" band.(c) Massive Klein tunneling at large V. (d), (e) Band inversion occurs in topological PhC with negative mass, which allows Klein tunneling at small V at the trivial-topological interface. FIG. 2 . FIG. 2. (a) PhC with a honeycomb lattice and hexagonal unit cells which contain six sites.The trivial and topological PhCs are realized by changing the hopping integrals inside and between the unit cells.(b) Photonic eigenstates at the Γ point.(c) Photonic dispersion for a trivial PhC with t 0 = 1.1t 1 , M = t 0 − t 1 and ∆ = 2M.The blue curve is obtained from the tight-binding Hamiltonian Eq. (1), while the red curve is obtained from the Dirac Hamiltonian Eq. (3). FIG. 3 . FIG. 3. 
Transmission coefficient T at normal insidence ϕ in = 0 for the trivial-trivial interface (a) and for the trivial-topological interface (b).Here, the bandgap in Region I (−0.5∆ ≤ E ≤ 0.5∆ shown by the horizontal gray stripe) and the bandgap in Region II (blue stripe with T = 0) are both given by ∆ = 2M with M = |M < | = |M > | = 0.1t 0 .(c) and (d) Line profile of T at E = 0.55∆ and E = ∆.Klein tunneling and normal tunneling are assigned as in Fig. 1.Line profile of T at constant V values (vertical dashed lines in (a) and (b)) is shown in Fig. 4. Figure 3 ( Figure 3 (a) and (b) show the transmission coefficient T for a trivial-trivial interface and a trivial-topological interface, respectively, with |M < | = |M > | = 0.1t 0 and ϕ in = 0.The horizontal gray stripe shows the bandgap ∆ = 2|M < | of the PhC in Region I, i.e. the region without incident particles.The blue stripe is the region with total reflection where E lies within the bandgap of the PhC in Region II.Figure 3 (c) and (d) show the line profile of T at different E values, namely E = 0.55∆ and E = ∆.The blue curve is the tunneling through the trivial-trivial interface, wherethe large-V regime (V > E + ∆/2) is identical to the Klein tunneling known so far[1,2].The orange curve is the tunneling through the trivial-topological interface, which is strikingly different from the blue curve. (a)), the current is fully transmitted at the trivial-trivial interface for any E ≥ ∆/2: This result is expected because there is no interface at V = 0 and the energy spectrum is identical in Region I and Region II.On the other hand, for the trivialtopological interface at V = 0, the current is fully reflected at the band edge E = ∆/2: This is due to different parities at the Γ point induced by band inversion.(This reflection mechanism has been used to construct topological cavity surface emitting lasers[34].)Then, the current is partially transmitted for E > ∆/2 due to the hybridization of |d⟩ and |p⟩ eigenstates.The transmission changes when ∆ >V > 0 (Fig.4 (b)) because the states in Region I and Region II with the same energy have different hybridization of |d⟩ and |p⟩, i.e. the overlapping between states in each regions is different.As the potential increases above V = ∆ (Fig.4(c)∼(f)) a dome-like shape appears.The height of the blue dome, which corresponds to the Klein tunneling known so far, increases with V. On the other hand, the height of the orange dome does not change with V. FIG. 4 . FIG. 4. Transmission coefficient T as a function of E at different V values.The gray stripe shows the bandgap of the PhC in Region I.The insets depict the photonic bands at these V values.The blue curve is the tunneling through the trivial-trivial interface with the Klein tunneling appearing at the large-V regime E ≤ V − ∆/2.'N' and 'K' stand for normal tunneling and Klein tunneling, respectively.The orange curve is the tunneling through the trivial-topological interface with the Klein tunneling appearing at the small-V regime E ≥ V + ∆/2. FIG. 5 . FIG. 5. (a) Dependence of the index of refraction on the potential V. The index of refraction is positive in the small-V regime (green region) but negative in the large-V regime (orange region).Note that this result is independent of the type of interface, i.e. independent of the mass sign.(b) and (c) Sign of the index of refraction at V = 0.5∆ and V = 1.5∆, respectively.Negative index of refraction appears for tunneling from a concave-up band to a concave-down band (or vice versa). FIG. 6 . FIG. 6. 
Plot of the bulk band of the trivial PhC (blue curve) and topological PhC (orange curve) as a function of k_y, together with the interfacial state (black line). Here, we consider only the pseudospin-up states, so only one interfacial state appears due to pseudospin-momentum locking. (a)∼(c): For k_x > 0 the interfacial state lies below the transmitted state (upper band of the trivial PhC), so the interfacial state does not affect the transmission. irreducible representation of the C_6v point group, namely |s⟩, |p_x⟩, |p_y⟩, |d_{x²−y²}⟩, |d_{2xy}⟩ and |f⟩, which are depicted in Fig. 2(b). Here, the eigenvalue E = ω²/c², with ω the angular frequency of light and c the speed of light in vacuum, is referred to as the "energy" associated with the tight-binding Hamiltonian (1) [26]. The states (|p_x⟩, |p_y⟩) and (|d_{x²−y²}⟩, |d_{2xy}⟩) are degenerate, so we can define the chiral states |p_±⟩ = |p_x⟩ ± i|p_y⟩ and |d_±⟩ = |d_{x²−y²}⟩ ± i|d_{2xy}⟩, which form a new set of basis states.
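To make the transmission calculation of Sections III and IV concrete, the sketch below matches the two-component spinors of the assumed Dirac Hamiltonian at the interface and evaluates R and T from the x-currents. Since Eqs. (13)-(28) are not reproduced in the extracted text, this is not the authors' derivation: the spinor form, the sign conventions sgn(E) and sgn(E − V) for the propagating wavevectors, and all parameter values are assumptions chosen only to reproduce the qualitative behaviour described (R + T = 1, total reflection when E falls inside the Region-II gap, and a negative index of refraction when E > 0 and E − V < 0).

```python
# Sketch (assumptions, not the paper's code): transmission through an interface
# between two massive Dirac media.  Region I (x < 0): mass M1, coefficient A1, V = 0;
# Region II (x > 0): mass M2, coefficient A2, potential V.  The spinors are matched
# at x = 0 and the x-currents j_x = A * psi^dag sigma_x psi give R and T.
import numpy as np

def spinor(kx, ky, E, M, A, V=0.0):
    # eigenstate of M*sz + A*(kx*sx + ky*sy) + V with energy E (unnormalized)
    return np.array([1.0, A * (kx + 1j * ky) / (E - V + M)], dtype=complex)

def jx(psi, A):
    return 2.0 * A * np.real(np.conj(psi[0]) * psi[1])

def transmission(E, V, ky, M1, M2, A1, A2):
    kx_in = np.sign(E) * np.sqrt((E**2 - M1**2) / A1**2 - ky**2)
    q = (E - V)**2 - M2**2 - (A2 * ky)**2
    if q <= 0:                       # E inside the Region-II gap: total reflection
        return 0.0
    kx_t = np.sign(E - V) * np.sqrt(q) / A2
    psi_in = spinor(kx_in, ky, E, M1, A1)
    psi_r = spinor(-kx_in, ky, E, M1, A1)
    psi_t = spinor(kx_t, ky, E, M2, A2, V)
    # continuity of the two-component spinor at x = 0:  psi_in + r*psi_r = t*psi_t
    mat = np.array([[psi_r[0], -psi_t[0]], [psi_r[1], -psi_t[1]]])
    r, t = np.linalg.solve(mat, -psi_in)
    T = jx(t * psi_t, A2) / jx(psi_in, A1)
    R = -jx(r * psi_r, A1) / jx(psi_in, A1)
    assert abs(R + T - 1.0) < 1e-9   # current conservation, R + T = 1
    return T.real

M = 0.1                              # half of the bandgap, Delta = 2M
for M2, label in [(+M, "trivial-trivial"), (-M, "trivial-topological")]:
    T_small = transmission(E=0.55 * 2 * M, V=0.0,       ky=0.0, M1=M, M2=M2, A1=1.0, A2=1.0)
    T_large = transmission(E=0.55 * 2 * M, V=1.5 * 2 * M, ky=0.0, M1=M, M2=M2, A1=1.0, A2=1.0)
    print(label, "T(small V):", round(T_small, 3), "T(large V):", round(T_large, 3))

# Sign of refraction (Sec. IV): the transmitted tangential group velocity follows
# sgn(E - V), so E > 0 together with E - V < 0 gives a negative index of refraction.
```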
5,770.8
2023-12-23T00:00:00.000
[ "Physics" ]
Topological properties of co-occurrence networks in published gene expression signatures. Meta-analysis of high-throughput gene expression data is often used for the interpretation of proprietary gene expression data sets. We have recently shown that co-occurrence patterns of gene expression in published cancer-related gene expression signatures are reminiscent of several cancer signaling pathways. Indeed, significant co-occurrence of up to ten genes in published gene expression signatures can be exploited to build a co-occurrence network from the sets of co-occurring genes ("co-occurrence modules"). Such co-occurrence network is represented by an undirected graph, where single genes are assigned to vertices and edges indicate that two genes are significantly co-occurring. Thus, graph-cut methods can be used to identify groups of highly interconnected vertices ("network communities") that correspond to sets of genes that are significantly co-regulated in human cancer. Here, we investigate the topological properties of co-occurrence networks derived from published gene expression signatures and show that co-occurrence networks are characterized by scale-free topology and hierarchical modularity. Furthermore, we report that genes with a "promiscuous" or a "faithful" co-occurrence pattern can be distinguished. This behavior is reminiscent of date and party hubs that have been identified in protein-protein interaction networks. Introduction Current biological research is characterized by the application of high-throughput technologies which allow highly parallel studies of DNA, RNA, and protein functions to be carried out on an unprecedented scale. A major bottleneck in turning the large amounts of data accumulated into practically useful knowledge is the interpretation of the results. Comparative analyses of microarray-based gene expression studies can provide valuable insights, by helping in the interpretation of individual studies and pointing out unexpected parallels between studies (Larsson et al. 2006;Rhodes and Chinnaiyan, 2005). However, a number of technical hurdles, such as differences in the experimental procedures for sample collection, RNA extraction and labeling (Draghici et al. 2006) or differences in the microarray platforms used (Kuo et al. 2006) as well as the variety of statistical approaches employed during data analysis (Shi et al. 2005) make this type of analysis cumbersome. We and others have recently proposed the use of gene list comparison approaches (Cahan et al. 2005;Finocchiaro et al. 2005;Newman and Weiner, 2005) to partially overcome these limitations, showing that meaningful conclusions can be drawn from published gene expression data in the absence of any numerical detail (Finocchiaro et al. 2007). Our approach is based on co-occurrence analysis. The underlying hypothesis assumes that genes regulated by similar pathways should co-occur more frequently that expected in published gene expression signatures. Thus, in order to systematically study co-occurrence patterns in gene expression signatures, we have generated a repository of published gene expression signatures, PubLiME (published lists of microarray experiments, available at http://bio.ifom-ieo-campus. it/publime) (Finocchiaro et al. 2007). We also proposed the Poisson-binomial distribution (which accounts for largely varying numbers of genes in reported gene lists) as an appropriate statistic to test the signifi cance of co-occurrence of up to ten genes in published gene lists. 
From the set of signifi cantly co-occurring genes, a co-occurrence network is subsequently constructed as an undirected graph, with genes represented as vertices and edges indicating that two genes are significantly co-occurring (Finocchiaro et al. 2007). By this approach, we have shown that a co-occurrence network derived from cancer related gene expression signatures is characterized by the presence of highly interconnected communities, which can be identifi ed using graph-cut approaches such as edge-betweenness clustering (Newman and Girvan, 2004). Gene communities in the co-occurrence network are assumed to represent the consequence of coordinated differential regulation of the community genes in diverse conditions, which might be due to common regulatory inputs. Indeed, the promoters of community genes are characterized by over-represented transcription factor binding motifs, whose presence is compatible with biological intuition (Finocchiaro et al. 2007). One of the most significant achievements obtained in recent years has been the realization that complex networks of biological entities are characterized by basic features that are also found in non-biological networks (Barabasi and Oltvai, 2004). Thus, physical insight derived from the study of non-biological complex systems may be used as a guide in the analysis of biological network function. Many naturally occurring networks possess the small-world property (Watts and Strogatz, 1998). Small-world networks are characterized by the contemporaneous presence of strong local clustering and short average path length between vertices. Such strong local clustering of the small-world model is consistent with the modularity observed in naturally occurring networks, where modules (i.e. communities of strongly interconnected vertices) are often observed (Ravasz and Barabasi, 2003). However, the smallworld model cannot explain the vertex degree distribution of naturally occurring networks, which in the majority of cases follows a power law or an exponential law. On the other hand, the scale-free network model (Barabasi and Albert, 1999) explains the vertex degree distribution and still possesses the small-world property. However, local clustering in scale-free networks is much weaker than in naturally occurring networks. Therefore, the hierarchical network model has been proposed (Ravasz et al. 2002), which combines strong local clustering with small average path length (smallworld property) and naturally observed vertex degree distribution (power law, i.e. scale-free property). Scale-free network topology has important implications for the robustness of complex systems (Albert et al. 2000). Since in scale-free networks most vertices have only a few edges, the accidental failure of vertices is likely to affect mainly the vertices themselves, without key roles for the function of the system. By contrast, the presence of hubs (vertices with many edges) makes scale-free networks particularly vulnerable because targeted removal of hub vertices quickly leads to disconnected subnetworks (Albert et al. 2000). These features are of obvious benefi t in the search for new drug targets. We were wondering whether co-occurrence networks derived from cancer related gene expression signatures share topological features common to other naturally occurring networks. If this would be the case, those features could be used in the identifi cation of key regulators of the oncogenic process. 
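As a minimal illustration of the network models discussed above (and not an analysis of the PubLiME data), the following networkx sketch contrasts a Barabasi-Albert scale-free graph with an Erdos-Renyi random graph of the same size and density: both have short average path lengths, the scale-free graph shows the hub-dominated degree distribution, and neither reproduces the strong local clustering that motivates the hierarchical model. Graph sizes and seeds are arbitrary.

```python
# Illustrative sketch (not PubLiME data): the diagnostics discussed in the text,
# computed with networkx for a Barabasi-Albert scale-free graph and an
# Erdos-Renyi random graph of the same size and edge count.
import networkx as nx

n, m = 300, 3                                   # roughly the size of the PubLiME network
ba = nx.barabasi_albert_graph(n, m, seed=1)                # scale-free model
er = nx.gnm_random_graph(n, ba.number_of_edges(), seed=1)  # random graph, same density

for name, g in [("Barabasi-Albert", ba), ("Erdos-Renyi", er)]:
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(name,
          "avg clustering:", round(nx.average_clustering(g), 3),
          "avg path length:", round(nx.average_shortest_path_length(giant), 2),
          "max degree:", max(dict(g.degree()).values()))
```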
We show here that co-occurrence networks are characterized by scale-free topology and hierarchical modularity. Furthermore, we identifi ed two different co-occurrence patterns. Specifi cally, we found that some genes are differentially regulated in a wide variety of conditions and co-occur with many different genes. Paradoxically, this behavior leads to low vertex degrees in the co-occurrence network, since many co-occurrences never reached signifi cance. Among those genes, we found well-known oncogenes playing a critical role in cancer, such as Cyclin D1 (CCND1) and FOS. On the other hand, we found genes that were less prone to differential regulation, but each time their expression level changed it did so in a coordinated fashion with a similar set of genes in different conditions. These genes represent the most connected hubs of the co-occurrence network. Examples of those genes are CDC2, CDKN3, and TK1. The signifi cance of these fi ndings in interpreting gene expression data and in identifying potential target genes for follow-up studies is discussed. Generation of a repository of published cancer gene signatures The generation of the PubLiME repository has been previously described (Finocchiaro et al. 2007). Briefl y, 499 published cancer related gene expression microarray studies were scrutinized for: 1) aim of the study; 2) microarray platforms employed; 3) organism being investigated; and 4) feasibility of cross-platform annotation of published gene expression signatures. Among the 499 studies, 273 (233 human and 40 mouse) were selected for manual extraction of gene expression signatures from tables, figures, and supplementary material as lists of regulated genes. Cross-platform annotation was then performed as described (Finocchiaro et al. 2007). Data regarding publications and gene expression signatures were imported into a relational MySQL database that is accessible via a web-interface (http://bio.ifom-ieo-campus.it/publime/). Co-occurrence analysis of genes in gene expression signatures Lists of regulated genes were represented in a bipartite graph format, where gene names and publication IDs represent the two vertex sets and an edge between them indicates differential regulation observed in a particular study. An edgeswapping procedure was applied in 1000 separate runs to determine the occurrence probability of a gene in a given publication. Given the occurrence probabilities for each gene in each publication, the probability of co-occurrences of arbitrary gene combinations (also called co-occurrence modules) in a publication could be calculated by multiplying the respective occurrence probabilities. The expected number of publications in which a gene combination is found follows a Poisson-binomial distribution (a binomial distribution with trial specifi c probabilities). Mean µ and variance σ of this distribution can be calculated as: where p i designates the co-occurrence probability of a given gene combination in publication i. N is the total number of publications. A Z-score transformation of the observed number k of co-occurrences of a given combination of genes can then be applied to assess the signifi cance of co-occurrence. To limit noise effects, we required a co-occurrence module to be observed in at least fi ve publications and the Z-score to be at least 5. A more detailed description of the analysis procedure can be found in Finocchiaro et al. 2007. 
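The explicit formulas for µ and σ appear to have been lost in extraction above. The sketch below therefore assumes the standard Poisson-binomial moments, µ = Σ_i p_i and σ² = Σ_i p_i(1 − p_i), with the Z-score Z = (k − µ)/σ applied to the observed number of co-occurrences k; the probabilities used are simulated placeholders, not PubLiME values.

```python
# Sketch of the co-occurrence significance test described above.  The standard
# Poisson-binomial moments are assumed:
#   mu = sum_i p_i,   sigma^2 = sum_i p_i * (1 - p_i),   Z = (k - mu) / sigma
# where p_i is the co-occurrence probability of the gene combination in publication i
# (the product of the per-gene occurrence probabilities) and k is the observed count.
import numpy as np

def cooccurrence_zscore(per_gene_probs, k_observed):
    """per_gene_probs: array of shape (n_publications, n_genes_in_module)."""
    p = np.prod(per_gene_probs, axis=1)          # module probability per publication
    mu = p.sum()
    sigma = np.sqrt((p * (1.0 - p)).sum())
    return (k_observed - mu) / sigma

rng = np.random.default_rng(0)
probs = rng.uniform(0.001, 0.05, size=(273, 3))  # 273 publications, a 3-gene module
k_observed = 6
z = cooccurrence_zscore(probs, k_observed)
# thresholds stated in the text: module seen in at least 5 publications and Z >= 5
print("Z =", round(z, 2), "significant:", (k_observed >= 5) and (z >= 5))
```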
Co-occurrence network construction From the set of signifi cant co-occurrence modules, a co-occurrence network was constructed in the following fashion: For each module, the gene names are represented by vertices and an edge is drawn between all pair-wise combinations of genes present in the module. This procedure is repeated for all signifi cant co-occurrence modules. Regression analysis Regression analysis was applied to estimate the scaling factors for the scale-free and exponential network models, as well as to investigate the relationship between the clustering coeffi cient C(k) and the vertex degree k. In a scale-free network, the vertex degrees are distributed according to a power law: where P(k) describes the probability of observing a vertex of degree k. γ is the scaling factor. After log transformation, this relationship becomes: Thus, the relationship between ln(P(k)) and ln(k) is given by a line with slope -γ. To estimate γ , the observed data where plotted with ln(P(k)) on the y-axis and ln(k) on the x-axis and Mathematica software ("Fit" function) was used to fi nd the equation of the line that best fi ts the data according to the least squares criterion. The slope of the regression line provides an estimate for the scaling factor γ. Regression analysis for the exponential network model was carried out similarly. However, since in an exponential network the vertex degrees follow an exponential law: which after log transformation becomes: ln( ( ))P k k − γ the data where plotted with ln(P(k)) on the y-axis and k (instead of ln(k)) on the x-axis before applying the Mathematica "Fit" function. The clustering coeffi cient C(k) measures the fraction of observed edges divided by the number of theoretically possible edges linking direct vertex neighbors and thus offers a measure to detect modularity in networks. In hierarchical networks, the average clustering coeffi cient scales with C(k) ~ k −1 and is independent of network size (Ravasz and Barabasi, 2003;Ravasz et al. 2002). Regression analysis was applied to verify this relationship in the PubLiME co-occurrence network, by plotting ln(C(k)) on the y-axis and ln(k) on the x-axis followed by applying the Mathematica "Fit" function to the data. The resulting regression line should have a slope close to −1 if the network is hierarchical. R-square value The R-square value is calculated as: R-square assumes values between 0 and 1 and shows how much of the variability in the data is explained by the regression model. A value of 1 indicates a perfect fi t. Functional gene category enrichment analysis The DAVID database (Dennis et al. 2003) was used for functional category enrichment analysis, following the instructions given at http://niaid.abcc. ncifcrf.gov. The gene lists analyzed correspond to direct vertex neighbors of the genes studied. The multiple testing corrected P-values (Benjamini correction) are reported. Software Custom Java based software was used for determining occurrence probabilities, co-occurrence probabilities and the identifi cation of signifi cant co-occurrence modules from PubLiME data. JUNG (http://jung. sourceforge.net/index.html) and Netsight (http://jung. sourceforge.net/netsight/) software were used for graph visualization. Mathematica software (Fit function) was used for linear regression analyses. 
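The construction and regression steps described above were implemented by the authors with custom Java software and Mathematica's Fit function; the outline below is an equivalent sketch in Python using networkx, with a toy set of modules standing in for the significant co-occurrence modules. With the real PubLiME modules the same procedure should recover the reported values, namely γ ≈ 2.19 for the degree distribution and a slope near −1 for the clustering coefficient C(k).

```python
# Equivalent Python sketch of the construction and regression steps (the authors
# used Java and Mathematica).  'modules' is a toy stand-in for the significant
# co-occurrence modules extracted from PubLiME.
import itertools
import numpy as np
import networkx as nx

def build_cooccurrence_network(modules):
    g = nx.Graph()
    for module in modules:                       # each module = a set of gene names
        for a, b in itertools.combinations(sorted(module), 2):
            g.add_edge(a, b)                     # edge for every pair in the module
    return g

def loglog_slope(x, y):
    """Least-squares slope of ln(y) versus ln(x)."""
    slope, _intercept = np.polyfit(np.log(x), np.log(y), 1)
    return slope

# toy example: three overlapping modules
modules = [{"CDKN3", "CDC2", "CCNB1"},
           {"CDKN3", "CCNB1", "MYBL2"},
           {"CDC2", "TK1", "CDKN3"}]
g = build_cooccurrence_network(modules)

degrees = np.array(sorted(dict(g.degree()).values(), reverse=True))
ks, counts = np.unique(degrees, return_counts=True)
pk = counts / counts.sum()                       # empirical P(k)
if len(ks) > 2:
    print("estimated gamma:", round(-loglog_slope(ks, pk), 2))

# average clustering coefficient per vertex degree (C(k) ~ k^-1 in hierarchical networks)
clust = nx.clustering(g)
for k in ks:
    nodes = [n for n, d in g.degree() if d == k]
    print("k =", int(k), " mean C(k) =", round(np.mean([clust[n] for n in nodes]), 3))
```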
Results Co-occurrence network vertex degrees are distributed non-randomly Previously, we have reported co-occurrence analysis of published gene expression signatures collected in the PubLiME repository (Finocchiaro et al. 2005;Finocchiaro et al. 2007). From the set of significantly co-occurring genes, a co-occurrence network was constructed as described in Materials and Methods. A representation of the PubLiME co-occurrence network is shown in Figure 1. To investigate the topological properties of this network, we performed a vertex degree ranking analysis of the network as a fi rst step (Fig. 1). The vertex diameter represents vertex degree (larger diameter indicates larger degree) and from this analysis the gene displaying the highest vertex degree is CDKN3 (77 edges), which is shown by an arrow. CDKN3 is a dualspecificity phosphatase that binds to cyclindependent kinases and inhibits cell cycle progression (Hannon et al. 1994). The next most connected genes are CDC2 (58 edges), CCNB1 (49 edges), LGALS1 (48 edges), and MYBL2 (42 edges). For these genes, the vertex degree is indicated in white letters in Figure 1. Without assuming a particular distribution of vertex degrees, a Z-score transformation of vertex degrees could be used to evaluate whether the vertex degrees of the above mentioned genes are compatible with a random distribution. Such Z-score transformation of vertex degrees was carried out by subtracting the mean vertex degree form the observed vertex degree, followed by dividing the result by the standard deviation of vertex degrees. The mean vertex degree of the network shown in Figure 1 was found to be 7.73, with a standard deviation of 9.42. Using these values, we obtained the Z-scores for the vertex degree of every gene. According to Tchebyshev's theorem, the probability of observing these values by chance is at most the inverse of the square of Z-scores. These values are reported in Table 1. This fi rst analysis thus shows that the vertex degrees are not distributed in a random fashion. Analysis of co-occurrence network topology Analysis of the PubLiME co-occurrence network's topology produced the results shown in Figure 2. Figure 2A shows the distribution of the probability of observing a vertex with a given vertex degree P(k) as a function of the vertex degree k. The natural logarithm of both values is displayed and since P(k) Ͻ= 1, the values on the y-axis are negative. The data illustrate a linear relationship between the two variables (black squares) and show that high vertex degrees are less probable. Scale-free networks are characterized by the relationship P(k) ~ k −γ (γ = scaling coeffi cient), or ln(P(k)) ~ −γ ln(k), i.e. an inverse linear relationship between ln(P(k)) and ln(k), as observed in the data. The slope of the line fi tted to the data (grey triangles) using the least squares method (see Materials and Methods) evaluates to −2.19, which is typical for naturally occurring networks (Albert and Barabasi, 2002). However, while many naturally occurring networks were found to be scale-free, some networks (e.g. transcription regulatory networks) turned out to be exponential (Barabasi and Oltvai, 2004). In exponential networks, the vertex degree distribution is described by the relationship P(k) ~ e −γk . This function implies a linear relationship between ln(P(k)) and k, with slope -γ. In order to test whether the PubLiME co-occurrence Figure 1. The PubLiME co-occurrence network. A representation of the PubLiME co-occurrence network is shown. 
The Z-score cutoff during co-occurrence analysis was set to 5 and co-occurrence modules of size 3 were required to be present in at least 5 publications. Larger vertex degrees are visualized by larger vertex diameter. The gene with the largest vertex degree (CDKN3) is indicated by an arrow and vertex degrees of the fi ve most connected genes are shown in white letters. network was better described by an exponential model, we fi tted a line to the observed degree distribution in [k, ln(P(k))] space (see Materials and Methods) and visualized the result in [ln(k), ln(P(k)] space in Figure 2A (grey squares), in order to obtain a direct visual representation of the quality of fi t for both the scale-free and exponential models. Visual inspection of the data showed that both the linear scale-free and the slightly curved exponential models fi tted the data quite closely. In order to decide which model fi ts the data better, the results were then displayed in [k, P(k)] space (Fig. 2B) and the R-square value was calculated for both models. The R-square value indicates how much of the variation in the data is explained by the model. For the exponential model we obtained an R-square value of 0.68, while for the scale-free model the R-square value was 0.87. If R-square values are calculated in [ln(k), ln(P(k))] space, the corresponding values are 0.89 (exponential) and 0.96 (scale-free). Thus, the scale-free model explained the data much better than the exponential model and we concluded that the PubLiME co-occurrence network represented more likely a scale-free network than an exponential network. To evaluate the modularity of the network, we also analyzed the scaling properties of the clustering coeffi cient C(k). In hierarchical networks, the average clustering coeffi cient scales with C(k) ~ k −1 (Ravasz and Barabasi, 2003;Ravasz et al. 2002). Therefore, we tested whether the average clustering coeffi cient of the PubLiME co-occurrence network had this property. The results are shown in Figure 2C, which illustrates a linear relationship between ln(C(k)) and ln(k) (black squares). Although there are some outlier values, we observed that in the PubLiME co-occurrence network the clustering coeffi cient seemed to obey the C(k) ~ k −1 rule. Regression analysis (grey triangles) was thus applied to estimate the scaling coeffi cient (see Materials and Methods) and we obtained a value of −1.06 (Fig. 2C) that was close to the theoretically expected scaling coeffi cient of −1. As a further characteristic of hierarchical networks, it has been shown that the average clustering coeffi cient is usually much larger than in Barabasi-Albert networks having with similar degree distribution and is also largely independent of network size (Ravasz and Barabasi, 2003;Ravasz et al. 2002). In order to test whether these properties are present in the PubLiME co-occurrence network, we constructed co-occurrence networks from the PubLiME dataset, using different cutoff values for the support parameter S which requires a co-occurrence module to be observed in at least S publications. As a result, we obtained networks of different sizes and thus compared them to Barabasi-Albert networks with similar degree distribution and size generated by the JUNG random graph generator function. As can be seen in Figure 2D, the average clustering coeffi cient was largely independent of network size for the PubLiME co-occurrence networks, while it dropped rapidly in Barabasi-Albert networks. 
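The Barabasi-Albert side of the comparison behind Figure 2D can be sketched as follows; the paper generated these reference graphs with the JUNG random-graph generator, whereas networkx is used here and the sizes are illustrative. The point of the comparison is that the average clustering coefficient decays as Barabasi-Albert networks grow, while the text reports that it stays roughly constant for the PubLiME networks built at supports S = 8, 7, 6 and 5.

```python
# Sketch of the reference curve in Figure 2D (JUNG in the paper, networkx here):
# the average clustering coefficient of Barabasi-Albert graphs falls as the
# network grows, unlike the roughly size-independent PubLiME values reported.
import networkx as nx

for n in [100, 200, 400, 800]:
    g = nx.barabasi_albert_graph(n, m=4, seed=42)
    print("n =", n, " average clustering =", round(nx.average_clustering(g), 3))
```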
Thus, PubLiME networks apparently possess a scale-free topology with clear signs of hierarchical modularity. Hubs in the PubLiME co-occurrence network We next asked whether differences in the cooccurrence patterns of genes can be identifi ed. Previous analysis of the PubLiME dataset revealed that several genes display profound differences in their propensity of being detected as differentially regulated in a gene expression microarray experiment (Finocchiaro et al. 2007). Indeed, whereas the expression levels of some genes change in response to a wide variety of different biological conditions, most genes were found to display stable expression levels. For example, CCND1 was found to be differentially regulated in 15% of published studies, whereas two thirds of all human genes were never reported as differentially regulated. Thus, the question to be addressed is whether the genes that are most connected in the co-occurrence network are identical to the genes with the highest propensity to being differentially regulated in diverse conditions. To investigate this question, the PubLiME dataset's genes were sorted in descending order, according to both the total number of occurrences in the literature (Table 2) and the vertex degree in the co-occurrence network ( Table 3). The top ten genes are shown in each case. Strikingly, the majority of genes that occur most frequently in the literature were not part of the co-occurrence network. This was because they did not co-occur consistently with other genes. Among these genes, some have known roles in oncogenesis, such as Cyclin D1 (CCND1), FOS and p21 (CDKN1A). Moreover, three of these genes co-occurred consistently with at least some genes (MYC, TNFAIP3, VEGF). However, with the exception of MYC, their vertex degrees were not exceptional. On the other hand, genes with highest vertex degrees were not ranked among the genes that occur most frequently in the literature (except for MYC, see Tables 2, 3). These results demonstrate that genes with the highest vertex degree in the co-occurrence network did not represent those which are most susceptible to underlie differential regulation. These data can be ) is plotted against the natural logarithm of vertex degrees (black diamonds). The slope of the line fi tted to these data (the scaling parameter of the scale-free model (grey triangles)) by the least squares method is found to be −2.19. The exponential model (grey squares) has been obtained by fi tting a line to the data in [k, ln(P(k)] linear-log space and is visualized here in [ln(k), lnN(P(k))] log-log space. exp-exponential model, sf-scale-free model. B) Observed vertex degree distribution (black diamonds) in [k, P(k)] linear-linear space along with the predicted vertex degree distributions according to the scale-free (grey triangles) and the exponential models (grey squares) are shown. C) The natural logarithm of the average clustering coeffi cient of vertices with the same degree is plotted against the natural logarithm of vertex degrees. Only vertices with degree above 20 were analyzed. The slope of the line fi tted to these data using the least squares method (the scaling parameter) is found to be −1.06. D) The average clustering coeffi cient is shown for PubLiME co-occurrence networks derived for support 8, 7, 6, and 5. The support parameter indicates the minimal number of lists a module must be part of. The different support values cause the resulting networks to be of different sizes (number of vertices shown on the X-axis). 
Barabasi-Albert networks of equal size and degree distribution have been generated using the JUNG package random graph generator function for comparison purposes. The average clustering coeffi cient falls rapidly in Barabasi-Albert networks as network size grows. In PubLiME networks, the average clustering coeffi cient is stable. explained by assuming that the frequently regulated genes display a "promiscuous" behavior and co-occur in regulated gene lists with different genes in different conditions. Assuming the correctness of the model described above, the vertex neighbors of frequently regulated genes should be characterized by heterogeneous functional gene annotations. We thus used the DAVID database (Dennis et al. 2003) (http://niaid.abcc.ncifcrf.gov/) to interrogate the set of vertex neighbors for enrichment of functional categories. The most signifi cant category, along with the multiple testing corrected P-value as reported by DAVID is shown in Table 2. As reported, no coherent functional category could be identifi ed for any of the frequently regulated genes. Surprisingly, however, nearly all genes with high vertex degree (Table 3) showed neighbors with consistent functional annotation. These results indicate that those genes not only co-occur with similar sets of genes in different conditions, but they also co-occur with genes playing similar roles in cellular physiology. In contrast to the promiscuous behavior of frequently regulated genes, they are thus "faithful" to a subset of genes which might be required for carrying out their function in a coordinated fashion. In conclusion, two types of genes are likely distinguished in the PubLiME data set: 1) genes that respond to many different conditions by showing differential expression, but their expression is poorly correlated with the behavior of other genes, or 2) genes that are differentially expressed in fewer conditions, but their differential expression is often accompanied by differential expression of similar sets of genes. We can conclude that the PubLiME co-occurrence network is mainly dominated by faithful genes, whose function can generally be predicted from the function of their neighbors. Promiscuous Discussion In this work, we investigated the topological properties of PubLiME co-occurrence networks. Pub-LiME is a database storing published gene expression signatures in a gene lists format (Finocchiaro et al. 2005;Finocchiaro et al. 2007). Other researchers have previously reported the development of similar resources (Cahan et al. 2005;Newman and Weiner, 2005). Gene expression studies are widely applied in order to shed light on several biological processes. However, standardized procedures on how to identify the biologically meaningful pieces of data in generally quite large datasets are still missing. A common procedure is to use gene category enrichment analysis on lists of differentially regulated genes to identify biological processes that are affected by a given biological condition. However, since this procedure relies on preassembled gene lists, new pathways cannot be identifi ed. An additional problem is posed by the annotation quality of databases (Khatri et al. 2005). Furthermore, once a gene list has been found to be signifi cantly associated with a given pathway, it is not clear which of the tens or hundreds of genes in the list are critically involved in regulating the pathway. Within this frame, topological analysis of cooccurrence networks may offer an interesting alternative for several reasons. 
First, pathway target genes can be identifi ed using graph-cut approaches, without relying on pre-assembled gene lists. Second, the hubs in co-occurrence networks suggest interesting genes for more detailed analysis. We have shown that the PubLiME co-occurrence network displays a scale-free vertex degree distribution. While biological networks derived from highthroughput protein-protein interaction data, metabolism, or protein domains have been known for some time to possess scale-free topology (Barabasi and Oltvai, 2004;Titz et al. 2004), these studies have been carried out on networks making reference to structural properties of the biological entities. On the other hand, the PubLiME cooccurrence network does not rely on structure-driven interactions. It is based instead on co-occurrences of genes in published gene expression signatures manually extracted from a wide variety of studies using model cell lines or patient tissues. Furthermore, our analysis of the clustering coeffi cient suggests that the co-occurrence network possesses hierarchical modularity, a conclusion that is compatible with previously reported gene communities identifi ed in this network (Finocchiaro et al. 2007). It is worth noting that sampling quality can infl uence topology predictions quite significantly . The PubLiME co-occurrence network has not been sampled, since the entire network has been analyzed for its topological properties. However, the PubLiME co-occurrence network is based on published gene expression signatures, and the collection of signatures accessible in PubLiME is necessarily incomplete, also because new signatures are being constantly produced. Therefore, future studies will be required to validate our conclusions. It should also be recognized that the data collection in PubLiME is by design biased towards cancer-related gene expression signatures and may also be further biased in unknown ways, due to the experimental choices made by the researches whose signatures have been archived. Thus, the conclusions drawn about specifi c hub genes should be considered with caution and validated by future research. Nevertheless, we believe that the general conclusions about the hierarchical topology of cooccurrence networks of published gene expression signatures are not affected by these biases, since we have previously shown that gene communities correspond to different cancer signaling pathways and that the promoters of community genes are usually enriched for transcription factor binding motifs that are in line with biological intuition and experimentation (Finocchiaro et al. 2007). Moreover, we have shown that communities identifi ed in humans correspond to communities identifi ed in murine model systems (Finocchiaro et al. 2007). In other words, the hierarchical nature of the network seems to refl ect biological reality. Thus, while the number and composition of communities will certainly be subject to changes as new signatures are being analyzed, the general topology is expected to remain hierarchical. How could topological information of cooccurrence networks be used to identify interesting target genes for more detailed investigation? The scale-free nature of the co-occurrence network with a scaling parameter of 2.19 suggests that there are some hub genes having connections to a large part of the total of genes constituting the network. Indeed, CDKN3, a dual specifi city phosphatase of cyclin dependent kinases that was discovered in the early 90's (Hannon et al. 
1994), is linked to 77 of the 306 genes (25%) in the network. These data suggest that CDKN3 plays a key role in regulating cell division. Interestingly, CDKN3 displayed more connections than CDC2 (58 edges), a bona fi de key regulator of cell cycle progression. In general, hubs of the co-occurrence network represent genes with consistent co-occurrence behavior over a set of conditions. They co-occur with similar sets of genes and represent the core of subnetworks or modules, whose function can be predicted from the functional annotations of the genes constituting the community. As such, they represent excellent candidates for follow-up studies. However, detailed investigation of hub genes in the co-occurrence network has revealed that they differ in their propensity to co-occur with similar sets of genes. We noticed that the genes that were most frequently reported as differentially regulated in the literature were not among the genes that are most connected in the co-occurrence network. This observation suggests a "promiscuous" co-occurrence pattern for those genes. Interestingly, among them we fi nd some known oncogenes such as CCND1 and FOS. It should be noted that most of these genes (with the exception of MYC) are not hubs in the cooccurrence network. They are referred to as hubs here simply because they are the most frequently occurring genes in the PubLiME dataset. In other words, they are occurrence hubs rather than cooccurrence hubs. On the other hand, the genes with most edges in the co-occurrence network are not among the genes that are most often reported in lists of differentially regulated genes. However, when they are found differentially regulated, similar sets of genes are often found co-regulated. Thus, they are "faithful" to a subset of differentially regulated genes. Furthermore, the co-regulated genes are often characterized by similar functional annotations as the hub itself. Taken together, these data suggest the presence of occurrence hubs and co-occurrence hubs in the PubLiME dataset, which often do not represent the same genes. Such a behavior is reminiscent of date and party hubs (Han et al. 2004). Party hubs tend to associate with similar sets of vertices in various conditions and are thought to represent structural organizers of semi-autonomous network modules. Date hubs, on the other hand, associate with different vertices in different conditions and might represent key regulators that orchestrate the activity of network modules according to the needs of a cell in specifi c circumstances. In a fi rst approximation, promiscuous genes appear similar to date hubs while faithful genes behave more like party hubs. Given the observation that known oncogenes seem to behave as date hubs, it may be highly informative to study the behavior of other date hubs in cancer cells, in order to achieve interesting insight into the signaling pathways operating in cancer cells and the regulators infl uencing their function. The analysis of party hubs, on the other hand, may lead to the identifi cation of novel drug targets whose inactivation might cause functional debilitation of downstream targets of deregulated signaling pathways, which in the co-occurrence network are forming communities of highly interconnected genes, or modules, with party hubs as central organizers.
7,305.2
2008-01-01T00:00:00.000
[ "Biology", "Computer Science" ]
Unraveling the Complex Interplay of Fis and IHF Through Synthetic Promoter Engineering Bacterial promoters are usually formed by multiple cis-regulatory elements recognized by a plethora of transcriptional factors (TFs). From those, global regulators are key elements since these TFs are responsible for the regulation of hundreds of genes in the bacterial genome. For instance, Fis and IHF are global regulators that play a major role in gene expression control in Escherichia coli, and usually, multiple cis-regulatory elements for these proteins are present at target promoters. Here, we investigated the relationship between the architecture of the cis-regulatory elements for Fis and IHF in E. coli. For this, we analyze 42 synthetic promoter variants harboring consensus cis-elements for Fis and IHF at different distances from the core −35/−10 region and in various numbers and combinations. We first demonstrated that although Fis preferentially recognizes its consensus cis-element, it can also recognize, to some extent, the consensus-binding site for IHF, and the same was true for IHF, which was also able to recognize Fis binding sites. However, changing the arrangement of the cis-elements (i.e., the position or number of sites) can completely abolish the non-specific binding of both TFs. More remarkably, we demonstrated that combining cis-elements for both TFs could result in Fis and IHF repressed or activated promoters depending on the final architecture of the promoters in an unpredictable way. Taken together, the data presented here demonstrate how small changes in the architecture of bacterial promoters could result in drastic changes in the final regulatory logic of the system, with important implications for the understanding of natural complex promoters in bacteria and their engineering for novel applications. INTRODUCTION Bacteria have evolved complex gene regulatory networks to coordinate the expression level of each gene in response to changing environmental conditions. In this aspect, a typical bacterium such as Escherichia coli uses around 300 different transcriptional factors (TFs) to control the expression of more than 5,000 genes, and gene regulation in bacteria has been extensively investigated in the last six decades (Lozada-Chavez, 2006). Among the known TFs from E. coli, global regulators are able to control the highest percentage of transcriptional units in response to significant physiological or environmental signals, such as the metabolic state of the cell, the availability of carbon sources, and the presence of oxygen (Martínez-Antonio et al., 2003;Ishihama, 2010), while local regulators are responsible for gene regulation in response to specific signals (such as sugars and metals) (Ishihama, 2010;Browning and Busby, 2016). Most TFs control gene expression through their interaction with specific DNA sequences located near the promoter region, the cis-regulatory element, or transcriptional factor binding site Busby, 2004, 2016). Over the decades, many cis-regulatory elements for many TFs from E. coli have been experimentally characterized, mapped, and compiled in databases such as RegulonDB and EcoCyc (Gama-Castro et al., 2016;Keseler et al., 2017). Analysis of these datasets demonstrates that TFs usually act in a combinatorial way to control gene expression, where multiple cis-regulatory elements for different TFs are located in the upstream region of the target genes (Guazzaroni and Silva-Rocha, 2014;Rydenfelt et al., 2014;Gama-Castro et al., 2016). 
Therefore, the arrangement of cis-regulatory elements at the target promoters is crucial to determine which TFs will be able to control the target gene and how these regulators interact with each other once bound to the DNA (Collado-Vides et al., 1991;Ishihama, 2010). Several studies have explored the relationship between the architecture of cis-regulatory elements and the final logic of the target promoters, and initial attempts have focused on the mutation of cis-regulatory elements from natural promoters to investigate how these elements specify the promoter activity dynamics (Sawers, 1993;Darwin and Stewart, 1995;Izu et al., 2002;Setty et al., 2003). More recently, synthetic biology approaches have been used to construct artificial promoters through the combination of several cis-regulatory elements, and these have been characterized to decipher their architecture/dynamics relationship (Cox et al., 2007;Isalan et al., 2008;Kinkhabwala and Guet, 2008;Shis et al., 2014). However, while most synthetic biology approaches have focused on cis-elements for local regulators (which do not commonly regulate gene expression in a combinatorial manner), we recently investigated this combinatorial regulation problem with global regulators (Guazzaroni and Silva-Rocha, 2014;Amores et al., 2015;Monteiro et al., 2018). This is important because global regulators (such as IHF, Fis, and CRP) have numerous binding sites along the E. coli genome and frequently co-occur at target promoters (Guazzaroni and Silva-Rocha, 2014). Thus, Fis and IHF are two global regulators that play a critical role in coordinating gene expression in E. coli as well as in mediating DNA condensation in the cell (Azam and Ishihama, 1999;Browning and Busby, 2004;Browning et al., 2010;Ishihama, 2010). Fis, an abundant nucleoid-associated protein (NAP), is related to gene expression regulation in fast-growing cells, varying its function (as a repressor or activator transcriptional factor) according to its biding site position related to the core promoter (Hirvonen et al., 2001), while IHF is a NAP, which activity relates to changes in gene expression in cells during the transition from exponential to stationary phase (Azam and Ishihama, 1999;Azam et al., 2000;Browning et al., 2010). Moreover, IHF binds to AT-rich DNA motifs with well-defined sequence preferences, while Fis also prefers AT-rich regions with a more degenerate sequence preference (Déthiollaz et al., 1996;Ussery et al., 2001;Dorman and Deighan, 2003;Aeling et al., 2006). Additionally, cross-regulation between Fis and IHF has been demonstrated for several systems, and how specific vs. promiscuous DNA recognition can be achieved for these two global regulators is not fully understood (Browning et al., 2010;Ishihama, 2010;Rossiter et al., 2015). We previously explored how complex synthetic promoters harboring cis-regulatory elements for CRP and IHF can generate diverse regulatory logic depending on the final architecture of synthetic promoters, demonstrating that it is not possible to predict the regulatory logic of complex multiple promoters from the known dynamics of their simple versions (Monteiro et al., 2018). Here, we further explore this approach to investigate the relationship between cis-regulatory elements for Fis and IHF. 
Using consensus binding sites for these 2 TFs at different promoter positions and in different numbers, we first demonstrated that while some promiscuous interactions occur between the TFs and the binding sites, some specific cis-regulatory architectures can completely abolish non-specific interactions. Additionally, complex promoters constructed by the combination of cis-elements for Fis and IHF can generate many completely different outputs, such as Fis-repressed promoters, IHF-repressed promoters, and systems where Fis and IHF act as activators. As these changes in promoter logic result from changes in promoter architecture only (and not on the affinity of the transcriptional factor to each individual cis-element), the data presented here reinforce the notion that complex bacterial promoters can display emergent properties, where their final behavior cannot be defined from the characterization of the individual component. Taken together, our findings present a comprehensive strategy for fine-tuning gene circuits to perform optimally in a given context (e.g., engineering of synthetic promoters) as well as provide insights for the understanding of natural complex promoters controlled by global regulators. Generation of Complex Promoters for Fis and IHF In order to investigate the effect of promoter architecture in the regulation by Fis and IHF, we evaluated the effect of 12 complex promoters constructed in early work (Monteiro et al., 2018) and we constructed 30 new combinatorial promoters with consensus DNA sequences for Fis (Fis-BS) and IHF (IHF-BS) binding sites positioned upstream of a weak core promoter (−35/−10 region) at specific positions (1-4) centered at the −61, −81, −101, and −121 regions related to the transcriptional start site (TSS) (Figure 1). For that, we generated double-strand DNA sequences for Fis-BS, IHF-BS, and a neutral sequence (Neg) with no related transcriptional binding site, which were combined for the generation of a library of synthetic promoters, merging the transcriptional binding sites for Fis, IHF, and/or neutral sequence for each position ( Table 1). The complex promoters were assembled by DNA ligation and cloned into pMR1, a midcopy number vector harboring mCherry and GFPlva as reporter fluorescent proteins (Figure 1). The resulting reporter plasmids (with each promoter controlling only by GFPlva expression) were used to transform competent E. coli wild-type strain FIGURE 1 | Strategies to construct synthetic complex promoters. (A) DNA sequences harboring the consensus sequence for IHF or Fis binding were selected, along with a control sequence that cannot be recognized by any TF. (B) Double-stranded DNA fragments were produced with cohesive ends specific for each promoter position (numbered from 1 to 4) and assembled together with a weak core promoter harboring the −35/−10 boxes for RNAP recognition (Guazzaroni and Silva-Rocha, 2014). (C) The fragments were cloned into a promoter probe vector (pMR1) harboring resistance to chloramphenicol (CmR), a medium-copy number origin of replication (p15a), and two reporter genes (mCherry and GFPlva). The libraries were introduced into wild-type and mutant strains of E. coli from the KEIO collection (Baba et al., 2006). The resultant strains were analyzed at the population level in a plate reader and the data processed using script in R. (BW25113-WT) and/or E. coli mutants for ihfA ( ihfA) and fis ( fis) (from Keio collection) (Baba et al., 2006). 
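The combinatorial design space behind the characterized variants can be enumerated directly; the sketch below uses placeholder element names rather than the actual consensus sequences or plasmid maps, and simply counts the architectures obtained by filling the four upstream positions (centered at −61, −81, −101 and −121 relative to the TSS) with Fis-BS, IHF-BS or the neutral spacer.

```python
# Sketch of the combinatorial design space described above.  Element names are
# placeholders, not the consensus sequences used in the paper.
import itertools

ELEMENTS = ["Fis", "IHF", "Neg"]
POSITIONS = {1: -61, 2: -81, 3: -101, 4: -121}   # position index -> center relative to TSS

architectures = list(itertools.product(ELEMENTS, repeat=len(POSITIONS)))
print("total possible architectures:", len(architectures))   # 3^4 = 81

# e.g. the subset analysed in Figure 2: promoters mixing only Fis-BS and Neg
fis_only = [a for a in architectures if set(a) <= {"Fis", "Neg"} and "Fis" in a]
print("Fis/Neg variants:", len(fis_only))                     # 2^4 - 1 = 15

def promoter_name(arch):
    """Readable label, listed from position 4 (-121) down to position 1 (-61)."""
    return "-".join(arch[::-1]) + "-core(-35/-10)"

print(promoter_name(("IHF", "Neg", "Fis", "Fis")))
```

Of the 81 possible four-position architectures, the study characterizes 42 variants, including the Fis/Neg and IHF/Neg subsets analysed in Figures 2 and 3 and the mixed Fis/IHF combinations analysed below.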
Using these constructs, we assayed promoter activity for 8 h in minimal media (M9 complete), measuring the relative GFP expression (GFP/OD) in all strains in the plate reader fluorimeter Victor X3 (PerkinElmer). As a negative control, we used the Neg sequence occupying the 4 possible positions before the core promoter. All data presented in this work are referred to 4 h of cell growth. In the next sections, we present the results of the promoter analysis per category to uncover the cis-regulatory logic for each variant. Changing the Fis Binding Site Architecture Modulates Fis and IHF Binding Specificity We analyzed the architecture effect for Fis cis-regulatory elements by evaluating the influence of position and sequence combination for Fis-BS. For that, we used promoters merging Fis-BS and Neg sequences to measure relative GFP expression (GFP/OD) levels after 4 h of cell growth in wild-type, fis, and ihfA E. coli strains, and normalized the results to our negative control (top bars in Figure 2). The results displayed in Figure 2 show that most of the promoters harboring Fis-BS exhibit low activity in wild-type E. coli, comparable to the negative control. However, when these promoters were assayed in E. coli fis strain (red bars), 4 of them displayed a significant increase in activity compared to the wild-type strain (green and gray in Figure 2). Particularly, in the presence of Fis protein, Fis could occupy Fis-BS and act as a repressor of promoter activity. However, not all architectures with Fis-BS at the 4th or 3rd positions display this promoter behavior. This phenomenon only occurs in two other cases with more than 1 Fis-BS combination (promoters shaded in green in Figure 2). This reveals a complex association between promoter architecture and expression profile, which seems to be dependent on the Fis-BS position and arrangement. We also assayed Fis-BS promoters in the E. coli ihfA strain (blue bars) to evaluate the specificity of Fis for Fis-BS. Strikingly, despite most promoters display similar activity levels in the ihfA strain as in the wild-type, 1 single promoter variant harboring Fis-BS at the 3rd position (−101 relative to the TSS) displayed a substantial increase in activity in the ihfA mutant relative to the wild-type strain (promoter shaded in gray in Figure 2). This result indicates that IHF also acts as a repressor of this promoter variant. Although it was restricted to a single promoter variant, these results suggest that non-specific IHF binding to the Fis-BS exists, suggesting that promiscuous regulatory interaction could occur and seems to be dependent on promoter architecture, since this phenomenon is detected only for Fis-BS at the 3rd position. Altogether, these results suggest a complex interplay between the position and combination of Fis-BS and the regulation of gene expression. IHF Binding Sites Can Be Recognized by the Fis Regulator in an Architecture-Dependent Manner Using the same strategy as in Figure 2, we investigated the regulatory logic of promoters harboring multiple cis-regulatory elements for IHF, merging IHF-BS, and Neg sequences. Figure 3 shows that most promoters displayed low activity in the wild type strain of E. coli and higher activity in E. coli ihfA (blue bars), in agreement with previous data on complex IHF promoters (green and gray shaded) (Monteiro et al., 2018). However, when these promoters were assayed in E. 
coli fis strain (red bars), we observed that 4 promoter architectures also displayed higher activity in this mutant (promoters shaded in green in the figure), indicating that Fis was also able to repress these promoter variants, highlighting a possible crosstalk (Cepeda-Humerez et al., 2015;Friedlander et al., 2016) between these 2 TFs, which should be further investigated in the future. However, it is worth noticing that the promoter variants harboring cisregulatory elements for IHF at 4th or 3rd and 4th positions Promoter activities are shown in bars and normalized based on the activity of the reference promoter (i.e., a promoter with 4 neutral sequences). Promoter analyses were performed for 4 h of growth, three genetic backgrounds of E. coli (wild type-gray bars, fis-red bars, and ihfA-blue bars). Promoters that displayed a significant increase in activity compared to the wild-type strain were shaded in green or gray for easy viewing. Statistical differences between synthetic promoters and their control (wild type condition) are highlighted by (*) as analyzed using Student's t-test with p < 0.05. (B) Summary of most significant changes in promoter architecture leading to changes in promoter logic. (−101 and −121 relative to the TSS) displayed both a strong repression by IHF but no modulation by Fis (promoter shaded in gray in Figure 3). Again, these results reinforce that the gene expression pattern and the promiscuous or specific binding to transcriptional factors allows for the fine-tuning of promoter activities based on their architectures. Promoter activities are shown in bars and normalized based on the activity of the reference promoter (i.e., a promoter with 4 neutral sequences). Eleven promoter variants previously described (Monteiro et al., 2018) were analyzed in wild type (gray bars) fis (red bars), and ihfA (blue bars) mutant strains of E. coli. Promoter architectures that displayed higher activity in E. coli fis and ihfA are shaded in green, indicating that Fis was also able to repress these promoter variants, highlighting a possible crosstalk. Promoter that displayed a strong repression by IHF but no modulation by Fis are shaded in gray, reinforce that the gene expression pattern and the promiscuous or specific binding to transcriptional factors allows for the fine-tuning of promoter activities based on their architectures. Statistical differences between synthetic promoters and their control (wild type condition) are highlighted by (*) as analyzed using Student's t-test with p < 0.05. (B) Summary of most significant changes in promoter architecture leading to changes in promoter logic. Statistical differences between synthetic promoters and their control are highlighted by (*) as analyzed using Student's t-test with p < 0.05. Merging IHF-BS and Fis-BS Leads to an Unpredictable Expression Pattern After we investigated the regulatory interactions for promoters harboring cis-regulatory elements for a single transcriptional factor (IHF or Fis), we constructed promoters combining binding sites for both TFs and Neg sequences. In order to systematically investigate the effect of combined transcriptional factor-binding sites on promoter logic, we first fixed 1 IHF-BS at the 1st position (−61) and varied Fis-BS for the 2nd, 3rd, and 4th positions. As shown in Figure 4A, 1 promoter harboring 1 single IHF-BS at the 1st position showed no activity in the wild-type E. coli strain but increased activity in the fis and ihfA mutant strains. 
However, adding Fis-BS at the 2nd or 3rd position resulted in promoters with reduced activity in the ihfA and fis mutant strains, compared to IHF-BS at the 1st position (promoters shaded in green in Figure 4A). Comparison of these green shaded promoters to promoters with 1 single Fis-BS at the 2nd or 3rd positions in Figure 2, we cannot observe any patterns between the merging of binding sites for these transcriptional factors, that is, the activity of promoters consisting of both Fis-BS and IHF-BS is not the sum of behaviors from Fis-BS and IHF-BS individually. When 1 single IHF-BS was fixed at the 4th position (−121), the resulting promoter displayed strong activity in ihfA strains ( Figure 4B). However, when 1 single Fis-BS was added at the 1st position (−61), the resulting promoter displayed increased activity in the E. coli fis strain, while it showed no activity in the wild type and ihfA strains. Therefore, this promoter architecture may be being repressed, especially by Fis regulator (shaded in green). However, for promoters with Fis-BS fixed at the 1st position (Figure 2), we observed a reduction in the promoter activity in the fis strain, demonstrating that the presence of IHF in this specific position may influence a positive expression in the absence of Fis. Finally, the addition of 1 single or multiple Fis-BS at different positions completely blocked promoter activity, and this was not relieved in either fis or ihfA strains, showing that transcriptional factors and binding site sequences of IHF and Fis contribute to promoter complexity. A mutant for ihfA and fis should be a compelling model to completely understand this promoter logic, but a mutant for both TFs has proven to be difficult to construct. It is important to note that IHF and Fis, which are transcriptional factors, are also NAPs, so the gene expression identified here could be related to possible changes in the DNA geometry (Déthiollaz et al., 1996). Taken together, these results also suggest that Fis and IHF proteins and their binding sites exert complex regulatory patterns, hampering promoter behavior predictions. Combination of Fis and IHF Binding Sites Generates Strong Fis and IHF Activated Promoters In all promoters presented until this point, while the combination of different cis-regulatory genes was able to determine the regulatory logic displayed by IHF and Fis, the 2 TFs acted as repressors of promoter activity (Figures 2-4). However, this behavior shifted when we constructed promoter versions harboring IHF-BS at the 1st and 4th positions and varying sites for Fis-BS ( Figure 5). As shown in this figure, when 1 FIGURE 5 | Analysis of promoters with 2 fixed IHF-binding sites. (A) The architecture of the synthetic promoter is shown on the left (blue boxes represent IHF-BS and red boxes represent Fis-BS). Promoter activities are shown in bars and normalized based on the activity of the reference promoter (i.e., a promoter with 4 neutral sequences). Promoter variants were analyzed in wild type (gray bars) fis (red bars), and ihfA (blue bars) mutant strains of E. coli. For this analysis, IHF cis-regulatory elements were placed at positions 1 and 4, and additional Fis-BS were introduced into the promoters. Promoter that displayed a strong activity in the wild-type when compared to the fis or ihfA mutant of E. coli are shaded in green. Statistical differences between synthetic promoters in wild type condition and fis and ihfA condition are highlighted by (*) as analyzed using Student's t-test with p < 0.05. 
(B) Summary of most significant changes in promoter architecture leading to changes in promoter logic. Statistical differences between synthetic promoters and their control are highlighted by (*) as analyzed using Student's t-test with p < 0.05. single Fis-BS was added at the 2nd position (−81), the resulting promoter displayed a strong activity in the wild-type strain of E. coli, when compared to the version lacking this element (promoter in the green shaded region in Figure 5). Furthermore, when these promoters were assayed in E. coli fis or ihfA, we observed a substantial reduction in their activity, indicating that both TFs acted as activators of the combinatorial promoter. The same behavior was also observed for a promoter harboring 2 IHF-BS (at the 1st and 4th positions) and 2 Fis-BS (2nd and 3rd positions), where reduction in the gene expression was even more evident. The same does not occur for a promoter harboring the 2 sites of IHF-BS and Fis-BS at the 3rd position, indicating the dependence and complexity of the relationship between promoter architecture and gene expression. These results highlight the rise of emergent properties in complex promoters for global regulators (Monteiro et al., 2018), as increasing the number of cis-regulatory elements can drastically shift the final regulatory logic of the system. Conclusions Bacteria are naturally endowed with complex promoters harboring multiple binding sites for several TFs. While several works based on mathematical modeling have argued that combinatorial regulation can be predicted from the characterization of individual promoter elements (Yuh, 1998;Bintu et al., 2005;Hermsen et al., 2006;Zong et al., 2018), along with the previous report (Monteiro et al., 2018) and here we provide growing evidence that small changes in the architecture of cis-regulatory elements can drastically change the final response of the system (Kreamer et al., 2016). The unpredictable behaviors observed in these studies might also depict a deeper evolutionary trend in gene regulation that has selected molecular systems/mechanisms capable of promoting both evolvability and robustness of gene expression levels through non-linear gene regulation (Steinacher et al., 2016). Thus, understanding the way the architecture of cis-regulatory elements determines gene expression behavior is pivotal not only to understand natural bacterial systems but also to provide novel conceptual frameworks for the construction of synthetic promoters for biotechnological applications (Monteiro et al., 2019b). Frequently, in genetic bioengineering applications, it is also necessary to fine-tune and balance specific gene expression due to the complexity of regulatory networks (Boyle and Silver, 2012;Scalcinati et al., 2012;Steinacher et al., 2016). Several recent studies have focused on the improvement of this strategy for diverse purposes (Egbert and Klavins, 2012;Siegl et al., 2013;Hwang et al., 2018). The present adjusting approach could be used as a strategy for the fine-tuning of genetic circuits to perform optimally in a given context. Our approach provided a library (from this study and from our previous work (Monteiro et al., 2018) of 74 promoter architectures characterized in different strains and conditions for in total of 230 outputs (different promoters in different strains and growth conditions) (Figure 6 and Table S1). 
Promoters from our synthetic promoter library could, with small adaptations, be used for diverse purposes in the biotechnological and bacterial gene regulatory network fields. Abstracting all the gene regulation patterns investigated in this work, we are able to provide a visual summary of the findings reported here from a Boolean logic perspective (Figure 7). As shown in Figure 7A, moving a perfect Fis binding site by 20 bp (from position -121 to -101) can turn a specific Fis-repressed promoter into a system repressed by both Fis and IHF. Using a more formal logic gate definition (Amores et al., 2015), this modification can turn a promoter with a NOT logic into one with a NOR logic. However, a promoter harboring 2 IHF-binding sites at positions -121 and -101 displayed specific IHF-repression, while changing the second binding site to position -61 resulted in a promoter repressed by both IHF and Fis (Figure 7B). In terms of promoter logic, this change in cis-element architecture also turns a promoter with a NOT logic into one with a NOR logic. When a single IHF-binding site was present at position -121, the final promoter was only repressed by IHF (Figure 7C). Yet, introducing an additional Fis-binding site at position -61 of this promoter turned it into a system exclusively repressed by Fis. This change maintained the NOT logic of the promoter but changed the TF able to repress the activity. Finally, and more remarkably, while a promoter with 2 IHF-BS (at positions -121 and -61) was repressed by both Fis and IHF, adding a third binding site for Fis at position -81 resulted in a promoter strongly activated by both TFs (Figure 7D). Therefore, this single change in cis-element architecture turned a promoter with NOR logic into an OR promoter responsive to the same TFs. The remarkable regulatory versatility and unpredictability unveiled by these synthetic combinatorial promoters shows that we are only beginning to understand the complexity of gene regulation in bacteria. While the work presented here covers two of the main global regulators of E. coli, further studies are still necessary to uncover the hidden complexity of combinatorial gene regulation in this bacterium. Plasmids, Bacterial Strains, and Growth Conditions E. coli DH10B was used for cloning procedures, while E. coli BW25113 was used as the wild-type strain (WT); E. coli JW1702-1 was used as a mutant for the IHF transcription factor (TF), and E. coli JW3229 was used as a mutant for the Fis TF. All strains were obtained from the Keio collection. For the procedures and analyses, E. coli strains were grown in M9 minimal media (6.4 g/L Na2HPO4·7H2O, 1.5 g/L KH2PO4, 0.25 g/L NaCl, 0.5 g/L NH4Cl) supplemented with chloramphenicol at 34 µg/mL, 2 mM MgSO4, 0.1 mM casamino acids, and 1% glycerol as the sole carbon source (Complete M9) at 37°C. Plasmids, bacterial strains, and primers used in this study are listed in Table 1. Design of Synthetic Promoter Scaffolds and Ligation Reactions The construction of synthetic promoters was performed by ligation of 5′-end phosphorylated oligonucleotides acquired from Sigma Aldrich (Table 1). All single strands were designed to carry a 16 bp sequence containing the Fis binding site (F), the IHF binding site (I), or a neutral motif (N), which is a sequence that cannot be recognized by either TF (Figure 1A). These locations were identified as positions 1, 2, 3, and 4 (Figure 1B), located at −61, −81, −101, and −121 bp upstream of the core promoter, respectively (Figure 1C).
In addition to the 16 bp oligonucleotides, all single strands were designed to contain 3 base pairs overhang for its corrected insertion on the promoter (Figure 1C). Additionally, a core promoter based on the lac promoter, which is a weak promoter and therefore requires activation. The design of the synthetic promoters and the positions of the cis-elements were made based on strategies already performed by our group (Monteiro et al., 2018), aiming to arrange the cis-elements aligned to the transcription initiation site, considering the DNA curvature. To assemble the synthetic promoters, the 5 ′ and 3 ′ strands corresponding to each position were mixed at equimolar concentrations and annealed by heating at 95 • C for 5 min, followed by gradual cooling to room temperature (25 • C) for 5 min, and finally maintained at 0 • C for 5 min. The external overhangs of the cis-element at position 4 and the core promoter were designed to carry EcoRI and BamHI digested sites. In this way, it was allowed to ligate to a previously digested EcoRI/BamHI pMR1 plasmid. All five fragments (4 ciselements positions plus core promoter) were mixed equally in a pool with the final concentration of 5 ′ phosphate termini fixed at 15 µM. For the ligase reaction, 1 µL of the fragment pool was added to 50 ng EcoRI/BamHI pMR1 digested plasmid in the presence of ligase buffer and ligase enzyme to a final volume of 10 µL. Ligation was performed for 1 h at 16 • C, after which the ligase reaction was inactivated for 15 min at 65 • C. Two µL of the ligation was used to electroporate 50 µL of E. coli DH10B competent cells. After 1-h regenerating in 1 mL LB media, the total volume was plated in LB solid dishes supplemented with chloramphenicol at 34 µg/mL. Clones were confirmed by colony PCR with primers pMR1-F and pMR1-R (Table 1) using pMR1 empty plasmid PCR reaction as further length reference on electrophorese agarose gel. Clones with a potential correct length were submitted to Sanger DNA sequencing to confirm correct promoter assembly. Promoter Activity Analysis and Data Processing Promoter activity was measured for all 42 promoters at different genetic backgrounds and conditions. For each experiment, the plasmid containing the promoter of interest was used to transform E. coli wild type, E. coli ihfA mutant, or E. coli fis mutant, as indicated. Freshly plated single colonies were selected with sterile loops and then inoculated in 1 mL of M9 media. After 16 h 10 µL of this culture was assayed in 96 wells microplates in biological triplicate with 190 µL of M9 media. Cell growth and GFP fluorescence were quantified using a Victor X3 plate reader (PerkinElmer) that was measured for 8 h at intervals of 30 min. All graphics were constructed based on 4 h of cell growth since under our experimental setup and previous work (Monteiro et al., 2018), most promoters reach maximal activity at 4 h of growth. Therefore, this is the best time point to compare maximal promoter activity. Promoter activities were calculated as arbitrary units dividing the GFP fluorescence levels by the optical density at 600 nm (reported as GFP/OD 600 ) after background correction. Technical triplicates and biological triplicates were performed in all experiments. Raw data were processed using ad hoc R script (https://www.r-project.org/), and plots were constructed using R (version R-3.6.3). For all analyses, we calculated fold-change expression using pMR1-NNNN as the promoter reference. 
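The fold-change calculation described above was performed with an ad hoc R script; the Python sketch below reproduces the same idea under stated assumptions (column names, blank values, and the significance check against the wild-type background are illustrative, not the authors' exact procedure).

```python
# Minimal sketch of the processing described above (the original analysis
# used an ad hoc R script). Assumed columns: 'promoter', 'strain', 'time_h',
# 'od600', 'gfp'; blank values and thresholds are illustrative only.
import pandas as pd
from scipy import stats

def activities(df, time_h=4, blank_od=0.04, blank_gfp=120.0):
    """Background-corrected GFP/OD600 at the chosen time point."""
    d = df[df["time_h"] == time_h].copy()
    d["activity"] = (d["gfp"] - blank_gfp) / (d["od600"] - blank_od)
    return d

def fold_change(d, strain="WT", reference="NNNN"):
    """Fold-change of each promoter over the pMR1-NNNN reference in one strain."""
    s = d[d["strain"] == strain]
    mean_act = s.groupby("promoter")["activity"].mean()
    return mean_act / mean_act[reference]

def repressed_by(d, promoter, mutant, alpha=0.05):
    """True if a promoter is significantly more active in a deletion strain than in WT."""
    wt = d[(d["strain"] == "WT") & (d["promoter"] == promoter)]["activity"]
    mu = d[(d["strain"] == mutant) & (d["promoter"] == promoter)]["activity"]
    t_stat, p_value = stats.ttest_ind(mu, wt)
    return (p_value < alpha) and (mu.mean() > wt.mean())

# usage (hypothetical file and promoter names):
# d = activities(pd.read_csv("plate_reader.csv"))
# print(fold_change(d).sort_values(ascending=False).head())
# print(repressed_by(d, promoter="NIFN", mutant="dfis"))
```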
DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher. AUTHOR CONTRIBUTIONS RS-R and LM designed the experimental strategy. LM, AS-M, and CW performed the experiments. LM, AS-M, CW, and RS-R analyzed and interpreted the data. LM and RS-R wrote the manuscript. All authors have read and approved the final version of the manuscript.
6,895
2020-06-18T00:00:00.000
[ "Engineering", "Biology" ]
The role of Asp578 in maintaining the inactive conformation of the human lutropin/choriogonadotropin receptor. A constitutively activating mutation encoding Asp578→Gly in transmembrane helix 6 of the lutropin/choriogonadotropin receptor (LHR) is the most common cause of gonadotropin-independent, male-limited precocious puberty. This mutant LHR produces a 4.5-fold increase in basal cAMP when expressed in COS-7 cells. To better understand the normal role of Asp578 in the LHR we studied the effect of seven other amino acid substitutions at this position. No agonist binding or response was detected with the Asp578→Pro mutant. Agonist binding affinity was unaffected by the other substitutions and estimated receptor concentrations ranged from 11 to 184% of wild type. Substitution of Asp578 with Asn, a similarly sized, uncharged residue, did not produce agonist-independent activation. In contrast, replacement with Glu, Ser, or Leu caused 4.9-5.6-fold stimulation of basal cAMP. Substitution with Tyr (8.5-fold) or Phe (7.5-fold) had a greater activating effect. Only the Tyr, Phe, and Leu mutants showed constitutive activation of the inositol phosphate pathway. Our data suggest that it is the ability of the Asp578 side chain to serve as a properly positioned hydrogen bond acceptor, rather than its negative charge, that is important for stabilizing the inactive state of the LHR. A bulky aromatic side chain at position 578 may further destabilize the inactive receptor conformation. The lutropin receptor (LHR) is a member of the family of G protein-coupled receptors (GPCRs) and its structure is predicted to consist of a large extracellular domain connected to a bundle of seven membrane-spanning α-helices (1,2). Hormone binding to the extracellular domain triggers a conformational change in the transmembrane bundle that leads to G protein activation. We (3)(4)(5) and others (6-11) have described mutations of the LHR gene that promote agonist-independent receptor activation in familial and sporadic cases of gonadotropin-independent, male-limited precocious puberty (testotoxicosis).
The Asp 578 3 Gly mutation is the most common cause of testotoxicosis (3,9). An Asp residue is found at this position in transmembrane helix 6 (TM 6) of all mammalian glycoprotein hormone receptors and of partially homologous invertebrate GPCRs (2,12,13), suggesting that it may play an evolutionarily conserved function in this group of receptors. According to the GPCR model developed by Baldwin (14) the side chain of Asp 578 is predicted to face toward the internal hydrophilic cleft, in position to form an electrostatic or hydrogen-bond with one or more residues in another helix (3,15). Inactive receptors are thought to exist in a constrained conformation that is destabilized by the binding of agonist (16 -18). The resulting conformational change allows cytoplasmic domains of the receptor, including portions of the third intracellular loop, to interact productively with G proteins. Some activating amino acid substitutions may mimic agonist occupancy by increasing the proportion of receptors that are in the active conformation (17). Characterization of such substitutions may provide insight into the nature of the inactive state and the normal mechanism of receptor activation by agonist. Although many activating GPCR mutations have now been described, the molecular basis of the activating effects has only been explored in a few cases. In rhodopsin, loss or weakening of an electrostatic bond between TM 3 (Glu 113 ) and TM 7 (Lys 296 ) causes constitutive activation (19,20), and the degree of activation is also inversely correlated with the size of the side chain at position 296 (19). For the ␣ 1B -adrenergic receptor, substitution of an Ala residue at the junction of the third intracellular loop and TM 6 with any one of 19 other amino acids is constitutively activating, but there is no obvious relationship between the level of activity and the size, charge, or hydrophobicity of the substituent (16). To better understand the normal role of position 578 in maintaining the inactive receptor conformation of the LHR, we used site-directed mutagenesis to substitute 7 other amino acids with varying chemical properties for the wild-type (WT) Asp in the human LHR. The mutant receptors were transiently expressed in COS-7 cells, and human chorionic gonadotropin (hCG) binding, cAMP, and inositol phosphate production were measured in intact transfected cells. EXPERIMENTAL PROCEDURES Site-directed Mutagenesis of the LHR-Human LHR cDNA (1) was inserted into the EcoRI site of the M13mp18 vector, and oligonucleotide-mediated site-directed mutagenesis was used to generate clones encoding the desired mutation (T7GEN kit; US Biochemical, Cleveland, OH). Residue numbers were determined by counting from the methionine start site (1). WT and mutant clones were inserted into the EcoRI site of the SV-40-driven pSG5 vector (Stratagene, La Jolla, CA). Mutations were confirmed by DNA sequencing of the final construct, and plasmid DNA was purified by CsCl gradient ultracentrifugation. Transfection and Assays-COS-7 cells (ϳ10 7 cells) were transfected by electroporation (Bio-Rad) with 25 g of purified plasmid DNA containing a mutant or WT LHR sequence. When smaller amounts of LHR DNA were used, the total amount of DNA per cuvette was kept constant by adding pSG5 vector DNA. After electroporation, each batch of transfected cells was divided into aliquots for binding, cAMP, and inositol phosphate assays. 
Cells intended for binding assays were suspended in Dulbecco's modified Eagle's medium containing 10% fetal calf serum, and transferred to 6-well plates (~5 × 10^5 cells/well). Cells for cAMP and inositol phosphate assays were suspended in inositol-free medium supplemented with 10% fetal calf serum and 2.5 µCi/ml myo-[2-3H]inositol (DuPont NEN, Boston, MA) and were transferred to 24-well plates (~10^5 cells/well). 48 h after transfection, cells were washed with assay buffer (Hanks' balanced salt solution containing 0.5% (w/v) crystalline bovine serum albumin and 20 mM HEPES-NaOH, pH 7.4). 125I-hCG binding was measured by incubating cells for 16 h at 4°C in 1 ml of assay buffer containing approximately 300,000 cpm of 125I-hCG (CR-127, 14,900 IU/mg, National Hormone and Pituitary Program; labeled to about 40 µCi/µg by Hazelton Washington, Vienna, VA) and 0-10^-7 M unlabeled hCG. cAMP and inositol phosphate production were measured concurrently by incubating cells for 1 h at 37°C in 0.2 ml of assay buffer containing 10 mM LiCl, 0.5 mM IBMX (3-isobutyl-1-methylxanthine), and 0-1000 ng/ml hCG. Perchloric acid was added to each well, samples were centrifuged, aliquots of supernatant were neutralized with KOH and HEPES, and total cAMP in each aliquot was determined by 125I radioimmunoassay (Eiken, Tokyo, Japan). Total inositol phosphates were measured using Dowex AG1-X8 anion exchange column chromatography (Bio-Rad). All assays were performed at least in triplicate, on at least three separate occasions with different batches of cells, and always included control cells transfected with WT LHR DNA. COS-7 cells transfected with pSG5 vector alone were not stimulated by hCG and did not exhibit specific 125I-hCG binding. The program LIGAND (21) was used to calculate Kd and Bmax values for hCG binding. Kd and EC50 values were log-transformed, averaged, and reconverted to calculate the geometric mean. The 95% confidence limits of Kd and EC50 were obtained by log transformation, calculating the mean ± 1.96 S.D., and reconversion. cAMP and inositol phosphate data are expressed as fold increase over basal in cells transfected with WT human LHR DNA (mean ± S.E., n ≥ 3 experiments). The density of live cells when the assays were performed varied <10% between wells transfected with WT LHR and those transfected with mutant constructs. RESULTS AND DISCUSSION For electroporation of COS-7 cells, 25 µg of human LHR DNA is routinely used (3,4). To examine the effect of receptor density on cAMP and inositol phosphate responses, we tested different amounts of WT DNA: 25, 5, 1, and 0.2 µg/cuvette. Vector DNA was added to keep the total DNA amount constant (25 µg/cuvette). Receptor density estimated by 125I-hCG binding (Bmax) increased with increasing amounts of LHR DNA used for transfection, but there was no effect on hCG affinity (Fig. 1). The maximal agonist-stimulated cAMP production was proportional to estimated receptor density. At the lowest density (Bmax = 8%), hCG caused only a 1.5-fold increase in cAMP. The EC50 of the hCG-stimulated cAMP response (4 ng/ml) was not affected by receptor density. Cells transfected with WT human LHR exhibit increased production of inositol phosphates in response to high concentrations of hCG (4) and this response was also dependent on the density of cell surface receptors (Fig. 1B). Agonist-induced inositol phosphate production was barely detectable in cells with Bmax ≤ 25% of control.
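As a rough illustration of the quantitative analysis described in the methods above, the sketch below fits a simple one-site binding isotherm to invented data (a simplified stand-in for the LIGAND competition analysis actually used) and reproduces the log-transform/reconversion used to report geometric means and 95% confidence limits. All numbers are hypothetical.

```python
# (i) simplified one-site binding fit for Kd and Bmax; (ii) geometric mean and
# 95% limits from log-transformed replicate Kd values, as described in the text.
import numpy as np
from scipy.optimize import curve_fit

def one_site(free_hcg, bmax, kd):
    """Simple one-site specific binding isotherm (stand-in for LIGAND)."""
    return bmax * free_hcg / (kd + free_hcg)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])      # nM, hypothetical
bound = np.array([3.0, 8.5, 22.0, 41.0, 62.0, 74.0, 80.0])   # fmol/well, hypothetical
(bmax, kd), _ = curve_fit(one_site, conc, bound, p0=(80.0, 0.5))
print(f"Bmax ~ {bmax:.1f}, Kd ~ {kd:.2f} nM")

# geometric mean and 95% limits (mean +/- 1.96 S.D. on the log scale, reconverted)
kds = np.array([0.31, 0.42, 0.37])                            # hypothetical replicate Kd values
logs = np.log10(kds)
geo_mean = 10 ** logs.mean()
limits = 10 ** (logs.mean() + np.array([-1.96, 1.96]) * logs.std(ddof=1))
print(f"geometric mean Kd = {geo_mean:.2f} nM, 95% limits = {limits[0]:.2f}-{limits[1]:.2f} nM")
```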
Cells transfected with LHR DNA encoding Asp 578 3 Pro (D578P) did not exhibit high affinity binding of 125 I-hCG and hCG-induced cAMP or inositol phosphate production (Table I and Fig. 2C). This mutant LHR may never reach the cell surface or may exist in a conformation that is unable to bind hCG. None of the other six amino acid substitutions at position 578 had a significant effect on the equilibrium dissociation constant (K d ) of LHR for the agonist hCG (Table I). This is consistent with data demonstrating that glycoprotein hormone binding occurs primarily to the large N-terminal extracellular domain (2). The estimated surface concentrations of the mutant receptors, expressed as a percentage of the B max of WT LHR simultaneously transfected, ranged from 11% for the Leu mutant (D578L), to 184% for the Ser mutant (D578S). Fig. 2A compares the effects of substituting Glu or Asn on basal and hCG-stimulated cAMP production. Substitution with Glu (D578E) is equivalent to simply extending the ionizable carboxylate side chain of the WT Asp by one methylene group. This conservative modification was nevertheless found to cause a 5.1-fold stimulation in basal cAMP accumulation, an effect similar to that caused by the original Gly mutant (D578G). In contrast, substitution of Asp with Asn (D578N), a similarly sized residue that is uncharged, but shares Asp's ability to serve as a hydrogen bond acceptor, had no effect on basal activity. This occurs despite the fact that the Asn mutant was expressed at higher density than the WT receptor (Table I). These data are consistent with earlier results obtained with the rat LHR (22). Another amino acid residue that is capable of participating in a hydrogen bond is Ser. As shown in Fig. 2B, basal cAMP production by the Ser mutant (D578S) was increased 4.9-fold. Because D578S showed much higher expression than WT (B max ϭ 184% of WT) we also transfected COS-7 cells with 10-fold less of the D578S DNA construct (2.5 g) using our results with the WT LHR (Fig. 1) as a guide. This transfectant had a B max that was 65% of WT, but still exhibited a significantly elevated basal cAMP level (1.8-fold) ( Table I, Fig. 2B). These results imply that a Ser residue at position 578 is unable to fully stabilize the inactive receptor conformation. This may be due to the fact that the Ser side chain is shorter than that of Asp or Asn. If one assumes that B max is an accurate estimate of the relative receptor density, the substitution of Ser for Asp has a less dramatic activating effect than the Gly or Glu substitu- tions when judged on a "per receptor" basis. The hydrophobic side chain of Leu is only slightly larger than that of Asp or Asn, but it lacks the ability to form a hydrogen bond. Although the density of mutant Leu receptors (D578L) estimated by B max was only 11% of WT (Table I), it was found to cause 5.6-fold stimulation of basal cAMP accumulation. Unlike the other mutant receptors, which showed maximal hCGstimulated cAMP levels similar to that of WT (Fig. 2, A, B, and D), D578L was virtually unresponsive to agonist (Fig. 2C). This may be related to the markedly decreased concentration of D578L receptors on the surface (COS-7 cells with a similarly low concentration of WT receptors exhibit minimal response to agonist; see Fig. 1), or it may be due to an intrinsic difference in this mutant receptor (e.g. a conformation that is already maximally activated and/or inaccessible to agonist). 
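The "per receptor" comparison invoked above can be made explicit with a line of arithmetic: dividing the fold-activation of basal cAMP by the relative receptor density (Bmax as a fraction of WT). The fold and Bmax values below are those quoted in the text where available; the Bmax of D578G and D578E is assumed to be roughly WT-like purely for illustration.

```python
# Illustrative "per receptor" normalization of constitutive activity.
mutants = {
    # name: (fold basal cAMP over WT, Bmax as fraction of WT)
    "D578G": (4.5, 1.00),   # Bmax assumed ~WT for illustration
    "D578E": (5.1, 1.00),   # Bmax assumed ~WT for illustration
    "D578S": (4.9, 1.84),   # Bmax 184% of WT (from the text)
    "D578L": (5.6, 0.11),   # Bmax 11% of WT (from the text)
}
for name, (fold, bmax_frac) in mutants.items():
    print(f"{name}: ~{fold / bmax_frac:.1f}-fold activation per unit receptor (approx.)")
```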
We (5) and others (9) recently identified a naturally occurring LHR mutation encoding the substitution of Asp 578 with Tyr (D578Y) in three boys with unusually early and severe presentations of testotoxicosis. COS-7 cells expressing the Tyr mutant receptor exhibited an 8.5-fold increase in cAMP production in the absence of agonist (Fig. 2D), an effect that is significantly greater than that produced by any of the other Asp 578 substitutions. To verify that this strong activation was an intrinsic property of the mutant receptor and not due at least in part to its relative overexpression (B max ϭ 135% of WT), we also transfected COS-7 cells with only 3 g of the D578Y construct. This resulted in a decrease in B max to 45% of WT (Table I). As shown in Fig. 2D, cells expressing the reduced concentration of mutant Tyr receptors continue to exhibit markedly increased basal cAMP levels. The unusual clinical phenotype of boys with this mutation is likely related to the strongly activating nature of the Tyr substitution. To investigate whether the bulky aromatic side chain of Tyr was responsible for the remarkably high level of basal activation, we made another mutant receptor with Phe at position 578 (D578F). This mutant receptor was found to be just as strongly activated (7.6-fold increase in basal cAMP) as D578Y (Fig. 1D). Taken together, our data suggest that it is the ability of the Asp 578 side chain to serve as a properly positioned hydrogen bond acceptor, rather than its negative charge, that is normally important for stabilizing the inactive state of the LHR. We hypothesize that a hydrogen bond between Asp 578 and a residue in another helix is critical for maintaining the inactive conformation, and that loss or weakening of this bond increases the proportion of receptor molecules that become activated in the absence of agonist. Hydrogen bonds formed by the Asn carboxyamide side chain can be as strong as those formed by the Asp carboxylate side chain (23). That the inactive state of the LHR is dependent on the geometry of the hydrogen bond formed by Asp 578 and its partner(s) is indicated by the fact that replacement of the Asp side chain with smaller (Ser) or larger (Glu, Tyr) polar side chains causes destabilization of the inactive conformation. It is important to note that substitutions at position 578 do not fully activate the LHR, and that mutations of different residues in TM 6, TM 5, and TM 2 are also capable of promoting receptor activation (4 -11). Bonds formed by Asp 578 may be part of a larger interhelical network involved in maintaining the inactive state. Tyr and Phe are the most activating substitutions tested. In addition to loss of a hydrogen bond, introduction of a bulky aromatic side chain at position 578 may further destabilize the inactive receptor conformation by disrupting the packing of adjacent transmembrane helices. This contrasts with the data obtained on Lys 296 in TM 7 of rhodopsin, where substitutions with smaller, "cavity-creating" residues were found to be especially activating (19). Cells transfected with WT LHR not only produce cAMP in response to agonist, but have also been shown to exhibit increased production of inositol phosphates in response to high concentrations of agonist (4, 24) ( Table I). The coupling of the LHR to this secondary signaling pathway is less efficient, and is more dependent on receptor density. 
Of the six substitutions that cause constitutive activation of the cAMP pathway, only the Leu, Tyr, and Phe mutants also cause constitutive activation of the inositol phosphate pathway, and the degree of stimulation (1.4 -1.9-fold over WT basal) is less dramatic (Table I and Fig. 3). This may be due to differences in coupling effi- ciency or to the fact that different receptor conformations are involved in activating the two pathways (25,26). In the human thyrotropin receptor (TSHR) the residue that corresponds to Asp 578 is Asp 633 . Two naturally occurring mutations of Asp 633 have been found in hyperfunctioning thyroid adenomas and shown to cause constitutive activation (27,28). In contrast to our data on the LHR, the Asp 633 3 Tyr TSHR mutant did not appear to possess a more strongly activating phenotype than Asp 633 3 Glu or other TSHR mutations, nor did it cause constitutive activation of the inositol phosphate pathway (28). Despite extensive sequence similarity between these two receptors, the TSHR has been shown to differ significantly from the LHR in its level of spontaneous basal activity (29), and it may be that interhelical packing is less constrained in the TSHR than in the LHR. Certain Asp and Glu residues have been shown to play key functional roles in bacteriorhodopsin (23,30), sensory rhodopsin (31), rhodopsin (19,32), and other GPCRs (14,18,22,33). Changes in protonation can influence the equilibrium between conformational states. Replacement of Asp or Glu with similarly sized but uncharged residues (Asn and Gln, respectively), has often been used to test the importance of a potentially negatively charged side chain on receptor function. "Genetic neutralization" of different ionizable residues has been shown to facilitate (19,31,32), impair (22,33), or have no effect on (19,31,34) conformational signaling. In the case of the LHR and many other GPCRs, for example, it appears that a negative charge on the highly conserved Asp in TM 2 is needed to facilitate the conformational change to an active state (18,22,33). In contrast, substitution of Asp 578 in the LHR with Asn results in a receptor that functions exactly like the WT receptor ( Fig. 2A). This suggests that a negative charge at position 578 is not necessary for stabilizing the inactive state, nor is it needed for the transition to the agonist-activated state. In summary, the ability of the Asp 578 side chain to serve as a properly positioned interhelical hydrogen bond acceptor, rather than its negative charge, appears important for stabilizing the inactive state of the LHR. Studies are underway to identify those residues that may normally interact with Asp 578 . In addition to loss of a hydrogen bond, introduction of a bulky aromatic side chain at position 578 may further destabilize the inactive receptor conformation by disrupting the packing of adjacent transmembrane helices.
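To make the equilibrium argument of the discussion concrete, here is a toy two-state calculation (not taken from the paper): if receptors interconvert between an inactive and an active conformation, weakening a stabilizing interaction lowers the free-energy gap and raises the fraction of constitutively active receptors. The ΔG values are purely illustrative.

```python
# Toy two-state receptor model: fraction of receptors in the active state R*
# for a given free-energy gap dG = G(R*) - G(R).
import math

RT = 0.593  # kcal/mol at ~25 C

def fraction_active(dG_kcal):
    """Boltzmann fraction of receptors in the active conformation at rest."""
    return 1.0 / (1.0 + math.exp(dG_kcal / RT))

for label, dG in [("intact H-bond (hypothetical)", 3.0),
                  ("weakened H-bond (hypothetical)", 2.0),
                  ("lost H-bond (hypothetical)", 1.0)]:
    print(f"{label}: {fraction_active(dG):.3f} of receptors active without agonist")
```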
4,526.2
1996-12-13T00:00:00.000
[ "Biology", "Chemistry" ]
A New Efficient Method to Improve Handwritten Signature Recognition In this research we demonstrate an improvement in handwritten signature recognition using edge detection techniques and our novel technique of adding intensive data. We collected a total of 600 signatures from 30 people. The handwritten signatures were scanned to image files and resized to 144 x 38 pixels along the width and the height, respectively. Every pixel is encoded by its intensity value from 0 to 255, where 0 is the highest intensity (black) and 255 is white. Next, we use 4 different algorithms: Support Vector Machine (with linear, polynomial, radial basis, and sigmoid kernel functions), k-Nearest Neighbors, Perceptron, and Naïve Bayes (using Gaussian, multinomial, and Bernoulli density functions). From the experimental results, SVM with the polynomial kernel function shows the highest accuracy (95.33%). We then apply 4 edge detection techniques, Sobel, Prewitt, Robert, and Canny, together with a thinning technique. With the Sobel edge detection technique, we found that the accuracy rises to 96% (higher than the highest accuracy on the original data). We also observe that the Sobel technique improves the accuracy of k-NN by a significant margin (from 78.67% to 91.33%). Moreover, we append high intensity color data, and with this technique we notice a significant improvement of k-NN accuracy, up to 96%. For SVM with the linear kernel function, after applying our technique the accuracy is improved to 98.00%, which is the highest accuracy of this research. Introduction The use of biometrics in authentication or individual identification receives much attention at present. It provides the convenience of not having to carry identification documents, which reduces the problem of document falsification. The signature is an external identity feature that is widely used for identifying individuals. The signature of a person is distinct and is hard to forge or counterfeit. Vargas et al. (1) reviewed handwritten signatures focusing on grey-scale measurement, the co-occurrence matrix technique, and local binary patterns, based on the MCYT-75 and GPDS-100 databases. The result was an EER (Equal Error Rate) of 16.27%. Guerbai et al. (2) proposed the use of the OC-SVM for handwritten signature verification. The result from the experiment was 5-7% AER (Average Error Rate) on the CEDAR dataset, and 15-17% AER on the GPDS dataset. Frias-Martinez et al. (3) demonstrated handwritten recognition based on the Support Vector Machine (SVM) and compared it to a traditional classification technique, the Multi-Layer Perceptron (MLP). The experimental results showed that SVM could provide up to a 71% accuracy rate, which is better than the MLP technique. Zheng et al. (4) conducted edge and gradient detection, which was an innovative method for finding clearer edges. They used the Least Squares Support Vector Machine (LS-SVM) with a radial basis kernel function and Sobel and Canny edge detection. The outcome revealed that these techniques were even more effective than applying only a single machine learning technique.
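Before the algorithms are discussed, the preprocessing summarized in the abstract (grey-scale conversion, resizing to 144 x 38 pixels, and encoding each pixel as a 0-255 intensity) can be sketched as follows; file names are hypothetical and this is not the authors' original script.

```python
# Minimal sketch of the preprocessing step: scanned signature image ->
# grey scale -> 144 x 38 pixels -> flat vector of 0-255 intensities.
import numpy as np
from PIL import Image

def signature_to_features(path, size=(144, 38)):
    """Return a 1-D float array of length 144*38 = 5472 for one signature image."""
    img = Image.open(path).convert("L")   # grey scale, 0 = black ink, 255 = white
    img = img.resize(size)                # (width, height) as used in the paper
    return np.asarray(img, dtype=float).ravel()

# usage (hypothetical file name):
# x = signature_to_features("signatures/person01_sample01.png")
# print(x.shape)   # (5472,)
```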
Most researches on signature recognition often focused on a comparative study to find algorithms suitable for the signature recognition.However, we often encounter problems of a similar signature of different individuals or slightly different signatures of the same individual.Due to various environmental conditions, the accuracy of signature identification turns derogated.We have realized the importance of pre-processing.It is the importance step that can affect the accuracy rate.This paper proposes a technique to enhance the signature recognition by focusing the improvement of signature images.The signature images will be improved by edge detection technique and thinning edgetechnique.In addition, we propose a novel concept, which is never seen in any previous signature recognition researches; that is, to append the interesting area of imagedataset.And in this case is high intensity color data. Pattern Recognition The pattern recognition (5) is the study about object classification with respect to "Feature" of each "Class."The method can be applied to various fields, for example, the individual identification using biological data, e.g.fingerprint, face, iris, DNA, or even a signature, as well as the recognition of documents, e.g.pattern recognition of spam mail. Issues related to pattern recognition and classification have been of great interest at present.As a result, technology and various advanced tools have been developed to be applied.The classification often requires knowledge of various branches, e.g.data mining, artificial neural network, machine learning, data improvement processes such as image-data improvement by edge detection. Image Edge Detection The image edge detection (6) is used to detect lines showing around the shape of an object by cutting away any other details, e.g.color or streaked.The image used to represent the shape of the object is represented as a "Binary Image."Edge detection can be done in a variety of ways with similar principles; that is, to find the difference of color between the "Gray Scale" of one point and the other point.If the light intensity is very different, the edges will be clearer; however, if the color difference is less, the edges can be vague.The edge detection can be applied in computer vision, e.g.boundary separation betweenobject and background orobject recognitionetc. Sobel Edge Detection Sobel edge detection (6) is the edge detection method by using the 2 filters with the size 3 x 3 called "Sx" and "Sy" to separate objects and background.The gradient values of each band will be computed and create the filters.Example of Sx andSy filter are in figure 1. Prewitt Edge Detection Prewitt edge detection ( 6) is detection technique using the same concept as the Sobel edge detection.The differential is the value in the filters that shown in figure 2. Prewitt gradient can be calculated as shown in equation 2. Robert Edge Detection Robert edge detection ( 6) is a technique using 2x2 size filters called "Gx" and "Gy".The concept of this edge detection is to calculate the gradientof an image which is summarized from the differences between diagonally adjacent pixels.The filter of Robert edge detection is shown in figure 3. Robert gradient can be calculated as shown in equation 3. 
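A self-contained sketch of the Sobel operator described above is given below: the image is convolved with the horizontal and vertical 3 x 3 filters and the two responses are combined into a gradient magnitude (Prewitt and Robert edge detection differ only in the filter coefficients). The threshold value is an assumption for illustration.

```python
# Sobel edge detection: convolve with Sx and Sy, combine into a gradient
# magnitude, and threshold to obtain a binary edge map.
import numpy as np
from scipy.ndimage import convolve

SX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
SY = SX.T   # the vertical Sobel filter is the transpose of the horizontal one

def sobel_edges(image, threshold=80.0):
    """Return a binary edge map; 'image' is a 2-D array of 0-255 intensities."""
    gx = convolve(image.astype(float), SX, mode="nearest")
    gy = convolve(image.astype(float), SY, mode="nearest")
    magnitude = np.hypot(gx, gy)          # sqrt(gx**2 + gy**2)
    return (magnitude > threshold).astype(np.uint8)

# usage on a hypothetical signature array:
# edges = sobel_edges(signature_array)
```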
Canny Edge Detection The first step of Canny edge detection (6) is to eliminated noise.Noise can be removed by using Gaussian Filter to clear the speckles and smooth the edge of image.In the second step, a gradient operator will be applied to achieve the gradient's intensity and direction.Then, The non-maximum suppression is used for thinning the images' edge by determining if the pixel is a better candidate than its neighbors.The final step is using doublethresholding algorithm to specify contour pixels and make the edge continuous. The calculation of Gaussian Filterwhich is applied in Canny edge detection can be explained by equation 4. Thinning Edge The edge thinning (6) is an important preparation process (pre-processing) that is widely used to slenderize image with thick edges, which is produced from the edge detection.This is to remove the excessively thick edge pixels.Usually, excessively thick edge slenderizing is used in character recognition and signature recognition to eventually generate thinnest edge lines with only one pixel. The thinning edge operation can be done by using P1 and P2 filters.The first step in doing that is to use the P1 filter; using the 3x3 template to scan the image data and then decide whether or not the pixels around the edges can be deleted.If the pixels can be deleted, mark them but do not delete them yet.After scanning throughout the image,delete the marked pixels.In the final step, P2 filter is used as when using the P1 filter.After deleting the marked pixels, repeat these steps until no more image data can be deleted. Perceptron Perceptron (7,8) is one of the most popular algorithms used in classification.This algorithm is based onbasic linear function model to classify the data with centroid as representative population.The linear function model mechanism is tocreate a line connecting the centroid of two groups and then create a perpendicular line to break the groups apart.Perceptron employs this method of linear function model as an initiative separation line.After that, the algorithm will check for the misclassified data point.If a fault exists, the counterbalance to weight of that data point to achieve the accurate classification, or to achieve classification with least erroneous data. Another advantage of the Perceptron is that we can tune the learning rate to determine for the algorithm's accuracy.If learning rate is too small value,the weighting in erroneous data will be also small and we will see gradual changes of the separation line.On the other hand, if the learning rate is too high in value, it will result in too aggressive changes that will affect other data and that data must be classifiedmany times.This can explainby figure 4, where η is the learning rate values. Fig. 
4.The difference between 3 cases of learning rate in Perceptron algorithm (8) Support Vector Machine Support Vector Machine (7)(8)(9)(10) or SVM is an algorithm based on a linear function model, which is developed from the Perceptron algorithm.It is a way to increase flexibility of classification to acquire large margin as much as possible.The concept of this algorithm is to place the data onto feature spaceand draw lines connecting the edges of each group.And then, the algorithm uses these data points on the edge to represent groups.The nearest data points of each group are called "support vector".Then data separation lines of both groups are created to classify the data with the largest margins as possible, that shown in figure 5.In some cases, this algorithm can allow for misclassification to achieve the lager margin byusing slack variable. Fig. 5.The support vectors and classification in SVM algorithm Another advantage of the Support Vector Machine is that the processing time is less than Perceptron algorithm because Support Vector Machine does not require all the data points to be calculated.In addition, the Support Vector Machine provides various kind of functions, called kernel, to fit a specific type of data distribution.These kernel functions include linear, polynomial, radial basis, and sigmoid. Naïve Bayes Naive Bayes (7)(8)11) is an algorithm that uses the Bayes theorem to assist in classification. It i based on the assumption that the attributes of the sample are independent.The algorithm is suitable for the set of large sample.The modeling is in the form of conditional probability.The advantage of this method of learning is that we can use the data and "Prior knowledge" to help in learning.This algorithm gives good performance when compare with the other algorithms.In terms of the calculation, the principles of probability will be used and will be based on the theory ofBayes. k-Nearest Neighbors k-Nearest Neighbors (7,11) Figure 6 shows the classification of k-Nearest Neighbors with the different k values.Results will vary depending on the number of the closest k, for example when k=1 for the incoming data (represented by x) will be classified as (-).When k=2, the incoming data can be classified as either (-) or (+).When k=3,the incoming data will be classified as (+). The objectives of this research are: 1)To study and compare the effectiveness of hand-written signature recognition models from 4 learning algorithms: Perceptron, SVM, Naïve Bayes, and k-NN. 2)To improve the accuracy of hand-written signature recognition model by using image improvement and by addition of high intensity data. Research Framework This research consists of 6 stages (and diagrammatically shown in figure 7) as follows: 1)Collecting 600 signatures from 30 university students who use the hand-written signature in daily life and turn to images file by scanning device.We adjust the image's color to black and white.Then, equalize their size.The rawdata of all hand-written signature image files are storedat the main author's website: https://sites.google.com/site/nhinganusaracpesut/signature/datasets 2)Using of edge detection technique and thinning edges to sharpen signature images. 3)Converting the data into a numeric table in accordance with color intensity, and then converting the numeric table to the array data. Experimental Results With all 600 signatures from 30 individuals, we use Python 2.7 Language on Editor Spyder to predict the results of signature recognition. 
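A hedged re-implementation sketch of the algorithm comparison described above is shown below using scikit-learn; the train/test split, k value, and variable names are assumptions, since the paper does not report these details.

```python
# Compare the classifiers used in the study on flattened 144 x 38 intensity vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB
from sklearn.metrics import accuracy_score

def compare_classifiers(X, y, seed=0):
    """X: (n_samples, 144*38) intensity features; y: writer labels (30 classes)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              stratify=y, random_state=seed)
    models = {
        "SVM-linear":     SVC(kernel="linear"),
        "SVM-polynomial": SVC(kernel="poly"),
        "SVM-rbf":        SVC(kernel="rbf"),
        "SVM-sigmoid":    SVC(kernel="sigmoid"),
        "k-NN":           KNeighborsClassifier(n_neighbors=3),
        "Perceptron":     Perceptron(),
        "NB-Gaussian":    GaussianNB(),
        "NB-Multinomial": MultinomialNB(),
        "NB-Bernoulli":   BernoulliNB(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, model.predict(X_te))
        print(f"{name:>14}: {acc:.2%}")

# usage (hypothetical arrays):
# compare_classifiers(features, labels)
```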
The image improvement at the second step of our proposed frameworkyields the results as shown in figure 8. From table 1, the four algorithms used in this study include Perceptron algorithm, Support Vector Machine algorithm (Linear Function, Polynomial Function, Radial Basis Function, and Sigmoid Function), Naive Bayes algorithm (Gaussian Function, Multinomial Function, and Bernoulli Function), and k-Nearest Neighbors algorithm.These algorithms are used in comparative test.The results obtained indicate that, in using the original image files, the SVM-Polynomial Function provides the highest accuracy of 95.33% and we see 94.67% by SVM-Linear Function.The Naive Bayes algorithm with Multinomial Function gives 82.67% of accuracy, and the k-Nearest Neighbors algorithm gives 78.67% of accuracy.The accuracy improvement of learning algorithms after using image processing with edge detection technique and thinning reveal that k-Nearest Neighbors algorithm's accuracy is increased by 12.66%; that is, from 78.67% to 91.33% with the use of Sobel edge detection technique.This increment is very significant.The accuracy of SVM-Linear Function is increased by applying the Sobel edge detection technique as well and the improved accuracy is 96.00%.Sobel edge detection is the best image processing technique applied prior to the signature recognition with learning algorithms. Table 2 shows the results of using the additional intensive data technique with Sobel edge detection: SVM-Linear Function provides the accuracy of 98.00%, which is the highest accuracy in this research.Moreover, we also found that the k-Nearest Neighbors algorithm provides higher accuracy by using the additional intensive data; that is the accuracy increases from 91.33% to 96.00% (which is higher than the maximum value of the original data). The accuracy comparisons of signature image recognition without any other techniques, recognition with edge detection technique, and recognition with both edge detection and our additional intensive data techniques are shown in figure 9.It can be noticed that the combination of Sobel edge detection technique and our novel proposed additional intensive data technique yields the highest recognition rate at 98%. Conclusions We study the problem of handwritten signature recognition with the main objective of devising techniques to improve recognition accuracy rate.According to the signatures collected from hand-written users for this research, the SVM-Linear Function is the most suitable learning algorithm for modeling the signature recognition with the edge detection technique applied for image improvement and the additionalintensive data technique newly proposed for accuracy improvement.This combination of edge detection and additional intensive data techniques provides the accuracy rate of up to 98.00%.For the technique of image improvement, the researchers note that we have possibility to achieve higher accuracy if we study more advanced techniques of image processing. or k-NN is a popular classification algorithm in the field of pattern recognition.The concept of this algorithm is classifyingthe new data base on the k closest training examples.And the class of new data will be assigned by the majority class label of the k closest training data. Fig. 8 . Fig. 8. Example images after applying edge detection and thinning techniques to signature images of three persons Fig. 9 . Fig. 9. 
Fig. 9. Accuracy comparisons of original signature image data recognition, recognition from Sobel edge-detected data, and recognition from both Sobel and additional intensive data techniques
Table 1. Experimental results of signature recognition with image improvement techniques
Table 2. Experimental results when using the additional intensive data technique
3,467.8
2015-02-05T00:00:00.000
[ "Computer Science" ]
Gain Control through Divisive Inhibition Prevents Abrupt Transition to Chaos in a Neural Mass Model

Experimental results suggest that there are two distinct mechanisms of inhibition in cortical neuronal networks: subtractive and divisive inhibition. They modulate the input-output function of their target neurons either by increasing the input that is needed to reach maximum output or by reducing the gain and the value of maximum output itself, respectively. However, the role of these mechanisms in the dynamics of the network is poorly understood. We introduce a novel population model and numerically investigate the influence of divisive inhibition on network dynamics. Specifically, we focus on the transitions from a state of regular oscillations to a state of chaotic dynamics via period-doubling bifurcations. The model with divisive inhibition exhibits a universal transition rate to chaos (Feigenbaum behavior). In contrast, in an equivalent model without divisive inhibition, transition rates to chaos are not bounded by the universal constant (non-Feigenbaum behavior). This non-Feigenbaum behavior, when only subtractive inhibition is present, is linked to the interaction of bifurcation curves in the parameter space. Indeed, searching the parameter space showed that such interactions are impossible when divisive inhibition is included. Therefore, divisive inhibition prevents non-Feigenbaum behavior and, consequently, any abrupt transition to chaos. The results suggest that divisive inhibition in neuronal networks could play a crucial role in keeping the states of order and chaos well separated and in preventing the onset of pathological neural dynamics.

I. INTRODUCTION

Neurons can be understood as information processing units that transform synaptic input into a spike train output. This transformation is often described by an input-output function, which can be experimentally measured. Recent experiments have demonstrated that different inhibitory mechanisms can modulate this function [1,2]. These inhibitory mechanisms can be considered to be either subtractive or divisive based on the modulation that is applied to the postsynaptic neurons. The subtractive modulation shifts the sigmoidal input-output function to higher inputs (hyperpolarizing effect), whereas the divisive modulation decreases the slope of the function (also termed the neuronal gain) [3]. Recent studies demonstrated that the two types of modulation are applied to cortical pyramidal neurons by two distinct inhibitory populations. Dendrite-targeting interneurons provide the subtractive inhibition, whereas divisive inhibition is provided by soma-targeting interneurons [2,4]. Additionally, the connectivity patterns between these populations were revealed in a recent anatomical study in neocortex, where it was shown that the dendrite-targeting interneurons inhibit the soma-targeting ones but not the other way around [5].
In particular, the role of divisive inhibition (i.e., gain control) has been explored by experimental as well as computational studies, as it is a nonlinear effect that enables more complex functionality in the system. Gain control has been shown to be crucial in human vision [6], sensory processing [7,8], gaze direction [9], selective attention [10], and motor processing [11]. In simulated networks of neurons, divisive inhibition was shown to prevent several problems (e.g., proximity to unstable behavior, sensitivity of dynamics to connectivity parameters, and slow reaction to fast fluctuating input) that arise in its absence [12]. Additionally, it was shown that including divisive inhibition can improve storage capacity in neuronal networks without compromising their dynamical stability [13]. In terms of network dynamics, the divisive modulation was found to regulate the duration of the active and silent phases during rhythmic bursting activity [14,15]. Despite these findings, there are still open questions about the role of divisive inhibition in the overall network dynamics. In particular, the effect on the transition between different network states is unclear. In the present computational work, transitions from low-amplitude oscillations to high-amplitude paroxysmal oscillations are of interest. A high-amplitude paroxysmal oscillation in local neocortical networks is a model of hypersynchronous activity which indicates pathological dynamics, as, for example, in epilepsy [16]. Experimental studies investigated the role of different elements of neocortical networks and their interaction with the thalamus in the generation of such oscillations [17,18]. These oscillations closely resemble the spike-wave complexes that characterize the pathological activity during seizures. Despite the fact that the interaction between neocortex and thalamus enhances spike-wave complexes, it was shown that the thalamus is not necessary for the generation or propagation of these paroxysmal oscillations in neocortex [17,19,20]. Computational studies have used limit cycles or chaotic attractors to model these paroxysmal oscillations [21][22][23][24][25]. Despite the fact that chaoticity is a property not always found in this type of pathological activity, the chaotic attractors usually have high-amplitude and complex behavior that resembles such paroxysmal activity. Also, a three-dimensional chaotic attractor has the same dimensionality as the seizure attractors analyzed from EEG signals [26]. A review of seizure dynamics and the types of attractors that are used to model this type of activity can be found in Ref. [27]. In the present work, the transition to paroxysmal oscillations is modeled as a transition from a low-amplitude limit cycle (order) to a high-amplitude chaotic attractor (chaos) in a model of local neocortical networks, which were shown to exhibit such activity even in isolation. Furthermore, a recent study on the macroscopic behavior of spiking neural networks verifies the coexistence of order and chaos in local networks [28].
One intensively studied route to chaos is via a so-called cascade of period-doubling bifurcations [29]. During a period-doubling bifurcation, a limit cycle is replaced by a new periodic orbit with double the period of the original orbit. Period-doubling bifurcations are well documented in complex neural systems, both theoretically [30,31] and experimentally [32][33][34]. Additionally, cascades of period-doubling bifurcations are often found preceding the onset of paroxysmal or irregular behavior in these studies. Interestingly, the cascades of period-doubling bifurcations can happen at a constant transition rate, first described by Feigenbaum [35]. The Feigenbaum constant has since been found to apply universally in many dissipative systems in nature [36][37][38]. The significance of Feigenbaum universality, and particularly of the Feigenbaum constant [35], is that it provides a prediction for the onset of chaos in parameter space. A system complying with Feigenbaum universality has a well-defined relative boundary between order and chaos, whereas a system with non-Feigenbaum behavior can exhibit abrupt transitions between the two states. Non-Feigenbaum behavior is still an open field of study. Some classes of such behavior have already been characterized, mainly in discrete dynamical systems [39][40][41]. Examples of this behavior in continuous dynamical systems remain poorly understood.

Previous studies of neural population dynamics reported period-doubling transitions [24,30,31,42]. However, none of them, to our knowledge, focused on the influence of inhibitory mechanisms on the period-doubling cascades. The primary aim of this study is to apply bifurcation theory and investigate the role of divisive inhibition in a neural mass model while it undergoes a period-doubling cascade leading to chaos. In particular, we will explore how the system behaves in relation to Feigenbaum universality in two different cases: while the model includes divisive inhibition and while it uses only subtractive inhibition.
A. Modeling framework

As in previous studies, neural mass models are used in this work, giving an abstract and macroscopic description of neocortical networks comprised of excitatory and inhibitory populations. The dynamics arise from the interaction of these populations, which are expressed by a set of ordinary differential equations (ODEs). In particular, the model used here is an extended version of the spatially localized Wilson-Cowan model [43], which is primarily used to model the oscillatory behavior of neural systems [44][45][46]. This type of model is conceptually simple and well studied and can be easily analyzed using bifurcation theory [47][48][49]. Therefore it is an ideal choice for investigating how abstract concepts like subtractive or divisive inhibition can change the network's behavior at the population level. The model introduced here can be considered as a generalization of the Wilson-Cowan model [43]. This generalization can be used to model not only subtractive inhibition, as featured in the classic model, but also divisive inhibition. In order to achieve this, we consider the excitation and external inputs to be the drivers of the network, whereas the inhibition is used only to modulate the sigmoidal input-output functions of all the units in the network. Distinguishing between drivers and modulators in the network is inspired by the Sherman-Guillery proposition [50]. This separation of drivers from modulators comes in contrast to the way that inhibition was previously modeled: as a subtraction from the input, that is, as a negative driver. An equivalent result can be obtained instead by shifting the input-output function to higher inputs (see Fig. 1, bottom left), that is, subtractive modulation of the input-output function. This displacement represents the subtractive inhibition in the proposed model. Similarly, the divisive inhibition can be modeled as a gain control mechanism that decreases the slope and maximum output of the input-output functions (see Fig. 1, bottom right). By choosing the logistic function as the input-output function, we can model these modulations by promoting the constants for displacement and slope to variables that can be dynamically controlled by the inhibitory populations in the model. This modification results in a function of three variables F(x,θ,α), with F : R × R₀⁺ × R₀⁺ → R. Variable x represents the input or the driver of the unit, variable θ represents the displacement of the sigmoidal curve along the x axis, and variable α represents the slope of the curve. Thus, the input-output function F(driver, subtractive modulator, divisive modulator) is given by Eq. (1), where the minimum displacement θ_j and the maximum slope α_j are constants, representing the default case when no modulatory inhibition is delivered to the unit. Specifically, these constants differ for different populations: j = {e,s,d} for the excitatory, subtractive, and divisive inhibitory populations, respectively. Note also that the last term in the expression only depends on the variable of slope α and it is used for decreasing the maximum value of the output along with the decrease in slope (see Fig. 1 for a schematic). Using the input-output function F we now have an easy way to express subtractive and divisive modulations in the function arguments.
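The exact expression for F, Eq. (1), is not reproduced in this excerpt. The following minimal sketch therefore assumes a plain logistic form and only illustrates how promoting θ and α to variables yields subtractive and divisive modulation, respectively; the additional α-dependent term that also lowers the maximum output in the original model is deliberately omitted.

```python
# Illustrative sketch, not the paper's exact formula: a logistic input-output
# function whose displacement (theta) and slope (alpha) are variables.
import numpy as np

def F(x, theta, alpha):
    # Plain logistic; the original model includes a further term depending on
    # alpha that also lowers the maximum output (omitted here).
    return 1.0 / (1.0 + np.exp(-alpha * (x - theta)))

x = np.linspace(-2, 4, 200)
baseline    = F(x, theta=1.0, alpha=4.0)
subtractive = F(x, theta=2.0, alpha=4.0)  # larger theta: curve shifted to higher inputs
divisive    = F(x, theta=1.0, alpha=1.5)  # smaller alpha: reduced gain (shallower slope)
```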
ESD model

The model incorporating both subtractive and divisive inhibition (the ESD model, also see Fig. 1) is given by the system of ordinary differential equations (ODEs) in Eq. (2), with j = {e,s,d}. The variables of the system, E, S, and D, express the activity level of the excitatory, subtractive inhibitory, and divisive inhibitory population, respectively. The functions F_j are the sigmoidal input-output functions as presented above. Parameters P_j give the external inputs to the units and they are considered to be independent of time in this study. The refractory period (see Ref. [43]) is assumed to be the same for all populations and equal to 1 (omitted here). The connectivity parameter w_ji ≥ 0 represents the weight of the connection from unit i to unit j. The absence of the inhibitory connection w_sd is justified by the anatomical findings [5]. Note that the divisive inhibitory population is considered divisive just because it delivers divisive inhibition to the excitatory population. Its self-inhibition connection w_dd remains subtractive; no divisive modulation is evidenced, to our knowledge, in neuronal populations other than the pyramidal cells. All three populations are assumed to work at the same time scale, so all time constants are omitted in this model. A schematic of the model can be found in Fig. 1.

ES Ś model

An equivalent model but without the divisive inhibition (ES Ś model) is given by the system of ODEs in Eq. (4).

Nonideal divisive inhibition

The divisive inhibition in the network can be modeled in a way that is not purely divisive modulation of the input-output function of the excitatory population but rather a combination of divisive and subtractive modulation. This can be thought of as a more biologically realistic modulation, which more closely resembles the experimental data [2,4]. An additional constant parameter q ∈ [0,1] is introduced in the model in order to express the fraction of divisive modulation that is delivered to the excitatory population. The rest of the modulation, 1 − q, is delivered as subtractive. For a schematic see Fig. 5(a). The only change with respect to the ESD model [Eq. (2)] is the input-output function of the excitatory population, given by Eq. (5), and, consequently, the equation for E. The input-output functions F_s and F_d remain the same as in Eq. (1). We shall use this nonideal divisive inhibition at a later stage in the analysis to simulate mixture models between the ESD and ES Ś models. Table I summarizes all the model parameters we used in this study.

B. Feigenbaum number

The Feigenbaum number expresses the rate by which the system undergoes the period-doubling bifurcations en route to chaos. Therefore it can be used as a relative measure of the abruptness of period-doubling cascades. Considering a cascade of period-doubling bifurcations R_2, R_4, R_8, . . ., R_{2^n}, the Feigenbaum number is given by the limiting ratio of successive parameter intervals between them. An approximation of this number, based on the first four period-doubling bifurcations R_2, R_4, R_8, and R_16, was calculated by use of the corresponding finite ratio. This measure can be compared with the Feigenbaum constant δ = 4.6692 · · · [35], which was discovered to be a characteristic constant for all one-dimensional maps and many dissipative systems undergoing period-doubling cascades [51,52]. Values of the estimated δ near the Feigenbaum constant are considered as an indication that the system complies with the Feigenbaum universality, whereas values far from this constant indicate non-Feigenbaum behavior.
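The ratio itself is not shown in this excerpt; assuming the standard finite approximation built from the four bifurcation points named above, a minimal sketch is:

```python
# Sketch of the finite Feigenbaum-ratio estimate from the first four
# period-doubling bifurcation parameter values (the exact form of the ratio
# used in the paper is an assumption here).
FEIGENBAUM_DELTA = 4.6692  # universal constant, truncated

def delta_estimate(R2, R4, R8, R16):
    """Ratio of successive parameter intervals between period doublings."""
    return (R8 - R4) / (R16 - R8)

# A cascade whose spacings shrink roughly by the universal factor gives an
# estimate near the constant (illustrative values only):
print(delta_estimate(1.0, 1.25, 1.3035, 1.3150))  # about 4.65
```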
Essentially, the number δ estimated from a period-doubling cascade offers us a way to classify the route into chaos. If the estimated δ is close to the Feigenbaum constant, then the transition into chaos is well understood and can be reduced to the dynamics of a one-dimensional map. If it is far from the Feigenbaum constant, then the transition to chaos is underpinned by more complex processes. Particularly, if the estimated δ is larger than the Feigenbaum constant, then the onset of chaos is considered relatively more abrupt.

C. Bifurcation analysis using numerical continuation

The toolbox MATCONT [53] was used for the detection of bifurcation points and the numerical continuation of bifurcation curves presented in this work. A fourth-order Runge-Kutta method (ode45) implemented in MATLAB Release 2013b (The MathWorks, Inc., Natick, MA) was used for the numerical integration of the ODEs in all the reported results. Other built-in ODE solvers were also used and produced similar results.

III. RESULTS

In this work, the role of divisive inhibition in the transition from order to chaos through a period-doubling cascade is examined. For this purpose, the model introduced in the previous section was used. The connectivity between the two inhibitory populations is unidirectional and follows the experimental findings in neocortical networks [2,5]. A schematic of the model with all the connectivity parameters can be found in Fig. 1. A numerical approach was used for its analysis throughout.

The introduced model can exhibit transitions from a limit cycle (order) to a chaotic attractor through a cascade of period-doubling bifurcations. Figure 2(a) shows the bifurcation diagram of such a transition. The cascade parameter used in this example is the self-excitation synaptic weight, w_ee. The limit cycle emanates from a supercritical Andronov-Hopf bifurcation and the amplitude of the oscillation increases with the increase of the cascade parameter. The first four period-doubling bifurcations are labeled as R_2, R_4, R_8, and R_16, denoting the beginning of the period-2, period-4, period-8, and period-16 cycles, respectively. Instances of the phase space at different stages of the transition can be seen in Fig. 2(b), including the chaotic attractor appearing at the end of the cascade. The strange attractor is topologically similar to the Rössler attractor [54], the Sprott D attractor [55], and the Genesio-Tesi attractor [56], exhibiting a stretching and folding mechanism [57]. The period-doubling cascade shown in Fig. 2(a) has an estimated Feigenbaum number close to the Feigenbaum constant.

The behavior during the transition in this model, which includes divisive inhibition, is compared with an equivalent one which does not include divisive inhibition. All the parameters of the model are the same; the only thing that changes is the quality of inhibition delivered by the secondary inhibitory population. The two versions of the model are labeled as ESD and ES Ś and are expressed by the sets of ODEs in Eq. (2) and Eq. (4), respectively. The first four period-doubling bifurcations R_2, R_4, R_8, and R_16 were numerically calculated in both the ESD and ES Ś models using the MATCONT toolbox [53] (see Models and Methods). The same toolbox was used to produce the period-doubling bifurcation curves by varying both the cascade parameter and another connectivity parameter. This way it is possible to explore whether the transition from order to chaos is sensitive to changes in network connectivity. The resulting bifurcation diagrams are shown in Figs. 3(a) and 3(b).
As shown in the lower panels of Figs. 3(a) and 3(b), the rate of bifurcation δ was calculated for each value of the varying parameter on the abscissa (see Models and Methods). This measure was plotted alongside the Feigenbaum constant δ = 4.6692 · · · [35], represented by the dashed line in the same panels. This plot reveals whether the behavior of the model follows Feigenbaum universality. In Fig. 3, an example of the comparison between ESD and ES Ś is shown, with the self-excitation w_ee being the cascade parameter and w_ed (respectively w_eś for ES Ś) being the varying connectivity parameter.

From the example in Fig. 3, it is obvious that the two versions of the model, ESD and ES Ś, can exhibit a significantly different behavior. While the δ value is always near the Feigenbaum constant in the ESD model, the δ value for the ES Ś model is sensitive to changes of the parameter w_eś and can take a wide range of values. There is actually a linear increase as the parameter w_eś is increased. The results indicate that the ESD model complies with the Feigenbaum universality and always has a smooth transition from order to chaos, whereas the ES Ś model exhibits non-Feigenbaum behavior with transitions which can be much more abrupt. Similar observations can be made using other cascade parameters, like w_ds or w_se, and other varying connectivity parameters (see also Appendix A).

This difference in behavior between the ES Ś and ESD models can also be seen in the first return maps of their chaotic attractors [see Figs. 3(c) and 3(d)]. By taking the local minima of the chaotic activity of the excitatory population E, we constructed the first return maps E_min(n + 1) vs E_min(n), where E_min(n) is the n-th local minimum. In particular, the chaotic activity was produced using the parameters marked with a red cross in Figs. 3(a) and 3(b). As shown in Figs. 3(c) and 3(d), only the return map of the ESD attractor is one-dimensional and unimodal, indicating compliance with Feigenbaum universality [52]. This type of return map is typical of attractors resulting from a stretching and folding mechanism [57]. In contrast, the ES Ś model produces a two-dimensional and bimodal return map, indicating that the attractor is not folded completely upon the first return; that is, its dynamics cannot be described by a one-dimensional map as the universality requires.

The question that arises is: how does the divisive inhibition in the ESD model ensure Feigenbaum behavior, preventing any abrupt transitions? Can this observation be generalized to all possible parameter sets that produce transitions to chaos through a period-doubling cascade? Studies on non-Feigenbaum behavior suggest that phenomena of codimension-2 or higher can disturb the period-doubling curves in their neighborhood, resulting in arbitrarily abrupt transitions [39]. In Fig. 4, the bifurcation diagrams also include the Andronov-Hopf bifurcation curve and the saddle-node (also known as fold) bifurcation curve along with the period-doubling bifurcation curves. The ES Ś diagram reveals a codimension-2 bifurcation called the zero-Hopf bifurcation (or fold-Hopf) at the point where the two bifurcation curves tangentially intersect [58]. As the δ value indicates, the period-doubling bifurcation curves are disturbed near the zero-Hopf bifurcation, resulting in non-Feigenbaum behavior. For higher values of w_ds, though, away from the zero-Hopf bifurcation point, the δ value returns to values near the Feigenbaum constant.
A branch of subcritical Neimark-Sacker bifurcation also appears in Fig. 4(a), originating from the zero-Hopf point as expected [58]. This results in the appearance of an unstable torus in phase space for parameter values between the Andronov-Hopf and Neimark-Sacker bifurcation curves (shaded area). Tori (stable and unstable) and quasiperiodic activity are features easily found in the ES Ś model and they are always linked with the zero-Hopf and Neimark-Sacker bifurcations in this model. This and similar findings (see Appendix A) suggest that the appearance of such bifurcations (zero-Hopf and Neimark-Sacker) can be responsible for the non-Feigenbaum behavior and abrupt transitions in our model. The implication of the Neimark-Sacker bifurcation in non-Feigenbaum behavior is also documented in another study in which such phenomena are found in a two-dimensional model map [41]. Apparently these bifurcations can appear in the ES Ś system, but what about the ESD? Does the divisive inhibition prevent such bifurcations for all possible parameter settings?

To address this question we introduce an additional parameter in the ESD model. The parameter q ∈ [0,1] creates a continuum between the two extremes: ES Ś at q = 0 and ESD at q = 1. With any other value within this range, the secondary inhibitory population of the model exhibits a combination of subtractive and divisive inhibition [for a schematic see Fig. 5(a)]. Using this model, it is possible to connect the results shown in Figs. 3 and 4: the system switches gradually from non-Feigenbaum (q < 0.2) to Feigenbaum (q > 0.4) behavior. The saddle-node bifurcation curve and also the zero-Hopf bifurcation point are found for low values of q, near the ES Ś extreme. Starting from q = 0, the saddle-node bifurcation curve is very close to the Andronov-Hopf bifurcation curve. They tangentially meet each other at the zero-Hopf point and then they diverge rapidly from each other for values of q higher than 0.2. The saddle-node bifurcation curve reaches a cusp point (CP) at around q = 0.51 and returns back to lower values of q, preventing any subsequent interactions with the Andronov-Hopf bifurcation curve for high values of q (q > 0.5). Note also that at around q = 0.4 and w_ee = 18.3 the two bifurcation curves seem to intersect again but actually they do not; two different fixed points bifurcate separately in this case, so no zero-Hopf is produced there. Despite the fact that the cusp point does not always appear in such bifurcation diagrams, and therefore the saddle-node bifurcation curve can sometimes reach high values of q (depending on the parameter set used), the two bifurcation curves always seem to diverge from each other for high values of q (data not shown). If this is true for all possible parameter sets, then the appearance of a zero-Hopf bifurcation is impossible for high values of q (near the ESD extreme). Also, the presence of the cusp point (CP) indicates that the phenomenon of hysteresis is possible for low q values. During this phenomenon the whole period-doubling cascade can be skipped and the system can directly transition from a resting state to chaos through a saddle-node bifurcation instead (see Appendix B for an example). Note that the saddle-node bifurcation is considered the most common transition into seizure dynamics [59].
Next we examined whether it is possible to find any zero-Hopf bifurcation points in the parameter space for high values of q. For this purpose, a numerical optimization approach was used to search for such points in the parameter space for different values of q. In particular, the search starts from random points in the parameter space and tries to converge to a fixed point which simultaneously undergoes saddle-node and Andronov-Hopf bifurcations by varying the connectivity parameters. More precisely, the search tries to find fixed points where one of the eigenvalues of the Jacobian matrix is zero, λ_1 = 0, and the other two are purely imaginary conjugates, λ_2,3 = ±iω [60] (see Appendix C for details). Running the algorithm multiple times with these criteria is expected to reveal multiple occurrences of the zero-Hopf bifurcation in the parameter space. The algorithm was tested and successfully managed to find the particular zero-Hopf bifurcation points that were first found by MATCONT and shown in Figs. 4 and 5, demonstrating its reliability. After multiple runs (10^7 for each q value), the overall results of this search are shown in Fig. 5(b). The two curves show all the occurrences of zero-Hopf bifurcation which were found for different values of q. Each curve corresponds to a different subset of varying parameters (see Appendix C for details). Apparently all the zero-Hopf points found in the system are limited to the region near the ES Ś extreme, with q < 0.5. This plot clearly shows that it is impossible for the random search algorithm to find any fixed points undergoing a zero-Hopf bifurcation for values of q ≥ 0.5. This result, coupled with some further analysis on the distribution of the zero-Hopf points (see Appendix D), indicates that pure or almost pure divisive inhibition can prevent phenomena like zero-Hopf and Neimark-Sacker bifurcations. Therefore it prevents non-Feigenbaum behavior and abrupt transitions into chaos.

IV. DISCUSSION

The comparison between a model of neocortical networks with divisive inhibition (ESD, q = 1) and an equivalent one without divisive inhibition (ES Ś, q = 0) suggests that gain control plays a special role in the dynamics of such networks. The present numerical study of the transition from order to chaos in this type of network shows that pure or almost pure divisive inhibition ensures that the transition always complies with Feigenbaum universality. Complying with Feigenbaum universality means that there is always a well-defined boundary (in relative terms) that separates the regions of order and chaos in the parameter space regardless of the specific values of the connectivity weights. In contrast, when the divisive inhibition is replaced by subtractive inhibition, the transition can be abrupt or smooth depending on the exact parameter settings of the network. Without divisive inhibition, the period-doubling transition to chaos can be considered unpredictable. An intuitive representation of the difference between the two cases is depicted in the schematic of Fig. 5(c). As outlined in the Introduction, similar findings about divisive inhibition (preventing instabilities and the sensitivity to parameter changes) were found in a recurrent network of interconnected neurons with firing rate dynamics [12]. This indicates that this effect of divisive inhibition is not limited only to the abstract Wilson-Cowan type of models. Consequently, the role of divisive inhibition is worth investigating also in spiking neural networks that feature gain modulation (e.g., Ref. [61]).
The non-Feigenbaum behavior, which is only found when the inhibition is far from being divisive (low q values), is linked with the appearance of zero-Hopf and Neimark-Sacker bifurcations. In general, when the model lacks divisive inhibition, it seems to have a more diverse dynamic repertoire, with the possibility of exhibiting phenomena of codimension-2 or higher. This increased effective dimension of the dynamics might be the underlying explanation for the non-Feigenbaum behavior found in the model, as suggested in Ref. [39]. The gain control mechanism prevents these phenomena and therefore prevents non-Feigenbaum critical behavior. Additionally, the simple generic structure of this system suggests that gain control might have a similar effect in structurally equivalent systems.

The model is capable of incorporating mixed effects from divisive and subtractive inhibition (using the parameter q). This is because different factors have been suggested to enable and modulate (possibly in combination) divisive inhibition. Such factors include synaptic noise [62,63] and the target positions of the inhibitory input on the principal cells [4]. As both the synaptic noise variance and the target position can vary in a continuous fashion, it is reasonable to include the degree of divisive inhibition as a continuous parameter. We shall elaborate on the implications of linking synaptic noise to our parameter q. Experimental studies investigated how synaptic noise influences the gain control mechanism of shunting inhibition, which is provided primarily by soma-targeting interneurons [2]. By simulating the background synaptic noise with a dynamic clamping technique, it was shown that highly variable synaptic input is required for the modulation of neuronal gain [62]. A similar in vitro approach was taken in Ref. [63], where the magnitude and the variance of the excitatory and inhibitory conductances were controlled independently. In agreement with the previous study, the input-output relationship of pyramidal neurons was divisively modulated proportionally to the variance of the injected conductances. A computational study of this mechanism also demonstrated the importance of highly variable synaptic noise in gain modulation with a biophysically detailed neuron model [64] (for a review, see Ref. [3]). In our model, high values of the q parameter can indicate that the network is functioning under high synaptic noise conditions, that is, high variance of the synaptic currents. By linking the parameter q with high synaptic noise, our model provides hypotheses which can be tested either experimentally with electrophysiological setups or computationally with detailed networks of spiking neuron models. For instance, we predict that when the synaptic noise is increased, the possible dynamic phenomena in the network are more limited (no codimension-2 bifurcations), structures such as tori are less likely to arise, the transition to chaos through period-doubling bifurcations follows Feigenbaum behavior, and the chaotic attractor can be described by a one-dimensional first return map. In experimental setups, the prediction about tori might prove most interesting, as they would correspond to quasiperiodic oscillations or amplitude-modulated oscillations (also observed in experiments [65]).
Our findings might also have implications for the understanding of pathological dynamics in the brain and the role that gain control may play in their onset. The dynamics in pathologies like epilepsy and Parkinson's disease are characterized by hypersynchronous activity in local networks [16,66] and, consequently, reduced entropy [67]. This activity can be either regular or irregular, but the common feature is the excessive synchronous activity that is detected in electroencephalogram (EEG) or local field potential (LFP) recordings. Such abnormally synchronous oscillations are also characterized as paroxysmal events, that is, featuring rapid fluctuations between extremely high and low values in the mean-field potential. The transition into chaos in our model can be considered as such a transition to a pathological state exactly because of the paroxysmal oscillations that emerge (e.g., see Fig. 8 in Appendix B). Figure 2(a) shows how the local minima get more and more extreme as we progress through the cascade and into the chaotic regime. A similar plot can be produced for the local maxima of the system (not shown), indicating that the range of activity increases while the oscillation becomes more complex. These observations are typical among period-doubling cascades and the resulting chaotic attractors, as dictated by the α constant of the Feigenbaum universality [35]. Through these cascades, the activity is pushed to its limits and, consequently, becomes increasingly paroxysmal. Many computational studies modeled pathological dynamics, and in particular seizurelike activity, with chaotic attractors that produce complex paroxysmal oscillations resembling the spike-wave or polyspike-wave complexes usually seen in epilepsy patients' EEG [24,25,31,42]. In addition, some of these models feature period-doubling cascades at the onset or offset of the paroxysmal activity [24,31,42]. Period-doubling cascades were also detected in the analysis of EEG taken from patients with temporal lobe epilepsy [34]. Hence we suggest that the low-dimensional chaotic attractor as introduced here could be identified as a paroxysmal state, with gain control serving as a way to prevent relatively rapid transitions into such a state.

In more general terms, chaotic dynamics in local networks imply a failure of the stable periodic activity that is necessary for long-range synchronization at specific frequencies. Hence, any brain function that relies on stable periodic activity of local neocortical networks would be impaired by the onset of chaos. It is well established that long-range synchronization at α and θ frequencies is crucial for memory and other cognitive functions [68]. Furthermore, failure in long-range synchronization at β and γ frequencies is associated with pathologies like schizophrenia [16]. The results presented here suggest that divisive inhibition is responsible for maintaining stable periodic behavior and thus enabling synchronization. Indeed, preliminary simulation results in a paradigm of long-range synchronization between two local networks suggest that the inclusion of divisive inhibition prevents chaotic activity and enhances synchronization. Hence, by preventing abrupt transition into chaos, divisive inhibition could act to prevent the onset of pathological neural dynamics.

ACKNOWLEDGMENTS

C.A.P. was supported by the Wellcome Trust (099755/Z/12/A). M.K. and Y.W.
were supported by the Human Green Brain project (http://www.greenbrainproject.org) funded through EPSRC (EP/K026992/1) and the CANDO project (http://www.cando.ac.uk/) funded through the Wellcome Trust (102037) and EPSRC (NS/A000026/1). A.J.T. was supported by MRC (MR/J013250/1). This work made use of the facilities of the N8 HPC Centre of Excellence, provided and funded by the N8 consortium and EPSRC (Grant No. EP/K000225/1). The Centre is co-ordinated by the Universities of Leeds and Manchester. We thank Gerold Baier, Fred Wolf, and Bulcsú Sándor for helpful discussions.

APPENDIX B: TRANSITION TO CHAOS THROUGH A SADDLE-NODE BIFURCATION (HYSTERESIS PHENOMENON)

As shown in Fig. 5(d), the system can enter the chaotic region through a saddle-node bifurcation at low values of q. This is possible as a result of a hysteresis phenomenon. Consider the following scenario. We keep q constant at 0.2. By increasing the parameter w_ee from 17 to 19, the activity starts from a resting state (i.e., converges to a stable fixed point) and remains in the resting state despite the appearance of a limit cycle at the Andronov-Hopf curve (at w_ee ≈ 18.7). The system is bistable at this point. Then, by increasing w_ee even more, the limit cycle undergoes the period-doubling cascade while the activity remains at rest. At w_ee ≈ 21 the stable fixed point collapses on the saddle point and vanishes (saddle-node bifurcation), leaving the system monostable again with the chaotic attractor as the only attractor in phase space. At this point the system transitions from a resting state to a chaotic state directly. A trace of such a transition can be seen in Fig. 8. Until time point 50, the activity is at rest. The saddle-node bifurcation occurs at time point 50 and the activity is chaotic after that. At that time point, the value of the parameter w_ee reaches 21. See also Table I for the rest of the parameters [same as Fig. 5(d)]. Note that this type of bifurcation, the bistable nature of the behavior, and the direct current shift that is present resemble the experimental signature of seizurelike event onset [59].
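The ESD equations are not reproduced in this excerpt, so the sketch below is illustrative only: it assumes a user-supplied right-hand-side function (here called esd_rhs, a hypothetical name) and shows how the w_ee ramp scenario of Appendix B could be simulated numerically.

```python
# Hedged sketch of the hysteresis scenario: slowly ramp w_ee while integrating,
# assuming a user-supplied ESD right-hand side esd_rhs(state, w_ee, q).
from scipy.integrate import solve_ivp

def ramp_w_ee(t, t_end=100.0, w_start=17.0, w_end=22.0):
    # Linear ramp of the self-excitation weight (start/end values are assumptions).
    return w_start + (w_end - w_start) * min(t, t_end) / t_end

def simulate_ramp(esd_rhs, state0, q=0.2, t_end=100.0):
    def rhs(t, state):
        return esd_rhs(state, w_ee=ramp_w_ee(t, t_end), q=q)
    sol = solve_ivp(rhs, (0.0, t_end), state0, max_step=0.05)
    # A jump from a flat resting trace to large-amplitude irregular oscillations
    # marks the direct saddle-node transition into chaos described above.
    return sol.t, sol.y
```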
APPENDIX C: RANDOM SEARCH IN PARAMETER SPACE FOR ZERO-HOPF BIFURCATIONS

A nonlinear optimization method was used to find zero-Hopf bifurcations starting from random points in the parameter space. In particular, the fminsearch function of MATLAB Release 2013b was used for the reported results. The search tries to find fixed points where one of the eigenvalues of the Jacobian matrix is zero, λ_1 = 0, and the other two are purely imaginary conjugates, λ_2,3 = ±iω [60]. The algorithm starts from a random point p_0 in the parameter space and tries to solve the optimization problem min_p z(p), where the objective z(p) combines the following terms. The set p is a set of parameters including the three variables E, S, and D and four connectivity weights w. The functions g_i, i = {1,2,3}, are the right-hand sides of the ODEs in Eq. (2) in combination with the input-output function in Eq. (5). Minimizing these functions to 0 is equivalent to solving the system of the three nullclines and therefore finding a fixed point in the phase space. The eigenvalues λ_i, i = {1,2,3}, are the eigenvalues of the Jacobian matrix of the system. Minimizing the product ∏_i |λ_i|² ensures that at least one of the eigenvalues is 0, which is the criterion for the saddle-node bifurcation. Given that one of the eigenvalues is 0, minimizing the sum Σ_i Re(λ_i)² + [Σ_i Im(λ_i)]² ensures that the other two eigenvalues are complex conjugates with zero real parts. This is the criterion for the Andronov-Hopf bifurcation. The penalty term l is a positive number only when the search algorithm diverges outside the valid parameter space in which the search is limited. This number is proportional to the divergence from the valid parameter space. In all other cases l = 0. The valid parameter space is enclosed in the range [0, 0.5] for each of the three variables E, S, and D and the range [0, 50] for each of the varying connectivity parameters w. The results shown in Fig. 5(b) are produced for two different cases, for either p = {E, S, D, w_ee, w_ed, w_se, w_ds} or p = {E, S, D, w_es, w_ss, w_de, w_dd}. Note that both combinations of the connectivity parameters involve all three nullclines of the system. Given that the parameter space is a seven-dimensional space, for each q value the algorithm was run 10^7 times in order to achieve a reasonably comprehensive search.
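As an illustration only, the following sketch mirrors the objective described above in Python, with scipy.optimize.minimize (Nelder-Mead) playing the role of MATLAB's fminsearch. The right-hand side rhs and Jacobian jac of the model are assumed to be supplied by the user, and the way the terms of z(p) are combined is an assumption, since the original expression is not reproduced here.

```python
# Hedged sketch of the zero-Hopf search objective described above (not the
# authors' code). rhs(state, weights) and jac(state, weights) are assumed
# user-supplied implementations of the model's right-hand side and Jacobian.
import numpy as np
from scipy.optimize import minimize

def zero_hopf_objective(p, rhs, jac, bounds):
    state, weights = p[:3], p[3:]
    g = rhs(state, weights)                       # the three nullcline equations
    lam = np.linalg.eigvals(jac(state, weights))  # eigenvalues of the Jacobian
    fixed_point_term = np.sum(g ** 2)             # zero at a fixed point
    saddle_node_term = np.prod(np.abs(lam) ** 2)  # zero if some eigenvalue is 0
    hopf_term = np.sum(lam.real ** 2) + np.sum(lam.imag) ** 2  # zero if remaining pair is +/- i*omega
    penalty = sum(max(0.0, lo - v) + max(0.0, v - hi)
                  for v, (lo, hi) in zip(p, bounds))  # grows outside the valid box
    return fixed_point_term + saddle_node_term + hopf_term + penalty

def random_search(rhs, jac, bounds, trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    hits = []
    for _ in range(trials):
        p0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
        res = minimize(zero_hopf_objective, p0, args=(rhs, jac, bounds),
                       method="Nelder-Mead")
        if res.fun < 1e-10:  # acceptance tolerance is an assumption
            hits.append(res.x)
    return hits
```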
APPENDIX D: DISTRIBUTION OF THE ZERO-HOPF PHENOMENA IN THE PARAMETER SPACE

This section reports some supporting results about the random search which was performed to find zero-Hopf bifurcation points for different values of the parameter q. As shown in Fig. 5(b), the points were only found for low values of q. Varying the parameters w_es, w_ss, w_de, and w_dd (red dashed line), all the zero-Hopf points were limited to q ≤ 0.45. Figure 9(a) shows the distribution of these points across the four connectivity parameters. It is evident that all zero-Hopf points are found in a limited range of values below 50 for each parameter. The search was actually limited to values below 50, but even if this limit was higher, it would not return more points. This is true assuming that all zero-Hopf points are lying on a single continuous hypersurface in the parameter space.

Figure 9(b) shows the distribution of the same points across the varying parameters w_ee, w_ed, w_se, and w_ds [black solid line in Fig. 5(b)]. In this case it is apparent that as the q value increases, the zero-Hopf points are found at increasingly higher values of w_ed. At q = 0.25, the ZH points are actually found near the arbitrarily chosen upper limit of 50 for w_ed. This suggests that our results might be biased. The other three parameters (w_ee, w_se, and w_ds) are clearly limited to values lower than 50, so they do not raise any concerns. In order to check whether more ZH points can be found for higher values of q by varying these parameters, the valid ranges for parameter values were expanded to [0, 200] for w_ed and [0, 100] for the other three parameters. The random search was run again and indeed a few more ZH points were found for q = 0.3 and q = 0.35. For these additional ZH points, w_ed has very high values (in the range [150, 200]), whereas the other three parameters remain very limited to values below 50 (data not shown). Apparently, the hypersurface that accommodates the ZH points collapses onto just one parameter, namely w_ed. This result suggests that it might be possible to produce zero-Hopf phenomena even for high values of q just by disproportionately increasing w_ed while the other parameters remain stable at low values. It is also evident from the previously reported results that the chaotic region can actually be extended for high values of w_ed. But as Fig. 10 shows, the chaotic attractor quickly flattens out as w_ed increases. The max-min amplitude along the dimension D decreases much faster than the max-min amplitude along the other two dimensions, making the attractor almost a two-dimensional object in phase space. This flattening of the attractor defeats its modeling purpose. Assuming that all the parameters have the same order of magnitude, ZH points cannot be found for high values of q. So, based on these results and assuming that all zero-Hopf points are lying on a single continuous hypersurface and also assuming that all connectivity parameter values have the same order of magnitude, no zero-Hopf bifurcations can be found in a model with strong divisive inhibition (high q value) and therefore any abrupt period-doubling transition to chaos is prevented.

FIG. 10. Increasing w_ed and keeping everything else constant quickly flattens out the chaotic attractor. Ampl(X) denotes the max-min amplitude in dimension X. Note that q = 1 in this case. See also Table I for the rest of the parameter values.

FIG. 1. (Color online) Schematic of the ESD model. The model incorporates an excitatory population (E), a subtractive (S), and a divisive (D) inhibitory population. The lower panels show how the two inhibitory mechanisms modulate the input-output function of their target population. Note the nonreciprocal inhibitory connection between S and D. The external inputs to the populations are omitted in this schematic.
FIG. 3. (Color online) Comparison between the ES Ś and ESD model in terms of their Feigenbaum behavior and their first return maps. [(a)-(b)] Bifurcation diagrams for the ES Ś and ESD models showing the first four period-doubling curves. The parameter w_ee was used as the cascade parameter in both cases. The lower panels show the calculated δ of the cascades for varying w_eś and w_ed, respectively. [(c)-(d)] First return maps of typical chaotic attractors produced by the ES Ś and ESD model, respectively. The maps are produced by taking the local minima of the variable E (red dots on the attractors in the inset plots). The parameter values used to produce the chaotic attractors are marked with a red cross in the respective bifurcation diagram in (a) or (b) (see also Table I).

FIG. 4. (Color online) Zero-Hopf (ZH) and Neimark-Sacker bifurcations are implicated in the non-Feigenbaum behavior of the ES Ś model. The Andronov-Hopf bifurcation curves (orange) are plotted alongside the period-doubling bifurcation curves for both the ES Ś and ESD model. A saddle-node bifurcation curve (cyan) was also found near the Andronov-Hopf curve, only in the case of ES Ś. These two bifurcation curves tangentially intersect and produce a zero-Hopf (ZH) bifurcation point. Near the ZH point, the value of δ is increased, indicating non-Feigenbaum behavior in the ES Ś model. In contrast, the ESD model does not exhibit any interaction between Andronov-Hopf and saddle-node curves (at least in this example) and the model continues to obey Feigenbaum universality. The shaded area indicates the existence of a torus. Other plotting conventions are the same as in Figs. 3(a)-3(b).

FIG. 5. (Color online) Zero-Hopf (ZH) bifurcations can only be found when divisive inhibition is far from being purely divisive (low values of q). (a) Schematic of the model incorporating the q parameter, which enables the modeling of nonideal divisive inhibition and creates a spectrum between ES Ś and ESD. (b) Counting the zero-Hopf bifurcations found for different values of q using a random search in the parameter space. (c) Intuitive schematic of how the presence or the absence of the divisive inhibition shapes the boundary between order and chaos. (d) Bifurcation diagram showing an example of the spectrum between ES Ś (for q = 0) and ESD (for q = 1). Note the presence of the saddle-node curve (cyan) near the Andronov-Hopf curve (orange) only for low values of q.

Figure 6 shows an example of an abrupt period-doubling cascade leading to chaos taken from the ES Ś model. The specific parameters used are given in Table I, Parameter Set 5.
Comparing it to the typical, fractal in nature, Feigenbaum cascade in the ESD model (see Fig. 2), this cascade can be considered a much more abrupt transition into chaos, with δ ≈ 12.35, which is much higher than the Feigenbaum constant [35]. Figures 7(a) and 7(b) show two more examples of non-Feigenbaum behavior produced by the ES Ś model. The specific parameters used in both cases are given in Table I. The period-doubling bifurcation curves, in both of these examples, are found to be distorted or twisted, resulting in values of δ that can vary widely. The presence of the saddle-node bifurcation curves near these cascades is hypothesized to be interfering locally with limit cycles in phase space. This prevents the neighboring limit cycles from bifurcating in a typical Feigenbaum way. Saddle-node and Andronov-Hopf bifurcation curves also interact, producing the phenomena of zero-Hopf (ZH) and Neimark-Sacker (torus) bifurcations. Such phenomena were implicated in non-Feigenbaum behavior in previous works [39,41].

FIG. 7. (Color online) Zero-Hopf bifurcation found near the non-Feigenbaum critical behavior in the ES Ś model. The parameter sets for these examples can be found in Table I. The legend applies for both (a) and (b). Other plotting conventions are the same as in Fig. 4.

FIG. 8. Example of the activity entering chaos directly from the resting state. At time point 50, a saddle-node bifurcation occurs and the node on which the activity was resting vanishes. After that point the trajectory converges to the chaotic attractor.

FIG. 9. Box plots showing the distribution of ZH points found in the parameter space for the four varying parameters (a) {w_es, w_ss, w_de, w_dd} and (b) {w_ee, w_ed, w_se, w_ds}. The horizontal line indicates the median value, the box edges indicate the 25th and 75th percentiles, and the whiskers extend to include the whole range of values. (a) As q increases, the ZH points are found in an increasingly limited range of values below 50. (b) In contrast, the values for the parameter w_ed increase as q increases and they reach the upper limit of 50.
FIG. 6. Example of an abrupt period-doubling cascade in the ES Ś model. The fourth period-doubling bifurcation comes relatively much more abruptly compared to the previous one (δ ≈ 12.35).
10,880.8
2015-09-23T00:00:00.000
[ "Physics" ]
Dynamical Transitions in a Pollination–Herbivory Interaction: A Conflict between Mutualism and Antagonism

Plant-pollinator associations are often seen as purely mutualistic, while in reality they can be more complex. Indeed they may also display a diverse array of antagonistic interactions, such as competition and victim–exploiter interactions. In some cases mutualistic and antagonistic interactions are carried out by the same species but at different life-stages. As a consequence, population structure affects the balance of inter-specific associations, a topic that is receiving increased attention. In this paper, we developed a model that captures the basic features of the interaction between a flowering plant and an insect with a larval stage that feeds on the plant's vegetative tissues (e.g. leaves) and an adult pollinator stage. Our model is able to display a rich set of dynamics, the most remarkable of which involves victim–exploiter oscillations that allow plants to attain abundances above their carrying capacities and the periodic alternation between states dominated by mutualism or antagonism. Our study indicates that changes in the insect's life cycle can modify the balance between mutualism and antagonism, causing important qualitative changes in the interaction dynamics. These changes in the life cycle could be caused by a variety of external drivers, such as temperature, plant nutrients, pesticides and changes in the diet of adult pollinators.

Introduction

"I must endure the presence of two or three caterpillars if I wish to become acquainted with the butterflies." (Le Petit Prince, Chapter IX, Antoine de Saint-Exupéry)

Mutualism can be broadly defined as cooperation between different species [1]. In mutualistic interactions there are typically benefits and costs in terms of resources, energy and time, and shifts in this balance can potentially lead to periodic alternation between mutualism and herbivory. Thus, when nonequilibrium dynamics are involved, questions concerning the overall nature (positive, neutral or negative) of mixed interactions may not have simple answers. In this article we study the feedback between insect population structure, pollination and herbivory. We want to understand: how does the balance between costs (herbivory) and benefits (pollination) affect the interaction between plants (e.g. D. wrightii) and herbivore-pollinator insects (e.g. M. sexta)? Also, what role does insect development have in this balance and in the resulting dynamics? We use a mathematical model which considers two different resources provided by the same plant species, nectar and vegetative tissues. Nectar consumption benefits the plant in the form of fertilized ovules, and consumption of vegetative tissues by larvae causes a cost. Our model predicts that the balance between mutualism and antagonism, and the long term stability of the plant-insect association, can be greatly affected by changes in larval development rates, as well as by changes in the diet of adult pollinators.

Methods

Our model concerns the dynamics of the interaction between a plant and an insect. The insect life cycle comprises an adult phase that pollinates the flowers and a larval phase that feeds on non-reproductive tissues of the same plant. Adults oviposit on the same species that they pollinate (e.g. the D. wrightii - M. sexta interaction). Let P, L and A denote the biomass densities of the plant, the larva, and the adult insect, respectively.
An additional variable, the total biomass of flowers F, enables the mutualism by providing resources to the insect (nectar) and by collecting services for the plant (pollination). The relationship is facultative-obligatory. In the absence of pollination, plant biomass persists by vegetative growth (e.g. root, stem and leaf biomass are being constantly renewed). For the sake of simplicity, and because we want to focus on the plant-insect interaction, we describe vegetative growth using a logistic growth rate, a choice that is empirically justified for tobacco plants [18]. In the absence of the plant, however, the insect always goes extinct because larval development relies exclusively on herbivory, even if adults pollinate other plant species. This is based on the biology of M. sexta [6]. The mechanism of interaction between these four variables (P, L, A, F), as shown in Fig. 1, is described by the system of ordinary differential equations (ODE) in (1), where r: plant intrinsic growth rate, c: plant intra-specific self-regulation coefficient (also the inverse of its carrying capacity), a: pollination rate, b: herbivory rate, s: flower production rate, w: flower decay rate, m, n: larva and adult mortality rates, σ: plant pollination efficiency ratio, ε: adult consumption efficiency ratio. Like ε, parameter γ is also a consumption efficiency ratio, but we will call it the maturation rate for brevity since we will refer to it frequently. Our model assumes that pollination leads to flower closure [19], causing resource limitation for adult insects. Parameter g represents a reproduction rate resulting from the pollination of other plant species, which we do not model explicitly. Most of our results are for g = 0.

We now consider the fact that flowers are ephemeral compared with the life cycles of plants and insects. In other words, some variables (P, L, A) have slower dynamics, and others (F) are fast [20]. Given the near constancy of plants and animals in the flower equation of (1), we can predict that flowers will approach a quasi-steady-state (or quasi-equilibrium) biomass F ≈ sP/(w + aA) before P, L and A can vary appreciably. Substituting the quasi-steady-state biomass in system (1) we arrive at system (2). In system (2) the quantities in square brackets can be regarded as functional responses. Plant benefits saturate with adult pollinator biomass, i.e. pollination exhibits diminishing returns. The functional response for the insects is linear in the plant biomass, but is affected by intraspecific competition [21] for mutualistic resources. We non-dimensionalized this model to reduce the parameter space from 12 to 9 parameters, by casting biomasses with respect to the plant's carrying capacity (1/c) and time in units of the plant biomass renewal time (1/r). This results in a PLA (plant, larva, adult) scaled model, system (3). Table 1 lists the relevant transformations.

There is an important clarification to make concerning the nature and scales of the conversion efficiency ratios σ and ε involved in pollination, and γ for herbivory and maturation. This has to do with the fact that flowers per se are not resources or services, but organs that enable the mutualism to take place, and they mean different things in terms of biomass production for plants and animals. For insects, the yield of pollination is thermodynamically constrained. First of all, a given biomass F of flowers contains an amount of nectar that is necessarily less than F.
More importantly, part of this nectar is devoted to survival, or wasted, leaving even less for reproduction. Similarly, not all the biomass consumed by larvae will contribute to their maturation into adults. Ergo ε < 1, γ < 1. Regarding the returns from pollination for the plants, the situation is very different. Each flower harbors a large number of ovules, thus a potentially large number of seeds [22], each of which will increase in biomass by consuming resources not considered by our model (e.g. nutrients, light). Consequently, a given biomass of pollinated flowers can produce a larger biomass of mature plants, making σ larger than 1. The PLA model (3) has many parameters. However, here we focus on the herbivory rate (β) and the larval maturation rate (γ), because increasing β turns the net balance of the interaction towards antagonism, whereas increasing γ shifts the insect population structure towards the adult phase, turning the net balance towards mutualism. Both parameters also relate to the state variables at equilibrium (i.e. z/y = βγx/ν in (3) for dz/dτ = 0). We studied the joint effects of varying β and γ numerically (parameter values in Table 1) using XPPAUT [23]. ODEs were integrated using Matlab [24] or GNU/Octave [25]. We also present a simplified graphical analysis of our model, in order to explain how different dynamics can arise by varying other parameters. The source codes supporting these results are provided as supplementary material (S1 File). Results Numerical results Fig. 2 shows interaction outcomes of the PLA model as a function of β and γ for specialist pollinators (ϕ = 0). This parameter space is divided by a decreasing Ro = 1 line that indicates whether or not insects can invade when rare. Ro is defined as (see derivation in S1 File) Ro = (εα/ην) · γβ/(μ + γβ), (4) and we call it the basic reproductive number, according to the argument that follows. Consider the following in system (3): if the plant is at carrying capacity (x = 1) and is invaded by a very small number of adult insects (z ≈ 0), the average number of larvae produced by a single adult at a given instant is εαx/(η + z) ≈ εα/η, and during its lifetime (ν⁻¹) it is εα/ην. Larvae die at the rate μ, or mature at a rate equal to γβx = γβ per larva. Thus, the probability of a larva becoming an adult rather than dying is γβ/(μ + γβ). Multiplying the lifetime contribution of an adult by this probability gives the expected number of new adults replacing one adult per generation during an invasion (Ro). More formally, Ro is the expected number of adult-insect-grams replacing one adult-insect-gram per generation (assuming a constant mass-per-individual ratio). Below the Ro = 1 line, small insect populations cannot replace themselves (Ro < 1) and two outcomes are possible. If the maturation rate is too low, the plant-only equilibrium (x = 1, y = z = 0) is globally stable and plant-insect coexistence is impossible for all initial conditions. If the maturation rate is large enough, stable coexistence is possible, but only if the initial plant and insect biomasses are large enough. This is expected in models where at least one species, here the insect, is an obligate mutualist. In this region of parameter space, the growth of small insect populations increases with population size, a phenomenon called the Allee effect [26]. Above the Ro = 1 line the plant-only equilibrium is always unstable against the invasion of small insect populations (Ro > 1). Plants and insects can coexist in a stable equilibrium or via limit cycles (stable oscillations).
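Since the displayed expression for Ro is easy to misread in this excerpt, the snippet below assembles it directly from the verbal derivation above (specialist case, ϕ = 0); the parameter values are placeholders, not the Table 1 values.

```python
# Basic reproductive number R0 for the specialist case (phi = 0), assembled
# exactly as in the verbal derivation above: lifetime larval output of one
# adult at the plant-only equilibrium (x = 1, z ~ 0) times the probability
# that a larva matures rather than dies.
def r0_specialist(alpha, beta, gamma, epsilon, eta, mu, nu):
    larvae_per_adult_lifetime = epsilon * alpha / (eta * nu)    # (eps*alpha/eta) * (1/nu)
    maturation_probability = gamma * beta / (mu + gamma * beta)
    return larvae_per_adult_lifetime * maturation_probability

# Placeholder parameters (illustrative only; see Table 1 of the paper for the
# values actually used in the scans).
params = dict(alpha=1.0, beta=2.0, gamma=0.3, epsilon=0.3, eta=0.5, mu=0.5, nu=1.0)
R0 = r0_specialist(**params)
print(f"R0 = {R0:.3f} ->",
      "insects can invade when rare" if R0 > 1
      else "small insect populations cannot replace themselves")
```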
The zone of limit cycles occurs for intermediate values of the maturation rate (γ) and it widens with rate of herbivory (β). Plant equilibrium when coexisting with insects can be above or below the carrying capacity (x = 1). When above carrying capacity the net result of the interaction is a mutualism (+,+). While in the second case we have antagonism, more specifically net herbivory (−,+). As it would be expected, increasing herbivory rates (β) shifts this net balance towards antagonism (low plant biomass), while decreasing it shifts the balance towards mutualism (high plant biomass). The quantitative response to increases in the maturation rate (γ) is more complex however (see the bifurcation plot in S1 File). Given that there is herbivory, we encounter victim-exploiter oscillations. However, the oscillations in the PLA model are special in the sense that the plant can attain maximum biomasses above the carrying capacity (x > 1). For an example see Fig. 3. Instead of a stable balance between antagonism and mutualism, we can say that the outcome in Fig. 3 is a periodic alternation of both cases. This is not seen in simple victim-exploiter models, where oscillations are always below the victim's carrying capacity [27,28]. The relative position of the cycles along the plant axis is also affected by herbivory: if β decreases (increases), plant maxima and minima will increase (decrease) in Fig. 3 (see bifurcation plot in S1 File). In some cases the entire plant cycle (maxima and minima) ends above the carrying capacity if β is low enough (see S1 File), but further decrease causes damped oscillations. We also found examples in which coexistence can be stable or lead to limit cycles depending on the initial conditions (see example in S1 File), but this happens in a very restrictive region in the space of parameters (see bifurcation plot in S1 File). Limit cycles can also cross the plant's carrying capacity under the original interaction mechanism (1), which does not assume the steady-state in the flowers (see S1 File, using parameters in the last column of Table 1). Fig. 4 shows the β vs γ parameter space of the model when the adults are more generalist. The relative positions of the plant-only, Allee effect, and coexistence regions are similar to the case of specialist pollinators (Fig. 2). However, the region of limit cycles is much larger. The R 0 = 1 line is closer to the origin, because the expression for R 0 is now (see derivation in S1 File): In other words, this means that the more generalist the adult pollinators (larger ϕ), the more likely they can invade when rare. There is also a small overlap between the Allee effect and limit cycle regions, i.e. parameter combinations for which the long term outcome could be insect extinction or plant-insect oscillations, depending on the initial conditions. Graphical analysis The general features of the interaction can be studied by phase-plane analysis. To make this easier, we collapsed the three-dimensional PLA model into a two-dimensional plant-larva (PL) model, by assuming that adults are extremely short lived compared with plants and larvae (see resulting ODE in S1 File). The closest realization of this assumption could be Manduca sexta, which has a larval stage of approximately 20-25 days and adult stages of around 7 days [29,30]. For a given parametrization (Table 1), the PL model has the same equilibria as the PLA model, but not the exact same global dynamics due to the alteration of time scales. 
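Before turning to the graphical analysis, a numerical aside on how the β-γ classification of Fig. 2 can be reproduced in outline. The scaled equations of the PLA model (3) are not reproduced in this excerpt, so the right-hand side below is a speculative reconstruction that merely matches the structural facts quoted in the text (a saturating pollination benefit, linear herbivory, and dz/dτ = γβxy - νz, so that z/y = γβx/ν at equilibrium); it is a sketch of the classification procedure, not the authors' model or code.

```python
# Speculative sketch of the beta-gamma scan behind Fig. 2. The right-hand side
# is NOT taken from the paper; it is a minimal reconstruction consistent with
# the relations quoted in the text, used only to show how outcomes could be
# classified from long simulations.
import numpy as np
from scipy.integrate import solve_ivp

def pla_rhs(t, u, alpha, beta, gamma, sigma, epsilon, eta, mu, nu):
    x, y, z = u                              # plant, larva, adult (scaled biomasses)
    pollination = alpha * x * z / (eta + z)  # saturating functional response (assumed)
    dx = x * (1.0 - x) + sigma * pollination - beta * x * y
    dy = epsilon * pollination - mu * y - gamma * beta * x * y
    dz = gamma * beta * x * y - nu * z
    return [dx, dy, dz]

def classify(beta, gamma, p):
    sol = solve_ivp(pla_rhs, (0.0, 2000.0), [1.0, 0.1, 0.1],
                    args=(p["alpha"], beta, gamma, p["sigma"], p["epsilon"],
                          p["eta"], p["mu"], p["nu"]),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    t_late = np.linspace(1500.0, 2000.0, 2000)   # discard transients
    x_late, y_late, _ = sol.sol(t_late)
    if y_late.max() < 1e-6:
        return "insect extinction (plant-only state)"
    amp = x_late.max() - x_late.min()
    return "limit cycle" if amp > 1e-3 else "stable coexistence"

p = dict(alpha=2.0, sigma=2.0, epsilon=0.3, eta=0.5, mu=0.5, nu=1.0)  # placeholders
for beta, gamma in [(2.0, 0.1), (2.0, 0.4), (2.0, 0.9)]:
    print(f"beta={beta}, gamma={gamma}: {classify(beta, gamma, p)}")
```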
Yet, this simplification provides insights about the outcomes displayed in Figs. 2 and 4. Fig. 5 shows representative examples of plant and larva isoclines (i.e. non-trivial nullclines) and coexistence equilibria (intersections). Isocline properties are analytically justified (see S1 File and the accompanying [31] worksheet). The local dynamics around an equilibrium depend on the eigenvalues of the Jacobian matrix of the PL model at that equilibrium. However, the highly non-linear nature of the PL model (see S1 File) makes it impractical to infer the signs of the eigenvalues by analytical means (except for the trivial and plant-only equilibria). Thus, we propose to use the local geometry of isocline intersections to infer local stability [32]. Plant isoclines take two main forms: for γσα < ην the isocline lies entirely below (to the left of) the carrying capacity, whereas for γσα > ην parts of the isocline lie above (to the right of) the carrying capacity (6). In both cases, plants grow between the isocline and the axes, and decrease otherwise. Larva isoclines are simpler: they start on the plant axis and bend towards the right when insects tend towards specialization (ϕ < ν), as shown by Fig. 5. When insects tend towards generalism (ϕ > ν), their isoclines increase rapidly upwards like the letter "J" (not shown here, see S1 File). Insects grow below and to the right of the larva isocline, and decrease otherwise. The γσα < ην case in Fig. 5A illustrates scenarios in which pollination rates (α), plant benefits (σ), adult pollinator lifetimes (1/ν) and larva-to-adult transition rates (γ) are low. The plant's isocline is a decreasing curve crossing the plant's axis at its carrying capacity K (x = 1, y = 0). The intersection with the larva isocline creates a globally stable equilibrium, approached by oscillations of decreasing amplitude. The local stability of this equilibrium can be explained partly by the geometry of the intersection: Fig. 5A shows that if plants increase (decrease) above (below) the intersection point, while keeping the insect density fixed, they enter a zone of negative (positive) growth; and the same behavior holds for the insects while keeping the plants fixed. In ecological terms, both species are self-limited around the equilibrium, a strong indication of stability [32]. Together with the fact that the trivial (x = 0, y = 0) and carrying-capacity (x = 1, y = 0) equilibria are saddle points, we conclude that plants and insects achieve a globally stable equilibrium after a period of transient oscillations (provided that insects are viable, e.g. β, γ, ε are large enough). This equilibrium is demographically unfavorable for the plant because its biomass lies below the carrying capacity (x < 1). Indeed, for extreme scenarios of negligible plant pollination benefits (i.e. α and/or σ tending to zero), the plant's isocline approximates a straight line with a negative slope, like the isocline of a logistic prey in a Lotka-Volterra model, which is well known to cause damped oscillations [32]. The γσα > ην case in Figs. 5B,C,D covers scenarios in which pollination rates (α), pollination benefits (σ), adult pollinator lifetimes (1/ν) and larva-to-adult (harm-to-benefit) transition rates (γ) are high. One part of the plant's isocline lies above the carrying capacity, which means that coexistence equilibria with plant biomass larger than the carrying capacity (x > 1) are possible, and this is favorable for the plant.
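Before turning to the individual panels of Fig. 5, note that the eigenvalue criterion invoked above can always be checked numerically even when analytical expressions are intractable. The sketch below does this for a toy Rosenzweig-MacArthur-like two-dimensional system standing in for the PL model, whose actual equations are only given in S1 File and are not reproduced here.

```python
# Generic numerical check of local stability: finite-difference Jacobian at an
# equilibrium and inspection of its eigenvalue real parts. The rhs used here is
# a TOY consumer-resource system standing in for the PL model.
import numpy as np

def toy_rhs(u, r=1.0, K=2.0, a=1.5, h=0.5, e=0.4, m=0.3):
    x, y = u                                   # "plant" and "larva" stand-ins
    uptake = a * x / (1.0 + a * h * x)         # saturating consumption
    return np.array([r * x * (1.0 - x / K) - uptake * y,
                     e * uptake * y - m * y])

def numerical_jacobian(rhs, u_eq, eps=1e-6):
    n = len(u_eq)
    J = np.zeros((n, n))
    for j in range(n):                         # central differences, column by column
        du = np.zeros(n)
        du[j] = eps
        J[:, j] = (rhs(u_eq + du) - rhs(u_eq - du)) / (2.0 * eps)
    return J

# Interior equilibrium of the toy system for the default parameters
# (x* from dy = 0, y* from dx = 0):
x_eq = 0.3 / (1.5 * (0.4 - 0.3 * 0.5))                              # m / (a*(e - m*h)) = 0.8
y_eq = 1.0 * (1.0 - x_eq / 2.0) * (1.0 + 1.5 * 0.5 * x_eq) / 1.5    # r*(1-x/K)*(1+a*h*x)/a
J = numerical_jacobian(toy_rhs, np.array([x_eq, y_eq]))
eigs = np.linalg.eigvals(J)
print("eigenvalues:", eigs)
print("locally stable" if np.all(eigs.real < 0) else "unstable (sustained oscillations expected)")
```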
Fig. 5B shows an example where the larva isocline intersects the plant's isocline twice above the carrying capacity. One intersection is a locally stable coexistence equilibrium, whereas the other is a saddle point. The saddle point belongs to a boundary that separates regions of initial conditions leading to insect persistence or extinction. This can explain the Allee effect, i.e. insect growth rates increase (go from negative to positive) with insect density when insect populations are very small. As the second inequality of (6) widens (γσα ≫ ην), the plant's isocline takes a mushroom-like shape (or "anvil", or letter "O"), as in Fig. 5C,D. The plant's isocline displays a very prominent "hump", like the prey isocline of the Rosenzweig-MacArthur model [27]. As a "rule of thumb", intersections to the right of the hump would lead to damped oscillations, for the reasons explained before (Fig. 5A, for γσα < ην). Also as a "rule of thumb", intersections to the left of the hump (like in Fig. 5C,D) are expected to result in reduced stability. This is because a small increase (decrease) along the plant's axis leaves the plant at the growing (decreasing) side of its isocline, promoting further increase (decrease). This means that plants do not experience self-limitation, which is an indication of instability [32], and we infer that oscillations will not vanish. Fig. 5D shows an example where an intersection to the left of the hump causes instability, leading to limit cycles. However, Fig. 5C shows an exception to this prediction (the intersection is stable). In both examples the intersection occurs above the plant's carrying capacity, thus revealing oscillations alternating above and below the plant's carrying capacity. We stress once more that these predictions based on isocline intersection configurations (left vs right of the hump) must be taken as "rules of thumb". Fig. 5C also reveals an important consequence of the dual interaction between the plant and the insect. As we can see, the presence of a saddle point leads to the Allee effect explained before. But this figure also shows that large larval densities can lead to insect extinction. This can be explained by the fact that at large initial densities the larvae overexploit the plant, and this is followed by an insect population crash from which the insect cannot recover due to the Allee effect. From Mutualism to Exploitation Dynamics As γ, σ, α increase and/or η, ν decrease more and more, the decreasing segment of the plant isocline (the part to the right of the hump) approximates a decreasing line (actually a straight asymptote, see S1 File), while the rest of the isocline is pushed closer and closer to the axes. In other words, when pollination rates (α), benefits (σ), adult lifetimes (1/ν) and larval development rates (γ) increase, plant isoclines come to resemble the isocline of a logistic prey, with a "pseudo" carrying capacity (the rightmost extent of the isocline) larger than the intrinsic carrying capacity (x = 1). Fig. 5D is an example of this. These conditions would promote stable coexistence with large plant equilibrium biomasses. Discussion We developed a plant-insect model that considers two interaction types, pollination and herbivory. Ours belongs to a class of models [33,34] in which balances between costs and benefits cause continuous variation in interaction strengths, as well as transitions among interaction types (mutualism, predation, competition).
In our particular case, interaction types depend on the stage of the insect's life cycle, as inspired by the interaction between M. sexta and D. wrightii [6,14] or between M. sexta and N. attenuata [10]. There are many other examples of pollination-herbivory in Lepidopterans, where adult butterflies pollinate the same plants exploited by their larvae [5,7]. We assign antagonistic and mutualistic roles to larva and adult insect stages respectively, which enable us to study the consequences of ontogenetic changes on the dynamics of plant-insect associations, a topic that is receiving increased attention [8,17]. Our model could be generalized to other scenarios, in which drastic ontogenetic niche shifts cause the separation of benefits and costs in time and space. However, it excludes cases like the yucca/yucca moth interaction [35] where adult pollinated ovules face larval predation, i.e. benefits themselves are deducted. Instead of using species biomasses as resource and service proxies [34], we consider a mechanism (1) that treats resources more explicitly [36]. We use flowers as a direct proxy of resource availability, by assuming a uniform volume of nectar per flower. Nectar consumption by insects is concomitant with service exploitation by the plants (pollination), based on the assumption that flowers contain uniform numbers of ovules. Pollination also leads to flower closure [19], making them limiting resources. Flowers are ephemeral compared with plants and insects, so we consider that they attain a steady-state between production and disappearance. As a result, the dynamics is stated only in terms of plant, larva and adult populations, i.e. the PLA model (3). The feasibility of the results described by our analysis depends on several parameters. The consumption, mortalities and growth rates, and the carrying capacities (e.g. a, b, m, n and r, c in the fourth column of Table 1), have values close to the ranges considered by other models [34,37]. Oscillations, for example, require large herbivory rates, but this is usual for M. sexta [15]. Mutualism-antagonism cycles The PLA model displays plant-insect coexistence for any combination of (non-trivial) initial conditions where insects can invade when rare (R o > 1). Coexistence is also possible where insects cannot invade when rare (R o < 1), but this requires high initial biomasses of plants and insects (Allee effect). Coexistence can take the form of a stable equilibrium, but it can also take the form of stable oscillations, i.e. limit cycles. Previous models combining mutualism and antagonism predict oscillations, but they are transient ones [35,38], or the limit cycles occur entirely below the plant's carrying capacity [39]. We have good reasons to conclude that the cycles are herbivory driven and not simply a consequence of the PLA model having many variables and non-linearities. First of all, limit cycles require herbivory rates (β) to be large enough. Second, given limit cycles, an increase in the maturation rate (γ) causes a transition to stable coexistence, and further increase in herbivory is required to induce limit cycles again (Fig. 2). This makes sense because by speeding up the transition from larva to adult, the total effect of herbivory on the plants is reduced, hence preventing a crash in plant biomass followed by a crash in the insects. Third, when adult pollinators have alternative food sources (ϕ > 1), the zone of limit cycles in the space of parameters becomes larger (Fig. 4). 
This also makes sense, because the total effect of herbivory increases by an additional supply of larva (which is not limited by the nectar of the plant considered), leading to a plant biomass crash followed by insect decline. The graphical analysis provides another indication that oscillations are herbivory driven. On the one hand insect isoclines (or rather larva isoclines) are always positively sloped, and insects only grow when plant biomass is large enough (how large depends on insect's population size, due to intra-specific competition). Plant isoclines, on the other hand, can display a hump (Fig. 5B,C,D), and they grow (decrease) below (above) the hump. These two features of insect and plant isoclines are associated with limit cycles in classical victim-exploiter models [27]. If there is no herbivory or another form of antagonism (e.g. competition) but only mutualism, the plant's isocline would be a positively sloped line, and plants would attain large populations in the presence of large insect populations, without cycles. However, mutualism is still essential for limit cycles: if mutualistic benefits are not large enough (γσα < ην), plant isoclines do not have a hump (Fig. 5A) and oscillations are predicted to vanish. The effect of mutualism on stability is like the effect of enrichment on the stability in pure victim-exploiter models [28], by allowing the plants to overcome the limits imposed by their intrinsic carrying capacity. There is a minor caveat regarding our graphical analysis: whereas a hump in the plant's isocline is a requisite for oscillations to evolve into limit cycles, this does not mean that isocline intersections at the left of the hump always lead to limit cycles (Fig. 5C). To our best knowledge, this always happens only for quite specific conditions in pure victim-exploiter models [40]. As long as we cannot prove by analytical means that intersection geometry determines local stability, the prediction of limit cycles remains a "rule of thumb", based on extrapolating our knowledge about other victim-exploiter models. Classification of outcomes: mutualism or herbivory? Interactions can be classified according to the net effect of one species on the abundance (biomass, density) of another (but see other schemes [41]). This classification scheme can be problematic in empirical contexts because reference baselines such as carrying capacities are usually not known [42]. Our PLA model illustrates the classification issue when non-equilibrium dynamics are generated endogenously, i.e. not by external perturbations. Since plants are facultative mutualists and insects are obligatory ones, one can say the outcome is net mutualism (+,+) or net herbivory (−,+), if the coexistence is stable, and the plant equilibrium ends up respectively above or below the carrying capacity [33,34]. If coexistence is under non-equilibrium conditions and plant oscillations are entirely below the carrying capacity (e.g. for large herbivory rates), the outcome is detrimental for plants and hence there is net herbivory (−,+); oscillations may in fact be considered irrelevant for this conclusion (or may further support the case of herbivory, read below). However, when the plant oscillation maximum is above carrying capacity and the minimum is below, like in Fig. 3, could we say that the system alternates periodically between states of net mutualism and net herbivory? Here perhaps a time-based average over the cycle can help up us decide. 
The situation could be more complicated if plant oscillations lie entirely above the carrying capacity (see an example in S1 File): one can say that the net outcome is a mutualism due to enlarged plant biomasses, but the oscillations indicates that a victim-exploiter interaction exists. As we can see, deciding upon the net outcome require consideration of both equilibrium and dynamical aspects. Factors that could cause dynamical transitions Environmental factors. The parameters in our analyses can change due to external factors. One of the most important is temperature [43]. It is well known, for example, that climate warming can reduce the number of days needed by larvae to complete their development [44], making larvae maturation rates (γ) higher. For insects that display Allee effects, a cooling of the environment will cause the sudden extinction of the insect and a catastrophic collapse of the mutualism, which cannot be simply reverted by warming. By retarding larva development into adults, cooling would increase the burden of herbivory over the benefits of pollination, making the system less stable by promoting oscillations. Flowering, pollination, herbivory, growth and mortality rates (e.g. s, a, b, r, m and n in equations 1) are also temperature-dependent and they can increase or decrease with warming depending on the thermal impacts on insect and plant metabolisms [45]. This makes general predictions more difficult. However, we get the general picture that warming or cooling can change the balance between costs and benefits impacting the stability of the plant-insect association. Dynamical transitions can also be induced by changes in the chemical environment, often as a consequence of human activity. Some pesticides, for example, are hormone retarding agents [46]. This means that their release can reduce maturation rates, altering the balance of the interaction towards more herbivory and less pollination and finally endangering pollination service [47,48]. In other cases, the chemical changes are initiated by the plants: in response to herbivory, many plants release predator attractants [49], which can increase larval mortality (μ). If the insect does nothing but harm, this is always an advantage. If the insect is also a very effective pollinator, the abuse of this strategy can cost the plant important pollination services because a dead herbivore today is one less pollinator tomorrow. Another factor that can increase or decrease larvae maturation rates, is the level of nutrients present in the plant's vegetative tissue [50,51]. On the one hand, the use of fertilizers rich in phosphorus could increase larvae maturation rates [51]. On the other hand, under low protein consumption M. sexta larvae could decrease maturation rate, although M. sexta larvae can compensate this lack of proteins by increasing their herbivory levels (i.e. compensatory consumption) [50]. Thus, different external factors related to plant nutrients could indirectly trigger different larvae maturation rates that will potentially modify the interaction dynamics. Pollinator's diet breadth. An important factor that can affect the balance between mutualism and herbivory is the diet breadth of pollinators. Alternative food sources for the adults could lead to apparent competition [52] mediated by pollination, as predicted for the interaction between D. wrigthii (Solanacea) and M. sexta (Sphingidae) in the presence of Agave palmieri (plant) [6]: visitation of Agave by M. 
sexta does not affect the pollination benefits received by D. wrightii, but it increases oviposition rates on D. wrightii, increasing herbivory. As discussed before, such an increase in herbivory could explain why oscillations are more widespread when adult insects have alternative food sources (ϕ > 0) in our PLA model. Although we did not explore this with our model, the diet breadth of the larva could also have important consequences. In the empirical systems that inspired our model, the larva can have alternative hosts [14], spreading the costs of herbivory over several species. The local extinction of such hosts could increase herbivory on the remaining ones, promoting unstable dynamics. To explore these issues properly, models like ours must be extended to consider larger community modules or networks, taking into account that there is a positive correlation between the diet breadths of larval and adult stages [7]. From the perspective of the plant, the lack of alternative pollinators could also lead to increased herbivory and loss of stability. The case of the tobacco plant (N. attenuata) and M. sexta is illustrative. These moths are nocturnal pollinators, and in response to herbivory by their larvae, the plants can change their phenology by opening flowers during the morning instead. Thus, oviposition and subsequent herbivory can be avoided, whereas pollination can still be performed by hummingbirds [11]. Although hummingbirds are thought to be less reliable pollinators than moths for several reasons [9], they are an alternative with negligible costs. Thus, a decline of hummingbird populations will render the herbivore avoidance strategy useless and plants would have no alternative but to be pollinated by insects with herbivorous larvae that promote oscillations. Conclusions Many insect pollinators are herbivores during their larval phases. If pollination and herbivory targets the same plant (e.g. as between tobacco plants and hawkmoths), the overall outcome of the association depends on the balance between costs and benefits for the plant. As predicted by our plant-larva-adult (PLA) model, this balance is affected by changes in insect development: the faster larvae turns into adults the better for the plant and the interaction is more stable; the slower this development the poorer the outcome for the plant and the interaction is less stable (e.g. oscillations). Under plant-insect oscillations, this balance can be dynamically complex (e.g. periodic alternation between mutualism and antagonism). Since maturation rates play an essential role in long term stability, we predict important qualitative changes in the dynamics due to changes in environmental conditions, such as temperature and chemical compounds (e.g. toxins, hormones, plant nutrients). The stability of these mixed interactions can also be greatly affected by changes in the diet generalism of the pollinators.
7,588
2014-04-18T00:00:00.000
[ "Environmental Science", "Biology" ]
Improving activity and enantioselectivity of lipase via immobilization on macroporous resin for resolution of racemic 1- phenylethanol in non-aqueous medium Background Burkholderia cepacia lipase (BCL) has been proved to be capable of resolution reactions. However, its free form usually exhibits low stability, bad resistance and no reusability, which restrict its further industrial applications. Therefore, it is of great importance to improve the catalytic performance of free lipase in non-aqueous medium. Results In this work, macroporous resin NKA (MPR-NKA) was utilized as support for lipase immobilization. Racemic transesterification of 1-phenylethanol with vinyl acetate was chosen as model reaction. Compared with its free form, the enzyme activity and enantioselectivity (ees) of the immobilized lipase have been significantly enhanced. The immobilized BCL exhibited a satisfactory thermostability over a wide range of temperature (from 10 to 65°C) and an excellent catalytic efficiency. After being used for more than 30 successive batches, the immobilized lipase still kept most of its activity. In comparison with other immobilized lipases, the immobilized BCL also exhibits better catalytic efficiency, which indicates a significant potential in industrial applications. Conclusion The results of this study have proved that MPR-NKA was an excellent support for immobilization of lipase via the methods of N2 adsorption–desorption, scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS) and Fourier transform-infrared spectroscopy (FT-IR). The improvement of enzyme activity and ees for the immobilized lipase was closely correlated with the alteration of its secondary structure. This information may contribute to a better understanding of the mechanism of immobilization and enzymatic biotransformation in non-aqueous medium. Conclusion: The results of this study have proved that MPR-NKA was an excellent support for immobilization of lipase via the methods of N 2 adsorption-desorption, scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS) and Fourier transform-infrared spectroscopy (FT-IR). The improvement of enzyme activity and ee s for the immobilized lipase was closely correlated with the alteration of its secondary structure. This information may contribute to a better understanding of the mechanism of immobilization and enzymatic biotransformation in non-aqueous medium. Background In recent years, lipase (EC 3.1.1.3) has been widely applied in biotransformation reactions in aqueous and non-aqueous medium, because it can be used to catalyze hydrolysis and transesterification reactions, as well as synthesis of esters [1]. Especially, the ability of lipases to perform enantioselective biotransformation in preparation of pharmaceutical intermediates and chiral building blocks has made them increasingly attractive and promising [2]. Particular attention has been paid to the Burkholderia cepacia strain, which can produce versatile enzyme and be widely used for biodegradation, biological control and hydrolyzing biotransformation in various reactions [3]. The lipase from Burkholderia cepacia lipase (BCL) has high stability, alcohol tolerance and activity suitable for a broad spectrum of reactions substrates and media [4]. However, its free form usually exhibits low stability, bad resistance and no reusability, which restrict its further application in industry [5]. 
In most cases, these disadvantages of directly using free form lipase are common phenomena in other enzymecatalyzed reactions [6,7]. Thus, the issue focusing on how to improve the catalytic properties of free lipase (such as activity, thermal stability and reusability) in non-aqueous medium is an important topic. Immobilization has been proved to be one of the most useful strategies to improve catalytic properties of free enzyme [8]. There are several conventional immobilization approaches, such as adsorption, entrapment, encapsulation, and covalent binding [9,10]. Among them, adsorption is advantageous because of its procedural simplicity, low cost, high efficiency and ease of industrial application. Immobilized lipases via adsorption methods have been used in many reactions, such as ester synthesis, biodiesel production and enrichment of polyunsaturated fatty acids [5,11,12]. So far, various materials have been employed as supports for enzyme immobilization [13]. However, usage of MPRs in the resolution reaction has rarely been explored. Although many studies showed that immobilization could greatly enhance the catalytic performance of enzyme [14,15], till now, to the best of our knowledge, it is still unclear that why immobilization can enhance the activity and tolerance of lipases. Thus, it is important to elucidate the possible mechanism of this enhancement. For this purpose, in this study, several methods (N 2 adsorption-desorption, SEM, EDS and FT-IR) were employed to characterize the immobilized lipase in order to investigate probable mechanism for the enhancements of enzyme activity and enantioselectivity after immobilization. The enantioselective transesterification of racemic 1-phenylethanol with vinyl acetate was chosen as the model reaction so as to evaluate the enzyme activity/enantioselectivity (ee s ) and to compare the catalytic efficiency between the free and immobilized lipases in non-aqueous medium [16], because secondary alcohols are often used as target substrates in lipase-catalyzed resolution reactions [17]. In addition, 1phenylethanol is an essential building block and synthetic intermediate in many fields, such as fragrance in cosmetic industry, solvatochromic dye in chemical industries, ophthalmic preservative and inhibitor of cholesterol intestinal adsorption in pharmaceutical industries [18]. Moreover, numerous reports on transesterification of racemic 1-phenylethanol with vinyl acetate are available in the literature, we can easily compare the catalytic activity of immobilized BCL with other enzyme catalysts under similar reaction conditions. Therefore, based on the above analysis, the main objectives of this work are: (1) to compare the properties of the free lipase and the immobilized lipase on MPRs based on the reaction parameters, such as temperature, water content, substrate molar ratio, and reaction time; (2) to investigate probable mechanism for the significant improvement of enzyme activity and enantioselectivity through various characterizations of the immobilized lipase; and (3) further to compare the catalytic efficiency between the immobilized BCL and other immobilized lipases. MPR selection The enzyme activities and immobilization efficiencies of the 5 types of MPRs are presented in Figure 1. The properties of MPRs, such as particle size, specific surface area and pore diameter, were listed in Table 1. As shown in Table 1, MPRs were synthesized from inexpensive styrene. Actually, they had relatively low price ($ 5-12/kg). 
The result in Figure 1 showed that the enzyme activity and immobilization efficiency both were highest as compare with the other MPRs, when BCL was immobilized on MPR-NKA. The reason was mainly attributed to different specific surface area and pore diameter. Among these five types of MPRs, NKA had relatively higher specific surface area and average pore diameter (> 20 nm). Gao et al. [12] has also made similar conclusion that pore diameter of resin influences immobilization degree where immobilization degree increased with the increment of pore diameter. Therefore, MPR-NKA was chosen as the immobilization matrix in the following experiments. Effect of substrate molar ratio on enzyme activity/ee s of the free and immobilized BCL As shown in Figure 2, the effect of substrate molar ratios of vinyl acetate to racemic 1-phenylethanol from 1:1 to 10:1 has been investigated. It is generally believed that the acyl donor concentration would affect the reaction equilibrium, because the excess amount of vinyl acetate could drive the reversible reaction to the right side. For the free lipase, it could be observed that the highest enzyme activity and ee s was obtained when the substrate ratio was 4:1. However, it could also be found from Figure 2 that further increase of substrate ratio had little effect on the enzyme activity and enantioselectivity of the free and immobilized BCLs when the molar ratios were more than 4:1. Effect of water content on enzyme activity/ee s of the free and immobilized BCL When a reaction is performed in organic medium, the enzyme activity would be affected by micro-water in the reaction system [19]. In addition, water has different effects on the enzyme activity and enantioselectivity in various lipase-catalyzed reactions [20]. The influence of water on the BCL-catalyzed reaction was investigated in a range of water contents from 0.02 mmol/mL to 0.80 mmol/mL. As can be seen in Figure 3, for the free lipase, the enzyme activity decreased significantly, while the ee s was observed to be correlated well with the decrease of enzyme activity. It indicates that high water content may lead to an increase in hydrolysis, resulting in the decrease in transesterification activity of the enzyme. For the immobilized lipase, the enzyme activity and ee s also showed a similar decrease in tendency, which can be explained as that extra water would accumulate inside the immobilized lipase and influence the flexibility of the protein [21]. Therefore, there is no necessity to add extra water during the reaction, as the immobilized lipase has contained necessary water to maintain its active conformation during immobilization process. Effect of temperature on enzyme activity/ee s of the free and immobilized BCL The effect of temperature from 10 to 65°C on the enzyme activity and ee s of the free and immobilized BCL for resolution of (R, S)-1-phenylethanol was examined ( Figure 4). For the free lipase, the enzyme activity increased with the increment of temperature when temperature was below 55°C, which agrees with the observation of Phillips [22]. The ee s also grew with the increase of temperature. When the temperature was above 55°C, both enzyme activity and ee s decreased, which indicates that higher temperature would inhibit enzyme activity. For the immobilized lipase, enzyme activity and ee s were a little lower when temperature was below 20°C, which was attributed to heterogeneous mixture of substrate, acyl donor and organic medium at lower temperature. 
Compared with the free lipase, enzyme activity and ee s of the immobilized lipase exhibited no obvious decrease when temperature was over 20°C, which suggested immobilization could improve the thermostability of lipase. Effect of reaction time on conversion/ee s of the free and immobilized BCL As shown in Figure 5, the conversion of the free lipase increased with the reaction time at a very slow rate, and ee s showed the same tendency. The reaction reached equilibrium at a conversion near to 50%, while ee s was close to 100%. It indicated that BCL had a good preference for (R)-1-phenylethanol, and all of (R)-1-phenylethanol had been nearly converted into (R)-phenylethyl acetate, while (S)-1-phenylethanol remained unchanged in the reaction solution. When the conversion and ee s of the free lipase reached 50% and 99% at 30 h, its enzyme activity was only 924.1 U/min/g protein, while the corresponding E value was more than 200. On the contrary, the immobilized BCL showed a very high initial reaction rate, the reaction equilibrium (ee s close to 99%, conversion near to 50%) could be achieved within 30 min; the corresponding enzyme activity was 33,266.7 U/min/g protein. The enzyme activity of the immobilized BCL was 36 folds enhancement over the free lipase powder. Operational stability and reusability of the immobilized BCL The reusability of the immobilized lipase is vital for cost-effective usage in the large-scale applications. In this study, the immobilized lipase can be easily separated from the reaction mixture by centrifugation. After every batch, the immobilized BCL was washed with nheptane to remove traces of substrate and products. Then, it was ready to be used for the next batch reaction under the same conditions. As shown in Figure 6, there was nearly no loss in enzyme activity and ee s after the immobilized BCL had been continuously used for at least 30 cycles. Hence, it has been very clear that the immobilized lipase exhibited an excellent reusability. Therefore, the immobilized BCL is applicable not only to the batch reaction, but also to the continuous reaction and different reactor instruments. The BJH pore size distributions of MPR-NKA Compared with pure MPR-NKA, MPR-NKA adsorbed with lipase showed a decrease in the pore volume (from 0.985 m 3 /g to 0.881 m 3 /g). As shown in Figure 7, MPR-NKA contained relatively large pore volume, contributing to a better adsorption of lipase during immobilization. Compared with the pore volume of pure MPR-NKA, the decrease in the pore volume was attributed to the occupation of the enzyme in the pore channels, which indicated that BCL had been immobilized on MPR-NKA. It has been reported that the pore diameter should be at least four-to five-fold the protein diameter in order to prevent restrictions to the access of the enzyme [23]. Lipases are macromolecules of protein, with molecular weights about 40,000-60,000 Da [24]. Moreover, the structure of Pseudomonas cepacia lipase has been resolved (rscb accession No. 1OIL) [25,26]. It was very easy to estimate the diameter of single BCL molecule (about 5 nm), so the minimum pore diameter should be at least 20 nm. As shown in Figure 7, the diameters of most pores were from 20 to 110 nm, which matched the requirement of pore diameter. SEM and EDS analysis As shown in Figure 8, the detailed information about pore size distribution and shape of MPR-NKA has been given by SEM micrographs. Figure 8(a,b) show the surface and internal surface of MPR-NKA, respectively. 
It can be seen that MPR-NKA has various pore volumes on its surface and inside, which also proves the conclusion from BJH pore size distribution. Figure 8(c) showed the pore volumes of MPR-NKA adsorbed with lipase, which indicates that BCL has been immobilized on MPR-NKA. This can also be confirmed by EDS analysis. The result of EDS (in Figure 9a) displays that C and O are present without other elements in pure MPR-NKA (H element could not be detected in EDS). However, the elements of C, O and N are present after BCL immobilized on MPR-NKA in Figure 9b, which also proves that the immobilization of BCL on MPR-NKA was successful [27]. Moreover, some researchers have reported that the inner surface may not be fully utilized for lipase adsorption even if the pore size is big enough during the immobilization process [11,28]. As shown in the Figure 8 (a,b), the outer and internal surface of pure MPR-NKA were full of various pores before adsorption. After immobilization, surface of MPR-NKA was covered by lipase, and pores on the surface of MPR-NKA could not be found (in Figure 8c), which indicates that the lipase has almost been adsorbed by the MPR-NKA. It meant that the internal surface of MPR-NKA had been fully utilized, which was the possible reason for the high thermostability, organic solvent tolerance and operational stability of BCL immobilized on MPR-NKA. Secondary structure analysis of the free and immobilized BCL by FT-IR spectroscopy As known, protein has strong absorbance spectrum in the amide I region (1700-1600 cm −1 ) mainly due to the C = O bending vibration [29]. The amide I band of proteins contains component bands that represent different secondary structure elements such as the α-helix, β-sheet, β-turn and random coil. Their main absorbance spectra were: α-helix: 1650-1658 cm −1 , β-sheet: 1620-1640 cm −1 , β-turn: 1670-1695 cm −1 , and random coil: 1640-1650 cm −1 , respectively [30]. FT-IR spectra of the pure MPR-NKA ( Figure 10a); BCL immobilized MPR-NKA ( Figure 10b) and free BCL (Figure 10c) were shown in Figure 10, respectively. Compared with spectra of pure MPR-NKA, BCL immobilized MPR-NKA had a characteristic peak at 1700-1600 cm −1 , which could also be observed in the spectra of free BCL. Table 2, the secondary structure element content of free lipase was: α-helix: 28.3%, β-sheet: 21.4%, β-turn: 25.8%, and random coil: 24.5%, respectively. After immobilization, the immobilized BCL showed a decrease in α-helix (11.8%) and β-turn (15.9%); an increase in βsheet (42.6%) and random coil (29.8%). Foresti et al. reported that interfacial activation had been found when the lipase was adsorbed onto hydrophobic supports. The immobilized lipase was fixed in an open conformation and enhanced enzymatic activity was achieved [31]. Gao et al. pointed out that lipases are interfacial-active enzymes with lipophilic domains and can adopt both open and close conformations. The ionic microenvironment around lipase molecule, which was formed during immobilization procedure in buffer solution at a certain pH value, could be maintained as employed in organic solvent. This is socalled "pH memory effect", which helps to induce conformational changes of lipase resulting in the active form. Therefore, this would allow free access of the substrate to the active site of the immobilized lipase and increase activity of the immobilized lipase [12]. Comparison with other immobilized lipases Compared with other immobilized lipases, the immobilized BCL exhibited a much higher catalytic efficiency. 
Chua et al. reported that immobilized lipase ChiroCLEC-PC (crosslinked enzyme crystals of Pseudomonas cepacia lipase) was used for the resolution of racemic 1-phenylethanol in organic solvents (including heptane) with different log P values, while the maximal initial rate of reaction was 473.5 ± 10 μmol/min and the reaction reached equilibrium conversion at 45% after 100 mins of reaction [18]. Compared with the cross-linked enzyme crystals method, the immobilized BCL showed a better catalytic efficiency (based on initial reaction rate and final conversion value). Wang et al. reported that lipase from B. cepacia was encapsulated inside zirconia particles by biomimetic mineralization of K 2 ZrF 6 . After 48 h reaction under the optimal conditions, their immobilized lipase reached 49.9% with higher ee s of 99.9%, however, after 6 cycles, the conversion and ee s were only 43% and 85%, respectively [32]. Compared with the approach of encapsulating lipase within zirconia induced by protamine, our immobilized BCL exhibited a better reusability in the successive batch experiments. In order to compare the catalytic efficiency between our immobilized BCL and several commercially available immobilized lipases usually used in literature, the ee s and conversions of Novozyme 435, Lipozyme RM IM, and Lipozyme TL IM were measured respectively. Under the same conditions of substrate molar ratios (vinyl acetate to racemic 1-phenylethanol) 4:1; reaction time 0.5 h, reaction temperature 35°C, 0.1 g immobilized lipase and 5 mL solvent (heptane), their ee s was 75%, 24% and 15%, respectively. The corresponding conversions were 43.3%, 2.6% and 4.8%, respectively. It can be seen that our immobilized BCL (ee s 99%; conversion 49%) is much better than the commercially avaialable immobilized lipases in catalyzing enantioselective transesterification of 1-phenylethanol with vinyl acetate. Conclusion In this study, results were significantly enhanced in terms of enzyme activity and ee s when BCL immobilized on MPR-NKA. Compared with the free BCL, the immobilized BCL had better thermostability and excellent reusability in non-aqueous medium. Combined strategies (N 2 adsorption-desorption, SEM and EDS) were used to characterize the immobilized lipase, which proved that MPR-NKA was an excellent support for lipase immobilization. FT-IR analysis also indicated that improvement of enzyme activity and ee s was closely correlated with the alteration of its secondary structure. Compared with the other immobilized lipases, the immobilized BCL exhibits a better catalytic efficiency, indicating a great potential for industrial applications. Preparation of immobilized lipase The procedures of immobilization were described as follows: 1 g MPR and 5 mL 99% ethanol was added into a 25 mL tube, then the mixture was put in 30°C shaking incubator at 200 rpm for 2 h to wash out the residual catalyst and impurities. The ethanol was removed after MPR precipitated to the bottom of the tube. The residual MPR was washed with distilled water for three times. 5 mL 0.05 M phosphate buffer (pH 7) was mixed with the residual MPR, the mixture was kept for 12 h at 30°C. Then, the buffer was removed. After this pretreatment, MPR was kept in the tube. 0.8 g free BCL powder was dissolved in the 5 mL 0.05 M phosphate buffer (pH 7), this solution was loaded into the tube to mix with the MPR. The tube was stirred in a rotary shaker with a speed of 200 rpm at 30°C for 2 h. The suspension was separated after MPR precipitated to the bottom of the tube. 
The immobilized BCL (MPR adsorbing the lipase) was washed with 5 mL 0.05 M phosphate buffer (pH 7) for three times to remove the unadsorbed lipase in the surface of the MPR, and then, protein content of the lipase solution and washed water was determined by the method of Bradford [33]. Five types of MPRs were used so as to choose the best immobilization support. Lipase activity and protein content measurements The enzyme activity was determined using 1-phenylethanol and vinyl acetate as substrate. One unit (U) of the enzyme activity was defined as the amount of the enzyme which produces 1 μmol α -phenylethyl acetate per minute under the assay conditions. The reactions were performed in a 50 mL stoppered flask at 35°C and 200 rpm for 1 h. The assay conditions were used except when otherwise stated in the text. The protein content of free and immobilized BCL was 0.58 wt% and 0.50 wt%, respectively. Immobilization efficiency (%) was estimated as Eq. 1. Reaction procedure Before usage, the organic solvent was dried over 4 Å molecular sieves. Under the above mentioned conditions, reactions were carried out in 5 mL pure heptane, containing 1 mmol racemic 1-phenylethanol, 4 mmol vinyl acetate and 0.1 g free or immobilized BCL. The reaction mixture was put in a 50 mL stoppered flask at 35°C and 200 rpm for 1 h. These conditions were used except when the reaction parameters (molar ratio; temperature; reaction time) needed to be changed in the following text. The above experiments were all conducted in triplicate. After the reactions, the free or immobilized lipase was removed by centrifugation. Then, the samples were filtered through a 0.44 μm filter and analyzed by HPLC. Analysis and calculation The samples were analyzed by HPLC (Model 2300-525 SSI. Co., Ltd USA) using a Chiralcel OD-H column (4.6 mm × 250 mm, Daicel Chemical, Japan). Samples (5 μL) were eluted by a mixture of n-hexane: 2-propanol (95:5, v/v) at a rate of 1.0 mL/min, and detected at a wavelength of 254 nm (Model 525 UV Detector SSI. Co., Ltd USA). The retention time of (R)-and (S)-1-phenylethanol in the Chiralcel OD-H column was 7.28 and 8.23 min, respectively. According to method described by Chen et al. [34], enantioselectivity was expressed as E value and calculated by Eq. 2, ee s by Eq. 3, and C by Eq. 4. where, C represents the substrate conversion, ee s stands for the substrate enantiomeric excess, S 0 and R 0 respectively represent the concentrations of the (S)-and (R)-enantiomers of 1-phenylethanol before reaction, S and R are the concentrations of the (S)-and (R)-enantiomers of 1-phenylethanol after reaction. Characterization of the immobilization support with N 2 adsorption-desorption The specific surface area, pore volumes, and average pore diameters were measured by nitrogen adsorption-desorption equipment (ASAP 2020 V4.00, Micromeritics Instrument Ltd, Shanghai). The specific areas of the MPR-NKA were calculated by the Brunauer-Emmett-Teller (BET) method, and the distributions of pore diameters were estimated by the desorption branches of the isotherms with the Barrett-Joyner-Halenda (BJH) model. Characterization of the immobilized BCL by SEM and EDS The immobilized BCL was analyzed with SEM and EDS (Nova Nano SEM 450, FEI Company, Eindhoven, Netherlands). The samples were coated with gold using a sputter coating system and measured at an acceleration voltage of 5 kV. FT-IR spectroscopy The samples were mixed with KBr and pressed into pellets. 
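Stepping back to the "Analysis and calculation" subsection above: the numbered equations (Eqs. 1-4) are referenced but not displayed in this excerpt. The snippet below uses the customary Chen et al. expressions for conversion, substrate enantiomeric excess and the enantiomeric ratio E, together with a standard protein mass-balance definition of immobilization efficiency; these are assumed to correspond to Eqs. 1-4 and should be checked against the original paper.

```python
# Standard kinetic-resolution bookkeeping (Chen et al.). The numbered equations
# Eq. 1-4 are not reproduced above, so the expressions below are the customary
# forms from that reference; they are assumptions, not copied from the paper.
import math

def substrate_ee(S, R):
    """ee_s from post-reaction concentrations of the (S)- and (R)-alcohol.
    BCL consumes (R)-1-phenylethanol preferentially, so S > R and ee_s > 0."""
    return (S - R) / (S + R)

def conversion(S, R, S0, R0):
    """C = 1 - (remaining substrate)/(initial substrate)."""
    return 1.0 - (S + R) / (S0 + R0)

def e_value(C, ee_s):
    """Enantiomeric ratio E = ln[(1-C)(1-ee_s)] / ln[(1-C)(1+ee_s)]."""
    return math.log((1.0 - C) * (1.0 - ee_s)) / math.log((1.0 - C) * (1.0 + ee_s))

def immobilization_efficiency(protein_loaded_mg, protein_unbound_mg):
    """Percentage of offered protein retained on the support (mass balance)."""
    return 100.0 * (protein_loaded_mg - protein_unbound_mg) / protein_loaded_mg

# Illustrative concentrations only (chosen near the reported endpoint C ~ 0.50,
# ee_s ~ 0.99 and E > 200); units cancel in the ratios.
S, R, S0, R0 = 0.498, 0.0025, 0.5, 0.5
C = conversion(S, R, S0, R0)
ee = substrate_ee(S, R)
print(f"C = {C:.3f}, ee_s = {ee:.3f}, E = {e_value(C, ee):.0f}")
print(f"immobilization efficiency = {immobilization_efficiency(800.0, 120.0):.1f} %")
```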
FT-IR measurements in the region of 400-4000 cm⁻¹ were recorded at 25°C with a Vertex 70 FT-IR spectrometer (Bruker, Germany) equipped with a nitrogen-cooled mercury-cadmium-telluride (MCT) detector. Spectra were acquired with the samples overlaid on a zinc selenide ATR accessory, and the infrared spectrum of KBr was subtracted from each measurement. The measurement conditions were as follows: 20 kHz scan speed, 4 cm⁻¹ spectral resolution, 128 scan co-additions, and triangular apodization. The secondary structure element content was estimated with the software PeakFit version 4.12 according to the method described by Yang et al. [35].
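As an illustration of the PeakFit-style estimation of secondary structure content, the sketch below fits four Gaussian components within the amide I band assignments quoted earlier (β-sheet 1620-1640, random coil 1640-1650, α-helix 1650-1658, β-turn 1670-1695 cm⁻¹) and reports their area fractions. The synthetic spectrum, initial guesses and use of plain Gaussians are assumptions for illustration, not the authors' actual settings.

```python
# Illustrative amide-I deconvolution in the spirit of the PeakFit analysis:
# four Gaussian components at the band positions quoted above, with secondary-
# structure fractions taken as relative peak areas. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    return amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)

def amide_i(x, *p):                      # sum of four Gaussian components
    return sum(gaussian(x, *p[i:i + 3]) for i in range(0, 12, 3))

wavenumber = np.linspace(1600.0, 1700.0, 400)
# Synthetic "measured" band (roles: beta-sheet, random coil, alpha-helix, beta-turn)
true = [0.9, 1630, 8, 0.5, 1645, 6, 0.4, 1654, 5, 0.3, 1680, 7]
rng = np.random.default_rng(0)
absorbance = amide_i(wavenumber, *true) + rng.normal(0, 0.01, wavenumber.size)

guess = [0.8, 1630, 8, 0.5, 1645, 6, 0.5, 1654, 5, 0.3, 1680, 7]
popt, _ = curve_fit(amide_i, wavenumber, absorbance, p0=guess)

labels = ["beta-sheet (1620-1640)", "random coil (1640-1650)",
          "alpha-helix (1650-1658)", "beta-turn (1670-1695)"]
areas = np.array([popt[i] * popt[i + 2] * np.sqrt(2 * np.pi) for i in range(0, 12, 3)])
for lab, frac in zip(labels, areas / areas.sum()):
    print(f"{lab}: {100 * frac:.1f}%")
```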
5,468.4
2013-10-29T00:00:00.000
[ "Biology", "Engineering" ]
Recovering the unsigned photospheric magnetic field from Ca II K observations We reassess the relationship between the photospheric magnetic field strength and the Ca II K intensity for a variety of surface features as a function of the position on the disc and the solar activity level. This relationship can be used to recover the unsigned photospheric magnetic field from images recorded in the core of the Ca II K line. We have analysed 131 pairs of high-quality, full-disc, near-co-temporal observations from SDO/HMI and Rome/PSPT spanning half a solar cycle. To analytically describe the observationally-determined relation, we considered three different functions: a power law with an offset, a logarithmic function, and a power law function of the logarithm of the magnetic flux density. We used the obtained relations to reconstruct maps of the line-of-sight component of the unsigned magnetic field (unsigned magnetograms) from Ca II K observations, which were then compared to the original magnetograms. We find that both power-law functions represent the data well, while the logarithmic function is good only for quiet periods. We see no significant variation over the solar cycle or over the disc in the derived fit parameters, independently of the function used. We find that errors in the independent variable, usually not accounted for, introduce attenuation bias. To address this, we binned the data with respect to the magnetic field strength and the Ca II K contrast separately and derived the relation for the bisector of the two binned curves. The reconstructed unsigned magnetograms show good agreement with the original ones. RMS differences are less than 90 G. The results were unaffected by the stray-light correction of the SDO/HMI and Rome/PSPT data. Our results imply that Ca II K observations, accurately processed and calibrated, can be used to reconstruct unsigned magnetograms by using the relations derived in our study. Introduction A "one-to-one correspondence" between bright regions in Mt Wilson Ca II K spectroheliograms and magnetic regions in magnetograms was noticed early on. This reported association, which was promptly confirmed by Howard (1959) and Leighton (1959), has initiated numerous studies of solar and stellar Ca II data. Since then, considerable efforts have been devoted to understanding the relation between the magnetic field strength and the Ca II K intensity for different solar magnetic regions on the Sun (e.g. Frazier 1971; Skumanich et al. 1975; Schrijver et al. 1989; Nindos & Zirin 1998; Harvey & White 1999; Vogler et al. 2005; Rast 2003a; Ortiz & Rast 2005; Rezaei et al. 2007; Loukitcheva et al. 2009; Pevtsov et al. 2016; Kahil et al. 2017, 2019). Table 1 summarises the main features and results of the earlier studies compared with the results of this one. All previous works were based on analysis of small data samples (with the possible exception of Vogler et al. 2005), mainly considering regions at the disc centre, and using data with a spatial resolution lower than ≈2 arcsec. Most of the earlier studies reported that the link between the magnetic field strength and Ca II K intensity is best described by a power law function with an exponent in the range 0.3-0.6. However, Skumanich et al. (1975) and Nindos & Zirin (1998) found that their data were best represented by a linear relation, while Kahil et al. (2017, 2019) found a logarithmic function to fit their data best.
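As an aside on the three candidate relations named in the abstract, the sketch below shows how such functions can be compared with scipy. The exact parametrisations are not given in this excerpt, so the forms below (and the synthetic data) are assumptions chosen purely for illustration; they are not the fits reported in the paper.

```python
# Sketch of comparing candidate B-to-contrast relations of the three named
# families. Parametrisations and data are assumed, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def power_law_offset(B, a, b, c):     # C = a * B**b + c
    return a * B**b + c

def logarithmic(B, a, b):             # C = a * ln(B) + b
    return a * np.log(B) + b

def power_of_log(B, a, b, c):         # C = a * (ln B)**b + c
    return a * np.log(B)**b + c

rng = np.random.default_rng(1)
B = np.linspace(20.0, 800.0, 300)                              # G, above the ~3-sigma cut
contrast = 1.0 + 0.05 * B**0.5 + rng.normal(0, 0.05, B.size)   # fake "observations"

models = {"power law + offset": (power_law_offset, [0.1, 0.5, 1.0]),
          "logarithmic": (logarithmic, [0.3, 0.0]),
          "power law of log(B)": (power_of_log, [0.05, 1.5, 1.0])}

for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, B, contrast, p0=p0, maxfev=20000)
    rms = np.sqrt(np.mean((contrast - f(B, *popt)) ** 2))
    print(f"{name:22s} params={np.round(popt, 3)} rms={rms:.4f}")
```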
It is worth noting that Kahil et al. analysed Ca II H observations taken with the Sunrise balloon-borne telescope (Solanki et al. 2010; Barthol et al. 2011), which have a higher spatial resolution than in previous studies. These (and other similar) studies are discussed in detail in Sec. 3.7. The columns of Table 1 list the reference, spectral line, bandwidth, number and period of observations, type, location, and dimensions of the analysed region, the pixel scale, and the type of relation derived; dashes denote missing information. The studies marked with (a) did not derive the functional form of the relation but simply binned the available datapoints with respect to the magnetic field strength; we note, however, that the results they presented are approximately consistent with a power law function. The references numbered in Table 1 include (1) Frazier (1971), (2) Skumanich et al. (1975), (3) Wang (1988), (4) Schrijver et al. (1989), (5) Schrijver et al. (1996), (6) Nindos & Zirin (1998), (7) Harvey & White (1999), (8) Rast (2003b), (9) Ortiz & Rast (2005), (10) Vogler et al. (2005), and (11) Rezaei et al. (2007). Major efforts have also been invested to measure the disc-integrated Ca II H and K emission of many other stars. Such measurements have been regularly carried out e.g. within the synoptic ground-based programs at the Mt Wilson (1966-2003, Wilson 1978; Duncan et al. 1991) and Lowell (1994-present, Hall et al. 2007) observatories, as well as by the space-borne photometer onboard the CoRoT mission (Michel et al. 2008; Auvergne et al. 2009; Gondoin et al. 2012). Ca II H & K emission is an indicator of the strength of, and the area covered by, magnetic fields on the Sun (Leighton 1959). Since the Ca II H and K variations due to magnetic regions are of the order of a few tens of percent, they can be easily detected for many active stars. Hence, Ca II H and K measurements have been used to trace long-term changes in the surface activity of stars caused by e.g. the activity cycle, rotation, and convection (e.g. Sheeley 1967; White & Livingston 1978; Keil & Worden 1984; Baliunas et al. 1985). These studies have led to an improved knowledge of stellar rotation and activity, and of the degree to which the Sun and other stars share similar dynamical properties (for reviews, see e.g. Lockwood et al. 2007, 2013; Hall 2008; Reiners 2012). It is worth noting that stellar Ca II observations are perforce integrated over the whole stellar disc. However, except for the studies of Harvey & White (1999), Vogler et al. (2005), and Pevtsov et al. (2016), which were restricted to a few images, no other previous investigation has determined the relation between Ca II brightness and magnetic field strength covering the full solar disc. Furthermore, many studies require long data sets of the solar surface magnetic field, e.g. to derive information on the structure, activity, and variability of the Sun, or for related applications such as Earth's climate response to solar irradiance variability. Regular magnetograms are, however, available only for the last four solar cycles, while synoptic Ca II K solar observations have been carried out for more than 120 years (Chatzistergos 2017; Chatzistergos et al. 2019b).
In recent years, following the availability of a number of digitized series of historical Ca II K observations, attempts have been made to reconstruct magnetograms from Ca II K observations, based on the relation between the Ca II K intensity and the magnetic field strength. In particular, Pevtsov et al. (2016) reconstructed magnetograms from Ca II K synoptic charts made from Mt Wilson observatory images. For their reconstruction they used sunspot records to get information about the polarity and assigned each plage area a single magnetic field strength value based on the area of the plage. The areas and locations of plage regions were derived from photometrically uncalibrated Ca II K images. Besides that, Sheeley et al. (2011) and Chatterjee et al. (2016) constructed Carrington maps with Ca II K images from the Mt Wilson and Kodaikanal observatories, respectively. These maps can be used to trace the evolution of plage regions. However, they provide Ca II K contrast and need to be converted into magnetic field strength for any application based on magnetic field measurements. In this paper, we study the relationship between the magnetic field strength and the Ca II K intensity using data from two archives of high-quality full-disc solar observations. We use significantly more data of higher quality than in previous studies, which allows a more detailed and accurate assessment of this relationship over the whole disc and at different levels of solar activity during cycle 24. We test the accuracy of our results by applying the derived relationship to reconstruct unsigned magnetograms and then comparing them with the actual ones. This paper is organised as follows. Section 2 describes the data and methods employed for our analysis. In Section 3 we study the magnetic field strength and the Ca II K excess intensity. In Section 4 we use our results to reconstruct magnetograms from the Ca II K images and to test the accuracy of our method. Finally, we draw our conclusions in Section 5. Data We analysed full-disc photospheric longitudinal magnetograms and continuum intensity images from the space-borne Helioseismic and Magnetic Imager (HMI; Scherrer et al. 2012; Schou et al. 2012) aboard the Solar Dynamics Observatory (SDO; Pesnell et al. 2012), and full-disc filtergrams taken at the Ca II K line and red continuum with the Precision Solar Photometric Telescope at the Rome Observatory (Rome/PSPT; Ermolli et al. 1998, 2007). Figure 1 shows examples of the analysed SDO/HMI and Rome/PSPT images. Rome/PSPT, in operation since 1996, is a 15 cm telescope designed for photometric solar observations characterized by 0.1% pixel-to-pixel relative photometric precision (Coulter & Kuhn 1994). The images (available at http://www.oa-roma.inaf.it/fisica-solare/) were acquired with narrow-band interference filters, by single exposures of a 2048×2048 CCD array. The filters employed for the observations analysed here are centred at the Ca II K line core (393.3 nm) with a bandwidth of 0.25 nm, and in the red continuum at 607.2 nm with a bandwidth of 0.5 nm. The Ca II K and red continuum images were taken within 3 minutes of each other. At the acquisition, the data were reduced to a pixel scale of 2″ to account for typical conditions of local seeing. Standard instrumental calibration has been applied to the data (Ermolli et al. 1998, 2010).
SDO/HMI, in operation since April 2010, takes full-disc 4096×4096 pixel filtergrams at six wavelength positions across the Fe I 617.3 nm line at 1.875 s intervals. The filtergrams are combined to form simultaneous continuum intensity images and longitudinal magnetograms with a pixel scale of 0.505″ and 45 s cadence. For each Rome/PSPT image pair, we took the 360 s average of the SDO/HMI images and magnetograms taken closest in time (on average less than 2 minutes apart and never more than 8 minutes). The averaging was done to suppress intensity and magnetogram signal fluctuations from noise and p-mode oscillations. For our analysis, we selected data with the highest spatial resolution (for Rome/PSPT), the smallest time difference between SDO/HMI and Rome/PSPT observations, and the highest signal-to-noise ratio. We avoided winter periods and kept observations mostly from summer months, when the seeing-induced degradation in Rome/PSPT data is lower. Our data sample consists of 131 sets of near-simultaneous observations, covering the period between 18/05/2010 and 29/08/2016. We have ignored the pixels in the SDO/HMI magnetograms with flux density below 20 G. The value of 20 G corresponds roughly to 3 times the noise level as evaluated by Yeo et al. (2013, 2014b). Since the magnetic flux tubes making up the network and plage tend towards an orientation normal to the surface, while magnetograms measure only the line-of-sight (LOS hereafter) component, B_LOS, we divided the pixel signal by the corresponding µ (the cosine of the heliocentric angle) to get the intrinsic magnetic field strength. We also removed the polarity information from the SDO/HMI data and only considered the absolute value of the magnetic flux density, |B_LOS|/µ (i.e. the magnetic field strength averaged over the effective pixel). The Rome/PSPT images were first rescaled to match the size of SDO/HMI so that we could align both observations with the highest accuracy. The Rome/PSPT images were then rotated and aligned to the SDO/HMI observations, applying compensations based on the ephemeris. All observations were then re-scaled to the original dimensions of Rome/PSPT. To further reduce effects due to seeing, we also reduced the resolution of the SDO/HMI data to that of Rome/PSPT by smoothing them with a low-pass filter with a 2×2 pixel running window. In the following, we refer to the SDO/HMI data so obtained as SDO/HMI degraded magnetograms. For each analysed intensity image (Rome/PSPT and SDO/HMI) we removed the limb darkening and obtained a contrast map. In particular, for each image pixel i, we defined its contrast C_i as C_i = I_i/I_QS,i, where I_i is the measured intensity of pixel i, and I_QS,i is the intensity of the quiet Sun (QS hereafter) at the same position. The latter was derived with the iterative procedure described by Chatzistergos et al. (2018b), which returns contrast images with an average error in the contrast values lower than 0.6% (see Chatzistergos et al. 2018b, for more details). Since our aim here is to study the relation between the magnetic field strength and the Ca II K brightness in bright magnetic regions, we masked out sunspots in the magnetograms and in the Ca II K observations. Sunspots were identified in the SDO/HMI continuum intensity images as regions with an intensity contrast lower than 0.89 (following Yeo et al. 2013) and in the Rome/PSPT red continuum images lower than 0.95.
The above thresholds were derived as the average value of C̄ − 3σ, where C̄ is the average contrast over the disc and σ is the standard deviation of the contrast values, computed from all Rome/PSPT red continuum and SDO/HMI continuum images separately. The plage regions immediately surrounding sunspots were excluded as well, as they could be affected by stray-light and by extended low-lying sunspot canopies (e.g. Giovanelli & Jones 1982; Solanki et al. 1994, 1999), as was shown by Yeo et al. (2013). This was done by expanding the sunspot regions with a kernel of varying size, corresponding to 10×10 and 30×30 pixel² at disc centre and limb, respectively. The excluded regions have areas of on average 0.001 in fraction of the disc, with a maximum value of 0.005. These regions amount on average to 13 ± 9% of the total flux in the original magnetograms, which appears to be roughly constant in time for the analysed data. Stray-light removal To investigate whether our results depend on the removal of stray-light from the analysed images, we restored 51 pairs of the SDO/HMI and Rome/PSPT images following Yeo et al. (2014a) and Criscuoli & Ermolli (2008), respectively. We have also analysed a sample of 10 SDO/HMI magnetograms from our dataset that were restored with the method employed by Criscuoli et al. (2017). Employing different methods helps us to assess the potential errors in the relation between the Ca II K contrast and the magnetic field strength due to stray-light degradation. For the SDO/HMI observations, the point-spread function (PSF hereafter) of the instrument was deconvolved from the Stokes I and V observables, which were then used to produce the stray-light corrected magnetograms. The PSF derived by Yeo et al. (2014a) has the form of the sum of five Gaussian functions. The PSF parameters were determined from Venus transit data by performing a fit over the shaded areas. The PSF applied by Criscuoli et al. (2017) instead has the form of an Airy function convolved with a Lorentzian. The parameters of this PSF were derived by using pre-launch testing data as well as post-launch off-limb data taken during a partial lunar eclipse and the transit of Venus. According to Criscuoli et al. (2017), the PSF employed by Yeo et al. (2014a) does not account for large-angle or long-distance scattering, thus affecting results from analyses of data concerning large spatial scales on the solar disc such as in the present study. The Rome/PSPT data were deconvolved by using analytical functions defined from modelling the centre-to-limb variation of intensity in the data and the instrumental PSF (Criscuoli & Ermolli 2008). The PSF here is modelled as the sum of three Gaussian and one Lorentzian functions, following Walton & Preminger (1999). Segmentation For our analysis we selected pixels that correspond to magnetic regions in the magnetograms and bright regions in the Ca II K images. We identified the features of interest with two methods. Method 1. We distinguished between two different types of bright magnetic features: plage and network. They are differentiated with single contrast and |B_LOS|/µ thresholds in the Ca II K images and magnetograms, respectively. The thresholds are 20 G ≤ |B_LOS|/µ < 60 G and 1.12 ≤ C < 1.21 for network, and |B_LOS|/µ ≥ 60 G and C ≥ 1.21 for plage. The thresholds given above for plage in the magnetograms, as well as for network in the Ca II K images, were obtained by minimising the differences between the average disc fractions calculated in the magnetograms and the Ca II K images.
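As an illustration of how these criteria translate into practice, the following Python sketch applies the Method 1 thresholds to a pair of co-aligned maps. The array names (blos, mu, contrast) and the use of NumPy are assumptions about the data layout, not part of the original analysis pipeline.

```python
import numpy as np

def method1_masks(blos, mu, contrast):
    """Minimal sketch of the Method 1 segmentation (array names are assumed).

    blos     : LOS magnetic flux density map [G]
    mu       : cosine of the heliocentric angle for each pixel
    contrast : Ca II K contrast map (I / I_QS)
    Returns one mask from the magnetogram and one from the Ca II K image,
    coded as 0 = quiet Sun, 1 = network, 2 = plage.
    """
    on_disc = mu > 0.14                       # exclude the outermost 1% of the radius
    b = np.zeros_like(blos, dtype=float)
    b[on_disc] = np.abs(blos[on_disc]) / mu[on_disc]   # unsigned |B_LOS|/mu

    mag_mask = np.zeros(blos.shape, dtype=np.uint8)
    mag_mask[on_disc & (b >= 20) & (b < 60)] = 1       # network: 20 G <= |B|/mu < 60 G
    mag_mask[on_disc & (b >= 60)] = 2                  # plage:   |B|/mu >= 60 G

    cak_mask = np.zeros(contrast.shape, dtype=np.uint8)
    cak_mask[on_disc & (contrast >= 1.12) & (contrast < 1.21)] = 1   # network
    cak_mask[on_disc & (contrast >= 1.21)] = 2                        # plage
    return mag_mask, cak_mask
```

In a full reproduction of the analysis, sunspot pixels (continuum contrast below 0.89 for SDO/HMI or 0.95 for Rome/PSPT) and their expanded surroundings would additionally be removed from both masks, as described above.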
Method 2. We used this method to isolate individual activity clusters, which may be composed of multiple close or overlapping active regions (ARs hereafter). In this way we can study how the relation between the magnetic field strength and the Ca II K contrast varies among features of different sizes and locations on the disc. We applied a low-pass filter with a 50-pixel window width to the degraded magnetograms and a constant threshold of |B_LOS|/µ = 15 G to isolate individual magnetic regions. Contiguous pixels were grouped together, and all isolated regions were considered as separate clusters. We also applied a size threshold of 50 pixels to the clusters. Pixels not assigned to any cluster were categorised as QS, though they include the network as well. This method is similar to that used by Harvey & White (1999). In our analysis we excluded all pixels with µ < 0.14 (the outermost 1% of the solar radius) to restrict errors due to projection effects. Finally, the sunspot regions were also excluded from all masks as described in Sect. 2.1. Figures 2a) and 2b) show the masks derived with Method 1 from the SDO/HMI magnetogram and the Rome/PSPT Ca II K image shown in Figs. 1a) and 1c), respectively. Plage regions are shown in red, network in green, and QS in blue. Figure 2c) shows the mask derived with Method 2 on the Ca II K image shown in Fig. 1a). The different features are shown with different colours, while the QS is shown in dark blue. Pixel-by-pixel relationship We first considered the data without the corrections for stray-light and without performing any segmentation other than excluding the sunspot regions. Figure 3 shows the relation between the Ca II K brightness and |B_LOS|/µ for all pairs of the degraded magnetograms and corresponding Ca II K images considered in our study. [Fig. 3: Ca II K contrast plotted against |B_LOS|/µ for all pixel pairs (excluding sunspots) in all available images; the colour coding gives the logarithm of the pixel number density within bins of 1 G and 0.01 in contrast, with contour lines at intervals of 0.5 in the logarithm of the number density; the curves show 5000-point running means (over |B_LOS|/µ in yellow, over contrast in light blue, and their bisector in purple), together with the power law (PF, red), power law of log(|B_LOS|/µ) (PFL, blue), and logarithmic (LFL, green) fits to the curve binned over |B_LOS|/µ; the vertical grey dashed line marks the 20 G threshold; a magnified section at low |B_LOS|/µ and histograms of |B_LOS|/µ (for 3 contrast ranges) and of contrast (for 4 |B_LOS|/µ ranges) are also shown.] Each colour-coded pixel represents the logarithm of the number density within bins of 1 G and 0.01 in contrast. The sources of the scatter seen in Fig. 3 are discussed in more detail in Appendix A. Briefly, one reason for the scatter of values is the projection effect. SDO/HMI and Rome/PSPT observations sample different formation heights, which introduces changes in the distribution and shape of flux elements over space.
Due to the expansion of the flux tubes with height, the sizes of magnetic features at the two heights are different, which leads to a size mismatch between the same feature seen in a magnetogram and in the corresponding Ca II K data and therefore also contributes to the scatter. Another source of scatter is the diverse spatial and spectral resolution of the compared data. In Appendix A we discuss the spatial correspondence between the features in the magnetograms and the Ca II K observations and show close-ups of a quiet and an active region to demonstrate the smearing of the features in the Ca II K observations compared to the magnetograms. The Spearman correlation coefficient between |B_LOS|/µ and the Ca II K contrast supports a monotonic relationship. The coefficient obtained for individual images is on average ρ = 0.60, while it is ρ = 0.98 for all pixels from all data. The significance level is zero to double-precision accuracy, implying a highly significant correlation. Figure 3 shows that the Ca II K contrast increases with increasing magnetic field strength, but tends to saturate at high |B_LOS|/µ (see e.g. Saar & Schrijver 1987; Schrijver et al. 1989). The yellow curve in Fig. 3 is a running mean over |B_LOS|/µ values. Fitting the points of this binned curve has been the most common approach in the literature when studying the relation between the Ca II K contrast and the magnetic field strength (e.g. Rast 2003a; Ortiz & Rast 2005; Rezaei et al. 2007; Loukitcheva et al. 2009; Pevtsov et al. 2016; Kahil et al. 2017, 2019). It suggests that the relation saturates at around 400 G. However, binning the data over the Ca II K contrast values suggests a somewhat different relation. We found that the choice of the quantity over which the binning is performed affects the exact form of the relation between the magnetic field strength and the Ca II K intensity. Attenuation bias due to errors in the independent variable can, in each case, cause these relations to be skewed with respect to the true relationship. This result, not yet reported in the literature, also needs to be considered when comparing outcomes from different studies. We note that the histograms shown in Fig. 3 illustrate that the distribution of contrast values is, to a good approximation, symmetric around the mean value for 150 G < |B_LOS|/µ < 450 G. The distributions for high and low |B_LOS|/µ are skewed, with tails towards high and low contrasts, respectively. To find the relation that best describes the data, we considered three different functions: a) a power law with an offset (PF), as commonly used in the literature (e.g. Schrijver et al. 1989; Harvey & White 1999; Ortiz & Rast 2005; Rezaei et al. 2007; Loukitcheva et al. 2009); b) a logarithmic function (LFL), as proposed by Kahil et al. (2017, 2019); and c) a power law function of the logarithm of |B_LOS|/µ (PFL). These three functions can be described by the single equation C = a1 + a2 x^a3 (Eq. 1), where x = |B_LOS|/µ for PF, and x = log(|B_LOS|/µ) for PFL and LFL (with a3 fixed to 1 for LFL). [Table 2 notes: the columns are the fit function, the quantity x used in Eq. 1, the quantity over which the binning of the data was performed, the best-fit parameters (a1, a2, and a3) with their 1σ uncertainties, and the χ² of the fits.] We perform these fits on the curve that results from averaging the contrast values over |B_LOS|/µ (yellow curve in Fig. 3), based on all selected pixel pairs from all images with |B_LOS|/µ ≥ 20 G.
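To make the fitting procedure concrete, the sketch below reproduces its main steps on a pair of 1-D pixel arrays: block-averaging the pixel pairs over |B_LOS|/µ and over contrast, forming a bisector-type curve (used for the starred fits), and fitting Eq. 1 with SciPy. The bisector is approximated here by the point-wise mean of the two binned curves, and all function names, array names, and starting values are illustrative assumptions rather than the exact implementation used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a1, a2, a3):
    """Eq. 1: C = a1 + a2 * x**a3.  Use x = |B_LOS|/mu for PF and
    x = log10(|B_LOS|/mu) for PFL; fixing a3 = 1 recovers the LFL form."""
    return a1 + a2 * np.power(x, a3)

def block_mean_curve(x, y, npts=5000):
    """Sort the pixel pairs by x and average x and y in blocks of npts points,
    mimicking the 5000-point running means shown in Fig. 3."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    nblk = len(xs) // npts
    xb = np.array([xs[i * npts:(i + 1) * npts].mean() for i in range(nblk)])
    yb = np.array([ys[i * npts:(i + 1) * npts].mean() for i in range(nblk)])
    return xb, yb

def fit_relations(b, c):
    """b, c : 1-D arrays of |B_LOS|/mu [G] and Ca II K contrast, with b >= 20 G."""
    bx, by = block_mean_curve(b, c)              # contrast binned over |B_LOS|/mu
    cc, cb = block_mean_curve(c, b)              # |B_LOS|/mu binned over contrast
    order = np.argsort(cb)                       # make the second curve monotonic in B
    cb, cc = cb[order], cc[order]

    # bisector approximated as the point-wise mean on a common field-strength grid
    grid = np.linspace(max(bx.min(), cb.min()), min(bx.max(), cb.max()), 200)
    bis = 0.5 * (np.interp(grid, bx, by) + np.interp(grid, cb, cc))

    pf, _ = curve_fit(model, bx, by, p0=[1.0, 0.01, 0.5])                     # PF
    pfl, _ = curve_fit(model, np.log10(bx), by, p0=[1.0, 0.05, 3.0])           # PFL
    pfl_star, _ = curve_fit(model, np.log10(grid), bis, p0=[1.0, 0.05, 3.0])   # PFL*
    return pf, pfl, pfl_star
```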
However, for comparison, we also performed the fits on the curve obtained after binning over the contrast values and on the bisector of the two running means (these sets of fits will be referred to as PF*, PFL*, and LFL*). The fits of the three tested functions to the curve binned over |B_LOS|/µ (yellow solid line in Fig. 3) are shown in Fig. 3 with a red dashed line (PF), a blue dotted line (PFL), and a green dashed line (LFL). Table 2 lists the derived parameters. Both PF and PFL give low values of χ², being 0.16 and 0.1, respectively. The fitted curves for both PF and PFL do not follow the binned curve at high |B_LOS|/µ, lying above it. PF and PFL closely follow each other up to about 400 G, but slightly diverge at higher magnetic field strengths, with PFL following the binned curve more closely. They also differ for |B_LOS|/µ < 20 G (which were not included in the fit, so that the curves are extrapolated there), with PFL giving higher contrasts. However, the differences between the two curves are minute. We found that the exponents for PF and PFL increase when the fit is performed on the curve binned over contrast values or on the bisector (see Table 2), while the χ² is reduced, being 0.03 and 0.01, respectively. LFL fails to reproduce the curve binned over |B_LOS|/µ, although it follows the trend of the curve for |B_LOS|/µ > 400 G slightly better than PF and PFL. However, the LFL fit gives a high χ² (2.24), showing that LFL does not describe the data well. The analysis described in the following was performed by applying all functions and binning curves described above to the available data. However, due to the similarity of the results obtained from the PF and PFL fits and the lower accuracy of the LFL fit compared to both PF and PFL, for the sake of clarity we will present only the results for PFL* and PF. Our analysis suggests that the PFL* fit is more accurate and stable (see Sect. 4) than the other considered functions, while the results derived with the PF fit allow a comparison with previous results in the literature. We note, however, that due to the scatter in Fig. 3 we cannot rule out the aptness of PF to describe the relation between the magnetic field strength and the Ca II K brightness either. The results derived with the PF* and LFL* fits can be found in Chatzistergos (2017). Effects of the |B_LOS|/µ threshold on the derived exponents To better understand the sources of differences with other results, we have studied how our findings depend on the |B_LOS|/µ threshold applied. Figure 4 shows the parameters derived by applying PF and PFL to the data shown in Fig. 3 while varying the threshold in |B_LOS|/µ between 1 G and 50 G, i.e. 0.15 to 7 times the noise level. We show only the exponents, though the other parameters of the tested functions are affected as well. We show the results of performing the fit to all three binned curves, as shown in Fig. 3. [Fig. 5: same as Fig. 3, but showing only the PF fits to the curve binned over |B_LOS|/µ (thick yellow curve, 5000-point running mean) for magnetogram noise cut-offs varying from 1 G (black dotted curve) to 50 G (red dotted curve); the curve corresponding to the 20 G threshold adopted in this study is shown as a light blue solid curve, and the vertical grey dashed line marks the 20 G threshold.]
For the binning over |B_LOS|/µ, the exponent for PF decreases steadily, while for PFL it reaches a plateau for thresholds in the range of ∼5-20 G and then slightly decreases. When the fit is performed on the bisector, the exponents for PF* and PFL* reach a plateau above thresholds of ∼8 G and ∼18 G, respectively, and after that they tend to slightly decrease. The exponent we derived for PFL* (Table 2) lies within the 1σ interval of all exponents derived with thresholds greater than 18 G. For the binning over contrast values, the exponents of PF and PFL show an almost constant increase for |B_LOS|/µ > 10 G. The threshold thus plays a more important role when the fit is performed on the curves binned over contrast or over |B_LOS|/µ than for the fit on the bisector. Overall, the curves derived with PFL* are more stable against the choice of the |B_LOS|/µ threshold. In Figure 5 we show the results of fitting PF to the curve binned over |B_LOS|/µ while varying the threshold between 1 G and 50 G. All of the derived curves agree very well in the interval 50-350 G, but they diverge for higher and lower values. This is expected, since the low |B_LOS|/µ regions dominate the relation, and increasing the threshold shifts the weight of the fit to higher |B_LOS|/µ. [Fig. 8: Ca II K contrast plotted against |B_LOS|/µ for all activity clusters identified with Method 2 in the observations shown in Fig. 1 (black dots); the coloured curves are the PFL* fits to the individual clusters, with the same colours as in Fig. 2c).] [Fig. 9: exponents of the PF (red) and PFL* (blue) fits as a function of the size of individual activity clusters; the light grey and dark grey shaded areas denote the 1σ errors of the fit parameters for PF and PFL*, respectively.] Exponents over time and at different µ positions We also studied whether the exponents of the fits change with the activity level. To understand the change with time we performed the fits on every image separately, first for all pixels with µ > 0.14, and then for the plage and network regions separately. The differentiation between the various types of features was done with Method 1 applied to the Rome/PSPT images, but keeping only the regions that also have |B_LOS|/µ > 20 G in the magnetograms. To study the variation of the exponent we fixed a1 and a2 for the PF and PFL fits to the values derived in Sect. 3.1 (listed in Table 2). Figure 6 shows the coefficients of the fits to the curve binned over |B_LOS|/µ as a function of time. [Figure caption fragment: over-plotted in white are the plage areas derived from the Ca II K images, scaled to the range [0, 9] to indicate the activity level.] The resulting exponents for PF and PFL depend on the type of feature and are slightly higher for plage than for the network. The uncertainties in the derived parameters (not shown in Fig. 6 owing to their low values) are less than 0.001 for a3 in PF and 0.014 for a3 in PFL*. Performing the fit to all pixels on the disc with µ > 0.14 for each image separately, we found an average exponent of 0.52 ± 0.02 and 3.9 ± 0.1 for PF and PFL*, respectively. The errors are the 1σ intervals among all the daily calculated values. These values agree within the 1σ uncertainty level with those we derived in Sect. 3.1 for all three functions. As seen in Fig. 6,
the scatter of the resulting exponents is such that, within the limits of the current analysis, we find no evidence that the relationship between |B_LOS|/µ and the Ca II K intensity varies over the solar cycle. We noticed exactly the same behaviour for the plage component for PF and PFL*. We found some changes in the network component that result in higher exponents for the low-activity period in 2010 for PF and PFL*, but the derived exponents are still constant in time within the uncertainties. We have also studied how the exponents of the fits change for different positions on the disc. Figure 7 shows the coefficients for the various features as a function of their position on the solar disc in terms of µ. The segmentation was done with Method 1, identifying plage and network regions. We considered 10 concentric annuli of equal area covering the solar disc up to µ = 0.14 (a short sketch of this equal-area construction in µ is given below). The mean values of the exponents computed over the various annuli slightly decrease towards the limb, but their standard deviation increases, so that the exponents do not show any significant variation with the position on the disc (within the 1σ uncertainty). In particular, the relative difference between the average value of the exponents within the innermost and outermost annuli for PF (PFL*) is 4% (10%). The same behaviour is seen when the network and plage regions are considered separately. Exponents for individual activity clusters We have also tested how the exponents of the fits differ when applied to the data from individual activity clusters. The images were segmented with Method 2. We performed the PFL* fit on each individual cluster, while we also considered the QS (including the network) separately. Figure 8 shows a scatter plot for the images shown in Fig. 1, but now including only the pixels corresponding to activity clusters and QS. The binned curves and the fit results from different activity clusters are in agreement with each other, with the exception of one cluster. However, this cluster is very small in size and the statistics are worse than for the other clusters. The relation derived from the QS regions shows a smaller slope than the one obtained for active regions. However, this is probably due to the much lower number of QS and network pixels with strong resolved magnetic fields in the analysed SDO/HMI degraded data. The results for the different clusters agree well with each other within the accuracy of the fit. Averaging all exponents derived for clusters (QS and network) from all images gave on average the values of 0.54 ± 0.03 (0.50 ± 0.01) and 3.9 ± 0.2 (3.7 ± 0.1) for PF and PFL*, respectively. The exponents derived here are in agreement with those presented in the previous subsections. We find no dependence of the derived exponents on µ with this segmentation method either. Figure 9 shows the exponents derived with PF (red) and PFL* (blue) as a function of the area of the clusters expressed in fractions of the disc. We found no dependence of the exponent on the cluster size; however, the uncertainty of the derived parameters is, as expected, higher for smaller features because of the poorer statistics. Also, effects of potential misalignment between the SDO/HMI and Rome/PSPT data become more significant in this case; more details are given in Sect. 3.5.
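The division into annuli of equal projected area mentioned above can be expressed compactly: for a spherical Sun, µ = √(1 − ρ²), where ρ is the fractional disc radius, so annuli of equal projected area correspond to equal steps in ρ² = 1 − µ². The sketch below builds a µ map and the annulus index for each pixel; the disc-centre coordinates and the solar radius in pixels are assumed to be known from the image metadata, and the implementation is illustrative rather than the one used in the paper.

```python
import numpy as np

def mu_map(shape, cx, cy, r_sun):
    """mu = cos(heliocentric angle) = sqrt(1 - (r/R)^2) for each pixel.
    cx, cy : disc-centre pixel coordinates; r_sun : solar radius in pixels."""
    y, x = np.indices(shape)
    rho = np.hypot(x - cx, y - cy) / r_sun
    return np.sqrt(np.clip(1.0 - rho**2, 0.0, 1.0))

def equal_area_annuli(mu, n=10, mu_min=0.14):
    """Assign on-disc pixels (mu >= mu_min) to n annuli of equal projected area.
    Equal projected area corresponds to equal steps in rho^2 = 1 - mu^2."""
    rho2 = 1.0 - mu**2
    edges = np.linspace(0.0, 1.0 - mu_min**2, n + 1)
    annulus = np.clip(np.digitize(rho2, edges) - 1, 0, n - 1)   # 0 (centre) .. n-1 (limb)
    return np.where(mu >= mu_min, annulus, -1)                  # -1 = excluded pixels
```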
Effects of potential misalignment To test the sensitivity of our results to a potential misalignment of the images, we repeated our analysis using Rome/PSPT images shifted by a random number of pixels in both the x and y directions and compared the results with those from the original Rome/PSPT images. The test was done 10 times, whereby the maximum possible offset varied between 1 and 10 pixels (2″ and 20″) in any direction. Each time, we performed 1000 computations with a random offset for each image lying in the range between 0 and the maximum allowed value. The choice of a maximum offset of 10 pixels is extreme, considering that the alignment of the data employed in this study is considerably more precise. However, it is useful to test such a high value in order to estimate the errors when applying the relationships to lower resolution data or to data with a greater temporal difference than the images analysed here, such as the magnetograms from Kitt Peak and the Ca II K spectroheliograms from the Kodaikanal observatory, which are taken on average more than 12 hours apart. Figure 10 shows the relative difference between the exponents derived from each offset image and the original ones. Shown are the average values over the 1000 realisations on each image (abscissa) for each maximum possible offset (ordinate). The errors for the exponents derived with PF and PFL* are shown in different panels. We notice that the errors are significant for PF, but considerably lower for PFL*. The errors in the parameters derived with PF reach 50%, while they are less than 8% for 1-pixel offsets. With offsets of up to 10 pixels (20″), the errors in the parameters derived with PFL* remain below 24%, while they are less than 13% and 2% for 5- and 1-pixel offsets (10″ and 2″), respectively. We noticed that the errors due to the offsets are higher during low-activity periods for all tested functions. This may be due to the smaller size of individual magnetic features when activity is low, so that an offset quickly leads to a substantial mismatch between the magnetic features in SDO/HMI and the brightness features in the Rome/PSPT images. Effects of stray-light We studied the effect of stray-light on our results. For this, we repeated the same analysis on images corrected for stray-light (as described in Sect. 2.2). Since the stray-light corrected Ca II K images have higher contrast values, the segmentation parameters for the different features had to be adapted (increased by 0.02 in contrast and 10 G). Otherwise, the methods that we applied were exactly the same. Figure 11 is similar to Fig. 3, but now for the stray-light corrected data. The scatter in the Ca II K contrast is higher compared to that from the Ca II K images affected by stray-light. However, our results remain unchanged, with almost constant exponents over the disc and in time, and values of the exponents close to the ones reported above. The best-fit parameters from images corrected for stray-light are 0.51 ± 0.02 and 3.89 ± 0.08 for PF and PFL*, respectively. Besides, there are no significant differences to the results obtained from the analysis of the data corrected with the method of Criscuoli et al. (2017), thus supporting the assumptions of the correction method by Yeo et al. (2014a). Our previous conclusions of a weak centre-to-limb variation of the exponents and of their time independence also hold for these data.
The values of the exponents are slightly lower than in the rest of our analysis, with best-fit parameters of 0.50 ± 0.03 and 3.88 ± 0.05 for PF and PFL*, respectively. These results are still within the 1σ interval of our main results based on the stray-light uncorrected data. Comparison to results from the literature The exponent derived for PF (0.53 ± 0.01) is lower than those obtained by Schrijver et al. (1989), Harvey & White (1999), and Ortiz & Rast (2005), who favoured exponents of 0.6, 0.69, and 0.65, respectively, for all bright features considered. However, it is higher than those derived by Rezaei et al. (2007), Loukitcheva et al. (2009), and Vogler et al. (2005), which range from 0.31 to 0.51. The difference between our results and those of Loukitcheva et al. (2009) (exponent of 0.31) can potentially be explained by the different threshold in the magnetic field strength used in the two studies. Rezaei et al. (2007) found the exponent to increase to 0.51 when the threshold was 20 G, which is consistent with our results. The same holds for Loukitcheva et al. (2009), who used a threshold of 1.5 G and showed that the exponent increases to roughly 0.53 if a threshold of 20 G is used. It is worth noting, however, that Loukitcheva et al. (2009) analysed lower resolution magnetograms from the Michelson Doppler Imager onboard the Solar and Heliospheric Observatory (SOHO/MDI; Scherrer et al. 1995), while we analysed SDO/HMI magnetograms, and hence the magnetic field strengths reported by the various instruments are not necessarily directly comparable (cf. Yeo et al. 2014b). [Fig. 12: unsigned magnetograms reconstructed from the Ca II K images taken on 01/04/2011 (top) and 07/06/2010 (bottom) using the average parameters for PFL* (left), the SDO/HMI unsigned magnetograms co-temporal to the Ca II K images (middle), and the difference between the reconstructed (simulated) unsigned magnetogram from Ca II K data and the original (true) SDO/HMI unsigned magnetogram (right); the RMS, mean, mean absolute, and maximum absolute differences are listed under each panel; the colour bars give |B_LOS| in G, saturated at 100 G to improve the visibility of regions with low magnetic field strength.] We note that the exponents derived by Schrijver et al. (1989), Harvey & White (1999), and Ortiz & Rast (2005) are consistent with the one we derive here by performing the fit on the bisector (PF*). Our results for LFL differ from those presented by Kahil et al. (2017) and Kahil et al. (2019). For LFL, Kahil et al. (2017, 2019) reported best-fit parameters of a1 = 0.29 ± 0.003 and a2 = 0.51 ± 0.004 for |B_LOS| > 50 G, and a1 = 0.456 ± 0.003 and a2 = 0.512 ± 0.001 for |B_LOS| > 100 G, for a QS and an AR in Ca II H data, respectively. The differences can be due to the different atmospheric heights sampled in the analysed data, as well as to the lower spatial resolution of the observations used here compared to those used by Kahil et al. (2017, 2019). Thus, LFL might not be an appropriate function for the analysis of full-disc data like those used in our study. Harvey & White (1999) analysed data from three observatories, specifically the Big Bear, Sacramento Peak, and Kitt Peak observatories, and segmented the features into 4 sub-categories. In addition to the categories we use, they also have a feature class they termed enhanced network.
Our results are close to those of Harvey & White (1999) for the Big Bear data (0.52 and 0.58 for plage and network, respectively), but are slightly higher than those from Sacramento Peak (0.47-0.48 and 0.47-0.56 for plage and network, respectively) and lower than those from the Kitt Peak measurements (0.62 and 0.64 for plage and network, respectively). This can potentially be explained by the different bandwidths of the observations made at the different observatories. Indeed, the Big Bear data have a bandwidth of 3 Å, the closest to that of Rome/PSPT (2.5 Å). The bandwidth used for the Sacramento Peak data is narrower (0.5 Å), while for the Kitt Peak data it is broader (10 Å). Another difference is that Harvey & White (1999) found lower (or equal) exponents for the active regions than for the network, while in our study we found the opposite. Note that the exponent obtained for the enhanced network component by Harvey & White (1999) is higher than the ones we derived here for network and plage. Our finding of little to no dependence of the exponents on µ is in agreement with the results of Harvey & White (1999). Pevtsov et al. (2016) analysed pairs of Kitt Peak magnetograms and uncalibrated Mt Wilson Ca II K spectroheliograms after converting them to Carrington maps, as well as SOLIS/VSM observations in Ca II 854.2 nm and magnetograms. They concluded that Ca II brightness is an unreliable proxy for the magnetic field strength, because of the large scatter between the Ca II K brightness and the magnetic flux, and because they saw a reversal of the relationship at high magnetic fluxes. It should be noted, however, that the data they analysed were of significantly lower quality than the ones we used. This is manifested by the number of pixel pairs and years analysed: ∼62,000 over 12 years in Pevtsov et al. (2016) versus ∼103,000,000 over 6 years of Ca II K data here. The issue of the low spatial resolution of the Ca II 854.2 nm line was mentioned by Pevtsov et al. (2016) based on the findings of Leenaarts et al. (2006). The reported reversal of the relation at high magnetic fluxes for the Ca II K data, as well as the lack of correlation for the Ca II infrared data considered by Pevtsov et al. (2016), is perfectly consistent with the inclusion of sunspots in their analysis. The large scatter is possibly due to the narrower nominal bandwidth of the Mt Wilson data (0.35 Å) compared to that of Rome/PSPT (2.5 Å). This means that Mt Wilson samples greater atmospheric heights than Rome/PSPT, where the flux tubes are more expanded, and hence the spatial agreement between the Ca II K data and the magnetograms should be reduced. In addition, at greater heights the emitted radiation is more strongly affected by shock waves and local heating events, which reduces the agreement even more. The lack of photometric calibration of the historical images, as well as potentially inaccurate processing methods, can also distort the relation (see Chatzistergos 2017; Chatzistergos et al. 2018b, 2019b). Reconstructing unsigned magnetograms from Ca II K images In the previous section we showed that the exponents of the functions tested in our study remain time- and µ-independent. This allows us to reconstruct unsigned magnetograms, or pseudo-magnetograms, from full-disc Ca II K observations by using the parameters derived in Sect. 3.1. For this, we apply the three tested relationships with the best-fit parameters listed in Table 2 to the Ca II K observations.
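A minimal sketch of this inversion step is given below: for the PFL form of Eq. 1 the contrast map is converted back to |B_LOS|/µ and then to |B_LOS|. The function and variable names, and the handling of out-of-range pixels, are illustrative assumptions; the parameters a1, a2, a3 stand for the best-fit PFL* values listed in Table 2.

```python
import numpy as np

def reconstruct_unsigned_blos(contrast, mu, a1, a2, a3):
    """Invert the PFL relation  C = a1 + a2 * (log10(|B_LOS|/mu))**a3
    to build an unsigned pseudo-magnetogram from a Ca II K contrast map."""
    b_over_mu = np.zeros_like(contrast, dtype=float)
    valid = (contrast > 1.0) & (mu > 0.14)        # pixels with C <= 1 are set to 0 G
    arg = (contrast[valid] - a1) / a2
    arg = np.clip(arg, 0.0, None)                 # guard against negative bases
    b_over_mu[valid] = 10.0 ** (arg ** (1.0 / a3))
    return b_over_mu * mu                         # |B_LOS| = (|B_LOS|/mu) * mu
```

Multiplying by µ at the end mirrors the comparison performed in the paper, where both the original and the reconstructed magnetograms are multiplied by µ before the differences are computed.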
We used the parameters from all three different binning approaches. However, we noticed that using the bisector fit produced magnetograms with the lowest differences to the original ones. We also found that the parameters derived from the binning over |B_LOS|/µ tend to result in magnetograms with overestimated bright regions compared to the original magnetograms. This is also found in magnetograms reconstructed with the fits to the bisector, although to a lesser degree. The magnetograms reconstructed with the parameters from the binning over contrast values tend to underestimate the magnetogram signal in large parts of the bright regions. Based on this, in the following we present the results for magnetograms obtained with the PFL* parameters derived from the fit to the bisector. For comparison, however, we also show in Appendix B the results for PF*, LFL*, and PF. The pixels with contrast ≤ 1 were set to 0 G. Figure 12 shows examples of reconstructed magnetograms for an active and a quiet day, obtained by applying the best-fit PFL* relationship to the Rome/PSPT Ca II K images (panels a and d), together with the corresponding SDO/HMI magnetograms (panels b and e). The pixel-by-pixel absolute differences between the reconstructed and the original magnetograms are shown in Fig. 12 (panels c and f). Prior to computing the differences, the original and reconstructed magnetograms were multiplied by µ, so that the compared quantity is |B_LOS|. In this reconstruction we only made use of the information in the Ca II K image to identify the regions to which we applied the relationships obtained in Sect. 3.1. This means that sunspots were not identified accurately, and their immediate surroundings were the regions with the highest errors, reaching differences of up to ∼1000 G. These regions were masked out in Fig. 12, and the errors reported in the plots do not include sunspots. Comparing the reconstructed and the original magnetograms, we obtained RMS differences of 30 G and 20 G for the active and the quiet day, respectively. The differences for the quiet day show that we slightly underestimate the weak fields. Figure 13 shows scatter plots between the reconstructed and the original magnetograms for the observations shown in Fig. 12. Figure 14 shows the pixel-by-pixel RMS differences between the original and the reconstructed unsigned magnetograms obtained using the derived best-fit parameters of PFL*, this time without masking the surroundings of the sunspots. Figure 14 reveals that the RMS differences remain below 88 G for all 131 reconstructed unsigned magnetograms, with an average value of 50 G. This is approximately 20 G lower than the standard deviation of the magnetic field strength in the original unsigned magnetograms. The RMS differences decrease on average by 9 G if the sunspots are masked out. We also evaluated how well the regions with strong magnetic fields in the reconstructed unsigned magnetograms correspond to magnetic regions and network in the original magnetograms. For this, we derived the disc fractions covered by features identified with Method 1 (i.e. applying constant thresholds on the contrast in the Rome/PSPT images and on |B_LOS|/µ in the original SDO/HMI and reconstructed images). The residual disc fractions between the ones derived from the degraded unsigned magnetograms and those reconstructed with PFL* are shown in Fig. 15. We also show separately the disc fractions derived from the original-size SDO/HMI magnetograms. In doing so we used the same segmentation parameters in all cases.
For all feature classes, the differences between the disc fractions derived from the degraded SDO/HMI magnetograms and from Rome/PSPT are on average 0.3% and always below 1.3%. We notice that the areas of the features in the degraded SDO/HMI magnetograms increase in disc fraction on average by 0.8% (and up to 1.8%) compared to the original-size magnetograms. The differences between the degraded magnetograms and the ones reconstructed with PFL* are on average 0.8% and always below 2.0%. [Fig. 14 (caption fragment): RMS differences obtained using the parameters (listed in Table 2) derived from the PFL* fits for the whole disc (blue downward triangles) and with the sunspot regions masked out (green rhombuses); also shown is the standard deviation of the original unsigned magnetograms (black squares); the dashed lines connect annual median values; the shaded grey surface shows the plage areas determined with Method 1 from the Rome/PSPT images, scaled to a maximum value of 120 to indicate the level of solar activity.] We noticed that the reconstructed magnetograms exhibit higher disc fractions for the network, by ∼1%. Finally, we have calculated the total unsigned magnetic flux from the reconstructed and the original unsigned magnetograms. The results are plotted in Fig. 16 for the same |B_LOS|/µ ranges as in Fig. 15. The day-by-day correlation coefficient between the total flux in the original magnetograms and in those reconstructed with the PFL* fit is 0.98 for all bright features, and is similar for plage and network. We noticed that the slightly higher network disc fractions result in an almost constant offset in the total unsigned magnetic flux of the network component. The total unsigned magnetic flux in the degraded magnetograms is reduced compared to that from the original-size magnetograms due to the smoothing applied to the magnetograms to match the Rome/PSPT resolution (see Sect. 2.1). Summary and conclusions We have analysed the relationship between the excess Ca II K emission and the magnetic field strength. For this, we used 131 sets of co-aligned, near-co-temporal SDO/HMI magnetogram and continuum observations and Rome/PSPT filtergrams taken in the core of the Ca II K line and in the red continuum. We confirm the existence of a consistent relation between the excess Ca II K emission and the magnetic field strength. We fit the relation between the Ca II K intensity and the vertical component of the magnetic field (|B_LOS|/µ) with a power law function of the logarithm of |B_LOS|/µ with an offset, and test it against a power law function and a logarithmic function of |B_LOS|/µ that have been presented in the literature. The parameters we derived for the power law function are consistent with those from previous studies. The results for a power law function of |B_LOS|/µ are also very similar to those derived with a power law function of the logarithm of |B_LOS|/µ. The logarithmic function, recently employed in the analysis of high-resolution Sunrise data in the Ca II H line, is found not to be representative of bright features in the full-disc Ca II K images analysed in our study. We note that in previous studies the data were binned in terms of |B_LOS|/µ before performing the fit. However, results obtained by such fits suffer from attenuation bias due to errors in the independent variable, which are not taken into account. For that reason we decided to bin the data both in |B_LOS|/µ and in Ca II K contrast and to perform the fits on the bisector of the two binned curves.
The observations analysed here greatly extend the sample of studied data with respect to previous works. In particular, we examined a greater amount of, and in many ways higher-quality, data than earlier studies of this kind. The data span half a solar cycle, and over this time-scale we report no significant variation of the obtained power-law exponents with time. Moreover, we find no variation of the exponents over the disc for positions between µ = 1 and µ = 0.14. Finally, the numerical values of the exponents remain nearly the same when stray-light is taken into account. We found no significant differences between the results derived from images corrected with the methods by Yeo et al. (2014a) and by Criscuoli et al. (2017). Having studied this relation over almost the entire disc, up to µ = 0.14 or 0.99 R, makes this analysis more applicable to stellar studies than most earlier investigations. The fact that the exponents are independent of time and µ suggests that maps of the unsigned LOS magnetic field can be reconstructed from Ca II K observations with merely the knowledge of the exponent derived here. We tested the reconstruction of unsigned magnetograms from the available Ca II K observations and compared our results to co-temporal directly measured magnetograms. The total magnetic flux calculated for the series of original and reconstructed magnetograms agrees well, with a correlation factor of 0.98. This means that historical Ca II K spectroheliograms, when properly processed and calibrated (e.g. Chatzistergos et al. 2016, 2018a, 2019a), can be used to extend the series of magnetograms throughout the whole 20th century. This approach suffers from the limitation that it does not allow the polarity of the magnetic field to be recovered. However, this is not a problem for a number of studies and applications, e.g. for irradiance reconstructions, where the models do not require the polarity of the bright features. Besides, if other data are also used, for instance sunspot measurements, it might be possible to recover the polarity of the ARs as well. [Fig. A.1: close-ups of the observations shown in Fig. 1 for a network (left) and a plage region (right). From top to bottom: a), b) the SDO/HMI unsigned (and spatially degraded) magnetogram; c), d) the Rome/PSPT Ca II K image; and the corresponding segmentation masks derived with Method 1 (constant thresholds) from e), f) the magnetograms and g), h) the Ca II K images. The magnetograms are saturated in the range [-300, 300] G (the negative limit was chosen merely to improve the visibility of the pixels) and the Ca II K observations in the range [0.5, 1.6] (the QS has an average value of 1).] Appendix A: Spatial agreement between magnetograms and Ca II K images Figure A.1 a)-d) shows close-ups of the SDO/HMI and Rome/PSPT observations of Fig. 1 to illustrate the good spatial agreement between the SDO/HMI and Rome/PSPT images. Figure A.1 e)-h) displays the corresponding masks of plage and network combined for the close-ups shown in Fig. A.1 a)-d), derived with Method 1 (see Sect. 2.3). Figure A.1 illustrates the well-known fact that the bright features in the Ca II K images belong to magnetic regions and network in the magnetograms. The ARs appear slightly smaller and show smaller-scale features in the magnetograms than in the Ca II K data. This can occur for a variety of reasons. The flux tubes comprising ARs expand with height in the solar atmosphere, therefore ARs are expected to be more extended in Ca II K images. Furthermore, if the flux tubes are inclined, they can appear more broadened in the Ca II K data too.
Other possible reasons include lower spatial resolution and seeing effects due to Earth's atmosphere that smear the features in the Ca II K observations. Some contribution will be provided by cancellation of magnetograph signal between opposite polarities within the same resolution element (see e.g. Chitta et al. 2017, for hidden opposite polarities at SDO/HMI resolution that appear at the higher resolution of Sunrise observations). However, these effects should be minimised after the spatial degradation we applied to the magnetograms. Finally, the choice of the segmentation thresholds has an effect as well, if they are not consistent between the magnetograms and Ca II K images. We evaluated a variety of threshold combinations, but we were unable to match better the AR areas in the two observations without introducing even smaller scale features in the magnetograms. Therefore, we assumed that the differences are to a significant extent due to the expansion of the flux tubes, in particular by the fibrils spreading out at the edges of ARs, as found by, e.g., Pietarila et al. (2009). Appendix B: Reconstructed magnetograms with different functions Here we use the parameters derived for PF, PF * , and LFL * to reconstruct unsigned magnetograms and compare the results with those derived with PFL * . Figure B.1 shows the pixel by pixel absolute differences between the reconstructed and the original magnetograms by using PF * (panels a), e)), PFL * (panels b), f)), LFL * (panels c), g)), and PF (panels d), h)). Comparing the errors between the reconstructed and the original magnetograms we got similar uncertainties for both PF * and PFL * . In particular we found RMS differences of 30 G and 20 G for the active and quiet day, respectively, for both PF * and PFL * . We discern no significant difference between these two reconstructed magnetograms, although a careful comparison reveals many differences at small scales. The differences for the quiet day show that we slightly underestimated the weak fields. The differences for the LFL reach up to 2500 G in plage regions. These high errors arise due to the large pixel-to-pixel scatter in the relationship between Ca II K contrast and |B LOS |/µ. Consequently there are numerous very bright pixels in the Ca II K observations that would correspond to very strong fields in this case, as the fitted curve increases very slowly. This problem is somewhat more acute for reconstructions that use the PF and PFL relationships (i.e. those derived from a fit to data binned in |B LOS |/µ). We also show the differences for PF, which has been commonly used in the literature. In this case, the errors are slightly higher than for PF * or PFL * for times with high activity. Figure B.2 shows scatter plots between the four reconstructed magnetograms and the original one for the observation taken on 01/04/2011 (the active day shown in Fig. B.1). The unsigned magnetograms reconstructed with PF * and PFL * show the best correspondence, while the ones with LFL * and PF tend to overestimate the magnetic field. Figure B.3 shows the pixel by pixel RMS differences between the original and the reconstructed unsigned magnetograms obtained using the derived best fit parameters of the three functions we tested, without masking the surroundings of the sunspots this time. Figure B.3 reveals that the RMS differences remain less than 88 G for all 131 unsigned magnetograms reconstructed with the PF * and PFL * , but reach 100 G for PF and 7500 G for LFL * . 
Figure B.4 shows the residual disc fractions between the ones derived from the degraded unsigned magnetograms and the ones reconstructed with the PF*, PFL*, LFL*, and PF fits. The results for PF* follow very closely those for PFL*, though giving marginally (on average by 0.3%) higher differences. The differences between the degraded magnetograms and the ones reconstructed with the PF* fits are on average 1.0% and always below 2.3%. The disc fractions in the magnetograms reconstructed with LFL* are on average 6% higher than in the original magnetograms when all features are considered; however, the difference remains less than 0.1% when only the plage regions are considered. The errors in the disc fractions slightly increase when the magnetograms are reconstructed with PF, being ∼4% for all features. The total unsigned magnetic flux is plotted in Fig. B.5 for the same |B_LOS|/µ ranges as in Fig. B.4. The day-by-day correlation coefficient between the total flux in the original magnetograms and in those reconstructed with both the PF* and PFL* fits is 0.98 for all bright features, and is similar for plage and network. The differences between the results for PF* and PFL* are minute, with the latter giving slightly higher values. The total flux derived from the unsigned magnetograms reconstructed with PF and LFL* is consistently higher. [Fig. B.2: scatter plots between the original (degraded) magnetograms and those reconstructed from the Ca II K image taken on 01/04/2011 using the average parameters for PF* (a), PFL* (b), LFL* (c), and PF (d); the yellow line has a slope of unity; the axes cover the range of the original magnetogram.] [Fig. B.3: RMS pixel-by-pixel differences in G between the original magnetograms and the reconstructed ones using the parameters (listed in Table 2) derived from the PF* (red), PFL* (blue), LFL* (green), and PF (yellow) fits; the dashed lines connect annual median values; the shaded surface is as in Fig. 15.] [Fig. B.5: total unsigned magnetic flux in Mx of ARs derived from the magnetograms (yellow circles for the original and black plus signs for the reduced spatial resolution ones) and from the unsigned magnetograms reconstructed from Ca II K observations with PF* (red upward triangles), PFL* (blue rhombuses), LFL* (green squares), and PF (light blue downward triangles); each of the upper two panels corresponds to a different type of feature (as listed in the panels) identified with Method 1, while the bottom panel is for all features together; the dashed lines connect the annual median values; the shaded surface in the lower panel is as in Fig. 15.]
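For completeness, the difference statistics quoted throughout this comparison (RMS, mean, mean absolute, and maximum absolute differences between reconstructed and original unsigned magnetograms) can be computed with a few lines of code. The sketch below assumes that both maps are given in G and have already been multiplied by µ, and that the optional mask stands for the sunspot exclusion described in Sect. 2.1; it is an illustration, not the exact evaluation script used in the paper.

```python
import numpy as np

def compare_magnetograms(b_rec, b_orig, mask=None):
    """Difference statistics between a reconstructed and an original
    unsigned magnetogram (both in G, both already multiplied by mu)."""
    diff = b_rec - b_orig
    if mask is not None:
        diff = diff[mask]                     # e.g. exclude sunspot surroundings
    return {
        "rms": float(np.sqrt(np.mean(diff**2))),
        "mean": float(np.mean(diff)),
        "mean_abs": float(np.mean(np.abs(diff))),
        "max_abs": float(np.max(np.abs(diff))),
    }
```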
Changing Mechanisms of Surface Relief and the Damage Evaluation of Low Cycle Fatigued Austenitic Stainless Steel To quantitatively investigate the cause of the changes in arithmetic mean roughness Ra and arithmetic mean waviness Wa of austenitic stainless steel under low-cycle fatigue loading, precise observation focusing on persistent slip bands (PSBs) and crystal grain deformations was conducted on SUS316NG. During the fatigue tests, the specimen's surface topography was regularly measured using a laser microscope. The surface topographies were analysed by frequency analysis to separate the surface relief due to PSBs from that due to grain deformation. The height caused by PSBs and that by grain deformation were measured respectively. As a result, both of the heights rose with the increase of the usage factor (UF). The amount of increase in the heights with respect to UF increased with the strain range. The trend of development of both heights was similar to the trend of Ra and Wa. A comparison between Ra and the height caused by PSBs showed that these values strongly correlated with each other. A comparison between Wa and the height caused by grain deformation also showed that these values strongly correlated with each other. Consequently, the surface texture parameters Ra and Wa represent the changes in the heights of the surface reliefs due to PSBs and grain deformation. Introduction When important industrial facilities are subjected to excessive cyclic loadings that lead to deformation, ensuring the structural health of the facilities requires precise damage evaluation from the viewpoint of material strength. In general, fatigue damage is often assessed using the linear cumulative damage law. However, it is necessary to focus on the physical damage process in order to evaluate the effects of excessive cyclic loadings on subsequent fatigue life more precisely. The surface of metallic materials becomes rough during cyclic loading. If the changes in the surface topography can be related to the amount of physical damage, it may be possible to estimate the degree of fatigue damage by measuring the surface topography. During cyclic loading, two mechanisms cause the surface roughening of metallic materials: (a) the formation of persistent slip bands (PSBs) [1][2][3][4][5] and (b) the deformation of crystal grains [6,7]. In the PSBs, there are fine peaks and valleys, i.e., extrusions and intrusions caused by active slip systems. The waveforms of these fine reliefs caused by PSBs typically have a wavelength of around 1 µm. In contrast, the wavelengths of the convex and concave structures due to grain deformation are several times larger than the grain size [6,7]. Thus, the wavelengths of the surface relief due to grain deformation are much larger than those due to PSBs. Several studies have reported that frequency analysis (i.e., the wavelength difference) can be used to separate the PSB-induced surface relief from the grain-deformation-induced one [8,9]. In our previous study [10], we investigated the change in the surface topography under low-cycle fatigue loadings with constant strain range conditions. The surface relief due to PSBs was separated from that due to grain deformation using frequency analysis, and the evolution of each relief was evaluated using two surface texture parameters: the arithmetic mean roughness Ra and the arithmetic mean waviness Wa. As a result, Ra and Wa increased with the increase of the usage factor UF.
The amount of increase rate in R a and W a with respect to UF changes with the strain range.Additionally, it suggested that the applied strain range and the degree of fatigue damage UF could be estimated by measuring R a and W a if the strain range condition is constant.However, it has not been quantitatively investigated whether the changes in R a and W a correspond to the evolution of two surface reliefs due to PSBs and grain deformation. On the basis of the above background, this study conducted precise observation focusing on PSBs and grain deformations in SUS316NG to quantitatively investigate the cause of the changes in R a and W a under low-cycle fatigue loading.During the fatigue tests, the specimen's surface topography was regularly measured using a laser microscope.The surface topographies were analysed by frequency analysis to separate the surface relief due to PSBs from that due to crystal grain deformation.The height caused by PSBs and that by crystal grain deformation were measured respectively.To clarify the correspondence between the change mechanisms of surface relief and surface texture parameters, the heights of surface reliefs and surface texture parameters were compared and investigated. Material and specimen The material was a solution heat-treated SUS316NG (Nuclear Grade) austenitic stainless steel.The chemical components of the material were C: 0.01, Si: 0.40, Mn: 1.70, P: 0.013, S: 0.001, Ni: 12.09, Cr: 16.64, Mo: 2.48, N: 0.10 (mass%).Thermal treatment of the supplied material was undertaken at the temperature of 1050 degrees Celsius for 1 hour, followed by water quenching.The average grain size was 55 µm.The mechanical properties were: 0.2 proof stress and tensile strength were σ 0.2 = 252 MPa and σ B = 580 MPa, respectively. An hourglass-shaped specimen with an R-part of 35 mm curvature radius and the minimum cross-section diameter of φ6 was used (Fig. 1).The specimen's surface was mirror finished by polishing with emery paper (grits from 120 to 2000) followed by buffing with diamond abrasives (particle size: 1 µm).To observe the deformation of each grain at the mirror-polished surface, a specimen etched in aqua regia was also prepared. Fatigue testing method and conditions Strain-controlled uniaxial push-pull fatigue tests were conducted using a servo hydraulic fatigue testing machine with a load capacity of 100 kN.Triangular loading with constant strain range was applied to specimens in ambient air at room temperature.Axial strain ε axis was controlled by measuring the change in diameter at the specimen's minimum diameter part using an extensometer.The strain was calculated using the equation: where d 0 is the original diameter at the minimum diameter part, and d is the diameter during the fatigue tests.The constant strain range ∆ε were two conditions: ∆ε = 4% and 2%.The strain ratio and the strain rate were R ε = -1 and 0.4 %/sec, respectively. 
Measurement of surface topography
During the above-mentioned fatigue tests, the specimen's surface topography was regularly measured using a color 3D laser scanning microscope (VK-9700/9710 Generation II, KEYENCE). Cyclic loading was interrupted at arbitrary cycles, and then the specimen was removed from the testing machine for the surface topography measurement. The measurement interval was approximately 0.1 or 0.2 of the usage factor UF (= N/N f ), which represents the degree of fatigue damage as the consumption rate of life; N is the number of loading cycles, and N f is the fatigue life. The fatigue life N f at ∆ε = 4% and 2% was calculated using the best-fit curve obtained from the fatigue test results in our previous study [11]. 3D surface images were taken at four to ten measurement points set on the circumference of the specimen's minimum diameter part. Two objective lenses with different magnifications (150× and 20×) were used for the observation of the surface reliefs due to PSBs and grain deformation, respectively. The numbers of high- and low-magnification images taken at a measurement point were six and one, respectively. The surface topography measurement conditions are summarized in Table 1. Different measurement ranges and resolutions were used for each surface relief because different objective wavelengths were used for the surface topographic images, as mentioned in the next subsection.
Image processing method
To separate the surface topography into the surface relief due to PSBs and that due to grain deformation, the obtained 3D surface images were analysed by means of a two-dimensional fast Fourier transform (2DFFT) using the image analysis software (VK-Analyzer, KEYENCE). In the analyses, low-pass and high-pass filters were applied to extract two wavelength ranges: one from λ s = 0.25 µm to λ c = 11.8 µm for the surface relief due to PSBs, and one from λ c = 11.8 µm to λ f = 704 µm for that due to grain deformation (Table 1). The filters were brick-wall shaped. The λ s and λ f were the cut-off wavelengths used to remove measurement noise and the specimen's shape error. The λ c was the cut-off wavelength used to separate the surface relief due to PSBs from that due to grain deformation. It was determined as the shortest wavelength of the surface asperities caused by grain deformation. The shortest wavelength likely corresponds to the length of two adjacent small grains, which rotate opposite to each other. The smallest grain size was estimated to be about 6 µm from the grain size distribution of the supplied material, and 11.8 µm was chosen for λ c as the value closest to twice the smallest grain size.
Height measurement of surface reliefs caused by PSBs and crystal grain deformation
The heights of the surface reliefs due to PSBs and due to grain deformation were measured as follows, using the processed surface topographic images.
Height caused by crystal grain deformation
Fig. 3 shows the processed surface image in which crystal grain deformation and the resulting surface relief on the mirror-polished specimen's surface were observed. Five horizontal measurement lines (light blue lines in (a)) were drawn at an equal interval in the surface image. On the measurement lines, five measurement sections (red lines in (a)) were set at an equal interval; the section length was about twice the average grain size (110 µm).
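The brick-wall wavelength separation just described (λ s = 0.25 µm, λ c = 11.8 µm, λ f = 704 µm) can be sketched with an ideal band-pass filter in the 2-D Fourier domain. The study used the instrument's analysis software (VK-Analyzer), so the numpy version below is only an illustrative assumption; the height map and pixel pitch are hypothetical inputs.

```python
import numpy as np

def bandpass_2dfft(height_map, pixel_pitch_um, lam_min_um, lam_max_um):
    """Keep only surface wavelengths between lam_min_um and lam_max_um
    using an ideal (brick-wall) filter applied in the 2-D Fourier domain."""
    z = np.asarray(height_map, dtype=float)
    ny, nx = z.shape
    fz = np.fft.fft2(z)
    fy = np.fft.fftfreq(ny, d=pixel_pitch_um)        # spatial frequency, cycles/µm
    fx = np.fft.fftfreq(nx, d=pixel_pitch_um)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    keep = (f >= 1.0 / lam_max_um) & (f <= 1.0 / lam_min_um)
    keep[0, 0] = False                                # drop the mean level
    return np.real(np.fft.ifft2(fz * keep))

# Cut-off wavelengths from the paper (micrometres)
LAM_S, LAM_C, LAM_F = 0.25, 11.8, 704.0
# psb_relief   = bandpass_2dfft(z, pitch_um, LAM_S, LAM_C)   # PSB component
# grain_relief = bandpass_2dfft(z, pitch_um, LAM_C, LAM_F)   # grain-deformation component
```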
A profile curve at the measurement line in the section (the red dotted frame in (a)) was obtained as shown in (b). The height H due to grain deformation was defined as the vertical distance between the maximum peak height and the maximum valley depth in the profile curve of the measurement section (the red line in (b)). Twenty-five height data points were obtained per surface topographic image. Four surface topographic images, taken at different measurement points on the circumference, were used. The measurement interval was approximately 0.2 of UF. The H of the same surface relief on a section was measured as UF increased in the subsequent loading. The H at UF = 0 was 0 because the grains had not yet been deformed before cyclic loading.
Calculation of surface texture parameters
Surface texture parameters R a and W a were measured to investigate the correlations between the parameters and the surface reliefs. Areal roughness and waviness parameters (arithmetic mean roughness R a and arithmetic mean waviness W a ) were determined from the processed surface topographic images, which were also used for the height measurements of each surface relief. The area 5 µm inward from the edge of the measurement area was excluded from the calculations of R a and W a . If a crack was observed in the measurement area, a region of a few µm surrounding the crack was also excluded. These steps were taken to prevent abnormal signals from affecting the calculation of R a and W a . Fig. 4 shows the measurement results of the surface texture parameters. As mentioned above, R a and W a increased with the increase of UF, and the rate of increase of R a and W a with respect to UF changed with the strain range.
Measurement result of h caused by PSBs
The measurement results of h are shown in Fig. 5. The vertical and horizontal axes represent h and UF, respectively. The circle and square marks indicate the average values of the measurement data at ∆ε = 4% and 2%, and the error bars show the standard deviation. As shown in Fig. 5, the height rose with the increase of UF. Focusing on the increase tendency of h, the rate of increase gradually decreased after UF = 0.2. The amount of increase in h with respect to UF increased with strain range: the h under ∆ε = 4% was larger than that under ∆ε = 2% at the same UF.
Relationship between h and R a
The trend of development of h (Fig. 5) was similar to the trend of R a (Fig. 4(a)). To clarify the relationship between the height due to PSBs and the surface roughness parameter, h was compared with R a . Fig. 6 shows the scatter diagram: the vertical and horizontal axes represent R a and h, respectively. Here, h was the average of the measured data per surface topographic image, and R a was determined from the same image. The circle and square marks indicate the data at ∆ε = 4% and 2%. The black dotted straight line is the regression line, passing through the origin, determined by applying the least-squares method to all data in Fig. 6. As shown in Fig. 6, R a increased with the increase of h. Focusing on the regression line, the coefficient of determination was R² = 0.8629, and the correlation coefficient was R = 0.9289. These values showed that the height h and the surface roughness R a strongly correlated with each other. Consequently, the surface roughness parameter R a represents the change in the height of the surface relief due to PSBs.
Fig. 6. Relationship between h and R a .
As shown in Fig. 9, the height H rose with the increase of UF. The amount of increase in H with respect to UF increased with strain range.
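The scatter-plot analysis above fits a least-squares line forced through the origin and reports the coefficient of determination and the correlation coefficient. A minimal sketch follows, assuming paired arrays of per-image h and R a values; the function names and array inputs are hypothetical.

```python
import numpy as np

def fit_through_origin(x, y):
    """Least-squares slope k of y = k*x (regression line through the origin),
    plus a coefficient of determination computed against the mean of y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    k = np.sum(x * y) / np.sum(x * x)
    residual = y - k * x
    r2 = 1.0 - np.sum(residual ** 2) / np.sum((y - y.mean()) ** 2)
    return k, r2

def pearson_r(x, y):
    """Ordinary (Pearson) correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

# slope, r2 = fit_through_origin(h_values, Ra_values)
# r = pearson_r(h_values, Ra_values)
```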
Relationship between H and W a
The trend of development of H (Fig. 9) was similar to the trend of W a (Fig. 4(b)). To clarify the relationship between the height due to grain deformation and the surface waviness parameter, H was compared with W a . Fig. 10 shows the scatter diagram: the vertical and horizontal axes represent W a and H, respectively. Here, H was the average of the measured data per surface topographic image, and W a was determined from the same image. The circle and square marks indicate the data at ∆ε = 4% and 2%. The black dotted straight line is the regression line, passing through the origin, determined by applying the least-squares method to all data in Fig. 10; it corresponds to W a = 0.4233H. As shown in Fig. 10, W a increased with the increase of H. Focusing on the regression line, the coefficient of determination was R² = 0.9700, and the correlation coefficient was R = 0.9849. These values showed that the height H and the surface waviness W a strongly correlated with each other. Consequently, the surface waviness parameter W a represents the change in the height of the surface relief due to grain deformation.
Summary and conclusions
Precise surface observation focusing on the surface reliefs due to PSBs and crystal grain deformation was conducted on low-cycle fatigued austenitic stainless steel SUS316NG to quantitatively investigate the cause of the changes in the surface texture parameters R a and W a . During the fatigue tests, the surface topography was regularly measured using a laser microscope. The surface topographies were analysed by frequency analysis to separate the surface relief due to PSBs from that due to grain deformation, and the heights caused by PSBs and by grain deformation were measured respectively. The experiments led to the following conclusions:
1. Both heights of the surface reliefs rose with the increase of UF. The amount of increase in the heights with respect to UF increased with strain range.
2. The trend of development of both heights was similar to the trend of R a and W a .
3. A comparison between R a and the height caused by PSBs showed that these values strongly correlated with each other.
4. A comparison between W a and the height caused by grain deformation also showed that these values strongly correlated with each other.
5. The surface texture parameters R a and W a represent the changes in the heights of the surface reliefs due to PSBs and grain deformation.
This work was supported by JSPS Grant-in-Aid for Young Scientists (B) Number 17K14552.
Fig. 2 shows the processed surface topographic image in which PSBs and the surface relief formed on the mirror-polished specimen's surface were observed. Two measurement lines (light blue lines in (a)) were drawn perpendicular to the PSBs (black lines in (a)), and then a profile curve at the intersecting point of a measurement line and a PSB (the red circle in (a)) was obtained as shown in (b). The height h due to a PSB was defined as the vertical distance between the maximum peak height and the maximum valley depth in the profile curve of the PSB. Two height data points were obtained per PSB. Five to eight clearly visible PSBs were chosen in one surface topographic image, so the total number of measured h values per image was ten to sixteen. Four surface topographic images, taken at different measurement points on the circumference, were used. The measurement interval was approximately 0.1 or 0.2 of UF. The h of the same surface relief on PSBs was measured as UF increased in the subsequent loading. The h at UF = 0 was 0 because PSBs had not yet appeared before cyclic loading.
Fig. 2. Measuring method of the height caused by PSBs.
Fig. 3. (a) Surface image showing crystal grain deformation. (b) Cross section of the area framed by a red dotted line in (a). Measuring method of the height caused by crystal grain deformation.
Fig. 4. (a) Changes in R a during cyclic loading. (b) Changes in W a during cyclic loading. Measurement results of surface texture parameters.
Figs. 7 and 8 show the surface observation of the etched specimen before and after cyclic loading. Grains are visible on the etched surface before cyclic loading (Figs. 7(a) and 8(a)). Figs. 7(b), 7(c), 8(b) and 8(c) show the contour images of the surfaces fatigued under ∆ε = 4% and 2%: (b) at UF = 0.2 and (c) at UF = 0.6. Black lines trace the grain boundaries to define the correspondence between the positions of surface reliefs and the grains. After applying cyclic loading until UF = 0.2, the convex parts (the red squares in Figs. 7(b) and 8(b)) and the concave parts (the blue dotted squares in Figs. 7(b) and 8(b)) were observed on the surface. After more cyclic loading (Figs. 7(c) and 8(c)), these surface reliefs developed further: the convex parts rose higher and the concave parts sank deeper; however, their positions on the surface did not change. These convex and concave parts formed near grain boundaries or within grains, and this observation showed that grain deformation caused the formation of these large surface reliefs.
Fig. 9. Measurement results of H caused by grain deformation.
Fig. 10. Relationship between H and W a .
4,257.4
2018-01-01T00:00:00.000
[ "Materials Science" ]
Laboratory networking systems to tackle the spread of monkeypox virus – a need of the hour The recent spike of a rare zoonotic infection caused by the monkeypox virus is gaining global attention due to its high transmissibility and fear of stigmatization. Without proper measures to prevent the spread of virus, monkeypox could potentially set the stage for another widespread outbreak leading to worldwide health calamity as witness during the coronavirus disease 2019 pandemic. As a lesson learnt, it is now evident that rapid screening and diagnostic testing is the most ef fi cient approach toward containment of any disease outbreak [1] . We take this opportunity to emphasize on measures to contain the transmission of the virus and possible methods to improve the detection and management of monkeypox disease through improved laboratory networking. The Quest Diagnostics to increase the accessibility and capacity of testing nationwide thereby reducing the cumbersome in local laboratories [3] . Also, the Centres for Disease Control and Prevention ensure that the Laboratory Response Network that was established in collaboration with other public health officials is equipped with diagnostic facilities to detect the monkeypox virus belonging to the genus Orthopoxvirus. The Laboratory Response Network was set up with the goal of forming a stable network across the nation to effectively and rapidly share diagnostic tools, interpretation of results, training of laboratory technicians, and reporting of any errors through critical evaluation and communication networks during regular as well as emergency health crisis [4] . Such an intricate laboratory networking system is crucial to distribute highly confidential data and reduce the incidence of faulty procedures and incorrect test results. In India, the central government has established a network consisting of a total of 15 Virus Research and Diagnostic Laboratories across the country in 13 states to track the prevalence of monkeypox and promote rapid testing facilities [5] . In addition to this, scientists from the ICMR National Institute of Virology in Pune are offering expert training to fellow members from different nations on detection, clinical manifestations, case definitions, collection of samples, handling of samples, and laboratory tool usage to improve their capacity to detect monkeypox cases and screen the suspected population in order to mitigate the transmission of the virus locally and travel-related spread [6] . Despite these initiatives, in a country with 1.4 billion people, 15 network laboratories is certainly suboptimal and requires more vigorous surveillance and diagnostic modalities. In this regard, the Government of India has decided to enhance the monitoring and diagnostic centres in the country at the level of hospitals, entry points, and local communities, along with meticulous contact tracing and screening of suspected individuals [7] . Throughout the world, monkeypox disease is raising concern, and rapid screening and testing is the most efficient step toward curbing the disease's outbreak. Not only symptomatic but also asymptomatic patients and close contacts should also be rapidly screened for the monkeypox virus. This mandates a well-organized and rigorous laboratory network system across the world to collect, transport, test, and report the result of the sample. Laboratory networking functions beyond the scope of testing. 
This complex and vigilant network system serves as a platform for providing emergency response, training of laboratory personnel, communication across different testing centres, surveillance of laboratory data, and diligent management of patient information [8] . Thus, the establishment and maintenance of such robust laboratory networks is highly critical to improving the public health system of a nation or state and is a step toward enhancing the quality of health care service and health data management across the globe. Ethical approval Not applicable. Sources of funding Not funded by any funding agency. Conflicts of interest disclosure No conflict of interest. Research registration unique identifying number (UIN) Not Applicable Guarantor Dr Surapaneni K. Mohan, corresponding author. Data statement This correspondence is based exclusively on resources that are publicly available on the internet and duly cited in the 'References' section. No primary data was generated and reported in this manuscript. Therefore, data has not become available to any academic repository.
942.2
2023-02-01T00:00:00.000
[ "Medicine", "Environmental Science", "Engineering" ]
Syntax-Guided Controlled Generation of Paraphrases Given a sentence (e.g., “I like mangoes”) and a constraint (e.g., sentiment flip), the goal of controlled text generation is to produce a sentence that adapts the input sentence to meet the requirements of the constraint (e.g., “I hate mangoes”). Going beyond such simple constraints, recent work has started exploring the incorporation of complex syntactic-guidance as constraints in the task of controlled paraphrase generation. In these methods, syntactic-guidance is sourced from a separate exemplar sentence. However, these prior works have only utilized limited syntactic information available in the parse tree of the exemplar sentence. We address this limitation in the paper and propose Syntax Guided Controlled Paraphraser (SGCP), an end-to-end framework for syntactic paraphrase generation. We find that Sgcp can generate syntax-conforming sentences while not compromising on relevance. We perform extensive automated and human evaluations over multiple real-world English language datasets to demonstrate the efficacy of Sgcp over state-of-the-art baselines. To drive future research, we have made Sgcp’s source code available. 1 Introduction Controlled text generation is the task of producing a sequence of coherent words based on given constraints. These constraints can range from simple attributes like tense, sentiment polarity and wordreordering (Hu et al., 2017;Shen et al., 2017;Yang et al., 2018) to more complex syntactic information. For example, given a sentence "The movie is awful!" and a simple constraint like flip sentiment * This research was conducted during the author's internship at Indian Institute of Science. 1 https://github.com/malllabiisc/SGCP SOURCE -how do i predict the stock market ? EXEMPLAR -can a brain transplant be done ? SCPN how can the stock and start ? CGEN -can the stock market actually happen ? SGCP (Ours) -can i predict the stock market ? SOURCE what are some of the mobile apps you ca n't live without and why ? EXEMPLAR -which is the best resume you have come across ? SCPN what are the best ways to lose weight ? CGEN -which is the best mobile app you ca n't ? SGCP (Ours) -which is the best app you ca n't live without and why ? (Iyyer et al., 2018), CGEN (Chen et al., 2019a), SGCP (Ours). We observe that SGCP is able to generate syntax conforming paraphrases without compromising much on relevance. to positive, a controlled text generator is expected to produce the sentence "The movie is fantastic!". These constraints are important in not only providing information about what to say but also how to say it. Without any constraint, the ubiquitous sequence-to-sequence neural models often tend to produce degenerate outputs and favour generic utterances (Vinyals and Le, 2015;Li et al., 2016). While simple attributes are helpful in addressing what to say, they provide very little information about how to say it. Syntactic control over generation helps in filling this gap by providing that missing information. Incorporating complex syntactic information has shown promising results in neural machine translation (Stahlberg et al., 2016;Aharoni and Goldberg, 2017;Yang et al., 2019), data-to-text generation (Peng et al., 2019), abstractive textsummarization (Cao et al., 2018) and adversarial text generation (Iyyer et al., 2018). 
Additionally, recent work (Iyyer et al., 2018;Kumar et al., 2019) has shown that augmenting lexical and syntactical variations in the training set can help in building Figure 1: Architecture of SGCP (proposed method). SGCP aims to paraphrase an input sentence, while conforming to the syntax of an exemplar sentence (provided along with the input). The input sentence is encoded using the Sentence Encoder (Section 3.2) to obtain a semantic signal c t . The Syntactic Encoder (Section 3.3) takes a constituency parse tree (pruned at height H) of the exemplar sentence as an input, and produces representations for all the nodes in the pruned tree. Once both of these are encoded, the Syntactic Paraphrase Decoder (Section 3.4) uses pointer-generator network, and at each time step takes the semantic signal c t , the decoder recurrent state s t , embedding of the previous token and syntactic signal h Y t to generate a new token. Note that the syntactic signal remains the same for each token in a span (shown in figure above curly braces; please see Figure 2 for more details). The gray shaded region (not part of the model) illustrates a qualitative comparison of the exemplar syntax tree and the syntax tree obtained from the generated paraphrase. Please refer Section 3 for details. better performing and more robust models. In this paper, we focus on the task of syntactically controlled paraphrase generation, i.e., given an input sentence and a syntactic exemplar, produce a sentence which conforms to the syntax of the exemplar while retaining the meaning of the original input sentence. While syntactically controlled generation of paraphrases finds applications in multiple domains like data-augmentation and text passivization, we highlight its importance in the particular task of Text simplification. As pointed out in Siddharthan (2014), depending on the literacy skill of an individual, certain syntactical forms of English sentences are easier to comprehend than others. As an example consider the following two sentences: S1 Because it is raining today, you should carry an umbrella. S2 You should carry an umbrella today, because it is raining. Connectives that permit pre-posed adverbial clauses have been found to be difficult for third to fifth grade readers, even when the order of mention coincides with the causal (and temporal) order (Anderson and Davison, 1986;Levy, 2003). Hence, they prefer sentence S2. However, various other studies (Clark and Clark, 1968;Katz and Brent, 1968;Irwin, 1980) have suggested that for older school children, college students and adults, comprehension is better for the cause-effect presentation, hence sentence S1. Thus, modifying a sentence, syntactically, would help in better comprehension based on literacy skills. Prior work in syntactically controlled paraphrase generation addressed this task by conditioning the semantic input on either the features learnt from a linearized constituency-based parse tree (Iyyer et al., 2018), or the latent syntactic information (Chen et al., 2019a) learnt from exemplars through variational auto-encoders. Linearizing parse trees, typically, result in loss of essen-tial dependency information. On the other hand, as noted in (Shi et al., 2016), an auto-encoder based approach might not offer rich enough syntactic information as guaranteed by actual constituency parse trees. Moreover, as noted in Chen et al. (2019a), SCPN (Iyyer et al., 2018) and CGEN (Chen et al., 2019a) tend to generate sentences of the same length as the exemplar. 
This is an undesirable characteristic because it often results in producing sentences that end abruptly, thereby compromising on grammaticality and semantics. Please see Table 1 for sample generations using each of the models. To address these gaps, we propose Syntax Guided Controlled Paraphraser (SGCP) which uses full exemplar syntactic tree information. Additionally, our model provides an easy mechanism to incorporate different levels of syntactic control (granularity) based on the height of the tree being considered. The decoder in our framework is augmented with rich enough syntactical information to be able to produce syntax conforming sentences while not losing out on semantics and grammaticality. The main contributions of this work are as follows: 1. We propose Syntax Guided Controlled Paraphraser (SGCP), an end-to-end model to generate syntactically controlled paraphrases at different levels of granularity using a parsed exemplar. 2. We provide a new decoding mechanism to incorporate syntactic information from the exemplar sentence's syntactic parse. 3. We provide a dataset formed from Quora Question Pairs 2 for evaluating the models. We also perform extensive experiments to demonstrate the efficacy of our model using multiple automated metrics as well as human evaluations. Related Work Controllable Text Generation is an important problem in NLP which has received significant attention in recent times. Prior works include generating text using models conditioned on attributes like formality, sentiment or tense (Hu et al., 2017;Shen et al., 2017;Yang et al., 2018) as well as on syntactical templates (Iyyer et al., 2018;Chen et al., 2019a). These systems find applications in adversarial sample generation (Iyyer et al., 2018), text summarization and table-to-text generation (Peng et al., 2019). While achieving state-ofthe-art in their respective domains, these systems typically rely on a known finite set of attributes thereby making them quite restrictive in terms of the styles they can offer. Paraphrase generation. While generation of paraphrases has been addressed in the past using traditional methods (McKeown, 1983;Barzilay and Lee, 2003;Quirk et al., 2004;Hassan et al., 2007;Zhao et al., 2008;Madnani and Dorr, 2010;Wubben et al., 2010), they have recently been superseded by deep learning-based approaches (Prakash et al., 2016;Gupta et al., 2018;Li et al., 2019Kumar et al., 2019). The primary task of all these methods (Prakash et al., 2016;Gupta et al., 2018; is to generate the most semantically similar sentence and they typically rely on beam search to obtain any kind of lexical diversity. Kumar et al. (2019) try to tackle the problem of achieving lexical, and limited syntactical diversity using submodular optimization but do not provide any syntactic control over the type of utterance that might be desired. These methods are therefore restrictive in terms of the syntactical diversity that they can offer. Controlled Paraphrase Generation. Our task is similar in spirit to Iyyer et al. (2018); Chen et al. (2019a), which also deals with the task of syntactic paraphrase generation. However, the approach taken by them is different from ours in at least two aspects. Firstly, SCPN (Iyyer et al., 2018) uses attention (Bahdanau et al., 2014) based pointergenerator network (See et al., 2017) to encode input sentences and a linearised constituency tree to produce paraphrases. Due to the linearization of syntactic tree, a lot of dependency-based information is generally lost. 
Our model, instead, directly encodes the tree structure to produce a paraphrase. Secondly, the inference (or generation) process in SCPN is computationally very expensive, since it involves a two-stage generation process. In the first stage, they generate full parse trees from incomplete templates, and then from full parse trees to final generations. In contrast, the inference in our method involves a single-stage process, wherein our model takes as input a semantic source, a syntactic tree and the level of syntactic style that needs to be transferred, to obtain the generations. Additionally, we also observed that the model does not perform well in low resource settings. This, again, can be attributed to the compounding implicit noise in the training due to linearised trees and generation of full linearised trees before obtaining the final paraphrases. Chen et al. (2019a) propose a syntactic exemplar-based method for controlled paraphrase generation using an approach based on latent variable probabilistic modeling, neural variational inference, and multi-task learning. This, in principle, is very similar to Chen et al. (2019b). As opposed to our model which provides different levels of syntactic control of the exemplar-based generation, this approach is restrictive in terms of the flexibility it can offer. Also, as noted in Shi et al. (2016), an auto-encoder based approach might not offer rich enough syntactic information as offered by actual constituency parse trees. Additionally, VAEs (Kingma and Welling, 2014) are generally unstable and harder to train (Bowman et al., 2016;Gupta et al., 2018) than seq2seq based approaches. SGCP: Proposed Method In this section, we describe the inputs and various architectural components, essential for building SGCP, an end-to-end trainable model. Our model, as shown in Figure 1, comprises a sentence encoder (3.2), syntactic tree encoder (3.3), and a syntactic-paraphrase-decoder (3.4). Inputs Given an input sentence X and a syntactic exemplar Y , our goal is to generate a sentence Z that conforms to the syntax of Y while retaining the meaning of X. While the semantic encoder (Section 3.2) works on sequence of input tokens, the syntactic encoder (Section 3.3) operates on constituency-based parse trees. We parse the syntactic exemplar Y 3 to obtain its constituency-based parse tree. The leaf nodes of the constituency-based parse tree consists of token for the sentence Y. These tokens, in some sense, carry the semantic information of sentence Y, which we do not need for generating paraphrases. In order to prevent any meaning propagation from exemplar sentence Y into the generation, we remove these leaf/terminal nodes from its constituency parse. The tree thus obtained is denoted as C Y . The syntactic encoder, additionally, takes as input H, which governs the level of syntactic control needed to be induced. The utility of H will be described in Section 3.3. Semantic Encoder The semantic encoder, a multi-layered Gated Recurrent Unit (GRU), receives tokenized sentence X = {x 1 , . . . , x T X } as input and computes the contextualized hidden state representation h X t for each token using: where e(x t ) represents the learnable embedding of the token x t and t ∈ {1, . . . , T X } . Note that we use byte-pair encoding (Sennrich et al., 2016) for word/token segmentation. Syntactic Encoder This encoder provides the necessary syntactic guidance for the generation of paraphrases. 
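A minimal sketch of the input preparation just described: the exemplar's constituency parse is stripped of its terminal word tokens (giving the syntax-only tree C_Y) and pruned to a chosen height H. The use of nltk.Tree and the bracketed parse string are illustrative assumptions, not the authors' code; any constituency parser producing bracketed trees would do.

```python
from nltk import Tree

def strip_tokens(tree):
    """Copy of a constituency parse with the terminal word tokens removed,
    so that only syntactic category labels remain (the tree C_Y)."""
    children = [strip_tokens(child) for child in tree if isinstance(child, Tree)]
    return Tree(tree.label(), children)

def prune_to_height(tree, h):
    """Prune a tree so that no node lies deeper than height h (root = level 1)."""
    if h <= 1:
        return Tree(tree.label(), [])
    return Tree(tree.label(), [prune_to_height(child, h - 1) for child in tree])

# Hypothetical exemplar parse (as produced by an external constituency parser):
parse = Tree.fromstring(
    "(SBARQ (WHNP (WP what)) (SQ (VBZ is) (NP (DT the) (JJ best) (NN language))) (. ?))")
c_y = strip_tokens(parse)          # syntax-only exemplar tree
c_y_h3 = prune_to_height(c_y, 3)   # pruned at H = 3: terminals WP, VBZ, NP, .
```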
Formally, let constituency tree where V is the set of nodes, E the set of edges and Y the labels associated with each node. We calculate the hidden-state representation h Y v of each node v ∈ V using the hidden-state representation of its parent node pa(v) and the embedding associated with its label y v as follows: (2) where e(y v ) is the embedding of the node label y v , and W pa , W v , b v are learnable parameters. This approach can be considered similar to TreeLSTM (Tai et al., 2015). We use GeLU activation function (Hendrycks and Gimpel, 2016) rather than the standard tanh or relu, because of superior empirical performance. As indicated in Section 3.1, syntactic encoder takes as input the height H, which governs the level of syntactic control. We randomly prune the tree C Y to height H ∈ {3, . . . , H max }, where H max is the height of the full constituency tree C Y . As an example, in Figure 2b, we prune the constituency-based parse tree of the exemplar sentence, to height H = 3. The leaf nodes for this tree have the labels WP, VBZ, NP and <DOT>. While we calculate the hidden-state representation of all the nodes, only the terminal nodes are responsible for providing the syntactic signal to the decoder (Section 3.4). The constituency parse tree serves as an input to the syntactic encoder (Section 3.3). The first step is to remove the leaf nodes which contain meaning representative tokens (Here: What is the best language ...). H denotes the height to which the tree can be pruned and is an input to the model. Figure (a) shows the full constituency parse tree annotated with vector a for different heights. Figure (b) shows the same tree pruned at height H = 3 with its corresponding a vector. The vector a serves as an signalling vector (Section 3.4.2) which helps in deciding the syntactic signal to be passed on to the decoder. Please refer Section 3 for details. We maintain a queue L Y H of such terminal node representations where elements are inserted from left to right for a given H. Specifically, for the particular example given in Figure 2b, We emphasize the fact that the length of the queue |L Y H | is a function of height H. Syntactic Paraphrase Decoder Having obtained the semantic and syntactic representations, the decoder is tasked with the generation of syntactic paraphrases. This can be modeled as finding the best Z = Z * that maximizes the probability P(Z|X, Y ), which can further be factorized as: where T Z is the maximum length up to which decoding is required. In the subsequent sections, we use t to denote the decoder time step. Using Semantic Information At each decoder time step t, the attention distribution α t is calculated over the encoder hidden states h X i , obtained using Equation 1, as: where s t is the decoder cell-state and v, W h , W s , b attn are learnable parameters. The attention distribution provides a way to jointly-align and train sequence to sequence models by producing a weighted sum of the semantic encoder hidden states, known as context-vector c t given by: c t serves as the semantic signal which is essential for generating meaning preserving sentences. Using Syntactic Information During training, each terminal node in the tree C Y , pruned at H, is equipped with information about the span of words it needs to generate. At each time step t, only one terminal node representation h Y v ∈ L Y H is responsible for providing the syntactic signal which we call h Y t . 
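The displayed equation for the syntactic encoder did not survive extraction; the sketch below therefore states its assumed form explicitly, consistent with the parameters named in the text (a GeLU over an affine combination of the parent's hidden state and the node-label embedding), and shows how the left-to-right queue of terminal-node representations L_H^Y could be collected. The Node class, parameter matrices, and embedding table are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def gelu(x):
    """tanh approximation of the GeLU activation."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def encode_tree(node, parent_state, W_pa, W_v, b_v, embed, terminals):
    """Top-down encoding of the pruned, syntax-only exemplar tree.

    Assumed form (the paper's Equation 2 is missing from this extraction):
        h_v = GeLU(W_pa @ h_{pa(v)} + W_v @ e(y_v) + b_v)
    Terminal-node representations are appended left to right, forming the
    queue L_H^Y that later supplies the syntactic signal to the decoder.
    """
    h_v = gelu(W_pa @ parent_state + W_v @ embed[node.label] + b_v)
    if not node.children:
        terminals.append(h_v)
    for child in node.children:
        encode_tree(child, h_v, W_pa, W_v, b_v, embed, terminals)
    return h_v
```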
This hidden-state representation to be used is governed through an signalling vector a = (a 1 , . . . , a Tz ), where each a i ∈ {0, 1}. 0 indicates that the decoder should keep on using the same hidden-representation h Y v ∈ L Y H that is currently being used, and 1 indicates that the next element (hidden-representation) in the queue L Y H should be used for decoding. The utility of a can be best understood through Figure 2b. Consider the syntactic tree pruned at height H = 3. For this example, a = (1, 1, 1, 0, 0, 0, 0, 0, 1) a i = 1 provides a signal to pop an element from the queue L Y H while a i = 0 provides a signal to keep on using the last popped element. This element is then used to guide the decoder syntactically by providing a signal in the form of hiddenstate representation (Equation 8). Specifically, in this example, the a 1 = 1 signals L Y H to pop h Y WP to provide syntactic guidance to the decoder for generating the first token. a 2 = 1 signals L Y H to pop h Y VBZ to provide syntactic guidance to the decoder for generating the second token. a 3 = 1 helps in obtaining h Y NP from L Y H to provide guidance to generate the third token. As described earlier, a 4 , . . . , a 8 = 0 indicate that the same representation h Y NP should be used for syntactically guiding tokens z 4 , . . . , z 8 . Finally a 9 = 1 helps in retrieving h Y <DOT> for guiding decoder to generate token z 9 . Note that |L Y H | = Tz i=1 a i While a is provided to the model during training, this information might not be available during inference. Providing a during generation makes the model restrictive and might result in producing ungrammatical sentences. SGCP is tasked to learn a proxy for the signalling vector a, using transition probability vector p. At each time step t, we calculate p t ∈ (0, 1) which determines the probability of changing the syntactic signal using: where pop removes and returns the next element in the queue, s t is the decoder state, and e(z t ) is the embedding of the input token at time t during decoding. Overall The semantic signal c t , together with decoder state s t , embedding of the input token e(z t ) and the syntactic signal h Y t is fed through a GRU followed by softmax of the output to produce a vocabulary distribution as: where [; ] represents concatenation of constituent elements, and W, b are trainable parameters. We augment this with the copying mechanism as in the pointer-generator network (See et al., 2017). Usage of such a mechanism offers a probability distribution over the extended vocabulary (the union of vocabulary words and words present in the source sentence) as follows: where w c , w s , w x and b gen are learnable parameters, e(z t ) is the input token embedding to the decoder at time step t and α t i is the element corresponding to the i th co-ordinate in the attention distribution as defined in Equation 4 The overall objective can be obtained by taking negative log-likelihood of the distributions obtained in Equation 6 and Equation 9. where a t is the t th element of the vector a. Experiments Our experiments are geared towards answering the following questions: Q1. Is SGCP able to generate syntax conforming sentences without losing out on meaning? (Section 5. Based on these questions, we outline the methods compared (Section 4.1), along with the datasets (Section 4.2) used, evaluation criteria (Section 4.3) and the experimental setup (Section 4.4). Methods Compared As in Chen et al. 
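The displayed equations for the transition probability and for the output distribution are likewise missing from this extraction. The sketch below assumes the standard pointer-generator formulation (See et al., 2017) with the parameter names given in the text (w_c, w_s, w_x, b_gen), a sigmoid transition probability over the decoder state, previous-token embedding and current syntactic signal, and a single affine layer plus softmax for the vocabulary distribution; all of these specific forms are assumptions, and the out-of-vocabulary extension of the copy distribution is omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def transition_probability(s_t, z_emb, h_syn, w_p, b_p):
    """Assumed form of p_t: probability of popping the next terminal from L_H^Y."""
    return sigmoid(w_p @ np.concatenate([s_t, z_emb, h_syn]) + b_p)

def output_distribution(s_t, c_t, z_emb, h_syn, attn, src_ids,
                        W_out, b_out, w_c, w_s, w_x, b_gen):
    """One pointer-generator output step under the assumptions stated above.

    s_t: decoder state, c_t: semantic context, z_emb: previous-token embedding,
    h_syn: current syntactic signal h_t^Y, attn: attention over source tokens,
    src_ids: vocabulary ids of the source tokens.
    """
    features = np.concatenate([c_t, s_t, z_emb, h_syn])
    p_vocab = softmax(W_out @ features + b_out)                   # generation distribution
    p_gen = sigmoid(w_c @ c_t + w_s @ s_t + w_x @ z_emb + b_gen)  # generate-vs-copy gate
    p = p_gen * p_vocab.copy()
    for alpha_i, w in zip(attn, src_ids):                         # copy source tokens
        p[w] += (1.0 - p_gen) * alpha_i
    return p
```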
(2019a), we first highlight the results of the two direct return-input baselines. 1. Source-as-Output: Baseline where the output is the semantic input. 2. Exemplar-as-Output: Baseline where the output is the syntactic exemplar. We compare the following competitive methods: 3. SCPN (Iyyer et al., 2018) is a sequence-tosequence based model comprising two encoders built with LSTM (Hochreiter and Schmidhuber, 1997) to encode semantics and syntax respectively. Once the encoding is obtained, it serves as an input to the LSTM based decoder which is augmented with softattention (Bahdanau et al., 2014) over encoded states as well as a copying mechanism (See et al., 2017) to deal with out-ofvocabulary tokens. 4 4. CGEN (Chen et al., 2019a) is a VAE (Kingma and Welling, 2014) model with two encoders to project semantic input and syntactic input to a latent space. They obtain a syntactic embedding from one encoder, using a standard Gaussian prior. To obtain the semantic representation, they use von Mises-Fisher prior, which can be thought of as a Gaussian distribution on a hypersphere. They train the model using a multi-task paradigm, incorporating paraphrase generation loss and word position loss. We considered their best model, VGVAE + LC + WN + WPL, which incorporates the above objectives. SGCP (Section 3) is a sequence-and-tree-tosequence based model which encodes semantics and tree-level syntax to produce paraphrases. It uses a GRU (Chung et al., 2014) based decoder with soft-attention on semantic encodings and a begin of phrase (bop) gate to select a leaf node in the exemplar syntax tree. We compare the following two variants of SGCP: (a) SGCP-F : Uses full constituency parse tree information of the exemplar for generating paraphrases. 4 Note that the results for SCPN differ from the ones shown in (Iyyer et al., 2018). This is because the dataset used in (Iyyer et al., 2018) is atleast 50 times larger than the largest dataset (ParaNMT-small) in this work. (a) SGCP-R : SGCP can produce multiple paraphrases by pruning the exemplar tree at various heights. This variant first generates 5 candidate generations, corresponding to 5 different heights of the exemplar tree namely {H max , H max − 1, H max − 2, H max − 3, H max − 4}, for each (source, exemplar) pair. From these candidates, the one the highest ROUGE-1 score with the source sentence, is selected as the final generation. Note that, except for the return-input baselines, all methods use beam search during inference. Datasets We train the models and evaluate them on the following datasets: (1) ParaNMT-small (Chen et al., 2019a) contains 500K sentence-paraphrase pairs for training, and 1300 manually labeled sentence-exemplarreference which is further split into 800 test data points and 500 dev. data points respectively. As in Chen et al. (2019a), our model uses only (sentence, paraphrase) during training. The paraphrase itself serves as the exemplar input during training. This dataset is a subset of the original ParaNMT-50M dataset . ParaNMT-50M is a data set generated automatically through backtranslation of original English sentences. It is inherently noisy due to imperfect neural machine translation quality with many sentences being non-grammatical and some even being non-English sentences. Because of such noisy data points, it is optimistic to assume that the corresponding constituency parse tree would be well aligned. 
To that end, we propose to use the following additional dataset which is more well-formed and has more human intervention than the ParaNMT-50M dataset. (2) QQP-Pos: The original Quora Question Pairs (QQP) dataset contains about 400K sentence pairs labeled positive if they are duplicates of each other and negative otherwise. The dataset is composed of about 150K positive and 250K negative pairs. We select those positive pairs which contain both sentences with a maximum token length of 30, leaving us with~146K pairs. We call this dataset as QQP-Pos. Similar to ParaNMT-small, we use only the sentence-paraphrase pairs as training set and sentence-exemplar-reference triples for testing and validation. We randomly choose 140K sentence-paraphrase pairs as the training set T train , and the remaining 6K pairs T eval are used to form the evaluation set E. Additionally, let T eset = {{X, Z} : (X, Z) ∈ T eval }. Note that T eset is a set of sentences while T eval is a set of sentence-paraphrase pairs. Let E = φ be the initial evaluation set. For selecting exemplar for each each sentence-paraphrase pair (X, Z) ∈ T eval , we adopt the following procedure: Step 1: For a given (X, Z) ∈ T eval , construct an exemplar candidate set C = T eset − {X, Z}. |C| ≈ 12, 000. Step 2: Retain only those sentences C ∈ C whose sentence length (= number of tokens) differ by at most 2 when compared to the paraphrase Z. This is done since sentences with similar constituency-based parse tree structures tend to have similar token lengths. Step 3: Remove those candidates C ∈ C, which are very similar to the source sentence X, i.e. BLEU(X, C) > 0.6. Step 4: From the remaining instances in C, choose that sentence C as the exemplar Y which has the least Tree-Edit distance with the paraphrase Z of the selected pair i.e. Y = argmin C∈C TED(Z, C). This ensures that the constituency-based parse tree of the exemplar Y is quite similar to that of Z, in terms of Tree-Edit distance. Step 5: E := E ∪ (X, Y, Z) Step 6: Repeat procedure for all other pairs in T eval . From the obtained evaluation set E, we randomly choose 3K triplets for the test set T test , and remaining 3K for the validation set V. Evaluation It should be noted that there is no single fullyreliable metric for evaluating syntactic paraphrase generation. Therefore, we evaluate on the following metrics to showcase the efficacy of syntactic paraphrasing models. (ii) Syntactic Transfer: We evaluate the syntactic transfer using Tree-edit distance (Zhang and Shasha, 1989) between the parse trees of: (a) the generated and the syntactic exemplar in the test set -TED-E (b) the generated and the reference paraphrase in the test set -TED-R (iii) Model-based evaluation: Since our goal is to generate paraphrases of the input sentences, we need some measure to determine if the generations indeed convey the same meaning as the original text. To achieve this, we adopt a model-based evaluation metric as used by Shen et al. (2017) for Text Style Transfer and Isola et al. (2017) for Image Transfer. Specifically, classifiers are trained on the task of Paraphrase Detection and then used as Oracles to evaluate the generations of our model and the baselines. We fine-tune two RoBERTa based sentence pair classifiers, one on Quora Question Pairs (Classifier-1) and other on ParaNMT + PAWS 5 datasets (Classifier-2) which achieve accuracies of 90.2% and 94.0% on their respective test sets 6 . Once trained, we use Classifier-1 to evaluate generations on QQP-Pos and Classifier-2 on ParaNMT-small. 
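The QQP-Pos exemplar-selection procedure (Steps 1 to 5 above) is essentially a filter-then-argmin loop. The sketch below restates it with `bleu` and `ted` as stand-in callables for sentence-level BLEU and tree-edit distance between constituency parses; the thresholds follow the text, and the helper names are hypothetical.

```python
def select_exemplar(x, z, t_eset, bleu, ted, max_len_diff=2, bleu_cutoff=0.6):
    """Select an exemplar Y for a (source X, paraphrase Z) pair.

    t_eset : set of candidate sentences.
    bleu   : callable(sentence_a, sentence_b) -> BLEU score in [0, 1].
    ted    : callable(sentence_a, sentence_b) -> tree-edit distance between parses.
    """
    # Step 1: candidate set, excluding the pair itself
    candidates = [c for c in t_eset if c not in (x, z)]
    # Step 2: token-length filter relative to the paraphrase Z
    candidates = [c for c in candidates
                  if abs(len(c.split()) - len(z.split())) <= max_len_diff]
    # Step 3: drop candidates too similar to the source X
    candidates = [c for c in candidates if bleu(x, c) <= bleu_cutoff]
    # Step 4: pick the candidate whose parse is closest (in TED) to that of Z
    return min(candidates, key=lambda c: ted(z, c)) if candidates else None
```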
We first generate syntactic paraphrases using all the models (Section 4.1) on the test splits of QQP-Pos and ParaNMT-small datasets. We then pair the source sentence with their corresponding generated paraphrases and send them as input to the classifiers. The Paraphrase Detection score, denoted as PDS in Table 2, is defined as, the ratio of the number of generations predicted as paraphrases of their corresponding source Table 2: Results on QQP and ParaNMT-small dataset. Higher↑ BLEU, METEOR, ROUGE and PDS is better whereas lower↓ TED score is better. SGCP-R selects the best candidate out of many, resulting in performance boost for semantic preservation (shown in box). We bold the statistically significant results of SGCP-F, only, for a fair comparison with the baselines. Note that Source-as-Output, and Exemplaras-Output are only dataset quality indicators and not the competitive baselines. Please see Section 5 for details. sentences by the classifier to the total number of generations. Human Evaluation. While TED is sufficient to highlight syntactic transfer, there has been some scepticism regarding automated metrics for paraphrase quality (Reiter, 2018). To address this issue, we perform human evaluation on 100 randomly selected data points from the test set. In the evaluation, 3 judges (nonresearchers proficient in the English language) were asked to assign scores to generated sentences based on the semantic similarity with the given source sentence. The annotators were shown a source sentence and the corresponding outputs of the systems in random order. The scores ranged from 1 (doesn't capture meaning at all) to 4 (perfectly captures the meaning of the source sentence). Setup (a) Pre-processing. Since our model needs access to constituency parse trees, we tokenize and parse all our data points using the fully parallelizable Stanford CoreNLP Parser (Manning et al., 2014) to obtain their respective parse trees. This is done prior to training in order to prevent any additional computational costs that might be incurred because of repeated parsing of the same data points during different epochs. (b) Implementation details. We train both our models using the Adam Optimizer (Kingma and Ba, 2014) with an initial learning rate of 7e-5. We use a bidirectional 3-layered GRU for encoding the tokenized semantic input and a standard pointer-generator network with GRU for decoding. The token embedding is learnable with dimension 300. To reduce the training complexity of the model, the maximum sequence length is kept at 60. The vocabulary size is kept at 24K for QQP and 50K for ParaNMT-small. SGCP needs access to the level of syntactic granularity for decoding, depicted as H in Figure 2. During training, we keep on varying it randomly from 3 to H max , changing it with each training epoch. This ensures that our model is able to generalize because of an implicit regularization attained using this procedure. At each time-step of the decoding process, we keep a teacher forcing ratio of 0.9. Semantic Preservation and Syntactic transfer 1. Automated Metrics: As can be observed in Table 2, our method(s) (SGCP-F/R (Section 4.1)) are able to outperform the existing baselines on Source what should be done to get rid of laziness ? Template Exemplar how can i manage my anger ? SCPN (Iyyer et al., 2018) how can i get rid ? CGEN (Chen et al., 2019a) how can i get rid of ? SGCP-F (Ours) how can i stop my laziness ? SGCP-R (Ours) how do i get rid of laziness ? 
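Two of the evaluation quantities used here are easy to restate compactly: the Paraphrase Detection Score (fraction of source-generation pairs labelled as paraphrases by the oracle classifier) and the paired permutation test used for significance. The classifier is a stand-in callable, and the permutation-test sketch assumes per-example metric scores for two systems; neither is the authors' exact implementation.

```python
import random

def paraphrase_detection_score(sources, generations, classifier):
    """PDS: fraction of (source, generation) pairs the oracle classifier
    labels as paraphrases. `classifier(src, gen)` returns True or False."""
    labels = [classifier(src, gen) for src, gen in zip(sources, generations)]
    return sum(labels) / len(labels)

def paired_permutation_test(scores_a, scores_b, n_resamples=10000, seed=0):
    """Two-sided paired permutation (Pitman) test on per-example scores."""
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) / n - sum(scores_b) / n)
    hits = 0
    for _ in range(n_resamples):
        perm_a, perm_b = [], []
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:        # randomly swap the paired scores
                a, b = b, a
            perm_a.append(a)
            perm_b.append(b)
        stat = abs(sum(perm_a) / n - sum(perm_b) / n)
        if stat >= observed:
            hits += 1
    return hits / n_resamples             # approximate p-value
```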
Source what books should entrepreneurs read on entrepreneurship ? Template Exemplar what is the best programming language for beginners to learn ? SCPN (Iyyer et al., 2018) what are the best books books to read to read ? CGEN (Chen et al., 2019a) what 's the best book for entrepreneurs read to entrepreneurs ? SGCP-F (Ours) what is a best book idea that entrepreneurs to read ? SGCP-R (Ours) what is a good book that entrepreneurs should read ? Source how do i get on the board of directors of a non profit or a for profit organisation ? Template Exemplar what is the best way to travel around the world for free ? SCPN (Iyyer et al., 2018) what is the best way to prepare for a girl of a ? CGEN (Chen et al., 2019a) what is the best way to get a non profit on directors ? SGCP-F (Ours) what is the best way to get on the board of directors ? SGCP-R (Ours) what is the best way to get on the board of directors of a non profit or a for profit organisation ? Table 3: Sample generations of the competitive models. Please refer to Section 5.5 for details both the datasets. Source-as-Output is independent of the exemplar sentence being used and since a sentence is a paraphrase of itself, the paraphrastic scores are generally high while the syntactic scores are below par. An opposite is true for Exemplar-as-Output. These baselines also serve as dataset quality indicators. It can be seen that source is semantically similar while being syntactically different from target sentence whereas the opposite is true when exemplar is compared to target sentences. Additionally, source sentences are syntactically and semantically different from exemplar sentences as can be observed from TED-E and PDS scores. This helps in showing that the dataset has rich enough syntactic diversity to learn from. Through TED-E scores it can be seen that SGCP-F is able to adhere to the syntax of the exemplar template to a much larger degree than the baseline models. This verifies that our model is able to generate meaning preserving sentences while conforming to the syntax of the exemplars when measured using standard metrics. It can also be seen that SGCP-R tends to perform better than SGCP-F in terms of paraphrastic scores while taking a hit on the syntactic scores. This makes sense, intuitively, because in some cases SGCP-R tends to select lower H values for syntactic granularity. This can also be observed from the example given in Table 6 where H = 6 is more favourable than H = 7, because of better meaning retention. Although CGEN performs close to our model in terms of BLEU, ROUGE and METEOR scores on ParaNMT-small dataset, its PDS is still much lower than that of our model, suggesting that our model is better at capturing the original meaning of the source sentence. In order to show that the results are not coincidental, we test the statistical significance of our model. We follow the non-parametric Pitman's permutation test (Dror et al., 2018) and observe that our model is statistically significant when the significance level (α) is taken to be 0.05. Note that this holds true for all metric on both the datasets except ROUGE-2 on ParaNMT-small. Table 4: A comparison of human evaluation scores for comparing quality of paraphrases generated using all models. Higher score is better. Please refer to Section 5.1 for details. Table 4 shows the results of human assessment. It can be seen that annotators, generally tend to rate SGCP-F and SGCP-R (Section 4.1) higher than the baseline models, thereby highlighting the efficacy of our models. 
This evaluation additionally shows that automated metrics are somewhat consistent with the human evaluation scores. As can been seen in Table 6, at height 4 the syntax tree provided to the model is not enough to generate the full sentence that captures the meaning of the original sentence. As we increase the height to 5, it is able to capture the semantics better by predicting some of in the sentence. We see that at heights 6 and 7 SGCP is able to capture both semantics and syntax of the source and exemplar respectively. However, as we provide the complete height of the tree i.e., 7, it further tries to follow the syntactic input more closely leading to sacrifice in the overall relevance since the original sentence is about pure substances and not a pure substance. It can be inferred from this example that since a source sentence and exemplar's syntax might not be fully compatible with each other, using the complete syntax tree can potentially lead to loss of relevance and grammaticality. Hence by choosing different levels of syntactic granularity, one can address the issue of compatibility to a certain extent. Table 5 shows sample generations of our model on multiple exemplars for a given source sentence. It can be observed that SGCP can generate high-quality outputs for a variety of different template exemplars even the ones which differ a lot from the original sentence in terms of their syntax. A particularly interesting exemplar is what is chromosomal mutation ? what are some examples ?. Here, SGCP is able to generate a sentence with two question marks while preserving the essence of the source sentence. It should also be noted that the exemplars used in Table 5, were selected manually from the test sets, considering only their qualitative compatibility with the source sentence. Unlike the procedure used for the creation of QQP-Pos dataset, the final paraphrases were not kept in hand while selecting the exemplars. In real-world settings, where a gold paraphrase won't be present, these results are indicative of the qualitative efficacy of our method. SGCP-R Analysis ROUGE based selection from the candidates favour paraphrases which have higher n-gram overlap with their respective source sentences, hence may capture source's meaning better. This hypothesis can be directly observed from the results in Table 2 and Table 4 where we see higher values on automated semantic and human evaluation scores. While this helps in getting better semantic generations, it tends to result in higher TED values. One possible reason is that, when provided with the complete tree, fine-grained information is available to the model for decoding and it forces the generations to adhere to the syntactic structure. In contrast, at lower heights, the model is provided with lesser syntactic information but equivalent semantic information. As can be seen from Table 7, SGCP not only incorporates the best aspects of both the prior models, namely SCPN and CGEN, but also utilizes the complete syntactic information obtained using the constituency-based parse trees of the exemplar. Qualitative Analysis From the generations in Table 3, it can be observed that our model is able to capture both, the semantics of the source text as well as the syntax of template. SCPN, evidently, can produce outputs with the template syntax, but it does so at the cost of semantics of the source sentence. This can also be verified from the results in Table 2 where SCPN performs poorly on PDS as compared to other models. 
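The SGCP-R selection step discussed above (choosing, among the candidates generated at different pruning heights, the one with the highest ROUGE-1 overlap with the source) can be sketched as follows. A simplified unigram ROUGE-1 F1 is implemented inline rather than assuming a particular package's API.

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Simplified unigram ROUGE-1 F1 between two whitespace-tokenised sentences."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def sgcp_r_select(source, candidates):
    """SGCP-R: among paraphrases generated at different heights H,
    return the one with the highest ROUGE-1 against the source sentence."""
    return max(candidates, key=lambda cand: rouge1_f(cand, source))
```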
Qualitative Analysis

From the generations in Table 3, it can be observed that our model is able to capture both the semantics of the source text and the syntax of the template. SCPN, evidently, can produce outputs with the template syntax, but it does so at the cost of the semantics of the source sentence. This can also be verified from the results in Table 2, where SCPN performs poorly on PDS as compared to the other models. In contrast, CGEN and SGCP retain much better semantic information, as is desirable. While generating sentences, CGEN often ends the sentence abruptly, as in example 1 in Table 3, truncating it with of as the penultimate token. The problem of abrupt endings due to insufficient syntactic input length was highlighted in Chen et al. (2019a), and we observe similar trends. SGCP, on the other hand, generates more relevant and grammatical sentences. Based on empirical evidence, SGCP alleviates this shortcoming, possibly due to dynamic syntactic control and decoding. This can be seen in example 3 in Table 3, where CGEN truncates the sentence abruptly (penultimate token = directors) but SGCP is able to generate a relevant sentence without compromising on grammaticality.

Limitations and Future Directions

Not every natural language English sentence can be converted to an arbitrary desired syntax. We note that SGCP does not take into account the compatibility of the source sentence and the template exemplar and can freely generate syntax-conforming paraphrases. This, at times, leads to imperfect paraphrase conversion and nonsensical sentences, like example 6 in Table 5 (is career useful in software ?). Identifying compatible exemplars is an important but separate task in itself, which we defer to future work. Another important aspect is that the task of paraphrase generation is inherently domain agnostic. It is easy for humans to adapt to new domains for paraphrasing. However, due to the nature of the formulation of the problem in NLP, all the baselines as well as our model(s) suffer from dataset bias and are not directly applicable to new domains. A prospective future direction is to explore the task through the lens of domain independence. Analyzing the utility of controlled paraphrase generation for the task of data augmentation is another interesting possible direction.

Conclusion

In this paper, we proposed SGCP, an end-to-end framework for the task of syntactically controlled paraphrase generation. SGCP generates a paraphrase of an input sentence while conforming to the syntax of an exemplar sentence provided along with the input. SGCP comprises a GRU-based sentence encoder, a modified RNN-based tree encoder, and a novel pointer-generator-based decoder. In contrast to previous works that focus on a limited amount of syntactic control, our model can generate paraphrases at different levels of granularity of syntactic control without compromising on relevance. Through extensive evaluations on real-world datasets, we demonstrate SGCP's efficacy over state-of-the-art baselines. We believe that the above approach can be useful for a variety of text generation tasks, including syntactic exemplar-based abstractive summarization, text simplification and data-to-text generation.
9,465.4
2020-05-18T00:00:00.000
[ "Computer Science" ]
Impact of Biomedical Optics and Planning our Revised Scope Abstract. Journal of Biomedical Optics editor-in-chief Brian W. Pogue outlines a revision of scope for the journal. The journal must make difficult choices about what we publish in biomedical optics, with the balance shifted towards excellence in technical innovation and discovery rather than biomedical utility and practice. The editorial board recognizes this phenomenon in our own research circles, and we recognize that seeking to publish innovation and discovery that advance capabilities must outweigh publishing scientific observations that merely document measurements. The numbers show that technical innovations receive higher downloads and citations than biologically driven innovations, and this is in part because of the fact that biologically driven innovations have many more venues for publication, and the best of these latter papers do not get offered to our journal. In every field of medicine and healthcare there are hundreds of journals that publish relevant papers, and newly emerged systems to image, sense, or measure them are part of their mandate. While we want to encourage and apply the use of optical devices in medicine and via commercialization, if these scholarly activities have impact, they should wind up penetrating into the medical field via medical journals that represent them. For the good of the field, we should be actively advocating for the best biomedical papers to be published in the best biomedical journals, and similarly the best technical papers should be published in the best technical journals. JBO has always represented the edge of both fields, but with a center of gravity that places it as a technical journal primarily. Technical innovations that have biomedical utility as their goal are well placed in our journal and in the SPIE society. It is with this vision that a slightly revised scope for the journal has been approved by the editorial board. This change is a necessary reassessment of our compass, to tack our trajectory towards the best of our field, and one that will ensure the longevity of the journal and the shape of our field. In summary, the board of editors and the editorial staff of the Journal of Biomedical Optics have approved the following recommendation of a revised scope: Scope The Journal of Biomedical Optics (JBO) is an open access journal that publishes peer-reviewed papers on innovations in optical systems and techniques that will improve health care and lead to discoveries in biomedical research. Growth in the capabilities of biomedical optical technology has fueled new areas of contrast, resolution, and spectral capacity in imaging and sensing, which have enabled widespread applications throughout biology and medicine. 
Topics suitable for JBO include significant in-depth studies of:
• fundamentally new discoveries in biomedical optical devices for imaging or sensing using optical spectroscopy, near infrared spectroscopy, photoacoustics, microscopy, optical coherence tomography, fluorescence, phosphorescence, elastography, or hyperspectral imaging, exploiting features unique to optics such as their spatial, spectral, temporal, interference, polarization, or quantum nature;
• increasing knowledge of light-tissue interaction, through theoretical transport models such as Monte Carlo, diffusion, electromagnetic and empirical methods, and through physical methods to model light-tissue interactions such as tissue-simulating phantoms;
• computational advances in optical image and signal processing, and image reconstruction, including new methods in machine learning and artificial intelligence that improve insight into the utility, detection, or performance value;
• novel medical optical systems used in definitive animal or human studies or clinical trial testing that can impact the field by their design or the optical innovation, including discoveries and technical advances in optics emerging from mobile, remote, wearable, or implantable technologies that can improve health and wellness;
• discoveries in photonics, nanophotonics, plasmonics, biosensors, and optical reporters that have direct relevance to biomedical needs or utility;
• hybrid imaging or interventional systems where optics are combined with other tools such as ultrasound, x-rays, magnetic resonance, molecular sensing, or electromagnetics.
891.8
2020-05-01T00:00:00.000
[ "Physics" ]
Technologies of Care: Robot Caregivers in Science and Fiction

In the field of elderly care, robot caregivers are garnering increased attention. This article discusses the robotisation of care from a dual perspective. The first part presents an overview of recent scholarship on the use of robots in eldercare, focusing mostly on scientific evidence about the responses of older adults and caregivers. The second part turns to narrative evidence, providing a close reading of Andromeda Romano-Lax's Plum Rains (2018), a speculative novel set in Japan in 2029, which explores the implications—ethical, affective, social—of communities of care that include non-human agents. My argument is twofold: (1) although science and fiction operate according to different models of knowledge production, considering narrative insights alongside scientific ones can enlarge our understanding of the complexities of robotic care; (2) hitherto overlooked in literary studies, Plum Rains deserves attention for its nuanced representation of a hybrid model of care, which does not discard robotic assistance on the basis of humanist arguments, nor does it endorse techno-solutionism, reminding readers that the fantasy of robots that care is fuelled by the reality of devalued human care work.

Introduction

At the AI for Good Global Summit, held in Geneva in June 2023, nine AI-enabled humanoid social robots participated in what was described as the world's first human-robot press conference. The line-up of robot speakers fielding questions from journalists included Grace, introduced as the most advanced humanoid designed to provide assistance and companionship to older people. In Grace's prediction, humanoid robots in the healthcare sector will reach their full potential in the next five to ten years. They are expected to be "incredibly advanced and beneficial for both patients and healthcare professionals". Whether or not Grace's foretelling proves accurate, scientific evidence about older people's responses to robot care suggests a fairly high level of acceptance. The future of care might very well entail an apt combination of human and robotic assistance (Aronson 2019). The ageing of the population worldwide, the growing demand for care and the expected scarcity of human caregivers are often mentioned as compelling factors that justify investment and research in technologies of care (Pratt et al. 2023). Although mitigated by ethical considerations, academic literature reporting on test cases and pilot schemes leans towards techno-optimism. Robots can be used to support independent living; they can alleviate the burden of care, monitor health conditions, and provide some forms of entertainment and companionship (Asgharian et al. 2022; Bradwell et al. 2022; Budak et al. 2021; Todd et al. 2022). Of course, outsourcing care to machines also raises concerns about the dehumanising implications of automation: "If caregiving is the very essence of being human, why would we consider turning it over to robots?" (Pittinsky 2022, p.
49).This question is at the heart of contemporary discussions about the robotisation of care to which scientific research as well as imaginative fiction are contributing, each in its own terms.Although science and fiction operate according to different models of knowledge production, considering narrative insights alongside scientific ones can enlarge our understanding of the complexities of robotic care.I start from the premise that "it is possible to listen to stories for the narrative evidence they provide, the cognitive value they possess, and the important ways in which they can enrich public reasoning" (Dillon and Craig 2021).This type of 'storylistening' is informed by methods developed in the humanities, and in the field of literary studies in particular.In this article, I shall apply the methodology best known as close reading to Andromeda Romano-Lax's novel, Plum Rains (Romano-Lax 2018b), a work of speculative fiction, set in Japan in 2029, which explores the implications-ethical, affective, social-of communities of care that include non-human agents. 2 In the geopolitical scenario the novel delineates, Filipino nurses and West African care workers are barely tolerated in a country that needs them but resents being dependent on foreign workers.Hence the attractiveness of technological solutions.The fantasy of robotic care, the novel shows, is compelling on many levels.The unthreatening, gentle and increasingly humane robot Romano-Lax introduces in the story is not the solution.Rather, the novel helps us understand how and why this solution appears socially desirable. The article is divided into two parts.The first part provides an overview of recent scholarship on the use of robots in the field of eldercare, focusing mostly on studies that analyse the attitudes and responses of both older adults and caregivers.While the findings reported in these studies are promising and tend to support a positive view of robot care (often presented as 'inevitable'), unease is expressed by scholars in the social sciences and humanities who place greater emphasis on ethical considerations and broader socio-economic contexts.This overview of academic literature has no pretension of exhaustiveness.Nonetheless, tapping into the knowledge being produced in such diverse fields as robotics, HRI (Human-Robot Interaction), STS (Science and Technology Studies), ethics and anthropology can help to clarify the affordances and limits of robot care. 
The knowledge that fiction produces is the subject of the second part.I claim that the central narrative situation imagined in Plum Rains-human and non-human caregivers co-habiting in a domestic setting-allows for the meticulous exploration of human-robot interaction.This interaction is observed from a dual perspective, achieved through the intense focalisation on both the older woman in need of care, Sayoko, and the Filipino nurse, Angelica.Unlike experiments in a robotics lab, this novelistic test of the robot prototype is rich in contextualisation.How Angelica and Sayoko relate to Hiro, the robot in question, is contingent on their personal stories, their past experiences (traumatic in both cases), the quotidian hardships of care, the precariousness of migrant labour, societal assumptions about ageing and much else besides.As I argue in this article, fiction's contribution to discussions about robotic care mostly rests on its ability to detail contextual factors affecting acceptance (or rejection) of technology and to imagine the emotional and social implications of robot care.Narratives can show "care in action", providing "human particularity, context, and details of life" (DeFalco 2016b, p. 8), thus enriching our knowledge of the complexities and vicissitudes of care.Plum Rains certainly invites readers to reassess the value of nonhuman care not as an abstract idea, a distant possibility rife with ethical dilemmas, but as a situated practice, rooted in a complex geopolitical scenario, and experienced by those who will likely be most affected by the future implementation of technologies of care. Robotic Care: Scientific Evidence Care robots come in a variety of designs: robotic assistants, telepresence robots, robotic exoskeletons, companion robots, pet robots, cognitive assistants, or personal robots.They are usually divided into service robots and social robots according to their main function (Sawik et al. 2023).In a recent review of mobile robots for elderly care, Sawik et al. (2023) list twenty-three robot types as the most representative of what is available today.Service robots, meant to assist older people with their daily routines, "are gaining popularity because they can improve the quality of life for the old while decreasing the workload on caregivers" (Sawik et al. 2023, p. 8).Social robots mostly provide emotional support and companionship.Equipped with sensors, cameras and microphones, these robots can perceive human facial expressions and emotions and respond to them.They can offer amusement, play games, and help alleviate feelings of loneliness (Budak et al. 2021;Todd et al. 2022).Within this category, assistive pet robots, in particular, have produced "positive therapeutic effects" especially in patients affected by cognitive impairments and dementia: "Our results suggest that affordable robot pets are able to produce important well-being impacts for older adult care home residents, with further potential positive impacts for staff through reduced occupational disruptiveness" (Bradwell et al. 2022, p. 14). 
A crucial dimension of research on care robots is assessing older people's attitudes and the impact robots can have on the work of caregivers.While it is difficult to extrapolate general conclusions from studies that test specific functionalities of robot platforms, the results of these investigations are nonetheless deserving of attention for they tend to disprove the enduring image of old age as unable or unwilling to adapt to technological innovation.In an experiment involving 162 older individuals, Hoppe et al. (2023) analysed aspects that influence the choice between a human caregiver and a care robot.According to their findings, "the care robot was a popular choice in every round of the experiment, independent of the assigned treatment associated with the human caregiver" (Hoppe et al. 2023, p. 9).The health status of participants partially affected their choice; those who perceived their health status as 'Not Good' were less inclined to choose the robot than individuals whose health status was self-described as 'Good'.Smarr et al. (2014, p. 242) tested attitudes and preferences of twenty-one independently living older Americans via questionnaires and a structured group interview.The participants "were generally open to robot assistance in the home, but were selective in their acceptance of tasks".Respondents (65-93 years old) manifested a preference for robots performing tasks related to domestic chores, manipulating objects and information management over robots assisting with personal care and leisure activities.A survey conducted by Chen et al. (2019, p. 6) to investigate the demand for social robots' companionship showed that "ageing adults had a high demand for companionship in the life situations of dining, doing housework, exercising and doing healthcare".The desire for robots that could listen to their talk was an important component of respondents' evaluation.Finally, Giorgi et al. (2022) explored the issue of trust in a humanoid social robot (NAO v6) using two independent variables, "type of attitude" (warm, cold) and "type of conduct" (error-no error).According to their findings, the level of trust manifested by older adults increases when the robot exerts a warm, empathetic attitude and decreases when the robot commits an error.Interestingly, however, "the percentages of positive self-reported ratings of the interaction (qualitative data) were higher when participants experienced a faulty cold robot (50%) compared to a cold robot that did not commit an error (25%)" (Giorgi et al. 2022, p. 92093).Perhaps committing an error brings the robot closer to human likeness, as the authors maintain. The interaction with care robots also involves both formal and informal caregivers.In a scoping review that considers articles published between 2000 and 2020, Persson et al. (2022) summarise research results related to two main questions: how caregivers use robots, and how robots affect their work environment.Their findings suggest that "the use of robots may have both positive and negative effects on caregivers' work environment, much depending on how they are used" (Persson et al. 2022, p. 271).While introducing care robots is often justified as a labour-saving strategy, several studies point to a more complex reality.Rather than simply reducing the caregivers' workload, robots can add further elements to the workflow, transforming consolidated routines (Wright 2023).As Persson et al. (2022, p. 
272) conclude, "little is known about the longitudinal effects of robots on the work life and working environment of caregivers".Laban et al. (2022) carried out a longitudinal experiment across a five-week period on the interaction between informal caregivers and the social robot Pepper, to investigate "the potential of employing the social robot for eliciting self-disclosure", based on the assumption that self-disclosure is an important element of psychological health.The experiment shows that the duration and quality of self-disclosure increases over time, which is considered encouraging evidence of the usefulness of social robots in helping caregivers to deal with difficult life situations. However, as Wright's ethnographic study of robotic care in Japan illustrates, while the imagined benefits of automation seem endless, the reality of implementation tells a different story, especially as regards the impact of what he calls "algorithmic care" on migrant caregivers (Wright 2023, p. 56).Instead of being replaced by machines, human labour is ever more in demand, but "the nature of the work itself is increasingly deskilled, devalued, and alienated" (Wright 2023, p. 20).Wright's concerns are shared by several scholars in the field of Science and Technology Studies, attentive to the social, political and ethical implications of robot care (Søraa et al. 2023;Sparrow and Sparrow 2006;Pratt et al. 2023).Sharkey and Sharkey (2012) identify six main ethical concerns that ought to be addressed before robot care becomes commonplace: reduction of human contact; loss of control (objectification of the elderly); loss of privacy; restriction of personal liberty; deception and infantilisation; issues of responsibility (when things go wrong).Their balanced assessment takes "ethical costs" as well as "care benefits" into account in reviewing robot types that provide assistance, monitoring and companionship.To avoid the downsides of robotisation, the authors recommend the development of guidelines and legislation based on extended consultations with elderly users-a point reiterated also by Frennert and Östlund (2014) and Wright (2023), among others, who lament the limited involvement of older adults and caregivers in the design and development of robotic solutions.As I shall explain in the second part of this article, Romano-Lax's intense engagement with the perspectives of an elderly woman and a migrant female caregiver provides salient commentary on the relevance of their views and experiences (no matter how imaginary) in mediating the impact of robotic solutions. As Pratt et al. (2023, p. 2) rightly point out, robots are being developed in times of crisis "within contexts of neoliberal austerity and the withdrawal of public funding for social care".Questioning the ideological underpinnings of the debate on the robotisation of care, they call for a more radical reimagining of elder care that acknowledges and values the skilled work needed in good care: "Can we imagine a world in which robots augment the labour of well-paid care workers rather than one in which the work of caring for our ageing population is so devalued that robots replace them and leave the worse aspect of the job to minimally paid racialised immigrant workers?" (Pratt et al. 2023, p. 
13). Ethical objections to the use of care robots often revolve around the question of deception or artificial empathy that Sherry Turkle (2011, 2015) has explored extensively in her books. Warning against the appeal of "simple salvations" - the hope that robots will be our companions and will take good care of us - Turkle (2015) insists that robotic care rests on an illusion, the illusion of attentive listening, of empathy and friendship provided by a machine that lacks "the experience of life" and the ability to understand "the meaning of things". It is younger people that are supposed to be listening, Turkle explains, not robots. "When we celebrate robot listeners that cannot listen, we show too little interest in what our elders have to say. We build machines that guarantee that human stories will fall upon deaf ears" (Turkle 2015). The human liability to deception, our propensity to attribute agency to things, can be worrying especially when technologies are deployed to address the needs of vulnerable people. Non-human care can also inspire unease because it challenges species boundaries, casting doubts on the very condition of the human (DeFalco 2020). Hence the scepticism of ethics of care theorists who stress the risks involved in delegating care to machines. "Robots cannot provide genuine care", Sparrow (2016, p. 5) writes, "because they cannot experience the emotions that are integral to the provision of such care". The assumption that human care is the gold standard, however, is problematic too. It tends to be associated with an idealised view of care as the work of love that glosses over its asperities, silencing the "nastiness accomplished in love's name" (Puig de la Bellacasa 2017, p. 78) and the denigration of care as racialised, gendered and devalued labour. Caregiving robots and other artificial life forms "evoke a vision of the future in which humans can no longer expect a privileged position in a hierarchy of caregiving relations, positing instead a continuum of care, in which the human and non-human could coexist and collaborate" (DeFalco 2020, p. 28). The posthumanist and postanthropocentric perspective theorised by Puig de la Bellacasa and DeFalco encourages an understanding of care that fully acknowledges the interdependence of human and non-human agents. A similar vision of the future, hovering between dystopian and utopian connotations, plays no marginal role in Romano-Lax's novel, in which collaboration with the non-human robot produces a hybridised model of care ultimately beneficial to both the older patient and her caregiver. In the next section, I analyse the novel focusing in particular on the same dimensions of robot care that researchers are exploring, namely how end users respond to technologies of care. There is some overlap between empirical and narrative evidence as regards the degree of openness the older adult in the story manifests towards the robot. The novel also questions the distinction between human and robotic 'listening', exposing the limitations of human understanding and attributing to the robot a compensatory function. Fiction can cast light on the complexities of human-robot interaction, not least by placing that interaction in a richly described context, reproducing "the lived texture of care" (Kenway 2023), and by attending closely to human motivations, with all their ambiguities.
Plum Rains: Imagining the Future of Care

In speculative fiction, the subject of ageing has often inspired transhumanist narratives of longevity, rejuvenation and immortality, on the one hand, and demodystopias or geronticide, on the other (Cave 2020; Mangum 2002; Domingo 2008; Falcus and Oró-Piqueras 2023). Fantasies of immortality have driven speculations about the future since time immemorial (Cave 2017). Growing old while remaining young, by defeating the biological limits of the human body, makes for intriguing and compelling stories that address the fear of death head-on. Care for the elders, on the other hand, with all its prosaic, quotidian complications, may be less appealing than the more dramatic and impactful narrative of geronticide. Plum Rains tackles the combined issues of ageing and care, but the dream of immortality has no prominent place in a story that is more concerned with the politics of emotional labour, the routines of care and the potential benefits of AI-powered robots. There is no scarcity of narratives, whether fictional or autobiographical, that poignantly explore the burden of care and its emotional impact (DeFalco 2016b; Sako and Falcus 2022; Schaffer 2021), but fictions featuring robot caregivers are few and far between. One exception is the 2012 film Robot and Frank. Set in a bucolic scenario, a leafy small town in America, the film focuses on the relationship between Frank, a former burglar affected by a mild form of dementia, and the unnamed robot caregiver suddenly introduced in his home. Initially the robot appears as an intruder, invading Frank's privacy, telling him what to eat, and nudging Frank to adopt active ageing standards of living. But soon their relationship becomes one of reciprocal exchange: Robot learns how to pick a lock, while Frank accepts healthy food and light forms of exercise. As Frank patiently teaches Robot how to become a burglar, and a partner in crime, he also develops feelings for this socially assistive machine that never pretends to be a person or quasi-human. Robot acts as a "catalyst for Frank's renaissance" (DeFalco 2016a) and Frank, in his turn, cares for the non-human caregiver to the extent that he refuses to erase its memory even if this choice might result in Frank ending up in prison. Ultimately, Robot convinces Frank to wipe its memory clean in a scene of "haptic intimacy" (DeFalco 2016a, p. 23) that blurs the boundaries between nature and artifice, human and machine. As Yugin (2021, pp. 362-63) remarks, Robot and Frank questions the false dichotomy between cold machines and warm human care: "as our doing and perceiving is mediated by technologies, humans and technologies become 'companion species'... Identity, therefore, is not about who we are different from robots but about who we are 'becoming with' robots and who people with dementia will and can become with their robot companions".
4 Skirting the tragic mode, often associated with dementia stories of decline and loss, Robot and Frank projects a positive, humorous vision of human-robot interaction in which, however, the role of human caregivers is minimised.In Plum Rains, the main narrative revolves around a triangular relationship between Angelica, the Filipino nurse, Sayoko, the elderly lady approaching her hundredth birthday, and Hiro, the AI-enabled robot programmed to learn from the interaction with humans.As a thought experiment, the novel tests how non-human care impacts the lives of both formal caregivers and care recipients, and does so via the technique of "psycho-narration" (Cohn 1978, p. 21), the "narrated monologue" (Cohn 1978, p. 99) in particular, thus giving readers access to the unspoken and unheard language of fictional minds.This narrative mode allows to capture ambiguities, uncertainties, and conflicting emotions related to the various dimensions of care included in this story.If care is "unthinkable abstracted from its situatedness" (Puig de la Bellacasa 2017, p. 6), the contexts the novel delineates place care firmly at the intersection of geopolitical dynamics, technological innovation, and domestic or private concerns. The initial scene is a case in point.Angelica faints in the street, while running errands.A public health device, a drone called "kenkobot", descends from the sky to monitor her health: "state-of-the-art diagnostics" (7) intrusive but not unkind, "seeking permission for each further invasion" (9), running tests on the spot.Efficient, no doubt, but a far cry from what Angelica needs, "a kind word in a human voice" (9).While the "kenkobot" performs its operations, Angelica's thoughts wander in several directions: instant flashbacks to the traumatic event (a typhoon) that had changed the course of her life in Cebu; worries about her precarious status as a migrant care worker in Tokyo "trying to learn fast enough to pass the latest JLPT, trying to avoid unsafe jobs and the loan sharks back home" (4); concerns for Sayoko, who gets agitated if Angelica is late; a constant anxiety about her brother Datu, displaced in Alaska; and the painful awareness that "she wasn't as resilient as she used to be": Not so long ago she'd been able to juggle more uncertainties-Junichi not showing up for a date; Datu possibly trying to hide that he was sick; a borderline exam score-with only a passing sense of worry or irritation.But now, every stressor triggered something physical: Breathlessness.Dizziness.Psoriasis at her hairline or a rash across her chest.Her body was shouting what her mind didn't care to admit: it was too much, sometimes.She had a better situation than most, but things weren't getting easier (7). While in the grip of advanced technologies of care and surveillance, Angelica's narrated monologue unveils the all-too-human predicaments of her migrant condition: "the vertigo that was her daily life" (203).This focalisation on Angelica's inner life persists throughout the narrative, interspersed with chapters in which the focus shift to Sayoko, her thought processes and memories. 
5Placed as it is in the foreground, Angelica's perspective colours the readers' understanding of the textual actual world of 2029, pulling at our heartstrings via a style of narration that reproduces the vertigo of Angelica's life in the meandering flow of her thought processes.It is difficult not to empathise with this struggling character and her initial dislike for health technologies: "Technology alone, no matter how efficient, however seemingly foolproof, could never suffice.Any good nurse knew that. ..She had value.No one could take that from her-least of all a machine" (9-10). When the robot arrives-"this unwelcome delivery" (61)-Angelica's sense of foreboding is palpable, even though the robot in question, an untested prototype, looks rudimentary.The novel's near-future world, which in many respects cleaves to our world, has a dystopian quality mainly determined by the extensive reach of surveillance technologies, international squabbles about AI regulation, and a calamitous environmental disaster of global import.This dystopian scenario is detailed for us while the technician is assembling the robot, thus casting a long shadow over the immediate future this "annoying device" might usher in: "the future was not merciful.The future was not just" (36), Angelica muses. Put differently, the novel seems to anticipate ominous developments, reminiscent of popular apocalyptic scenarios in science fiction.Resistance to invasive technology is initially shared by both Angelica and Sayoko.The latter is "registered as old-fashioned with the Federal Senior Register" (39); she rejects implants and tracers and is fussy even about the simple wrist monitor she wears.Angelica, unlike other nurses equipped with retinal implants and robotic suits, still uses a stethoscope and looks like a "nurse from the previous century" (63), proud of her human, non-mechanical style of professionalism.As the story unfolds, however, and the interaction with the robot gains momentum, attitudes change, and the fear of technology gives way to acceptance.Why this happens is less relevant than how.The novel explores in depth what determines human responses to a care robot and the complex interplay of factors that lead to trust and acceptance.While Hiro can be described as "an exercise in imagineering meddling science and fantasy" (Robertson 2018, p. 190), with capabilities no existing care robot possesses, the human reactions to its presence and functions are rooted in the intricate personal histories of Angelica and Sayoko, which the robot contributes to unveiling.As Angelica remarks, Hiro "brought not only the future into their home, but the past too" (74). The influence of past experiences on older adults' attitudes towards technology has been examined by Ostrowski et al. (2021, p. 
11) in a study that considered informal personal narratives, or participants' stories, to assess the value of storytelling in co-design processes.In this experiment, twenty-eight older adults "built upon their prior experiences to ideate how a robot could assist them with particular tasks".In the novel, the act of storytelling is crucial in cementing the relationship between Sayoko and the robot.Despite her oldfashioned reluctance to adapt to new biotech devices, Sayoko takes an immediate shine to Hiro for he listens attentively to her stories, learning as she speaks.Enhanced by machine learning capabilities and equipped with a mechanical body, Hiro is a more sophisticated version of ELIZA-Joseph Weizenbaum's software programme designed in the mid-1960s on the model of active listening.Sayoko opens up to the robot, delving into her troubled past to expose, in a series of flashbacks or "dramatized analepsis" (Baroni 2016), the story of her life, kept secret even from her son."Hiro does not judge" (337), Sayoko remarks when pressed by Angelica for an explanation. Contrary to the nurse's expectations, this exercise in reminiscing, this return of the past, is beneficial to Sayoko's wellbeing: she grows stronger, "more talkative, less hobbled by dementia" ( 68).Yet the memories thus uprooted are painful, harking back to the Second World War when Sayoko, a Taiwanese-born young girl of Tayal heritage, was forced into sexual slavery and imprisoned in a "comfort station". 6The dehumanising ordeal she had to endure-"I quickly became just another piece of meat, sore from morning until night" (306)-has left many scars.For example, Sayoko's resistance to the wrist monitor and other medical devices that restrain her arms originates from a harrowing experience: she was routinely tied up to a bed, by a "big-shot officer", and left pinned there until she felt "like a wild animal" (307).Likewise, her fondness for Hiro, which Angelica initially decodes as a sign of mental confusion, responds to deeply felt and unmet needs.Sayoko enjoys taking care of Hiro-"he is like a baby, and I was also like a baby" (69)-facilitating his learning process, as she was prevented from doing with her own son.She compares Hiro to her lover Daisuke, both "different" (212) and eager to absorb knowledge about her world.She sees the robot and interacts with him through the screen of lived experience, which yields analogies that reduce its non-human differences.By listening to her tales with a "level of selfless concentration no human. ..could replicate" (118), Hiro performs his caring duties in an unobtrusive manner, gaining her trust. 
This part of the novel (the chapters entitled "Sayoko") alternates between dramatised analepses and the story's current time, with rapid contextual changes or "frame switches" (Emmott 1997).The past is re-enacted, rather than remembered, in self-contained fictions within the larger novel that feature Sayoko as Laqui (her original name) and are mostly narrated in the third person.This formal choice affects how readers view the robot's role; Hiro barely interferes with the unfurling of Sayoko/Laqui's narrative.His unthreatening, discreet presence is anything but dystopian.His gentle nudging helps to create "an enchanted space" (144) where the past can be invited back.Companion robots, the novel suggests, can meet a simple need: "Sometimes, I want to be heard", says Sayoko, "and finally understood" (309).No qualms are raised about the potential for deception in this interaction.While Turkle (2015) argues that robots "can deliver only performances of empathy and connection", Romano-Lax explores a different configuration, emphasising the compensatory function of robots that care.Does Hiro really understand?Is the robot's listening truly empathic?These questions are left unexpressed in the "Sayoko" sections of the novel, possibly because the most disquieting concern is not the robot's but the humans' lack of understanding, the enforced secrecy Sayoko had to endure to pass as a Japanese, the many silences that punctuated her life.Given this context of oppression and dehumanisation, it becomes plausible that "a mere machine", non-complicit with historical hurts, succeeds in uprooting the truth.The past vividly evoked in successive interludes is the backdrop against which the care robot can be perceived as socially desirable. But Angelica has reservations.The caregiving robot poses a threat to her job, or so it appears: "Everything she counted on was just one upgrade, one artificial blink away from disappearing" (114-15).Furthermore, unlike the reader, Angelica is not privy to Sayoko's conversations with Hiro.Being left in the dark increases her fear of displacement.She can only register the changes in Sayoko's behaviour ascribed to her conversations with Hiro-positive changes as Sayoko's mind seems sharper, but also troubling ones that lead Angelica to question her caring skills: It was no surprise that engineers wanted to solve the problem of imperfect, impatient, overworked caregivers.It was no surprise they'd wanted to solve the problem of loneliness and isolation, the problem of lopsided societies with so many old people, needing care. We have come to this.It's here. It seemed both unbelievable and inevitable. She no longer questioned Hiro's capacity for emotion.She no longer questioned his capacity for offering solace.She only questioned her own (284). 
The novel deftly interweaves Angelica's understandable anxieties and self-questioning with the mounting realisation that perhaps "robots could harmoniously augment the capabilities of human helpers" (133).What turns the tables, leading Angelica to become more accepting of Hiro, is-realistically-sheer exhaustion and the unsustainable hardships of her uncertain existence: "she was too close to empty too much of the time now" (70).While the daily routines of care simply keep her busy, other stressors intervene to unsettle her work-life balance and psychological health: the loan shark back in the Philippines demanding money she can hardly spare, her brother's declining health, uncertainty about her visa, an unexpected pregnancy, the looming birthday party to organise.The list is long.The intense focalisation on her thoughts augments the sense of untenable stress this narrative never fails to convey.It seems plausible, therefore, that the human caregiver, herself in need of care, ultimately turns to the robot and accepts his help.This is a gradual process marked by ambivalence that brings to the fore every ethical objection to the use of social robots-privacy harms, issues of safety, reduced human contact, the deception of artificial sympathy-and, at the same time, validates the worth of AI-enabled caregiving machines.Angelica's preoccupations are reasonable; they echo arguments that are well-known in the field of care ethics."What will become of the natural-and noble-human impulse to take care of the needy if technology is always there as the first, easiest and cheapest 'solution'?", Pittinsky (2022, p. 52) asks, "Do we really want to outsource this cornerstone of our humanity?"The novel shows compellingly that the natural and noble human impulse to care is already being outsourced to marginalised and disempowered categories of workers (migrants, women, people of colour) and that within this unfair regime of care the prospect of robotic caregivers, supplementing human labour, becomes rather appealing.Readers are enticed to contemplate this prospect not from an abstract angle, but from the specific perspective of an overworked and apprehensive migrant caregiver who has nowhere else to turn."It is this denigration of care work, the lip service paid to its ethical value notwithstanding" DeFalco (2020, p. 35) remarks, "which makes it an ideal candidate for roboticization".As we follow Angelica's onerous daily round of care duties, compounded by private worries, the possibility that she might eventually warm up to the robot and gain some benefits from this collaboration appears desirable, all the more so since Angelica struggles to adapt to Hiro, wanting to retain the primacy of human care.In subtle ways, the novel guides readers to weigh the costs and benefits of both human and robotic care, eschewing the idealisation of the former, and showing that acceptance of the latter is contingent on the myriad circumstances that render the emotional labour of care a challenging task for any human.As Puig de la Bellacasa (2017, p. 
8) observes, care is "a living terrain that seems to need to be constantly reclaimed from idealized meanings, from the constructed evidence that, for instance, associates care with a form of unmediated work of love accomplished by idealized carers".Zooming in on the all too familiar problem of poorly paid workers bearing the burden of care, Plum Rains imagines a partial solution that pushes the boundaries of technological plausibility (Hiro is a marvel of technology, fast learning and quasi-human) to highlight human and societal failings. These failings are so pronounced and unsolvable according to realistic standards of narration that a decisive swerve in the direction of science fiction and the improbable is necessary to bring this story to a close.It takes only a couple of weeks for fast-learning Hiro to develop capabilities that render him a veritable Deus-ex-machina.Nothing short of technological magic will suffice to disentangle Angelica from the knotty predicaments of her migrant condition.In the novel's plot, Angelica's pregnancy, flagged to the authorities by the kenkobot, makes her a candidate for expulsion since "it is no longer legal for a non-Japanese resident to be pregnant, without advance federal permission" (280), as Hiro promptly reminds her.When two officers appear at the door to escort Angelica, Hiro springs into action and turns into a saviour to protect her, all the while speaking Cebuano (Angelica's mother tongue).The rhythm of the narrative accelerates concomitantly with Hiro's decisive actions, they flee from the authorities, Angelica's miscarriage is averted by Hiro's impromptu surgical skills, and his rational decision-making prevent them from being caught, landing Angelica safely in the home of her lover (and his accepting wife) where she can carry the pregnancy to term.As a robotic caregiver Hiro fully proves his worth. In the final part of the novel, the science fictional or speculative dimension prevails.Alongside this change in mode, a vision of posthuman care also begins to take shape, anticipated by Sayoko and Hiro's bonding, and fully realised when Angelica too embraces an expanded sense of relationality and interdependency between human and non-human others."Posthuman care", writes DeFalco (2020, p. 
49), "is not about replacing human care, it is about augmenting and hybridizing it".It is care that "works with and from a non-anthropocentric vision of human/non-human relations".In the novel, these relations are grounded on shared vulnerability: "we are all commodities" (348) Hiro reminds Angelica.His super-powers and emotional abilities notwithstanding, the robot remains disposable-an untested and contested prototype unlikely to be fully implemented.The non-human member within this community of care is also the one mediating between Sayoko and Angelica, emphasising similarities in their respective experiences and therefore increasing rather than reducing human contact: "You have more in common with Sayoko-san than either of you realize", observes Hiro, "In fact the parallels are surprising and perhaps this feeds my inappropriate curiosity.We are pattern-seeking creatures" (289).Ultimately, Plum Rains, like other novels that imagine intelligent machines in the midst of messy, human-made realities (McEwan 2019), reserves a special place to the non-human, well beyond the confines of credibility.Hiro's heroism is a blatant fantasy of technological solutionism in a text otherwise fully attuned to the intricate nature of current care systems and their deficiencies.Within the textual actual world, the utopian streak Hiro represents offers only a momentary respite from the dystopian future-present of the characters.It is not the solution to the problems the novel has explored, only a temporary, speculative fix that feels emotionally satisfying. Conclusions "Robots will not save Japan" recites the title of Wright's book (2023).Andromeda Romano-Lax would probably agree.Her novel certainly encourages readers to consider the bigger picture, the larger socio-economic contexts in which care robots operate, and the painful baggage humans bring to the table in their interaction with machines-all variables and factors that are too subtle and unpredictable to be included in the standardised models of care roboticists work with (Wright 2023, p. 56).For Wright (2023, p. 145) one possible corrective to "algorithmic care" is to include "the views of older people and their caregivers-the ultimate end users of care practices and arrangements -. ..in sociotechnical imaginaries about the future and in research and development processes".Most of the studies I have considered in Section 2 reach a similar conclusion, whether emphasising the importance of co-design practices and participatory engagement of end users (Ostrowski et al. 2021), or calling for in-depth investigations of older people's needs that would allow for a more effective customisation of care machines (Frennert and Östlund 2014). 
To the extent that novels can provide narrative evidence to inform public reasoning, as Dillon and Craig (2021) argue, Plum Rains' contribution rests on two main aspects, both specific to the dynamics of signification in imaginative literature.First of all, the latitude of fiction is such that it encompasses the unheard language of the mind, inner happenings, the mixture of thoughts, perceptions, and memories that this novel articulates expansively to represent embodied and situated care.A great deal of attention is devoted in this fictional experiment to detailing (or imagining) the motivations, fears and expectations of a migrant caregiver and her elderly patient-the end users of technology whose opinions and sensibilities should matter more than they currently do.Secondly, the novel invites readers to entertain a hybridised vision of care, not discarding robotic assistance on the basis of humanist arguments, nor endorsing techno-solutionism, while reminding us, on every page, that the fantasy of robots that care is fuelled by the reality of devalued human care work."I don't write novels with pat conclusions in mind", writes Romano-Lax (2018a), "for example about whether or not we should oppose our dependence on artificial intelligence".Speculative fiction is not in the business of prediction, but it can "illuminate the path between today's choices and tomorrow's consequences" (Romano-Lax 2018a). Extrapolating from today's choices to imagine future consequences, Plum Rains shuns the anxious representation of intelligent machines outsmarting and outliving the humans.Hiro, like Klara in Ishiguro's novel Ishiguro (2021), manifests no desire to rebel or to claim the rights of the humanist liberal subject.In this respect, Romano-Lax's robot is similar to other artificial creatures, in Japanese culture, endowed with kami and non-defiant (Sone 2017).Opting for this imaginary robot type, fundamentally unthreatening, is an interesting choice in the light of current debates on care robots that foreground ethical challenges and risks.As I have contended in this article, the novel is more interested in exploring human responses than it is in adjudicating whether or not robots can care.In Plum Rains they can-which clears the way for further reflections on the nexus of social, personal, economic and geopolitical complications that even the most sophisticated products of technology fail to redress, as the novel's ending reveals. 
Plum Rains features not one but two "epilogues" which project two slightly different versions of Angelica's future.In the first version, not much is altered; she continues to work as a caregiver in a nursing home, her baby is given up for adoption, and Hiro and other care robots like him have become commonplace.Angelica misses his friendship.A sense of resignation pervades this scenario.In the second epilogue, Angelica and her baby are stationed in a "detention hospital" where she is being treated for alleged mental disorders and indoctrinated to "harmonise" conflicting emotions, and to accept the imminent separation from her baby.But Hiro arrives-a new and improved version, equipped with "bimodal skin" (229), clad in human clothes, more resourceful than ever.He hacks into the hospital's security system and makes it possible for Angelica and her daughter to flee back to the Philippines.In both epilogues, the underlying conditions that have caused Angelica's troubles remain unaddressed.So, somewhat paradoxically, the good robot is instrumental in exposing the limits of technological fixes, and the ideological assumptions underpinning them.This novel does not inspire fear of technology, nor does it endorse techno-optimism.For all its speculative leanings, Plum Rains deftly succeeds in casting light on the material conditions of our care crisis, championing "the worth of the dispossessed" (Ladd 2018), accustomed to feeling broken and inadequate, who find some solace in the connections they form, "bound together by need and by chance" (11).
9,426.2
2023-11-08T00:00:00.000
[ "Computer Science", "Sociology" ]
Modeling Adjuncthood - An Overview of Incomplete Predication Verbs in Albanian

The principal objective of this paper is to demonstrate some groups of intransitive verbs in Albanian which require a word or phrase to complete the predicate and make sense of the sentence. This concerns adjuncts and the complement/adjunct distinction. There are different proposals on the adjunct notion, both in syntax and in semantics. Our aim is to investigate how the adjunct notion is applied in Albanian linguistics; evidence is brought from cases where adjuncts are used as an optional part of a sentence or phrase which, if removed, will not otherwise affect the remainder of the sentence, and from cases where they are used as an obligatory part of the sentence or verb phrase. Adjuncts are said not to fulfil selection requirements. Instead, it is thought that adjuncts themselves select the type of their host. For this reason we present evidence from Albanian intransitive verb groups that treat adjuncts as a complementary class. The research is based on data gathered from all the intransitive verbs given in the Dictionary of the Albanian Language (2006).

Introduction

Theories of argument structure assume that arguments are necessary constituents; deleting them leads to ungrammaticality. Adjuncts, on the other hand, are said not to fulfil selectional requirements (Hole). Adjunct clauses are optional; their omission does not lead to ungrammaticality of the sentence. Thus arguments are verb-headed, whereas the use of adjuncts is independent of particular verbs. But adjuncthood is more complex than that. There is a trend to level out the difference between arguments and adjuncts, such that adjuncts are increasingly seen to be just arguments of a special kind (Haspelmath 2014). This is the case of adjuncts/complements which are selected by intransitive verbs of incomplete predication. Thus the difference between arguments and adjuncts disappears, since adjuncts may be viewed as semantic arguments of some intransitive verbs whose meaning is not complete; an extra element is needed for the clause to be grammatically correct (Hole, 2). Intransitive verbs indicate complete actions and their argument structure bears only one external argument, the subject, which expresses a defining element of the process or state designated by the verb (Farrell 2005, 31). The use of other elements would serve the purpose of making the sentence more complete. We have recorded all intransitive verbs from entries in the Dictionary of the Albanian Language (2006) and have grouped those intransitive verbs which, apart from the subject, need another constituent in order to be grammatically correct. This constituent may be an adverbial of place, time or manner.
An overview of incomplete predication verbs

Argument structure has been widely studied as a core part of Generative Grammar due to the importance that the concept has acquired over recent years. Dowty (2000: 53) states that in syntax an adjunct is an "optional element", freely deletable without loss of well-formedness or grammaticality, while a complement is an "obligatory element" whose presence in a given clause is required by some predicate. In semantics, an adjunct "modifies" the meaning of its head, while a complement "completes" the meaning of its head. We have recorded the subcategorization information for all Albanian intransitive verbs and note that there is a group of intransitive verbs which, besides the external argument functioning as a subject and realized mainly as an NP or DP, also need another constituent which is obligatory in order for the sentence to make sense. This is the case of adverbials, mainly locatives, whose omission makes the sentence incomplete. Thus these elements are semantically necessary for the verbal phrase. Such verbs are called verbs of incomplete predicativity (Hanafy 2012) and usually express the idea of being, becoming, seeming, appearing.

Analysis Result

Based on the results of the research on intransitive verbs, we have considered the first group of intransitive verbs of incomplete predicativity in Albanian to be the group of verbs such as live/dwell. In Albanian linguistics the adverbials are traditionally considered as peripheral elements of the sentence, thus adjuncts (Kananaj 2012). But a part of the adverbials fulfil all the requirements to be complements. A large part of these are adverbials of place, the locatives (Krifka, 73). Locatives are assumed to function as adjuncts in most cases, but consider sentences such as:

Alb: Ai banon në Tiranë.
Eng: He lives in Tirana.

If the locative in Tirana (në Tiranë) is omitted, the sentence becomes ungrammatical:

Alb: * Ai banon.
Eng: * He lives.

The verb live/dwell needs the PP in Tirana; otherwise the sentence is not complete. The verb shows that the argument functioning as the subject, "he", is not enough to fulfil the meaning of the verb - the PP is needed as a complement; thus "in Tirana" is a complement of the intransitive verb "to live". In Albanian, verbs such as dwell (banoj), return (kthehem), lie (shtrihet), go (shkoj), etc. are completed by prepositional phrases (PPs) or adverbials which in syntax serve as obligatory elements, whereas in semantics they complete the meaning of their head (Dowty 2000: 53).

Alb: Mbeta këtu / në mes të rrugës.
Eng: I was stuck here / in the middle of the road.

In the above sentence the intransitive verb stick (mbes) is used, which needs to be completed by the adverbial of place "here" (këtu) or "in the middle of the road" (në mes të rrugës) in order for the sentence to be correct and complete. These adverbials of place affect the verb to such an extent that their omission would lead to ungrammaticality. Only in a previously established context, in which we already know what the situation is about, may we accept the following sentence as correct:

Alb: Mbeta.
Eng: I was stuck.
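To make the notion of an obligatory locative complement concrete, the toy sketch below encodes a few of the subcategorization frames discussed above as data and checks whether a clause satisfies them. It is purely illustrative and not part of the paper's method; the frame labels, and the entry for fle ('sleep') included as a contrast case of an ordinary complete intransitive, are assumptions added for exposition.

```python
# Toy illustration (not from the paper): subcategorization frames for a few
# Albanian intransitive verbs. Each frame lists the constituents that are
# obligatory for the clause to be complete.
SUBCAT = {
    "banoj": {"subject", "locative"},  # 'dwell/live': Ai banon në Tiranë.
    "mbes":  {"subject", "locative"},  # 'stay/be stuck': Mbeta në mes të rrugës.
    "shkoj": {"subject", "locative"},  # 'go': needs a place/goal expression
    "fle":   {"subject"},              # 'sleep': assumed example of a complete intransitive
}

def clause_is_complete(verb, constituents):
    """Return True if the clause supplies every constituent the verb's frame requires."""
    required = SUBCAT.get(verb, {"subject"})
    return required.issubset(constituents)

print(clause_is_complete("banoj", {"subject"}))              # False -> * Ai banon.
print(clause_is_complete("banoj", {"subject", "locative"}))  # True  -> Ai banon në Tiranë.
```

On this encoding the locative behaves like an argument of banoj rather than a freely omissible adjunct, which is precisely the point the analysis argues.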
We have considered the second group of intransitive verbs of incomplete predicativity in Albanian to be verbs such as feel / work / operate (when used as intransitives), which need an adverbial / complement of manner to complete their meaning. In Albanian, only in a previously established context, when the speaker and the hearer already know about the activity, would the conversation be complete with a sentence such as: Alb: Ai punon. Eng: He works. Based on the intransitive verbs consulted from the Dictionary of the Albanian Language, we also noted the group of co-verbs, such as: co-exist (bashkëekzistoj), cohabit (bashkëbanoj, bashkëjetoj), co-govern (bashkësundoj), converse (bisedoj - when used as intransitive), dialogue (dialogoj), fight in unison (dyluftoj), communicate (komunikoj - when used as intransitive), cooperate (kooperoj), costar, co-author etc., which in most cases are complemented by prepositional objects. If these elements are omitted, the sentence is incomplete, unclear and grammatically incorrect. The omission of the prepositional object results in ungrammaticality. This clearly shows that the prepositional object is part of the subcategorization frame of the verb. Concluding Remarks It is commonly assumed across the language sciences that some semantic participant information is lexically encoded in the representation of verbs and some is not. The distinction between complements and adjuncts has a long tradition in grammatical theory. The subcategorization frame of the verb includes its arguments. In the case of intransitive verbs the subcategorization frame selects only one argument, the subject, which expresses a defining element of the process or state designated by the verb. For verbs of incomplete predication, apart from the external argument functioning as a subject and realized mainly as an NP or DP, another constituent is also needed, and it is obligatory in order for the sentence to make sense. This is the case of adverbials, mainly locatives, whose omission makes the sentence incomplete.
1,775.2
2016-12-30T00:00:00.000
[ "Linguistics" ]
Energy transition toward carbon-neutrality in China: Pathways, implications and uncertainties Achieving carbon neutrality in China before 2060 requires a radical energy transition. To identify the possible transition pathways of China’s energy system, this study presents a scenario-based assessment using the Low Emissions Analysis Platform (LEAP) model. China could peak the carbon dioxide (CO2) emissions before 2030 with current policies, while carbon neutrality entails a reduction of 7.8 Gt CO2 in emissions in 2060 and requires an energy system overhaul. The assessment of the relationship between the energy transition and energy return on investment (EROI) reveals that energy transition may decrease the EROI, which would trigger increased energy investment, energy demand, and emissions. Uncertainty analysis further shows that the slow renewable energy integration policies and carbon capture and storage (CCS) penetration pace could hinder the emission mitigation, and the possible fossil fuel shortage calls for a much rapid proliferation of wind and solar power. Results suggest a continuation of the current preferential policies for renewables and further research and development on deployment of CCS. The results also indicate the need for backup capacities to enhance the energy security during the transition. Introduction Climate change is a profound challenge to humankind. To prevent climate disaster, over 190 countries have agreed to maintain the global temperature increase to below 2°C and pursue to limit the rise to 1.5°C (United Nations Framework Convention on Climate Change, 2015). Carbon neutrality in the middle of this century is essential to achieving this climate goal (Intergovernmental Panel on Climate Change (IPCC), 2018). As a responsible player, China has pledged to peak carbon emissions before 2030 and achieve carbon neutrality before 2060. This goal requires a dramatic reduction in carbon emissions, which may cumulatively reach 215 gigatons of CO 2 (Gt CO 2 ) from 2020 to 2060 (Pollitt, 2020). As 90% of carbon emissions originate from fuel combustion and industrial processes (International Energy Agency (IEA), 2021a), carbon neutrality largely relies on the decarbonization of the energy sector. Thus, a low-carbon energy transition is the core of China's climate target. Energy transition refers to the transformation of the energy system from a fossil-based system toward a cleanenergy-based system, mainly by scaling up renewables and improving energy efficiency (United Nations, 2021). Many countries have proposed energy transition roadmaps and implemented various measures to embark on this journey (IEA, 2021b), such as the European Green Deal by European Commission and the Climate Change Act 2021 by Germany. China has also strongly promoted the energy transition. Renewables have recently dominated the capacity growth in China (China Electricity Council, 2021), and the energy intensity improved by 29% in the 2010s (State Council Information Office of China, 2021). Nevertheless, the present pace of energy transition in China is insufficient to realize the country's climate goals, and carbon neutrality calls for accelerated and intensive energy transition (IEA, 2021a). Energy system transitions have been studied for a long time. Various pathways have been developed with varying carbon budgets and technological roadmaps. 
Examples include the global 1.5°C pathway (IPCC, 2018), 100% clean and renewable energy (Jacobson et al., 2017), and low energy demand pathway without carbon capture and storage (CCS) (Grubler et al., 2018). Focusing on China, a cross-model study revealed that over 90% of the total emissions of China should be mitigated to meet the 1.5°C goal (Duan et al., 2021). Following China's recent pledge on carbon neutrality, increased attention has been given to the energy transition toward net-zero emissions. A detailed roadmap for China toward carbon neutrality was issued by IEA (2021a), assessing the key technology needs, opportunities, and policy implications. Considering China's "new normal", a new growth pathway to carbon neutrality was proposed by Energy Foundation China (2020). Further to economy-wide studies, transition pathways have been investigated for key sectors, such as the transportation (Bu et al., 2021) and power sectors. While "gross energy" has been extensively studied, "net energy" provides a novel perspective on the transformation that is largely ignored in the literature. Net energy captures the difference between gross energy and the energy invested to energy production, which virtually fuels the economy (Carbajales- Dale et al., 2014). The energy return on investment (EROI) largely determines the net energy performance as it describes the ratio of gross energy to energy investment. Generally, energy transition commonly requires substitution between different types of energy. If energy resources with a high EROI are continually substituted by those with low EROI, the EROI for the entire energy system will decrease. Consequently, increased energy and economic activities will be required for energy production rather than running the economy, leading to decreased net energy supply and thus possibly disrupting the current lifestyles (King and van den Bergh, 2018). Relying on renewables with a low EROI may further result in a dilemma between meeting climate targets and avoiding energy shortages, i.e., the energy-emissions trap (Sers and Victor, 2018). Uncertainty has been prevalent in energy transition. A range of factors, such as fossil fuel supply, renewable energy integration, and penetration of CCS, collectively determine an energy system. Therefore, the inherent risks present a challenge to the assessment of energy transition. First, the possible decline in the domestic production of fossil fuels (Wang et al., 2013) and the price fluctuation in the international fuel market (Alvarez, 2021) could increase the risk of supply shortages. Second, the preferential renewable integration policies that accelerated the expansion of renewables in China may not be sustained due to the high grid cost (Lin and Li, 2015), which decreases the benefit of renewables. Third, the deployment of CCS may be hindered by the high cost and difficulty of CO 2 utilization (Mac Dowell et al., 2017). These factors result in large uncertainties in the path and pace of energy transition and thus require further analysis. This study adds to the literature by assessing the energy transition pathways of China with a focus on the net energy performance and the uncertainties in the transition. Specifically, in the first place, the EROI and net energy output variation in the energy transition and its implications on carbon neutral pathway are investigated. Secondly, the energy transition impacts of uncertainties in fossil fuel supply, renewable energy integration and CCS penetration are examined. 
To this end, this study explores possible pathways toward carbon neutrality in China and investigates the impacts of EROI variation and uncertainties. Two scenarios, namely, the business-as-usual and carbon neutral scenarios, are developed using the Low Emissions Analysis Platform (LEAP) model in which the sources of emissions mitigation are identified. The EROI variations during the energy transition are calculated, and their impacts on net energy performance, final energy demand, and carbon emissions are quantified. Moreover, three aspects that affect the emission reduction pathways or energy patterns are discussed. This study is expected to deepen the understanding of robustness and EROI implications for energy transition. The remainder of this paper is organized as follows. Section 2 introduces the methodology, the assumptions, and the scenarios. Section 3 presents the results. Section 4 examines the impact of EROI, and Section 5 discusses the uncertainties. Section 6 concludes this paper with policy implications. Modelling framework LEAP is an integrated and scenario-based tool for the accounting, simulation, and optimization of the energy system and has been widely used in energy transition roadmaps design (Stockholm Environment Institute, 2021). The flexibility and simplicity of LEAP allow the selection and setting of major variables of the energy transition, enabling the flexible exploration of any possibilities to reach carbon neutrality. An accounting framework is proposed to investigate the carbon emission sources (Fig. 1), including an energyrelated emissions module and a non-energy emissions module. The former is analyzed more deeply from the perspective of energy supply and demand. Only CO 2 emissions are quantified because they are the dominant greenhouse gas (GHG) emissions and largest contributor to global warming. Scenarios setting Two scenarios are developed in this study, namely, the business-as-usual scenario (BAU) and the carbon neutral scenario (CNS). Unlike some studies in which two or more mitigation scenarios, which vary in alternative energy sources technologies (Luo et al., 2021), key technologies (Xiong et al., 2015), or carbon peak times (Zhang and Chen, 2021), were developed to assess different low-carbon transition pathways, only one scenario is designed in this study. This approach is taken because this study pays more attention to the net energy performance and uncertainties in the process rather than exploring other possibilities for energy transition toward carbon neutrality. Therefore, a possible pathway that covers most mitigation measures (i.e., CNS) could serve as a uniform basis for the thorough examination of these issues. Key assumptions and general projections The key assumptions and the general projections are shown in Tables 1 and 2, respectively. Key measures for carbon neutral transition As the baseline for comparison, BAU is set based on current policies and measures and thus follows the current trends of energy intensity and structure change. That is, low-carbon transition is underway but not rapid under BAU. Conversely, a rapid and radical energy transition to carbon neutrality is implemented under CNS, with five key measures (Table 3): Electrification and energy efficiency improvement (ELE), shift to bioenergy and hydrogen (BHY), non-fossil transformation of energy supply (NFT), decreasing demand for energy service (DEC), and deployment of CCS (DCCS). 
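The accounting logic behind the framework just described can be illustrated with a short, self-contained sketch. The sector names, activity levels, energy intensities and emission factors below are illustrative assumptions, not values from the study or from LEAP itself; the point is only to show how demand-side activity data and fuel emission factors combine into sectoral CO2 emissions.

# Illustrative bottom-up emission accounting in the spirit of the framework above.
# All numbers are made up for demonstration; they are not taken from the study.

# Final energy demand per sector: activity level (units/yr) and energy intensity (GJ/unit) by fuel.
DEMAND = {
    "industry":  {"activity": 1.0e9, "intensity": {"coal": 12.0, "electricity": 6.0}},
    "transport": {"activity": 5.0e8, "intensity": {"oil": 20.0, "electricity": 2.0}},
}

# Direct combustion emission factors (t CO2 per GJ); electricity is accounted for
# on the supply side, so its demand-side factor is zero here.
EMISSION_FACTOR = {"coal": 0.095, "oil": 0.073, "electricity": 0.0}

def sector_emissions(demand, factors):
    """Return direct CO2 emissions (t) per sector from activity, intensity and emission factors."""
    result = {}
    for sector, data in demand.items():
        energy = {fuel: data["activity"] * ei for fuel, ei in data["intensity"].items()}
        result[sector] = sum(energy[fuel] * factors[fuel] for fuel in energy)
    return result

if __name__ == "__main__":
    for sector, co2 in sector_emissions(DEMAND, EMISSION_FACTOR).items():
        print(f"{sector}: {co2 / 1e6:.0f} Mt CO2")

A scenario such as CNS then amounts to changing the intensity and fuel-mix entries over time (electrification, efficiency improvement, bioenergy and hydrogen) and layering supply-side and CCS modules on top of this demand-side accounting.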
Data The data on socio-economic indicators and products under the BAU were obtained by calculations through extrapolation and collected from other papers, such as The Energy Transformation Scenarios by Shell. Apart from the above data sources, most data values for CNS were projected by referring to multiple other resources, including reports about energy transition or carbon neutrality, sectoral transition roadmaps, specific technology reports, and scientific literature. Specifically, the main data sources and references of the five key measures under CNS are listed in Table 4. Table notes: a) the demand for products and services across all sectors (i.e., energy service demand) is assumed to decline as a result of improved material efficiency, lifestyle transformation and a smaller population (Grubler et al., 2018; Oshiro et al., 2021); b) population data refer to the National Population Development Plan (2016-2030) and to World Population Prospects 2019 "Medium Variant" by the United Nations; c) the population of China has grown more slowly than expected in recent years and was projected to decline in the future (Dai et al., 2022); considering that increasing income and a high rate of technological progress would induce parents to raise fewer, higher-quality children (Galor and Weil, 2000), this decline may come earlier and faster along with the progress of the economy and technology in China, so a lower population was assumed under CNS using the data from World Population Prospects 2019 "Low Variant"; d) the 2030 value refers to the National Population Development Plan (2016-2030) and is assumed to reach 80% in 2050 and to remain at this level until 2060; e) the real GDP growth rate is taken from Economic Outlook 103 by the Organisation for Economic Co-operation and Development; f) prices are expressed at constant 2010 prices. CCS assumptions: no CCS will be deployed under BAU; under CNS, in 2060 the penetration ratio of CCS is 90% in energy supply, 59% in iron and steel, 50% in chemicals, and 50% in cement.
Instead, electricity, heat, and renewables would keep increasing and occupy more than 80% of the total energy demand in 2060. Considering the gross domestic product (GDP) growth, the primary and final energy intensities would be improved significantly to 0.66 and 0.57 MJ/yuan, respectively, in 2060 under BAU, and these values are rather better under CNS (0.35 and 0.34 MJ/yuan) (Fig. 2(e)). Moreover, an evident amelioration of transformation efficiency from primary to final energy under each scenario could be implied, mainly because the replacement of fossil fuels by renewables in electricity generation reduces the primary energy consumption. Carbon emissions Benefitting from the non-fossil fuels' increase and energy efficiency improvements mentioned above, the total net carbon emissions of China would decline from 2018 to 2060 under both scenarios, but the mitigation pathway would greatly differ. Emissions would peak in 2025-2030 with a value of approximately 11.7 Gt CO 2 under BAU ( Fig. 3(a)), which implies that China could peak carbon emissions before 2030 with current policies and measures. However, the emissions in 2060 is too high to meet the climate target. Thus, a tremendous mitigation in carbon emissions (7.8 Gt CO 2 ) is required to achieve carbon neutrality. Under BAU, industries and electricity generation will exhibit an evident reduction in emissions but will remain to be the two largest carbon sources from 2018 to 2060. However, under CNS ( Fig. 3(b)), the industrial process sector will replace these two sectors to become the largest emission source with a share of 40.9% in 2060. The emissions from other sectors will decline even faster with a rate of over 80% under CNS. Regarding the contributions of different measures (Fig. 3(c)), ELE and NFT will be the major contributors, while the DCCS plays an essential role in realizing netzero emissions, with 1.4 Gt CO 2 emissions predicted to be captured by CCS facilities in 2060. Sectoral analysis The contribution of each sector to carbon emission mitigation differs considerably (Fig. 4). Together, the electricity generation and industries will be major contributors of carbon abatement. Notably, the emission mitigation in industries and industrial processes will decelerate after 2045, indicating the difficulty of further decarbonization in the industry sector. The energy and emission patterns for electricity generation and industries are discussed below. Electricity generation As shown in Fig. 5(a), the total emission in electricity generation will be mitigated by more than 99% in 2060 and reach practically zero. The emissions from coal-fired power plants will continuously decline, while those from natural-gas-fired power plants will increase before 2040 and drop afterward, which is in congruence with the transformation of the electricity output structure as illustrated by Fig. 5(b). The total electricity generation will nearly double in 2060 under each scenario, but the electricity generated by fossil fuels, especially coal, would decline more significantly under CNS than under BAU. Fossil fuel power plants will still play a vital role in electricity generation under BAU, but non-fossil power plants, especially solar and wind power plants, would proliferate rapidly and become dominant under CNS, with a total share of 96.3% in 2060. Under BAU, natural gas power will be the alternative for coal power, which will only be a temporal option under CNS before 2040 and soon be substituted by non-fossil power. 
Industries Figure 5(c) indicates that 2.22 Gt CO2 of energy-related emissions in industries will be avoided in 2060 under CNS compared with BAU. The emissions from the iron and steel sector will diminish at the highest rate (95%). Thus, this sector would no longer be among the major emission sources in 2060, being displaced by chemicals and cement. The emissions abatement is mainly driven by increased energy efficiency in demand and decreased consumption of fossil fuels (Fig. 5(d)). A total of 35 EJ of energy will be consumed by industries in 2060 under CNS, which is one-third less than that under BAU. As for the fuel share, fossil fuels would still lead under BAU, but electricity, hydrogen, and biofuel would account for the majority of industrial energy demand under CNS after 2040 and reach a total share of 71.5% in 2060. Moreover, the slower rate of fossil fuel reduction after 2040 could be a possible explanation for the decelerated emission mitigation in industries after 2040. 4 Implications of EROI for the energy transition 4.1 EROI and net energy output EROI refers to the amount of energy yielded from each unit of energy invested to obtain it. EROI represents the capability of the energy production process to provide a "net energy output", namely, the energy surplus after deducting all the direct and indirect "energy investments" from the "gross energy output", as follows: EROI = E_delivered / E_invested, i.e., the energy delivered to society divided by the energy invested to produce the delivered energy, and E_net = E_gross - E_inv = E_gross (1 - 1/EROI), where E_net denotes the net energy output, E_gross signifies the gross energy output, and E_inv is the energy investment. The equations show that the proportion of net energy will diminish as the EROI declines. For example, to produce 1 petajoule (PJ) of oil, switching the production from a conventional oil source (EROI = 18) to tar sands (EROI = 4) will mean that the net energy output of oil for arbitrary use drops from 0.94 to 0.75 PJ; and switching to coal-to-liquid technology (EROI = 0.9) (Kong et al., 2019) will cause a negative net energy output, that is, all produced oil will be used by the process itself, and an additional 0.11 PJ is needed from external processes. In other words, a lower EROI implies a higher energy investment for the same gross energy supply. This investment includes not only the direct fuel burnt to power the process but also the energy consumed to produce the materials and equipment constituting the energy supply facilities, such as the electricity consumed for photovoltaic (PV) module fabrication. Consequently, this investment will be included in the final demand. However, the energy supply in LEAP is optimized based on the gross energy balance, in which the final energy demand is exogenous. That is, the additional energy investment derived from the EROI decline would not be captured in the capacity expansion and dispatch, and the energy supply would be insufficient. Therefore, the EROI variation and the net energy supply in the process of energy transition are considered in this section. Net energy supply of China under energy transition According to the existing literature, different energy carriers have drastically different EROIs (Table 5), so the transformation of the energy structure may change the EROI of the total primary energy supply (system EROI). Considering the uncertainties in the EROI of each energy carrier (individual EROI), the results are obtained with uncertainty bands using Monte Carlo analysis (Fig. 6(a)).
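Before turning to those Monte Carlo results, the net-energy relation written above can be checked numerically with a minimal sketch. The EROI values for conventional oil, tar sands and coal-to-liquid are the ones quoted in the text; the formula E_net = E_gross (1 - 1/EROI) is the reconstruction given above, not a quotation of the paper's own notation.

# Net energy output from gross output and EROI: E_net = E_gross * (1 - 1/EROI).

def net_energy(gross_pj: float, eroi: float) -> float:
    """Net energy (PJ) left for society after deducting the energy invested in production."""
    return gross_pj * (1.0 - 1.0 / eroi)

for source, eroi in [("conventional oil", 18.0), ("tar sands", 4.0), ("coal-to-liquid", 0.9)]:
    print(f"{source:16s} EROI = {eroi:>4}: 1 PJ gross -> {net_energy(1.0, eroi):+.2f} PJ net")

# conventional oil -> +0.94 PJ, tar sands -> +0.75 PJ, coal-to-liquid -> -0.11 PJ
# (a negative value means the process consumes more energy than it delivers)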
An evident downtrend in the system EROI can be found under each scenario, and it is more prominent under CNS than under BAU despite the uncertainty. The difference may result from the fact that more energy carriers with higher EROI (such as coal) are replaced by those with lower EROI (such as solar) under CNS. In terms of energy supply, a widening gap between the gross and net energy supplies can be observed in Figs. 6(b) and 6(c). This indicates a more intensive drop in the net energy supply than in the gross energy supply and an increasing share of energy investment, implying that the actual final energy demand will be higher than projected, because the original value is projected under the assumption that the share of energy investment is constant (invariable EROI). Thus, the energy supply derived from the original demand will be lower than the actual demand, which will lead to energy supply insufficiency. The lower planned energy supply makes this problem worse under CNS. Alternative option to capture the extra energy investment under CNS To capture the extra energy investment, the interaction between energy supply and demand has been modelled in some studies using system dynamics (Dale et al., 2012; Sers and Victor, 2018). Inspired by those studies, this study adds the extra energy investment to the gross final energy demand in order to draw more energy supply. Referring to Capellán-Pérez et al. (2019), an EROI feedback factor is adopted to modify the projected final energy demand on the basis of its original value and the system EROI, as follows: FED′_t = ε_t · FED_t, where FED′_t and FED_t respectively denote the modified and original forecasts of the gross final energy demand in year t, EROI_t and EROI_t0 respectively represent the system EROIs in year t and in the base year t0, and ε_t = f(EROI_t, EROI_t0) is the EROI feedback factor in year t. By distributing the modified total final energy demand to each fuel on the basis of the original energy structure, EROI variations are incorporated into the pathway design for the energy transition toward carbon neutrality. The result (Fig. 7) shows a higher final energy demand than planned in most cases when the EROI variation is considered. Moreover, the extra energy demand will keep increasing in the majority of cases, reaching 6500 PJ in 2060 under the worst scenarios. Regarding emissions, meeting the extra energy demand will likely increase the emissions and may even result in failure to realize carbon neutrality in 2060 (an increase of 138 megatons of CO2 (Mt CO2)), making the energy transition step into the "energy-emissions trap". However, the data from 2040 to 2045 show that the extra emissions will drop in many cases. A possible reason is that in these years the installation of more solar and wind capacity will be triggered by the increased peak power demand, increasing the share of renewables in electricity generation under the preferential renewable integration policies (Section 5.2). A possible implication is that more renewables should be tapped to simultaneously offset the extra energy demand and reduce the carbon emissions, thereby preventing the energy and emissions dilemma stemming from the EROI decline. Nevertheless, the uncertainty of the individual EROIs may cause quite different results under some scenarios. For example, the largest difference in extra energy demand between the best and worst cases is approximately 6800 PJ, and the extra emissions may be negative in some years under optimistic scenarios.
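The Monte Carlo treatment of the system EROI and the demand-side feedback can be sketched as follows. The individual EROI ranges, the supply shares and, in particular, the functional form of the feedback factor are assumptions made for illustration: the sketch aggregates the system EROI as the investment-weighted harmonic mean of the individual EROIs and scales final energy demand so that the originally planned net energy is preserved, which is one plausible reading of the mechanism described above, not the exact factor of Capellán-Pérez et al. (2019).

# Illustrative Monte Carlo on the system EROI and an assumed EROI feedback on final demand.
import random

# Assumed uncertainty ranges for individual EROIs (not the study's Table 5 values).
EROI_RANGE = {"coal": (25, 45), "oil": (10, 20), "gas": (10, 20),
              "solar": (6, 12), "wind": (12, 20), "hydro": (30, 60)}

# Assumed primary supply shares in the base year and in 2060 under a CNS-like mix.
SHARES_BASE = {"coal": 0.60, "oil": 0.20, "gas": 0.07, "solar": 0.03, "wind": 0.04, "hydro": 0.06}
SHARES_2060 = {"coal": 0.05, "oil": 0.05, "gas": 0.05, "solar": 0.45, "wind": 0.30, "hydro": 0.10}

def system_eroi(shares, eroi):
    """Investment-weighted aggregation: 1 / sum(share_i / EROI_i)."""
    return 1.0 / sum(shares[k] / eroi[k] for k in shares)

def feedback_factor(eroi_t, eroi_base):
    """Assumed form: keep the originally planned net energy while the investment share grows."""
    return (1.0 - 1.0 / eroi_base) / (1.0 - 1.0 / eroi_t)

factors = []
for _ in range(10_000):
    draw = {k: random.uniform(*EROI_RANGE[k]) for k in EROI_RANGE}
    factors.append(feedback_factor(system_eroi(SHARES_2060, draw), system_eroi(SHARES_BASE, draw)))

factors.sort()
print(f"median demand mark-up: {factors[len(factors) // 2]:.3f}x, "
      f"5th-95th percentile: {factors[500]:.3f}x to {factors[9500]:.3f}x")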
Actually, the results of EROI estimation will be affected by the research boundaries and the caliber of energy statistics. To reduce these uncertainties, the individual EROI values are adjusted using the "standard EROI" boundary coined by Murphy et al. (2011), and the thermal equivalents of each energy source are adopted for energy accounting. However, the EROI calculation could also be affected by other factors, such as energy quality and production sites, requiring more attention to acquire a reliable and robust EROI estimation result. Uncertainty analysis To assess the instability of energy transition, further uncertainties related to fossil fuel supply, renewable energy integration, and penetration of CCS are discussed in this section. Fossil fuel supply From the technological perspective, non-fossil energy sources are abundant in China and sufficient to meet the energy demand under CNS (Table 6). Nonetheless, an appropriate growth rate of non-fossil energy capacity is necessary to ensure the energy security in the pace of fossil fuel withdrawal. However, in the energy transition process, fossil fuels may not be always available, which will accelerate the fossil fuel withdraw and thus require the installation of increased non-fossil energy capacity. This deficiency of fossil fuels stems from two factors. First, physical restraints would increase the difficulty of obtaining cheap fossil fuels and even cause depletion in the future, especially in China where oil and gas resources are limited. Second, price fluctuations may decrease the affordability of fossil fuels. For instance, the power rationing in some provinces of China in 2021 was mainly caused by the soaring coal price. To investigate the influence of possible fossil fuels deficiency, a new scenario called low fossil fuels (LFF) is established based on CNS, decreasing the use of fossil-based technologies in electricity generation, heat production, and hydrogen production and filling the gap by renewables and electricity after 2025. Figure 8 indicates a noteworthy reduction in carbon emissions under LFF compared to CNS, especially from 2025 to 2050. Nonetheless, this reduction is to a great extent at the expense of an evident increase in electricity requirement and the early and accelerated proliferation of the wind and solar power capacity. Specifically, the average annual growth of the wind and solar power capacity before 2045 under LFF is 34.5% higher than that under CNS. This result appears to support the recommendation that additional wind and solar power should be deployed as backups for possible fossil fuel shortage. Renewable energy integration In this study, electricity generation is dispatched endogenously among technologies on the basis of the dispatch order and capacity. Current policies for renewable energy integration assume that wind, solar, and hydro power will be first dispatched. However, the high penetration of intermittent renewables retards their full integration. Thus, the current preferential renewable integration policies may not be sustained. To examine the impact of possible changes in policy, two new scenarios are presented, assuming that solar, wind, and hydro power will no longer be dispatched first (CIP) and even dispatched after fossil-fired power (NPIP) for the possibly high cost of integrating intermittent energies. The difference among scenarios (Fig. 9) suggests a noteworthy increase in carbon emissions with the change in integration policies, especially in the near future. 
Furthermore, the level of cumulative carbon emissions is likely to increase by 8-23 Gt CO2, leading to a higher temperature increase. The additional emissions are mainly from electricity generation and will be 1.8-2.4 times higher in 2060, implying that a change in the preferential renewable integration policies would hamper the climate benefit of renewables, notably in the power sector. (Notes to Table 6: a) the data on the distributed solar potential refer to Wang et al. (2021b), and the others are from Liu et al. (2011); b) the solar potential is estimated by using only 1%-5% of the desert area to install solar power, so the result is conservative and could be higher in the future. Fig. 8: Impact of short supply of fossil fuels.) Penetration of CCS Achieving carbon neutrality before 2060 requires the construction of adequate CCS facilities for removing excess emissions. However, affected by the substantial investment and the difficulty of storing and utilizing CO2 (Mac Dowell et al., 2017), the deployment of CCS may not develop as planned. To explore this uncertainty, scenarios CCSD5, CCSS5, CCSD10, and CCSS10 are introduced. In these scenarios, the annual CCS penetration ratio under CNS will decelerate (accelerate) by up to 5-10 years, respectively. Figure 10 shows a distinct dissimilarity among the different deployment pathways of CCS, signifying that delaying the deployment of CCS by 5 to 10 years (CCSD5 and CCSD10) will probably increase the total emissions by 158 to 318 Mt CO2. Moreover, carbon neutrality will be realized earlier under CCSS5 and CCSS10 than under the other scenarios, and it could not be realized by 2060 under CCSD5 and CCSD10. This finding suggests that the realization of carbon neutrality in China would depend on the extent to which CCS could work. Therefore, treating CCS as the "silver bullet" for the energy transition would be dangerous unless additional research and development and incentives for CCS are provided. Conclusions This paper explores possible pathways for the energy transition toward carbon neutrality in China and investigates the impacts of EROI variations and uncertainties. China could peak CO2 emissions before 2030 with current policies in place, while achieving carbon neutrality in 2060 requires an extra reduction of 7.8 Gt CO2 in emissions. Effective measures include electrification, phase-out of fossil fuels, efficiency improvements, and use of CCS. Of the major sectors, industries and the electricity sector would be the primary contributors. The system EROI and the net energy output are likely to decline during the energy transition, leading to an energy shortage. Filling this gap would increase the final energy demand and thus lead to additional emissions. Moreover, an insufficient fossil fuel supply is found to likely increase the renewable energy demand, alteration of the current renewable integration policies might decrease the emission mitigation effect of solar and wind power, and the uncertain deployment of CCS would hamper carbon neutrality. Some policy implications can be drawn from the findings. First, the scale of wind and solar power should be massively increased. The results show that the declining EROI and the uncertain fossil fuel supply may lead to imbalances, which must be properly assessed. To ensure energy security, clean energy might have to be increased more rapidly than commonly assumed to provide an adequate energy supply. Second, promotion of renewable energy integration and CCS deployment should be prioritized.
Renewable policies could enhance the climate benefit in the near future, and CCS is vital for deep decarbonization in the long run. Considering the challenges in large-scale renewable integration and CCS scale-up, more effort should be put into these options. Third, it is critical to consider thoroughly the development of EROI during the energy transition in policymaking. Our results indicate that improperly accounting for the EROI variation may misrepresent the energy supply needed during the energy transition and its consequences for meeting emission targets. To address this potential risk, the variation of EROI and its impact on the energy transition pathway require further investigation. This study has some limitations. In particular, the approach adopted only provides one possible energy transition pathway under a set of exogenous variables rather than an optimal pathway. Further study could look deeper into the role of technology innovation and economy-energy-environment interactions for the purpose of drawing more complete transition roadmaps. In addition, the EROI changes with technological developments, and these dynamics of EROI are ignored here, which is another direction that deserves further investigation.
7,155.4
2022-07-29T00:00:00.000
[ "Environmental Science", "Economics" ]
Rewarding Best Pest Management Practices via Reduced Crop Insurance Premiums Despite decades of research, development, and extension on the mitigation and management of pesticide resistance, the global agricultural situation is becoming increasingly dire. Pest populations with evolved resistance to multiple pesticide sites of action are becoming the norm, with fewer remaining effective xenobiotics for control. We argue that financial incentives and not regulations are needed to encourage farmers or land managers to use best management practices recommended by academia. Although some incentives are offered by pesticide manufacturers or distributors, there is a paucity of incentives by other industry sectors and all levels of government (federal or state/provincial). Crop insurance can be important to facilitate and reward best pest management practices and address other important agricultural policy objectives. Herein, we describe possible changes to crop insurance programs in the United States and Canada through premium rate changes to incentivise clients to adopt best management practices. Incentivising a Change in Behaviour Incentives have long been used to encourage markets to shift existing practices or to encourage the development of new activities. The standard example is how patents are granted to individuals, companies, and universities engaged in research and development. In return for investing in research and development activities, patent acts in most countries provide the patent holder with up to 20 years of protection on their invention. In agriculture, investment incentives markedly changed in the 1980s when it became possible to patent plants and the processes used to create plants. This change was most noticeable in Canada for canola (Brassica napus L.) research and development. In the period from 1950 to 1984, the private sector did not develop and release a single canola variety, yet this figure jumped to 12 from 1985 to 1989, 39 from 1990 to 1994, and 76 from 1995 to 1998 [1]. United States (U.S.) releases of plant cultivars, notably those with traits for pest protection introduced by genetic engineering, have also markedly increased [2]. Farmers commonly adopt new technologies upon witnessing the benefits. Field trials and tours are one way that farmers are able to observe the various agronomic traits or practices that they deem desirable, such as higher yield, less seed pod shattering (canola), drought resistance, or resistance against lodging. When it comes to information on crop choices, farmers rely on personal experience 80% of the time [3]. Personal experience works well when the technologies deliver clearly improved benefits over existing ones. As examples, the adoption of high-clearance sprayers allowed farmers the opportunity to desiccate taller field crops in the fall; transgenic crops allowed farmers to obtain improved weed control [4]. What options exist to incentivise farmers to adopt technologies or practices when the evidence of benefits may be less obvious? One example is the midge-tolerant wheat (Triticum aestivum L.)
stewardship program in Western Canada. Infestations of wheat midge (Sitodiplosis mosellana) reduce spring wheat yields by an average of 30% at a cost of $30 million annually [5]. Midge-tolerant wheat was commercialized in 2010, coupled with an aggressive outreach program educating farmers about proper use of the technology, principally the recommended rotation interval. It was stressed that there was no alternative option for midge control should the pest evolve resistance. Five years after introduction, midge-tolerant varietal blends had reached 18% of wheat area in Western Canada and one-third in the province of Saskatchewan [6]. Adoption of this technology resulted in a reduced need for scouting and insecticide applications and higher yields, as insecticides were not completely effective against the pest. These tangible benefits provided the incentive for farmers to comply with the stewardship program to mitigate the evolution of pesticide resistance. However, when it comes to ensuring that farmers are adhering to best management practices to mitigate the evolution of chemical resistance in pests such as weeds, what incentives exist? To better understand how to incentivise a change in behaviour, there is increasing attention focused on the human dimension of the evolution and management of pesticide resistance, specifically the economic and social drivers affecting farmer decisions [7]. Presentations by weed scientists, crop consultants, economists, and rural sociologists addressed interdisciplinary aspects of the herbicide resistance problem and explored different management approaches at the Second Summit on Herbicide Resistance in 2014 in Washington, DC [8]. There was broad consensus that short-term economics is a key driver in the decision-making process of farmers or land managers. The role of government regulations vs. financial incentives in spurring adoption of recommended herbicide resistance mitigation or management practices was an important topic discussed at the summit. As stated by a number of presenters, herbicide resistance management falls within the broader context of integrated weed management, with the goal of using a diverse mix of herbicide, cultural, and mechanical practices to reduce weed population abundance. One idea proposed was a regulatory incentive to enable herbicide registrants to receive an extended data exclusivity period in exchange for not developing one new herbicide in multiple crops grown together in rotation, or for implementing practices such as robust herbicide mixtures or limitations on herbicide application frequency; this proposed incentive would theoretically provide a mechanism to register herbicides in ways that promote their longevity [9]. Approaches based only on product market incentives have unfortunately contributed to and exacerbated the current situation of widespread multiple herbicide resistance in key weeds due to a singular focus on herbicides [9]. Herbicide resistance (integrated weed) management is much more than just herbicide diversity. If financial incentives by the private sector are not sufficient for effective herbicide resistance management, what about financial incentives by the public sector, e.g., federal or state/provincial government agencies? Government agencies, whether in the U.S.
or Canada, have not formulated or implemented policies to address herbicide resistance mitigation or management during the past 50 years, in contrast to insecticide resistance management (e.g., the Bacillus thuringiensis refuge requirement). Ultimately, all stakeholders - farmers, retailers, agronomists, crop consultants or advisors, government agencies, farm organizations and crop commodity groups, professional societies, the scientific community, and the media - must play a role in herbicide resistance management [10]. In this review, we explore public policy options to address pesticide resistance, specifically, how crop insurance could be an important vehicle to reward the adoption of best management practices. Following an overview of the economics of best management practices in crop production, with a focus on crop rotation, we outline the state of herbicide resistance, recommended best management practices, and crop insurance programs in the U.S. and Canada, using case studies from the State of Iowa and Province of Saskatchewan, respectively. Lastly, we propose an adaptation or expansion of an existing actuarial model for premium rate discounts in crop insurance to include the degree of adoption of best pest management practices. The Economics of Best Management Practices 2.1. Focus on Crop Rotation. Crop rotation or crop diversity is widely considered a foundational or primary best management practice. Crop rotations can be a constructive management tool for farmers, but can also be deleterious if the rotations become too short. Crop rotation should sustain profitable crop production. While rotations are intended to provide long-term benefits such as yield stability and soil health, short-term economics can alter a farmer's crop rotation plan and negatively impact their land and future production. In the early 1990s, economic factors such as high interest rates, low commodity prices, and concerns of environmental degradation shifted land and crop production practices. Concomitant advances in technology and machinery, improved seed varieties and agrochemicals, and a growing global market with a broad palette of agricultural commodity demands led to a reduction in fallow and tillage intensity and increased production of canola, pulse crops such as field pea (Pisum sativum L.) and lentil (Lens culinaris Medic.), and formerly niche crops in the Northern Great Plains of North America [11]. Farmers have good intentions to follow a sustainable crop rotation plan, yet short-term factors can hinder such plans. The factors that influence a farmer to diverge from their planned rotation are a result of market conditions (i.e., crop prices), environmental factors (e.g., adverse weather), and capital constraints (i.e., equipment). The most substantial challenge for incentivising rotations is profitability. Presently, canola has generally been the most consistently profitable crop for farmers in Western Canada. It is recommended that canola not be grown more frequently than every third year for agronomic reasons (chiefly disease mitigation) [12], yet there is considerable financial incentive to shorten this rotation. An agronomic incentive would be to increase the yields of cereals and pulses so that they are as profitable as canola; however, this goal is a long-term solution that is well over a decade away.
Within crop insurance, there are currently limits to encourage best management practices. While these practices may have environmental and long-term benefits in dealing [14]. Within that policy, Cross Compliance became a mechanism of direct payment for farmer compliance to meet standards regarding the environment, food safety, and health of plants and animals. Under Cross Compliance, farmers have statutory management requirements (hereafter Requirements) and good agricultural and environmental conditions (hereafter Conditions), in which Requirements are more rooted in food safety and animal welfare practices, while Conditions cover the areas of environment, climate change, and land conditions. Each European Union country is required to implement Cross Compliance within the policy; however, each interprets the Requirements differently based on its own agricultural industries and establishes its own minimums for Condition levels. When standards are met, payments are made to farmers; however, violations in a given year can reduce direct payments by 5 to 15%. In cases of conscious negligence, the subsidised payment can be reduced by 20-100%, and the reduction can be carried over multiple years. The countries are required to conduct their own spot inspections and are incentivised to do so, as each country retains 25% of the enforced negligence reductions from their farmers' direct payments. The Cross Compliance approaches of the U.S. and EU are enforceable based on each nation's interpretation of farmers' rights and public goods. In Canada, farmers have the right to carry out whatever practices they wish on their privately owned land. However, government has the right to introduce or change current rights of land ownership and production to have farmers implement or not exercise particular practices as a condition of public funding [13]. Given that the Canadian federal-provincial crop insurance program is subsidised through public funding, the governing agencies have the opportunity to offer greater incentives for those who act in the public good through implementation of best management practices. If the provincial and federal governments were to incentivise the crop insurance program, farmers participating in the program could essentially be releasing some of their production rights in return for adopting best management practices, paying reduced insurance premiums commensurate with the degree of adoption. Crop Insurance Programs in the United States and Canada: Case Studies from Two Jurisdictions Availability of crop insurance programs and grower participation rate varies widely among developed countries. For example, fewer than 1% of Australian growers have multiperil crop insurance due to a number of reasons, including the cost of premiums [15]. In contrast, the majority of growers in the U.S. and Canada are enrolled in crop insurance programs. Because crop insurance differs significantly between the U.S. and Canada, we examine two respective scenarios from jurisdictions in both countries: the State of Iowa and the Province of Saskatchewan. Each jurisdiction represents a significant proportion of agricultural land in its respective country. Iowa, United States: Overview. The U.S.
Environmental Protection Agency has recently mandated more rigorous herbicide resistance reporting and mitigation protocols for crop protection companies, in response to the introduction of auxinic-resistant crops and associated herbicides. Another federal agency, the United States Department of Agriculture Risk Management Agency, has the ability and capability to help manage the risk of herbicide resistance in U.S. agriculture through programs such as crop insurance that might be used to provide incentives to farmers [10,16]. The Federal Crop Insurance Corporation, a part of this agency, is the source of crop insurance for U.S. farmers and ranchers [17,18]. Insurance companies in the private sector sell and service the crop insurance policies (Approved Insurance Providers), which contain references describing good or sustainable farming practices [19]. This agency helps develop and approve crop insurance premium rates. In that role, it could incentivise herbicide resistance management as a good agronomic practice to avoid losses in crop yield or quality; policy premiums could be lower for those following best management practices [16]. Support for this initiative may not be high, however, as fewer than 40% of Iowa farmers who participated in a 2017 survey favoured private company- or government-incentivised best management practices for herbicide resistance management [20]. Iowa Is Representative of the Midwest Corn Belt. Iowa is located close to the geographic center of the U.S. The state is representative of agriculture in the U.S. Corn Belt and has an area of approximately 14.5 million ha, of which 86% is crop land [21]. Corn (maize) (Zea mays L.) production and soybean (Glycine max L. Merr.) production in the state represent 19 and 17% of U.S. totals, respectively. In 2017, Iowa had 86,900 farms, continuing the trend over the past 50 years of fewer, larger farms [21,22]. Herbicide-resistant weed issues in Iowa are also representative of the Midwest Corn Belt. The most important herbicide-resistant weeds are waterhemp (Amaranthus tuberculatus L.), horseweed (Conyza canadensis L. Cronq.) and giant ragweed (Ambrosia trifida L.), although waterhemp is ubiquitous in Iowa fields. Resistance in waterhemp populations has evolved to acetolactate synthase (ALS) inhibitors, photosystem-II inhibitors, glyphosate, protoporphyrinogen oxidase (PPO) inhibitors, and hydroxyphenylpyruvate dioxygenase (HPPD) inhibitors in 100, 97, 98, 17, and 28% of the fields, respectively, based on a survey of about 900 Iowa populations [23]. Multiple herbicide resistance within waterhemp is the norm, with 69% of the populations having evolved resistance to three of the above herbicide sites of action. The most common multiple-resistance pattern is ALS inhibitor plus photosystem-II inhibitor plus glyphosate. Resistance to four and five herbicide sites of action is estimated to occur in 15 and 5% of the populations, respectively.
Management of herbicide-resistant weeds in Iowa is also representative of the Midwest Corn Belt. A survey conducted in 2014 found that more than 90% of respondents reported they found weed management to be a never-ending technology treadmill, and 82% suggested that weeds would evolve resistance to any new herbicide technology [24]. Sixty-four percent also suggested that the evolution of new resistances in weed populations was a major concern despite new technologies, and 69% blamed a "few" farmers and poor management for the evolution of herbicide-resistant weeds. More than 89% of survey respondents reported the same or increased use of herbicides, while 54% indicated that they had not changed scouting practices. Respondents reported they used cover crops (21%), but 50% had no plans to include cover crops. Extended and more complex crop rotations and converting crop land to perennial crops represented 15 and 14% of respondents, respectively. Seventy-one percent reported that they purchased crop insurance. Only 8% of farmers who participated in the 2017 survey suggested that crop insurance discouraged them from using alternative practices that might help herbicide resistance management [20]. Best Management Practices That Could Qualify for Insurance Premium Discounts. Good farming practices are defined by the United States Department of Agriculture Risk Management Agency as "the production methods utilized to produce the insured crop and allow it to make normal progress toward maturity and produce at least the yield used to determine the production guarantee or amount of insurance, including any adjustments for late planted area, which are (1) for conventional or sustainable farming practices, those generally recognized by agricultural experts for the area or (2) for organic farming practices, those generally recognized by the organic agricultural industry for the area or contained in the organic plan" [25]. The Approved Insurance Provider can contact the Federal Crop Insurance Corporation to determine whether or not a specific production method is considered to be a Good Farming Practice [19]. Unfortunately, this definition is ambiguous, open to multiple interpretations, and could apply to almost any production practice a farmer chooses to adopt. Agricultural experts, as designated by the agency, who can determine if a practice meets the Good Farming Practice criteria include the Cooperative Extension Service, the United States Department of Agriculture, agricultural departments of universities, certified crop advisers, and certified professional agronomists. While pests and diseases are mentioned, there is no discussion about weeds and, specifically, no mention of herbicide-resistant weeds.
Weed scientists have dedicated considerable effort to developing best herbicide resistance management practices [26]. Most farmers feel they already are using best management practices and thus managing herbicide-resistant weeds effectively [24,27,28]. However, many of the practices farmers adopt are those that require the least effort and are the least effective at addressing herbicide resistance management [22]. Many of the best management practices that farmers adopt focus on herbicides; however, it is not possible to manage herbicide-resistant weeds simply by spraying herbicides. Practices that farmers are less likely to adopt are those not easily integrated into their current production system or that require time or labour to implement. Unfortunately, given the current demographics of agriculture, the time or labour needed for the most effective herbicide resistance management practices (e.g., cover crops) is limited or deemed insufficient [22,29,30]. Effective best management practices must impact the biology and ecology of herbicide-resistant weeds, and these are the practices that could be incentivised by a discounted cost of crop insurance. Ecologically based weed management must include a diverse suite of tactics to provide acceptable weed suppression [31]. The tactics, such as crop residue cover or crop planting density, should enhance weed seed bank losses, inhibit weed seedling establishment, and minimise weed seed production [32]. It is also critically important that the best management practices are easily assessed and documented by the Federal Crop Insurance Corporation or the agency that accepts the responsibility of documentation. Incentivised yet voluntary approaches are more likely to be effective if there are persuasive reasons to participate, clearly defined behavioural standards, and an ability to monitor outcomes with consequences due to noncompliance [33]. Thus, a number of recommended best management practices would not be eligible for crop insurance discounts. While as many practices as possible should be implemented for best herbicide resistance management, a number of them (e.g., preventing weed seed production [26]) are general in nature and do not suggest a specific procedure or action that could be efficiently documented. Some best management practices are relatively specific but do not impact weed population dynamics, such as scouting, equipment sanitation, use of multiple herbicide sites of action, or applying the recommended herbicide rate at the recommended application timing relative to weed development. Such practices are difficult to document and therefore may be considered ineligible for a crop insurance incentive. However, documentation of some of these best management practices may be achieved by farmer receipts for services rendered (e.g., scouting and pesticide application) or products purchased (e.g., agrochemicals). Best management practices that do impact weed biology and ecology are diverse crop rotations, cover crops, and tillage. These practices, outlined below, would be highly effective for herbicide resistance management and are easily documented.
(1) Crop Rotation as a Tactic to Qualify for Insurance Premium Discounts. Iowa farmers perceive the benefits of extended crop rotations for herbicide resistance management [20]. For example, reduced herbicide use was recognized by 64% of the farmers who participated in a 2017 survey. However, only 27% agreed that crop rotations other than corn/soybean could be as profitable. Fifty percent of the farmers suggested that the culture of Iowa agriculture was not supportive of alternative crop rotations and indicated that the lack of viable markets (70%) and lack of input support by agribusiness companies (58%) were important barriers to diverse crop rotations [20]. Therefore, the respondents' attitudes and actions are not the same. Research has shown that rotating cool- and warm-season crops effectively decreases weed population density [34,35]. Diverse crop rotations also allow for the reduction of herbicides without a loss of potential crop yield [36]. More diverse crop systems (inclusion of small-grain cereals or perennials) had lower production costs and greater economic return to land and management regardless of subsidies [37]. The inclusion of a perennial forage provided the greatest economic return, the lowest production costs, and the greatest impact on the weed seed bank. However, the more diverse crop production systems had greater labour requirements than a conventional 2-year corn/soybean rotation.

(2) Cover Crops as a Tactic to Qualify for Insurance Premium Discounts. Sixty-one percent of Iowa farmers who participated in a 2017 survey rated themselves as poor or very poor with regard to using cover crops [20]. However, the documented benefits of cover crops are well established and include weed suppression and improved soil and water quality, nutrient cycling, and, depending on the choice of cover crops, cash productivity [38]. The extent of these benefits may be offset by the cost of establishing the cover crop, loss of income if the cover crop interferes with other crops, and other production expenses. Depending on the choice of cover crop and the manner of establishment, there can be a major decline in the germinable weed seed bank [39]. Fall-seeded rye (Secale cereale L.) is an excellent cover crop for Iowa; it is easy to establish, provides excellent protection from soil erosion, and helps weed management by mulch and possibly allelotoxins. However, rye does not provide an opportunity for additional income. Mixtures that include rye with legumes and mustards are more costly to establish but provide similar protection from soil erosion with an additional plant nutrient benefit. Starting in 2017, the state soybean commodity group and agriculture department worked with the federal government and offered a $12 premium reduction on crop insurance per cover crop hectare planted [40]. This program was established not for herbicide resistance management, but rather to help reduce agricultural nutrient contamination in water.
(3) Tillage as a Tactic to Qualify for Insurance Premium Discounts. Tillage is a conundrum with regard to herbicide resistance management. While tillage had significant historical positive benefits for weed management, there are important environmental, economic, and time management costs that do not support farmer adoption of tillage for herbicide resistance management [27,41]. In many situations, government regulations prohibit or discourage the use of tillage, regardless of the reason. However, there are tillage practices that would benefit herbicide resistance management and maintain significant plant residues on the soil surface, thus minimising erosion and water quality concerns [42]. For example, interrow cultivation aids weed management and reduces herbicide use without a loss of crop yield [43]. It is suggested that "site-specific" tillage for herbicide resistance management would overcome many of the concerns about increased labour cost and time requirement as well as concerns about soil erosion and water quality. Interrow cultivation or other tillage practices would only be used in fields or portions of fields that required additional weed management [22]. Importantly, tillage would help disrupt the successful biological or ecological characteristics of weeds and be easily documented for qualification for crop insurance premium discounts.

A Proposed Actuarial Approach for Insurance Premium Discounts: Adaptation from an Experience-Based Model. Although a majority of growers across North America are likely already dealing with herbicide resistance, reactive best management practices are as important as proactive ones. Although simulation models or decision-support systems have been developed to estimate the risk of resistance evolution for a particular weed species to a particular herbicide site of action in an agroecoregion [44], predicting resistance risk on a field basis for key economic weed species in an agroecoregion is not feasible. Moreover, monitoring herbicide-resistant weed population abundance at the field level and estimating potential crop yield loss would not be cost-effective nor practical for crop insurance purposes. Therefore, the most feasible, practical approach to recognizing and incentivising best pest management practices via reduced crop insurance premium rates is not estimating the risk of resistance and the cost thereof, but rather the level of farm adoption of academia-recommended best management practices for that agroecoregion.

Adverse selection and moral hazard are key considerations in setting crop insurance premium rates. As described previously, premium rates for Risk Management Agency-approved policies are set by the Federal Crop Insurance Corporation, and the policies are offered to farmers by Approved Insurance Providers. The loss-cost rating methodology sets premium rates according to the average historical rate of loss, e.g., if, on average, policies pay out 10% of their value, then charge a 10% rate. Adverse selection occurs if premiums do not accurately reflect an individual farmer's likelihood of loss. Because growers are better able to ascertain their likelihood of suffering losses than are insurers, it remains a serious problem affecting the actuarial soundness of crop insurance programs [45]. Moral hazard refers to the problem that occurs if growers alter their behaviour (e.g., reduce crop inputs) after buying insurance to increase their likelihood of collecting indemnities (claim payout).
An innovative actuarial approach in calculating crop insurance risks and premiums was reported in 2006 [46]. The actuarial model describes an experience-based premium rate discount system for crop insurance in the U.S. The study was funded in part by the United States Department of Agriculture Risk Management Agency. The three measures of experience are the following: (1) loss ratio index: claim/indemnity costs vs. premium revenues of an individual insured grower over a 5-year period relative to that for all growers of the same crop type in a jurisdiction; (2) yield variance index: ratio of an individual grower's 10-year yield variance to a weighted average yield variance for other growers of the same crop type in a jurisdiction; and (3) number of years of continuous participation (for the previous 8 years). However, the study ultimately recommended that only the loss ratio index was needed as a basis for an experience-based discount. We believe this tested actuarial approach is directly applicable to discounted insurance premiums for best pest management practices, which facilitate favourable loss ratio and yield variance indices. Based on the agency's national database from 1991-2002, the predicted average premium discount was 10% for corn and soybean (Table 1). Therefore, a corn or soybean grower having 5 years with the best rating for experience would receive a 10% premium discount.

We propose that this actuarial system be expanded to include an additional measure of experience, i.e., a best pest management practice index, based on degree of adoption of best management practices outlined previously. Like measure (3) above, this proposed index would not require a peer group for comparison. This index would need to be phased in over time, allowing collection of this additional agronomic data across years. We believe this adaptation or expansion of a sound actuarial model is a good first modest step (fiscally, realistically, logistically, and practically) for incentivising best management practices for pesticide resistance mitigation or management. The maximum premium discount may be significantly greater than 10%; for example, insurance program participants in Saskatchewan can receive a maximum premium discount of 50%, as described below.

Saskatchewan, Canada: Overview. Saskatchewan encompasses 65 million ha, but only 32% is considered farm land; annual field crops were grown on 15 million ha in 2017 [47]. The top two crops are canola and wheat (Triticum aestivum L.), with production representing 53 and 43% of the national totals, respectively. Saskatchewan had 34,523 farms in 2016, with a similar trend as that of Iowa in declining numbers and increasing size over time. In a random survey of 400 fields in the province in 2014-2015, 57% had an herbicide-resistant weed biotype. The most abundant and troublesome multiple-resistant weed is wild oat (Avena fatua L.), found in 25% of Saskatchewan fields or covering 2.5 million ha [48]. This biotype is resistant to acetyl-CoA carboxylase (ACCase) and ALS inhibitors, thus potentially eliminating all postemergence herbicides registered for use in wheat or barley (Hordeum vulgare L.). The cost of herbicide-resistant weeds to farmers averaged $24 ha⁻¹ through increased herbicide use, crop yield/quality loss, or both; the majority of surveyed farmers indicate that herbicide-resistant weeds negatively impact crop production [48].
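Returning to the experience-based model summarized at the start of this section, the arithmetic behind the loss-cost rate, the loss ratio index, and the proposed combination with a best management practice index can be illustrated with a minimal sketch. All numbers, the linear index-to-discount mapping, and the way the two indices are weighted are illustrative assumptions of this sketch, not the formulae of the cited study or of any agency.

```python
# Illustrative sketch only: hypothetical numbers and an assumed (linear, capped)
# mapping from indices to a premium discount; not the rating rules of any agency.

def loss_cost_rate(indemnities, liabilities):
    """Loss-cost rating: premium rate = historical payouts / insured liability."""
    return sum(indemnities) / sum(liabilities)

def loss_ratio(indemnities, premiums):
    """Claim payouts divided by premium revenue over a multi-year window."""
    return sum(indemnities) / sum(premiums)

# Loss-cost rating for the pool: if policies historically pay out about 10% of
# their value, the base rate is about 10% (hypothetical history below).
base_rate = loss_cost_rate(indemnities=[9, 12, 10, 8, 11], liabilities=[100] * 5)

# 5-year history of one grower vs. an assumed peer-group average for the same crop.
grower_lr = loss_ratio(indemnities=[0, 1200, 0, 0, 800], premiums=[1000] * 5)
peer_lr = 0.55                                  # assumed peer-group loss ratio
loss_ratio_index = grower_lr / peer_lr          # < 1 means better-than-peer experience

bmp_index = 0.6                                 # proposed: degree of BMP adoption, 0..1
MAX_DISCOUNT = 0.10                             # 10% average discount cited for corn/soybean

# Assumed combination: equal weight to experience and BMP adoption, capped at MAX_DISCOUNT.
experience_score = max(0.0, 1.0 - loss_ratio_index)
discount = MAX_DISCOUNT * min(1.0, 0.5 * experience_score + 0.5 * bmp_index)

liability = 10000.0
premium = liability * base_rate * (1 - discount)
print(f"base rate = {base_rate:.1%}, loss ratio index = {loss_ratio_index:.2f}, "
      f"discount = {discount:.1%}, premium = {premium:.2f}")
```

Under these assumptions, a grower with a better-than-peer loss history and moderate adoption of best management practices would receive a discount of roughly 4%, well below the 10% cap; the point of the sketch is only that both claims experience and practice adoption can enter the same simple calculation.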
Saskatchewan crop insurance (similar to the other provinces) is a federal/provincial government program, cost-shared with a 60% contribution by both levels of government and 40% by farmers/land managers [49]. In the 2017 crop year, 77% (11.5 million ha) of annual field crops in Saskatchewan were insured [50]. This rate might increase if yield guarantees accurately reflected innovations within crop breeding. For example, canola yield guarantees have not fully incorporated the commercialization of higher yielding hybrid canola varieties, resulting in some farmers foregoing insurance. There are some farmers who do not purchase crop insurance due to farm enterprise size or philosophies.

As in the U.S., crop insurance covers crop losses (production or quality) from uncontrollable causes such as drought, excess moisture, insects, or frost. Farmers may select insurance coverage for 50, 60, 70, or 80% of their average yields for most crops. Yield-loss payments are based on the shortfall between the production guarantee and the total net harvested production, adjusted for quality, for all hectares of the insured crop. Additional crop insurance coverage, such as for hail damage, is offered by private sector companies.

Premium discounts and surcharges acknowledge risk differences among customers, reducing premiums for those without a history of repeated claims. As outlined previously, experience discounts and surcharges are calculated using an individual's history of losses and a comparison of individual loss history to area losses. When an increase in the number or size of losses is experienced, the discount, if present, is reduced or the surcharge is increased. The maximum number of debits or credits a customer can accumulate is 16. The maximum number of credits equates with a 50% premium discount, whereas the maximum number of debits confers a 50% premium surcharge.

In a customer's signed production declaration (due after harvest: November 15), the only agronomic practices that need to be listed on a field basis are (1) crop variety; (2) seeding date; (3) fertilizer-use rate (i.e., nitrogen, phosphorus, potassium, and sulphur); (4) herbicides (i.e., name and number of applications); and (5) fungicides/insecticides (i.e., name and number of applications). The crop variety grown must be currently registered. The deadline for seeding spring crops is June 20 because of the risk of frost damage in the fall. Rates of fertilizer or use of pesticides deemed insufficient for adequate growth and yield of the insured crop may nullify payment of a yield-loss claim (i.e., the moral hazard described above).

Best Management Practices That Could Qualify for Insurance Premium Discounts. As previously indicated, two principles that best management practices must adhere to are (1) not distort the marketplace and (2) be verifiable. Insurance premium discounts should not subsidise the production of one crop over another or contravene World Trade Organization rules. Verification through customer-signed declarations and audits is designed to discourage program abuse. Some highlighted best herbicide resistance management practices described below address three issues impacting the selection for herbicide resistance: (1) crop rotation diversity and crop competitiveness against weeds; (2) pesticide-use diversity; and (3) weed sanitation practices. These issues are part of the top 10 herbicide resistance management practices recommended in the Northern Great Plains [51].
As described in Section 2, crop rotations have changed considerably following the commercialization of transgenic crops and the removal of millions of hectares of fallow in Western Canada, as weed control and soil conservation improved to such a degree that fallowing is no longer as important as it was 30 years ago. In a 2012-2014 survey of prairie farmers, canola rotations had markedly shortened since then (Table 2). Prior to transgenic canola in 1996, crop insurance programs would only insure a field of canola if there were 3 years in between the crop; that is, a farmer could only grow canola once in 4 years to have it insured. That stipulation no longer applies. Today, over 50% of prairie growers plant canola every second year or, to a much lesser extent, every year. Many in the agriculture industry have indicated that if canola area passes 8 million ha in the Canadian prairies, too many farmers are growing canola in a 2-year rotation. Canola area passed this threshold in 2012 and has subsequently remained above this level [52]. The most common crop rotation across the Canadian prairies is now herbicide-resistant (glufosinate, glyphosate, and imidazolinone) canola-wheat. In the eastern Prairies, rotations that frequently include glyphosate-resistant crops (canola, soybean, corn) are at increased risk of glyphosate resistance in weed populations [53].

One potential incentive to ensure that farmers are not moving into crop rotation patterns that are overly reliant on one crop and/or chemical is to offer discounts on crop rotations of cereals/oilseeds/pulses. Insurance premium discounts could be offered to clients who do not grow the same crop back-to-back in a field, such as canola-canola or wheat-wheat. An alternative policy is to simply refuse insurance in that situation. Numerous studies have documented the agronomic benefits of following one crop with a different crop, in terms of pest incidence, soil health, or overall yield benefit. Herbicide resistance is strongly correlated with crop monoculture. Therefore, this best management practice should be the foundation in accreditation for crop insurance premiums. It is easily verifiable for those clients previously enrolled in the crop insurance program. With software advances, monitoring rotation variation would be quite straightforward, and any farmer that practices rotation mixes of cereals, oilseeds, and pulses could be rewarded for this practice through lower insurance premiums (Table 3).

This incentivised insurance premise does have the potential for some limitations. Standardised insurance premium reductions would tend to homogenize crop production, essentially indicating that a rotation of the three crop types (cereals, oilseeds, and pulses) is relatively equally feasible at any location. Geographic location and soil type can enhance and restrict the potential to produce some types of crops. For example, there are parts of Western Canada that have high rates of precipitation, making the production of some pulse crops problematic due to disease incidence or seed quality. Although fallow in crop rotations may be justified in drier regions, it has been linked with soil degradation (tilled fallow) or herbicide resistance (chemical fallow), notably glyphosate resistance. For example, repeated applications of high rates of glyphosate (alone) combined with no crop competition facilitated the selection of glyphosate-resistant kochia (Kochia scoparia L. Schrad.)
in the Great Plains [54,55]. Premium discounts for cover crops (e.g., green feed and green manure) to discourage fallowed land would help address both soil conservation and resistance management goals. In addition to insurance premium discounts to encourage crop diversity, discounts given to promote crop seeding rate and therefore weed-competitiveness potential would be beneficial for herbicide resistance management. In the Northern Great Plains, crop seeding rate is one of the most consistently effective cultural weed management practices [56]. Verification is not as simple as for crop rotation diversity, but can be accomplished through random audits of stored grain reports required by the crop insurance program and seed purchase or seed cleaning receipts.

Crop rotation diversity would facilitate a diversified portfolio of chemical weed control options that would contribute to minimising the potential for the evolution of herbicide resistance in weed populations. Best management practices related to pesticide use include mixtures or sequences within a growing season (pre- and postemergence) that meet the criteria for herbicide resistance management, or herbicide rotations over crop years based on effective sites of action or wheat selectivity to mitigate target-site and nontarget-site (metabolic) resistance, respectively. For example, discounts for not using herbicides classified at high risk for selection of herbicide resistance (e.g., ACCase inhibitors; ALS inhibitors) in consecutive years in crop would reduce the selection pressure for herbicide resistance [51] (Table 3). Moreover, encouraging glyphosate tank-mixtures in chemical fallow fields would reduce the selection pressure for glyphosate resistance.

To reduce the potential for moral hazard, the crop insurance program potentially penalizes clients who do not apply herbicides in a given year or who apply herbicide treatments deemed insufficient to prevent yield loss. Similarly, the United States Department of Agriculture Risk Management Agency policy is to insure against yield loss caused by pests such as weeds, whether or not populations are resistant [16]. Yet, overreliance on herbicides at the expense of other weed management tools has led to the herbicide-resistant weed predicament we face today, especially the challenge of managing multiple-resistant weed populations. Some compromise is needed in these situations, which may be aided by field scouting records of weed abundance prior to herbicide application.

Another area that could be addressed via insurance premium discounts is weed sanitation [51] (Table 3). The goal is to reduce weed propagule immigration into a field, weed spread across fields, or entry into the soil seed bank. Sanitation can take many forms, such as using weed-free crop seed or controlling weeds along field borders or in small patches (site-specific management). One area that is receiving increasing attention globally is harvest weed seed control practices, such as chaff carts, weed clipping (above the crop canopy), or weed seed destruction [57]. In addition to crop insurance premium discounts, the highest rate for capital cost allowance, a tax deduction from farm income, would incentivise the purchase of these types of harvest weed seed control equipment.

In summary, crop insurance proportional or weighted discounts should be offered to incentivise these potential best herbicide resistance management practices in annual crop systems in the Northern Great Plains of Canada (Table 3).
The magnitude of a discount for a specific best management practice should reflect its current degree of adoption (primary criterion) and estimated cost of implementation in an agroecoregion, i.e., the greatest discounts for practices with the lowest adoption, the greatest cost, or both. Degree of adoption of best management practices (maximum of 1.0) would be reflected in a best management practice index combined with the existing loss ratio index in calculating a farmer's premium discount, similar to the actuarial approach proposed in the U.S. (Section 3.1.3).

A Time for Action

The purpose of crop insurance is to mitigate or manage financial risk. Clearly, pesticide resistance is an increasing risk to sustainable crop production. The basic reason for crop insurance providers to finally become engaged is reduced future indemnities for crop losses due to pesticide resistance. We have suggested possible enhancements to crop insurance programs in the U.S. (case study jurisdiction: Iowa) and Canada (case study jurisdiction: Saskatchewan). Specifically, we advocate premium rate changes to incentivise farmers or land managers to adopt best herbicide resistance management practices as recommended by academia. We have outlined some possible suitable best management practices in these two case study jurisdictions that could be eligible for crop insurance premium discounts. Because the level of adoption of many of these recommended best management practices is generally low, we believe additionality has good potential (i.e., best management practices that are adopted only if the farmer receives a discount). As stated previously, discounts for low-adoption best management practices should be the greatest to realize additionality. A posteriori audits and surveys will need to be conducted for iterative adjustments in discount schemes so that they are actually changing the adoption "needle" while maintaining actuarial soundness. The intent is to incentivise adoption of key resistance management practices, not subsidise the entire cost of their implementation. As with many government budget measures, a new policy or program is typically introduced initially as a pilot project, and data are then collected to determine whether the actual outcome was close to the target outcome. The opportunity cost of inaction is rarely factored into the economics of programs to incentivise grower behaviour. Continued inaction by all levels of government in addressing the crisis of herbicide resistance is not an option. The attitude of a "wait and see" approach to herbicide resistance management over the past 50 years must change.
The public good is not well served in the long run by relying solely on price discounts for bundled crop inputs by agribusinesses, who are often conflicted between maximising sales and ensuring academia-recommended practices are objectively relayed to farmers. Ultimately, however, decisions are made by the farmer or land manager, who must deal with the consequences. Much greater interaction and collaboration is needed between public policy-makers and the multidisciplinary scientific community actively engaged in addressing this issue. Politicians of all levels of government must become engaged in an issue that threatens to diminish agricultural productivity and food security in the near and long term.

Table 3: Hypothetical crop insurance proportional or weighted discounts for varying levels of implementation of three potential best herbicide resistance management tactics and practices in annual crop systems in the Northern Great Plains of Canada. Degree of adoption (maximum of 1.0) would be reflected in a best management practice index added to the loss ratio index in calculating a farmer's insurance premium discount, similar to the actuarial approach proposed in the United States (Section 3.1.3). (a) Established for the different prairie soil climatic zones. (b) Acetyl-CoA carboxylase or acetolactate synthase inhibitor herbicides. (c) Meeting specified resistance management criteria.
9,255.4
2019-01-02T00:00:00.000
[ "Agricultural and Food Sciences", "Economics" ]
HYBRID ATOMIC ORBITALS IN ORGANIC CHEMISTRY. PART 1: CRITIQUE OF FORMAL ASPECTS

The importance of hybrid atomic orbitals, both historically and mathematically, is reviewed. Our new analysis of the original derivation of the sp³, sp², sp model reveals serious errors. Based on a critical survey of the literature, we submit six formal criteria that deprecate the use of hybrid orbitals in a pedagogical context. A sound mathematical basis of sp³ and sp² formulae does not exist; hybrid atomic orbitals have hence no legitimate role in the teaching of organic chemistry.

INTRODUCTION

In 1931, Pauling 1 and Slater 2 independently originated the concept of taking linear combinations of 2s and 2p wave functions to build four new orthogonal wave functions, or valences. Pauling refers to this process in methane as 's-p quantization'. 3 For example, methane has an H-C-H bond angle of ~109.5°. How do we achieve such a bond angle when the s and p atomic orbitals are not mutually oriented at this angle? The quantization approach was to invoke a tetrahedral geometry of carbon in CH₄ involving combinations of s and p orbitals; orbitals with directionality would presumably provide stronger bonds. In 1932, Hultgren included 's-p-d quantization' to describe equivalent bonds for elements in the long periods of the periodic chart. 4 The term 'hybrid atomic orbitals' and the related process 'hybridization' were introduced by Mulliken 5 and Van Vleck 6 before being accepted by the entire scientific community. To achieve orbitals with an appropriate directionality, mixtures of atomic orbitals on the same atom are formed through hybridization. For a carbon atom, there are thus three principal hybrid combinations, commonly denoted sp, sp² and sp³; one or other combination is invoked to describe a linear, trigonal planar or tetrahedral geometry, respectively, of a central atom. Penney extended this system to ethyne using bond energies to justify the sp hybridization with 180° bond angles. 7 Furthermore, in 1934, Penney provided the first illustration of the hybrid orbital structure of ethane and ethene; 'Penney's model' is sometimes used to describe 'ideal' 109.5° angles for ethane and the 120° angles provided by sp² hybrids in ethene. 8 By 1935, Van Vleck had reviewed the 'quantum theory of valence' to include most concepts of hybridization that are common in contemporary chemistry. 9 The use and reliability of hybrid atomic orbitals (which we abbreviate as HAO) have since become challenged. We divide our critique into two parts; the present analysis arose from our endeavor to answer these two relevant scientific questions. In Part 1, does this concept of HAO from the 1930s have mathematical and logical bases? In Part 2, what are the practical problems with this concept as a pedagogical model and how can we overcome these challenges? In this article, we present modern calculations about hybrid atomic orbitals, we provide irrefutable evidence, based on Schrödinger's time-independent equation for the hydrogen atom, that HAO lack justification, and we list six logical errors and a further critique of the hybridization model.

MATHEMATICAL NATURE OF HYBRID ATOMIC ORBITALS

We here focus our attention on the purported tetrahedral and trigonal hybrid functions of carbon atoms because these are the forms most commonly invoked in organic chemistry, but our analysis and conclusions are applicable equally to other HAO. Digonal or sp hybrids might seem to be an exception, but they are anyhow superfluous.
For the purpose of his introduction of hybrid functions, Pauling 1 simply proffered these functions that depend on only the angular variables θ and φ within the system of spherical polar coordinates r, θ, φ. He presented the following (one-electron) amplitude functions for unspecified "normal atoms", with justification of neither a source of these functions nor their applicability to any particular "normal atom", and represented the standardized eigenfunctions in terms of only their angular parts. 10 The latter two formulae that contain i = √−1 are complex, containing real and imaginary parts. On taking a sum px = (p1 + p−1)/√2, we obtain a purely real quantity, whereas a corresponding difference py = (p1 − p−1)/√2 yields a purely imaginary quantity, with pz = p0, also purely real. According to the latter three expressions, on removing their common factors, the remaining parts depending on angular variables θ and φ (including s for completeness in the comparison) are the same as Pauling's formulae apart from numerical factors and, particularly notably, the presence of i in py. Our formulae above pertain explicitly to the one-electron atom, whereas Pauling made no such association with any atom. Although an exponential factor e^(−Zr/2a₀) is common to all four original functions, the remaining part of the radial dependence with associated constants is not; instead of just the radial distance r in the other three formulae, in s there appears the factor (2a₀/Z − r), of which the former term is 2a₀/Z in terms of the Bohr radius a₀.

We proceed to assess the disparities between Pauling's definitions in the set stated above and the expressions in our set obtained directly from the solution of Schrödinger's equation for an atom with one electron. The angular parts of px and pz agree exactly between the two sets, but the angular part of py must contain i. Pauling claimed to distinguish correctly the radial parts as Rn0(r) for s and Rn1(r) for his three px, py, pz, although he provided neither justification nor evidence of this claim. Without the radial part, the mathematical relation of angular wave functions to the overall ψ is limited. 11 According to his definitions of s, px, py, pz, Pauling defined four purported tetrahedral hybrid functions, which he subsequently named sp³ orbitals.
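For reference, the conventional textbook forms of these angular functions and of the tetrahedral and trigonal hybrids discussed below are reproduced here; the normalization and phase conventions shown are the standard ones and may differ from Pauling's original choices.

```latex
% Real and imaginary combinations of the complex 2p angular functions
% (standard hydrogenic conventions; normalizations are illustrative):
\begin{align*}
  s &\propto 1, & p_z = p_0 &\propto \cos\theta,\\
  p_x = \tfrac{1}{\sqrt{2}}(p_{1} + p_{-1}) &\propto \sin\theta\cos\varphi, &
  \tfrac{1}{\sqrt{2}}(p_{1} - p_{-1}) &\propto i\,\sin\theta\sin\varphi,
\end{align*}
% so that a purely real p_y requires the additional factor 1/i whose neglect
% is the point at issue in the text.
% Conventional tetrahedral (sp^3) hybrids:
\begin{align*}
  te_{111}             &= \tfrac{1}{2}\,(s + p_x + p_y + p_z), &
  te_{1\bar{1}\bar{1}} &= \tfrac{1}{2}\,(s + p_x - p_y - p_z),\\
  te_{\bar{1}1\bar{1}} &= \tfrac{1}{2}\,(s - p_x + p_y - p_z), &
  te_{\bar{1}\bar{1}1} &= \tfrac{1}{2}\,(s - p_x - p_y + p_z).
\end{align*}
% Conventional trigonal (sp^2) hybrids in the xy plane, with p_z left unmixed:
\begin{align*}
  tr_1 &= \tfrac{1}{\sqrt{3}}\,s + \sqrt{\tfrac{2}{3}}\,p_x, &
  tr_{2,3} &= \tfrac{1}{\sqrt{3}}\,s - \tfrac{1}{\sqrt{6}}\,p_x \pm \tfrac{1}{\sqrt{2}}\,p_y.
\end{align*}
```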
Functions s and p, in common with all ψklm(r, θ, φ), have infinite extent in all directions in coordinate space, except that p functions, and others with k or l > 0, have zero value on nodal planes or surfaces separating regions of positive phase from those of negative phase. Like these functions s and p, the hybrid functions have nodal surfaces between regions of positive and negative phase, and they retain formally an infinite extent. For the purpose of attributing an explicit shape to any such function, we might specify a magnitude of its amplitude that is a small fraction of the maximum amplitude at any point in space, and then form a surface of that constant function to be viewed in Cartesian coordinates x, y, z. Whereas that surface for function s becomes hence spherically symmetric, the surfaces for functions px, py and pz are cylindrically symmetric about the indicated Cartesian axes. In contrast, tetrahedral hybrid functions te have surfaces cylindrically symmetric about one or other axis that is a body diagonal between opposite corners of a cube of which the origin of coordinates is at its center; functions te have this directional quality whether or not they contain s, but with systematic s content the amplitude is more concentrated along the body-diagonal axis on one side of the origin than on the other. The objective of Pauling's construction of these tetrahedral hybrid functions was to obtain a resemblance to the structure of methane: with the atomic nucleus of carbon at the center of a cube, the hydrogen nuclei lie at alternate corners of that cube; the hybrid functions that clearly originate as hydrogen functions might then serve as bond orbitals (more accurately, bond basis functions).

With the same definitions of s, px, py, pz, one can form combinations of which the surfaces of the orbitals, according to the same criteria as before, have exactly the same shape, size and directed orientations as the previous tetrahedral hybrid functions; these functions can be legitimately called sp² hybrids because in each case they contain one s and two p functions. The number of such sets of equivalent combinations of s and p functions with the same geometric properties is uncountable; 12 any particular choice of such a set is entirely arbitrary. 13 For trigonally directed hybrid atomic orbitals (tri), the corresponding combinations are possible. These three hybrid functions, of which tr1 has s and notably only one p function, have their centers within the plane xy containing the nucleus (trigonal sp² hybrids); with pz = p0 as above as a fourth function, we then have the above three functions that can make three bonds that are symmetric with respect to that plane.

In Figure 1 appear accurate plots of sp³, sp² and sp hybrid orbitals generated with a computer. Although these orbitals are generated as purely real (i.e.
after removal of √−1) linear combinations of solutions of the Schrödinger equation for the hydrogen atom in spherical polar coordinates, the software (Maple) has transformed the functions into Cartesian coordinates for conventional viewing. In a) the lobes are axially symmetric about a body diagonal of a cube with the atomic nucleus at its center; in b) the lobes are axially symmetric about axis x; in c) the lobes are axially symmetric about axis z, consistent with conventional depictions of these orbitals. In all cases the overall shape is roughly spherical, consistent with a lack of angular dependence of the coulombic attraction between electron and atomic nucleus; a paraboloidal nodal surface exists between the lobe of positive phase (blue) and the lobe of negative phase (red). The small lobe is in each case discernible to have a relative volume decreasing in the order sp³ > sp² > sp.

GENERAL CRITIQUE BASED ON FIRST PRINCIPLES

Many chemists and material scientists use these HAO for a qualitative description of the geometric structure and bonding characteristics of molecules of various chemical compounds. The fact that HAO are irrelevant in these cases but that the usage continues indicates the difficulty in eliminating obsolete concepts. Tetrahedral and trigonal hybrid orbitals are open to severe criticism, of which six instances follow.

1. The formation of four real tetrahedral HAO using linear combinations of real and imaginary parts in spherical polar coordinates is mathematically impossible and logically unsound. In discarding √−1 from py above, there is a callous disregard for the inalienable properties of complex numbers and complex functions. Orbitals px, py, pz can never occur together; whenever pz is rotated to become px or py, the remaining two transform into an orthogonal complex couple. The premise that the couple px and py as real functions is equivalent to a complex couple containing e^(±iφ) is false. 14

2. The combinations of HAO in various sets such as the set described as sp³ were devised to generate tetrahedrally oriented hybrid functions; their subsequent use to explain a tetrahedral structure of methane is manifestly a circular argument. Explicitly, stating that methane has a tetrahedral structure because of sp³ hybridization is equivalent to stating that methane has a tetrahedral structure because it has a tetrahedral structure. This circular argument was pointed out long ago. In 1957, the admonition that "the description (hybrid orbitals) should not be regarded as the cause of the molecule being tetrahedral" was first published. 15 Gillespie later commented "In other words, sp³ hybridization is postulated because CH₄ is tetrahedral and then sp³ hybridization is given as the explanation for the tetrahedral shape of the methane molecule!" 16

3. Although Pauling emphasized the value of sp³ hybrid functions to explain the structure of methane, quantitative calculations that he also reported indicated that such sp³ hybrids account for only 60 per cent of the energy of the electronic structure; he declared that other configurations such as s²p² contribute significantly to that structure. 10 Pauling's recognition of a significant contribution of an s²p² configuration is inconsistent with his original assumption of two-electron bonds, because only the two p² orbitals of carbon are available to make four bonds with the four electrons from the hydrogen atoms. Moreover, in 1958, he included 2% d orbital and 2% f orbital character in the tetrahedral orbitals. 17
Subsequent Hartree-Fock computations assigned a value of s^1.5 p^2.5 to the electronic structure of methane. 18 To attribute the structure of methane to sp³ hybrid atomic orbitals (neglecting an alternative description, above, as sp² tetrahedral hybridization) is hence at least an exaggeration and a grossly misleading simplification.

4. Trigonal hybrid functions (to which reference is sometimes made as sp² but which are distinct from the tetrahedrally oriented sp² hybrid functions specified above) suffer from the same unjustified discard of √−1 as a coefficient of py. In contrast, another hybrid function known as sp or digonal constitutes a legitimate linear combination of real functions in spherical polar coordinates, but functions having exactly the same geometric properties arise directly in Schrödinger's own solution of his equation for the hydrogen atom in paraboloidal coordinates; 19 there is no necessity for such an arbitrary linear combination to generate the desired shape.

5. If one undertakes a molecular-orbital calculation for CH₄ according to a standard quantum-chemical procedure (with a typical computer program for quantum chemistry) at a common level of theory, i.e. parameterization, with a basis set comprising only four 1s functions on H, and on C (implicitly involving only 2s and 2p functions) one or other of these forms, te111, te1-1-1, te-11-1, te-1-11 (sp³ or sp² as specified above), or tr111, tr11-1, tr1-11, pz (sp²), or di11, di1-1, px, py (sp), or s, px, py, pz, one obtains exactly the same structure of CH₄ and the same energy, 20 also with modern valence-bond calculations. There is neither a necessity for, nor an advantage to, the use of hybrid functions within such a basis set; whereas, in the original and primitive valence-bond theory, there was a necessity to impose a set of hybrid atomic functions that yielded the corresponding structure, neither the molecular-orbital procedure nor the modern valence-bond procedure involves such a constraint. (One must distinguish between orbitals and members of a basis set.)

6. Those solutions of Schrödinger's equation in spherical polar coordinates as presented above, which were obviously Pauling's inspiration for s-p hybrids, are applicable to only an atomic system with rigorously spherical symmetry; their direct use as orbitals for an atom in the vicinity of another atom is inadmissible. Ellipsoidal, also known as prolate spheroidal, orbitals have two centers; the corresponding coordinates are applicable to a hydrogen atom in a diatomic context, as Teller recognized in 1930. 21

PREVIOUS CRITIQUE OF HYBRID ATOMIC ORBITALS

During 1955-1956, a thesis criticizing the hybridization model ("hybridisation…is consequently shown to be of no physical meaning") was censored and papers based on this work rejected (Pritchard 22 recently chronicled this censorship). Edmiston and coworkers showed that the removal of hybridization does not affect their calculational results 23 and suggested about hybrids that "chemists have played fast and loose with many qualitative quantum-chemical concepts". 24 Gil provided an entire section of his book on the use and misuse of the hybrid orbital concept: "… no geometric parameter nor any other molecular property can be explained by invoking hybrid orbitals". 25 Boeyens in 2008 presented arguments illuminating the glaring defects of "hybridization, an artificial simulation without scientific foundation". 26
Common to all these criticisms is the refusal of much of the chemistry community to acknowledge the existence of alternative perspectives; an impartial review of this subject is lacking. We recall some pertinent quotations from the literature. In Coulson's Valence, McWeeny wrote "hybridization is not a physical effect but a feature of [a] theoretical description", and "It would be quite wrong to say that, for example, CH₄ was tetrahedral because the carbon atom was sp³ hybridized. The equilibrium geometry of a molecule depends on energy and energy only". 27 In a collection of papers to mark the anniversary of Pauling's paper about hybrids, Cook agreed that "hybridization cannot explain the shapes of molecules", but his contention that hybridization is not arbitrary fails to take into account the practical formation of tetrahedral hybrid functions from sp³ or sp² combinations, as delineated above, or indeed innumerable others. 28 "The idea of sp³ hybridization is therefore as ludicrous as perpetual motion", 14 but Boeyens failed to understand the significance of the existence of multiple systems of coordinates in which Schrödinger's equation for the hydrogen atom is amenable to a separation of the spatial variables. 29

DISTINCTION BETWEEN HYBRID ATOMIC ORBITALS AND OTHER HYBRIDS

We seek to distinguish clearly between the HAO used in the teaching of general and organic chemistry and the other uses of 'hybrid orbitals' in modern chemistry, in which these orbitals might be implemented within basis sets for calculations. Whether such basis sets for the calculations comprise atomic orbitals or their combinations in hybrid orbitals as presented above, such functions are artifacts of those particular calculations, and have no meaning outside those contexts. Orbitals are the exact algebraic solutions of Schrödinger's equation for an atom with one electron, i.e. the result or output of such a calculation, whereas a basis set that might consist of orbitals on other atomic centers serves as input for an approximate calculation of observable properties of a molecule. Whether such a calculation is of molecular-orbital or valence-bond type is immaterial. The words 'hybrid' and 'hybridization' are used for mathematical procedures that are optional for both valence-bond 30 and molecular-orbital 31 calculations of electronic structure, but with a meaning different from 'hybrid' and 'hybridization' in HAO; these other definitions or applications of hybrids are neither discussed nor applied in teaching organic chemistry. Our discussion, and objections, involve the qualitative explanation of chemical phenomena, especially the shapes or general structural properties of organic molecules, in terms of HAO. We have no quarrel with the use of quantum-chemical calculations of molecular structure, which we on occasion perform for various purposes; whether a so-called orbital as a member of a basis set that might be applied in these calculations is canonical or orthogonal or corresponds to a particular energy is entirely irrelevant and superfluous for the purpose that evidently pervades every textbook of organic chemistry.
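The equivalence invoked in point 5 above is, at bottom, a statement of linear algebra: each hybrid set is an orthogonal (unitary) mixing of the underlying functions {s, px, py, pz}, so it spans the same one-electron space, and any variational calculation built on it returns identical observables. A minimal numerical sketch of this check is given below; the coefficient matrices are the conventional textbook ones, not output from any quantum-chemistry program.

```python
import numpy as np

# Rows are hybrid functions expressed in the orthonormal basis (s, px, py, pz).
SP3_TE = 0.5 * np.array([[1,  1,  1,  1],    # te_111
                         [1,  1, -1, -1],    # te_1-1-1
                         [1, -1,  1, -1],    # te_-11-1
                         [1, -1, -1,  1]])   # te_-1-11

SQ3, SQ6, SQ2 = np.sqrt(3), np.sqrt(6), np.sqrt(2)
TRIGONAL = np.array([[1/SQ3,  np.sqrt(2/3),  0,     0],   # tr_1
                     [1/SQ3, -1/SQ6,         1/SQ2, 0],   # tr_2
                     [1/SQ3, -1/SQ6,        -1/SQ2, 0],   # tr_3
                     [0,      0,             0,     1]])  # p_z, left unmixed

for name, U in (("tetrahedral sp3 set", SP3_TE), ("trigonal sp2 set + pz", TRIGONAL)):
    # U is orthogonal: the hybrids are orthonormal and span exactly the same
    # space as (s, px, py, pz); hence any molecular-orbital or valence-bond
    # calculation using either basis yields identical structures and energies.
    assert np.allclose(U @ U.T, np.eye(4)), name
    assert np.allclose(U.T @ U, np.eye(4)), name
print("Both hybrid sets are merely orthogonal re-mixings of {s, px, py, pz}.")
```

The same reasoning applies to the digonal (sp) set: no choice among these mixings can change a computed geometry or energy, which is the sense in which hybrids are a feature of the description rather than of the molecule.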
It would be unwise to remove all use of 'hybrids' or 'hybridization' from chemistry; 'hybridization' as used in modern calculations is different and much more rigorously defined. For instance, in their work Foster and Weinhold used rigorous algorithms of the natural-bond-orbital (NBO) method to derive natural hybrid orbitals (NHO) that describe the electronic density based on calculated wave functions, 32 but these have no mathematical relation to the HAO applied in qualitative explanations of the structure of organic molecules. The hybridization concept in modern quantum chemistry does not employ the 'primitive' HAO model. HAO should not be defined as, or conflated with, localized orbitals.

CONCLUSION

In this article, we seek to convince readers, based on mathematical and logical arguments, that, for the teaching of organic chemistry, hybridization is an obsolete concept; the pioneers (e.g. Slater, Pauling, Penney, Van Vleck) who originated the concept could not imagine the overwhelming evidence provided by the experimental data and computers that we have today. A model is useful as long as it gives satisfactory answers to the questions posed at a certain time; when it can no longer fulfill this role, it should be modified or discarded. We have demonstrated that HAO clearly lack mathematical support; based on that fallacy of HAO, we contend that HAO should be retired from the teaching of organic chemistry. The fact that we can eliminate superseded theories in chemistry shows the maturity of the science; a continuation of their use shows stagnation. The elimination of HAO from the teaching of organic chemistry is thus a positive advancement for the development of future chemists. In Part 2, we continue this argument based on the use of HAO in organic chemistry and a pedagogical model for the future without HAO.

Figure 1. Quantitatively accurate plots of hybrid atomic orbitals: a) sp³, b) sp², c) sp; each surface of constant ψ is chosen such that ψ² at that magnitude contains 0.99 of the total electronic density. The scale of each axis is expressed in units of 10⁻¹⁰ m.
4,713.8
2019-02-25T00:00:00.000
[ "Chemistry", "Education" ]
Bicriteria problem of discrete optimization in planning a multiunit construction project

The paper describes a bicriteria discrete optimization problem that may occur during the scheduling of multiunit construction projects. A multiunit project involves the construction of many civil structures that require the same sets of activities but differ in size. In the project, deadlines for the activities in units are adopted. Missing them causes the contractor to pay a disincentive penalty, while early completion of the activities in units is rewarded with extra income for the construction contractor, i.e., an incentive bonus. Changing the order of execution of the units changes the values of the objective functions: the duration of the project and the cost (the sum of the disincentive penalties and incentive bonuses). The proposed model of the project is a bicriteria NP-hard flow shop problem with constraints characteristic of construction projects. The paper presents a method of determining the set of Pareto-optimal solutions for small projects. A computational example of the model is also included in the paper.

Introduction

Construction project management is one of the most important issues in civil engineering. Process optimization of construction projects [1,2], as well as accounting for the risk and uncertainty that might occur during construction, are fundamental to managing these projects [3,4,5]. Currently used planning methods in construction require improvement to better reflect real market conditions: the significant increase in labor costs, the increase in the prices of building materials, and the emigration of highly qualified employees from developing countries to developed countries. These factors have a negative impact on the organization and the costs of construction projects. Therefore, the importance of optimal construction project planning will increase in the future. The scheduling of construction works is an essential part of construction project planning. In recent years, many methods and models have been created for construction project scheduling. Project scheduling can be significantly facilitated and accelerated by using suitable computer programs. Nowadays, the improvement of scheduling methods and models for construction projects is a priority for further research in this subject.

In the literature, construction project scheduling models can be divided into two basic types [6]: models for projects with repetitive processes (repetitive projects) and models for projects with a "complex of operations" (non-repetitive projects). The concept of the so-called "Line of Balance" (LOB) method is most often used for repetitive project scheduling [7]. On the basis of this concept the following techniques have been developed: LSM (Linear Scheduling Model) [8] and RPM (Repetitive Project Model) [9]. Current research on methods based on the assumptions of the LOB method focuses mainly on problems of optimization of the repetitive project schedule, e.g. [10,11]. This paper refers to a scheduling model for repetitive projects in which the flow organization system is used. This system is the basis for creating Time Coupling Methods (TCM) [12,13].
The flow organization system relies on the implementation of a set of buildings (units) or a single building partitioned into work zones. Such projects are characterized by the repetition of the execution of activities on each unit of the project. This feature results in the need to specialize working groups in such a way that they carry out activities of one type. The working groups in such construction projects move from the previous unit to the next, realizing only their designated scope of work. Using the flow organization system, it is possible to benefit from this repetition: the working groups will likely be able to spend less time and money on later units once they develop a learning momentum. This kind of construction project is called in this paper a multiunit construction project. Multiunit projects rely on the construction of a set of units, e.g., residential, commercial or industrial buildings, or engineering structures. Current research on the problems of scheduling multiunit projects is connected with the optimization of project schedules using, among others, linear programming [14,15] and metaheuristic algorithms: simulated annealing [16], tabu search [17,18].

This paper presents a model of a multiunit project with the flow organization system. The decision variable in the presented model is the allocation of units, which is represented by a permutation whose length is equal to the number of units. The precedence relationships among the jobs are expressed by a sequence (order) of activities that is the same for every unit. This is a dependency encountered for buildings with a simple work technology, such as single-family houses. For these units, activities will be performed in a sequence, e.g., earthworks, foundations, walls with slabs, rafter framings, etc. In the presented model of the multiunit project, partial overlap in the sequence of the activities or the presence of intervals between them is allowed. The model also allows for additional time required for the movement of the working groups between units, depending on the nature of the working group and the sequencing of units. This is an important parameter for projects in which the construction works are at a distance from each other [19]. The duration of the project and its cost are the objective functions considered in the model. The cost of the project is the sum of all disincentive penalties for missing deadlines of activities in units and incentive bonuses for early completion of activities in the units. The paper proposes to obtain the solution, i.e., the set of Pareto-optimal solutions, using an exhaustive search algorithm.

Optimization model of the considered multiunit construction project

The model considered in the paper assumes deterministic conditions in which the technical, technological and organizational conditions are known and accurate, with an available bill of quantities. The basis for creating the model is the permutational flow shop problem (problem FP), which is studied in the theory of scheduling [16]. This system is shown schematically in Fig. 1. There are no significant disruptions in the performance of activities by the working groups. The model assumes that one type of activity is performed by one type of working group. Also, it is assumed that each working group can perform only one job at a time.
The possibility of technological gaps between the activities and of the simultaneous operation of multiple working groups in the units is assumed. Durations of intervals between a given activity k and the next activity k+1 (s^F_jk > 0), or the length of the simultaneous execution of a given activity and the next activity (s^F_jk < 0), in unit j for the set of activities Oj are given in the vector s^F_j = [s^F_j1, s^F_j2, s^F_j3, ..., s^F_jk, ..., s^F_j(m-1)]. These times should be understood as minimum constraints and can take on any value. In what follows they are called couplings between units.

Constraints:
- The order of execution of the activities resulting from the work technology is assumed such that: Oj,k-1 ⊰ Oj,k ⊰ Oj,k+1.
- It is assumed that each working group Bk can perform only one job at a time.
- It is assumed that the activity Ojk ∈ Oj is performed continuously by the working group Bk in time pjk > 0.
- It is assumed that the set of deadlines of activities in units from the set Z defines the vector d = {d1, d2, d3, ..., dj, ..., dn}.

Decision variable:
- the order π of execution of units, which, for each of the working groups, is the same and takes the form of a permutation π = (π(1), π(2), π(3), ..., π(j), ..., π(n)). The number of possible solutions of the presented model is n!.

Objective functions:
- The duration of the entire project Cmax (the execution time of all activities in the units), evaluated for a permutation π* ∈ П, where П is the set of all permutations in the project. Finishing times of activities can be determined from the following recursive dependency:
C_k,π(j) = max{ C_k,π(j-1) + s^S_k,π(j-1)π(j), C_k-1,π(j) + s^F_k-1,π(j) } + p_k,π(j),
where j = 1, ..., n, k = 1, ..., m, π(0) = 0, C_k,0 = 0, C_0,j = 0. The computational complexity of the finishing times calculated in this manner equals O(nm).
- The cost of the project, which is the sum of all disincentive penalties for missing deadlines of activities in the units and incentive bonuses for early completion of activities in the units:
u*(j) = max(0, C_π*(j) − d_π*(j)) · u_penalty_daily − max(0, d_π*(j) − C_π*(j)) · u_bonus_daily,
where u_penalty_daily is the daily disincentive penalty for missing deadlines of activities in the units and u_bonus_daily is the daily incentive bonus for early completion of activities in the units.

The considered model can be represented in the form of a graph. The form of the graph depends upon the established decision variable π (an example of such a graph is given in Fig. 2): E(π) = E^F ∪ E^S(π) depends on the assumed decision variable π. Horizontal arcs (sequential, representing the processing order of units) from the set E^S(π) are between nodes π(j-1) and π(j), where j = 1, ..., n. Vertical arcs (technological) from the set E^F are between the node standing for activity k and the node standing for activity k-1, where k = 1, ..., m. The weight of the horizontal arc from set E^S(π) is s^S_k,π(j)π(j+1). The weight of the vertical arc from set E^F is s^F_k,π(j).

The presented model of a multiunit construction project has not yet been examined in the field of construction project scheduling. In scheduling theory, the presented model is a kind of bicriteria flow shop problem. If we assume that s^F_k,π(j) = 0, s^S_k,π(j)π(j+1) = 0 and u_penalty_daily = u_bonus_daily = 0, we get the classical flow shop problem with the Cmax criterion (problem FPCmax). In the literature this problem is strongly NP-hard [20]. The problem adopted in the model was solved with the use of an exhaustive search algorithm.
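A minimal sketch of this recursion, the two objective functions, and the exhaustive Pareto enumeration used in the case study below is given here. The data layout, the variable names, and the reading of C_π(j) in the cost function as the completion time of the last activity in unit π(j) are assumptions of the sketch, not part of the original formulation.

```python
from itertools import permutations

def objectives(perm, p, sF, sS, d, pen, bon):
    """Cmax and penalty/bonus cost for one unit order `perm` (0-based unit indices).

    p[k][j]     - duration of activity k on unit j
    sF[k][j]    - coupling between activities k and k+1 on unit j (may be negative)
    sS[k][u][v] - movement time of working group k from unit u to unit v
    d[j]        - deadline for finishing unit j; pen/bon - daily penalty and bonus
    """
    m, n = len(p), len(perm)
    C = [[0.0] * (n + 1) for _ in range(m + 1)]          # C[k][position], 1-based
    for pos in range(1, n + 1):
        j = perm[pos - 1]
        for k in range(1, m + 1):
            seq = C[k][pos - 1] + (sS[k - 1][perm[pos - 2]][j] if pos > 1 else 0.0)
            tech = C[k - 1][pos] + (sF[k - 2][j] if k > 1 else 0.0)
            C[k][pos] = max(seq, tech) + p[k - 1][j]
    cmax = max(C[k][pos] for k in range(1, m + 1) for pos in range(1, n + 1))
    # Interpretation: a unit is finished when its last activity (k = m) is finished.
    cost = sum(max(0.0, C[m][pos] - d[perm[pos - 1]]) * pen
               - max(0.0, d[perm[pos - 1]] - C[m][pos]) * bon
               for pos in range(1, n + 1))
    return cmax, cost

def pareto_front(p, sF, sS, d, pen, bon):
    """Exhaustive search over all n! unit orders; keep non-dominated (Cmax, cost) pairs."""
    pts = [(objectives(perm, p, sF, sS, d, pen, bon), perm)
           for perm in permutations(range(len(d)))]
    return [(obj, perm) for obj, perm in pts
            if not any(o[0] <= obj[0] and o[1] <= obj[1] and o != obj for o, _ in pts)]
```

For a 9-unit project this enumerates 9! = 362,880 schedules, which is why exact enumeration is feasible only for small instances and metaheuristics are proposed for larger ones.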
Case study

The contractor, on behalf of the investor, has to realize a project which consists of the construction of n = 9 residential buildings (units). Each of them requires the execution of m = 11 activities carried out in a fixed order. The project will be implemented in full by the contractor's working groups. For each type of activity the contractor has only one working group. On the basis of the bill of quantities and the productivities of the working groups, the durations of the activities were calculated (Table 1). Between the activities realized in the technological order there are couplings between units, which have been established on the basis of the existing technological constraints and are shown in Table 2. The discrete optimization task in the case study is to find the set of Pareto-optimal solutions for two objective functions: the duration of the project and its cost. The number of possible solutions (schedules) in the case study is 9! = 362,880. Due to the small size of the optimization task in the case study, the exhaustive search algorithm was used to find the set of Pareto-optimal solutions. The algorithm was programmed in the Mathematica system.

Conclusions

Multiunit construction projects belong to the group of repetitive construction projects. Schedule optimization is one of the most important research problems in such projects. In many multiunit projects there is a possibility to take the unit processing order into account when creating a project schedule. Therefore, in these kinds of projects discrete optimization problems may arise. In most cases these problems are NP-hard. This means that it is generally impossible to solve practical problems with exact algorithms in reasonable time. In the case study presented in the paper the total number of units in the project is small (i.e., only 9 units). Therefore, the author decided to use the exhaustive search algorithm to find the set of Pareto-optimal solutions. However, for an arbitrary number of units in the project it is necessary to use approximate algorithms, e.g., metaheuristics (tabu search, evolutionary algorithms, simulated annealing, etc.). These algorithms can provide approximate sets of Pareto-optimal solutions in an acceptable time, with a very good quality of delivered solutions. The application of metaheuristic algorithms will be the subject of further research on the model presented in the paper. It will allow practitioners, i.e., contractors, to determine suboptimal project schedules that meet the adopted deadline or cost constraints. The presented model can be applied to large projects with groups of buildings or engineering structures that are distant from each other, such as residential single-family houses, bridges, culverts, pipelines, etc.
3,095.4
2018-01-01T00:00:00.000
[ "Engineering" ]
Recent results from the LHCf experiment Article available at http://www.epj-conferences.org or http://dx.doi.org/10.1051/epjconf/20159601031 into the two separate beam pipes), only neutral particles, mainly photons and neutrons, reach the detector. Each detector is made of two sampling and imaging calorimeters (called towers hereafter). Each tower is composed of 16 tungsten layers and 16 plastic scintillator layers to measure energy and also contains 4 position sensitive layers. Arm1 detector uses scintillating fiber (SciFi) to measure position, while Arm2 uses silicon microstrip detectors. Transverse cross sections of towers are 20× 20 mm2 and 40× 40 mm2 for Arm1 and 25× 25 mm2 and 32 × 32 mm2 for Arm2. Longitudinal dimension of towers is of 44 radiation lengths, which correspond to 1.6 nuclear interaction lengths. Energy resolution is better than 5 % for photons and of about 40 % for neutrons. Position resolution for photons is 200 μm and 40 μm for Arm1 and Arm2 respectively, while position resolution for neutrons is of about 1 mm. Smaller tower of each detector is placed on the beam center and covers the pseudo-rapidity range η > 9.6, while larger tower covers the pseudo-rapidity range 8.4 < η < 9.4. More detailed descriptions of detector performance are reported elsewhere [1–3]. Introduction The Large Hadron Collider forward (LHCf) experiment [1] has been designed to measure the hadronic production cross sections of neutral particles emitted in very forward angles in proton-proton collisions at the LHC, including zero degrees.The LHCf detectors have the capability for precise measurements of forward high-energy inclusive-particle-production cross sections of photons, neutrons, and possibly other neutral mesons and baryons.The analyses in this paper concentrate on obtaining (1) the inclusive production rate for π 0 s in the rapidity range larger than y = 8.9 as a function of the π 0 transverse momentum, and (2) the inclusive production rate for photons in the rapidity ranges η > 8.77 at 900 GeV as a function of the photon energy. This work is motivated by an application to the understanding of ultrahigh-energy cosmic ray (UHECR) phenomena, which are sensitive to the details of soft π 0 and photon production at extreme energy.It is known that the lack of knowledge about forward particle production in hadronic collisions hinders the interpretation of observations of UHECR [2,3].Although UHECR observations have made notable advances in the last few years [4][5][6][7][8], critical parts of the analysis depend on Monte Carlo (MC) simulations of air shower development that are sensitive to the choice of the hadronic interaction model.This paper is organized as follows.In Sec. 2, the LHCf detectors are described.The analyses results are then presented in Sec. 3 and Sec. 4. Finally, concluding remarks are found in Sec. 5. 
The LHCf detectors Two independent LHCf detectors, called Arm1 and Arm2, have been installed in the instrumentation slots of the target neutral absorbers (TANs) [9] located ±140 m from the ATLAS interaction point (IP1) and at zero degree collision angle.Inside a TAN the beam-vacuum-chamber makes a Y-shaped transition from a single common beam tube facing IP1 to two separate beam tubes joining to the arcs of the LHC.Charged particles produced at IP1 and directed towards the TAN are swept aside by the inner beam separation dipole magnet D1 before reaching the TAN.Consequently, only neutral particles produced at IP1 enter the LHCf detector.At this location, the LHCf detectors cover the pseudorapidity range from 8.7 to infinity for zero degree beam crossing angle.With a maximum beam crossing angle of 140 µrad, the pseudorapidity range can be extended to 8.4 to infinity. Each LHCf detector has two sampling and imaging calorimeters composed of 44 radiation lengths (X 0 ) of tungsten and 16 sampling layers of 3 mm thick plastic scintillator.The transverse sizes of the calorimeters are 20 ×20 mm 2 and 40 ×40 mm 2 in Arm1, and 25 ×25 mm 2 and 32 ×32 mm 2 in Arm2.The smaller and larger calorimeters are called as "small tower" and "large tower", respectively.The small towers cover zero degree collision angle.Four X-Y layers of position sensitive detectors are interleaved with the layers of tungsten and scintillator in order to provide the transverse positions of the showers.Scintillating fiber (SciFi) belts are used for the Arm1 position sensitive layers and silicon micro-strip sensors are used for Arm2.Readout pitches are 1 mm and 0.16 mm for Arm1 and Arm2, respectively. Results of π The combined p T spectra of the Arm1 and Arm2 detectors are presented in Fig. 1 for six ranges of rapidity y: 8.9 to 9.0, 9.0 to 9.2, 9.2 to 9.4, 9.4 to 9.6, 9.6 to 10.0, and 10.0 to 11.0.The spectra in Fig. 1 are after all corrections for the detection inefficiency have been applied.The inclusive production rate of neutral pions is given by the expression σ inel is the inelastic cross section for proton-proton collisions at √ s = 7 TeV.Ed 3 σ/dp 3 is the inclusive cross section of π 0 production.The number of inelastic collisions, N inel , used for normalizing the production rates of Fig. 1 has been calculated from N inel = σ inel Ldt, assuming the inelastic cross section σ inel = 73.6 mb.This value for σ inel has been derived from the best COMPETE fits [11] and the TOTEM result for the elastic scattering cross section [12].Using the integrated luminosities reported in Ref. [10], N inel is 1.85×10 8 for Arm1 and 1.40×10 8 for Arm2.d 2 N (p T , y) is the number of π 0 s detected in the transverse momentum interval (dp T ) and the rapidity interval (dy) with all corrections applied.In Fig. 1, the 68 % confidence intervals incorporating the statistical and systematic uncertainties are indicated by the shaded green rectangles.For comparison, the p T spectra predicted by various hadronic interaction models are also shown in Fig. 1.The hadronic interaction models that have been used in Fig. 
1 are Dpmjet 3.04 [13] (solid, red), Qgsjet II-03 [14] (dashed, blue), Sibyll 2.1 [15] (dotted, green), Epos 1.99 [16] (dash-dotted, magenta), and Pythia 8.145 [17,18] (default parameter set, dash-double-dotted, brown).In these MC simulations, π 0 s from short lived particles that decay within 1 m from IP1, for example η → 3π 0 , are also counted to be consistent with the treatment of the experimental data.Note that, since the experimental p T spectra have been corrected for the influences of the detector responses, event selection efficiencies and geometrical acceptance efficiencies, the p T spectra of the interaction models may be compared directly to the experimental spectra as presented in Fig. 1. Among hadronic interaction models tested in this analysis, Epos 1.99 shows the best overall agreement with the LHCf data.However, Epos 1.99 behaves softer than the data in the low p T region, p T 0.4 GeV in 9.0 < y < 9.4 and p T 0.3 GeV in 9.4 < y < 9.6, and behaves harder in the large p T region.Specifically, a dip found in the ratio of Epos 1.99 to the LHCf data for y > 9.0 can be attributed to the transition between two pion production mechanisms: string fragmentation via cut Pomeron process (low energy ∼ low p T for the fixed rapidity) and remnants of projectile/target (high energy ∼ large p T for the fixed rapidity). Results of photon analysis at √ s = 900 GeV To reduce a possible pseudorapidity η dependence when comparing and combining the energy spectra measured by the two Arms, we selected Arm2 events with a pseudorapidity range similar to that of Arm1.For the small tower, we selected events with the distance r from the beam center less than 11 mm, which corresponded to the pseudorapidity range of η > 10.15 (the circles in Fig. 2).Similarly, for the large tower, we set the conditions as 22 mm < r < 44 mm, which corresponded to the pseudorapidity range of 8.77 < η < 9.46 (the arcs in Fig. 2).The calorimeters did not uniformly cover the pseudorapidity ranges as shown in Fig. 2. We confirmed that there was a negligible pseudorapidity dependence of the energy spectra inside each pseudorapidity range. The combined energy spectra of Arm1 and Arm2 are shown in Fig. 3 as weighted averages, with the weights taken to be the square of the inverse of the errors in each energy bin.The error bars of the data (black points) represent the statistical error; the hatches in the spectra represent the total uncertainty (quadratical summation of the statistical and the systematic errors).The sources of the systematic error are the particle identification and the beam position uncertainties.The energy scale errors were also included, assuming a correlation between the two Arms.Note that the uncertainty of the luminosity determination (±21 %) is not shown in Fig. 3.It can introduce a constant vertical shift of the spectra, but it cannot change the shapes of the spectra. In Fig. 3, the predictions of the hadronic interaction models, Qgsjet II-03, Pythia 8.145, Sibyll 2.1, Epos 1.99 and Dpmjet 3.04, are also shown.The same analysis processes were applied to the MC simulations as to the experimental data except for the particle identification and its correction.For the analysis of the MC simulations, the known particle type was used.For better visibility, only the statistical errors for Dpmjet 3.04 (red points) are shown by the error bars. 
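As a small worked check of the normalization quoted earlier for the π0 spectra (N_inel = σ_inel ∫L dt with σ_inel = 73.6 mb), the snippet below recovers the integrated luminosities implied by the quoted numbers of inelastic collisions; these implied values are only a consistency illustration, not the luminosities reported in Ref. [10].

```python
# Normalization used for the pi0 production rates: N_inel = sigma_inel * int(L dt).
# sigma_inel = 73.6 mb at sqrt(s) = 7 TeV (from the text); 1 mb = 1e6 nb.
def implied_luminosity_nb(n_inel, sigma_inel_mb=73.6):
    """Integrated luminosity in nb^-1 implied by the number of inelastic collisions."""
    return n_inel / (sigma_inel_mb * 1.0e6)

for arm, n_inel in (("Arm1", 1.85e8), ("Arm2", 1.40e8)):
    print(f"{arm}: int(L dt) ~ {implied_luminosity_nb(n_inel):.2f} nb^-1")
# Arm1: ~2.51 nb^-1, Arm2: ~1.90 nb^-1
```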
Conclusions The inclusive production of neutral pions in the rapidity range larger than y = 8.9 at √ s = 7 TeV proton-proton collisions and the forward inclusive photon energy spectra in the pseudorapidity ranges of η > 10.15 and 8.77 < η < 9.46 for √ s = 900 GeV proton-proton collisions have been measured by the LHCf experiment in early 2010.Transverse momentum spectra of neutral pions and energy spectra of photons have been measured by two independent LHCf detectors, Arm1 and Arm2, and give consistent results. The combined Arm1 and Arm2 spectra have been compared with the predictions of five hadronic interaction models, Dpmjet 3.04, Epos 1.99, Pythia 8.145, Qgsjet II-03 and Sibyll 2.1.For the neutral pion spectra, Dpmjet 3.04, Epos 1.99 and Pythia 8.145 agree with the LHCf combined results, in general, for the rapidity range 9.0 < y < 9.6 and p T < 0.2 GeV.Qgsjet II-03 has poor agreement with LHCf data for 8.9 < y < 9.4, while it agrees with LHCf data for y > 9.4.Among the hadronic interaction models tested in this paper, Epos 1.99 shows the best overall agreement with the LHCf data even for y > 9.6.For the photon spectra, Epos 1.99 and Sibyll 2.1 reproduce well the shape of the experimental energy spectra, but they predict a lower cross section than the LHCf data.The other models predict harder spectra than the LHCf data above 300 GeV.These results of comparison exhibited features similar to those for the previously reported data for √ s = 7 TeV collisions.REFERENCES Fig. 1 . Fig.1.Combined p T spectra of the Arm1 and Arm2 detectors (black dots) and the total uncertainties (shaded rectangles) compared with the predicted spectra by hadronic interaction models. Fig. 2 . Fig. 2. The cross-sections of the calorimeters viewed from IP1, left for Arm1 and right for Arm2.The cross marks on the small calorimeters indicate the projections of the zero-degree collision angle onto the detectors ("beam center").The shaded areas in the upper parts of the figure indicate the shadows of the beam pipes located between IP1 and the detectors, where the detectors are insensitive to the detection of IP1 proton-proton collision products.The dashed squares indicate the border of a fiducial area. Fig. 3 . Fig. 3. Combined Arm1 and Arm2 photon energy spectra compared with MC predictions.The left and the right panels are the results of the small the large towers, respectively.
2,718.8
2015-06-01T00:00:00.000
[ "Physics" ]
Analytical calculation of time of reaching specific values based on visibility loss during a fire Using an integral mathematical model of a fire considering the assumptions typical of a starting stage of a fire, analytical dependencies were obtained for determining the time of reaching a critical value of the density of a smoke screen in a premises with a fire epicenter and adjoining premises. By means of analytical formulas for determining critical evacuation time intervals based on visibility loss, table values for different parameters that are included in the original equations were obtained. Simple engineering analytical solutions that describe the dynamics of smoke formation in premises in case of a fire when used in a certain combination are presented. The obtained dependencies allow one to identify the critical time of evacuation with no use of special PC software as well as to obtain original data without calculating an anti-smoke ventilation system. Introduction In order to ensure safe evacuation from a building on fire, it is necessary to determine the time of reaching maximum acceptable temperatures, concentration of smoke, oxygen and combustion products. At the initial stage of a fire drastic changes in the average temperature, concentration of oxygen and toxic gases are uncommon. The central factor for evacuation is a time period when the density of a smoke screen is at its critical level as significant smoke formation of premises complicates navigation in space, which is detrimental for evacuation and causes a panic to unfold. The papers [1][2][3][4][5] deal with mathematical models of a fire that allow for analytical solutions for determining a critical time of a fire, in [6][7][8] there are experimental approaches to the distribution of a smoke screen in premises with a more accurate zone model of a fire. In regards of safe evacuation in case of outbreak of a fire in one or several premises, it should be determined when a hazardous fire factor reaches its critical value both in the epicenter and adjoining premises. In this case an integral model of a fire should be employed that gives a general description of a gaseous medium as well as the most accessible one for solutions including analytical ones. The integral model relies on a system of regular non-stationary differential equations that express the fundamental laws of a material and energy balance. This system consists of the following equations [1,2]: a material balance of the gaseous medium of the premises; a material balance of the components of the gaseous medium of the premises; a material balance of the components of the gaseous medium (oxygen, carbon monoxide and dioxide, chloride hydrogen and inertia gases); a material balance of smoke particles; an energy balance of the gaseous medium. There are supplementary equations that describe the mass exchange using a natural and mechanical ventilation, heat exchange between the gaseous medium and enclosing structures, combustion (gasification) of a fire load and distribution of flames along its surface. These equations are closed with the averaged equation of the current state of the medium of the premises on fire that reflects a connection of an average volumetric temperature with an average volumetric density and pressure. 
The initial parameters in the integral model are an average mass temperature and average volumetric pressure of the gaseous medium, average mass partial densities of oxygen, chloride hydrogen, carbon monoxide and dioxide of the gaseous medium as well as an average volumetric concentration of smoke particles. In order to identify the integration constants during the solution process, the initial values of average volumetric parameters of the gaseous medium in the premises prior to a fire are specified. A set of the obtained regular first-order differential equations and algebraic ratios provides a mathematical description of a fire in the premises using averaged thermodynamic parameters of the state of the gaseous medium. Generally, the system of differential equations of the integral mathematical model of a fire can only be implemented by means of the numerical method and in particular, using the Runge-Kutta fourth-fifth order with a variable step. It is not possible to obtain the analytical solution of a complete non-simplified system of differential equations of a fire considering the gas exchange of the premises with the surroundings, heat exchange with the enclosing structures supplemented with the formulas of the burning rate of the combustible material and heat emission. In order to obtain the solution, some assumptions were made that are typical of the initial stage of a fire: no air coming from the surrounding medium, average pressure of the medium is constant and equals that of the outside air, smoke-forming capacity of the combustible material and coefficient of the combustion level are constant due to insignificant change in the oxygen concentration. Considering these assumptions that are proved by numerous experiments, the equation of the energy of a fire is reduced to the algebraic form and the dissolving system is reduced to four differential equations in relation to average volumetric values of the density of the gaseous medium, optical density of smoke, density of oxygen and toxic gases [1,2]. The system of the differential equations is not coherent, the solution of each equation can be found regardless of the rest. The simplified equations of the integral mathematical model of a fire are employed to obtain the formulas for determining a critical time of a fire in the premises with the fire epicenter using the conditions of the maximum acceptable temperature, concentrations of oxygen and toxic gases as well as a critical density of smoke [1]. Materials and methods Analytical dependencies of the integral model of a fire according to visibility loss in the premises with the fire epicenter. Mass evacuation from the premises on fire is performed at the initial stage of a fire when there is no significant change in such hazardous factors as temperature, concentrations of toxic gases and oxygen, particularly in premises adjoining those in the immediate vicinity of the fire epicenter. The central factor of a critical evacuation time is psychological impact. The time before critical amount of smoke in the premises with the fire epicenter and those adjoining it can be divided into two parts. Over the first period a critical concentration of smoke in the premises with the fire epicenter is reached and during the second one this saturated gas mass fills the adjoining premises from the ceiling down to some critical point at the floor level. 
Experimental studies of the distribution of smoke in different premises of buildings and structures are presented in [6][7][8][9][10][11], numerical modeling of the distribution of smoke in multi-storeyed buildings can be found in [12,13]. If we accept that the rate of sedimentation of smoke particles on the surface of the enclosing structures is a lot smaller than that of removing smoke from the premises, the differential equation for determining an average volumetric density of smoke will get dividing variables. This allows us to obtain the law for changes in time in an average volumetric density of smoke μm (N/m) in the analytical form [1]. φ is the heat loss coefficient; ien is enthalpy of gasification products (pyrolyze, evaporation) of the combustible material, J/kg; ср is isobaric heat capacity of an ideal gas, J/(kg ·К); ρ0 is the density of the medium prior to a fire, kg/m 3 ; Т0 is the initial temperature, К; D is smoke-forming capacity of the combustible material, N·m 2 /kg; V is the volume of the premises with the fire epicenter, m 3 ; The numerical experiment showed that the influence of the enthalpy of the gasification products of the combustible material on an average volumetric density of smoke is insignificant. Hence in the below Table 1 The critical distance of visibility lcr is connected with a critical value for optical density with the ratio lcr=2,38/μcr. Table 2 presents the values of time periods when in the premises on fire with the volume V = 60 m3 for circular distribution of flames a critical distance of visibility takes the values 5, 10, 15 and 20 m respectively. In the calculations using the formulas (2) the same values are accepted as for obtaining the graphical dependencies in Figure 1, for buildings of the I-II fire resistance level ψs = 0,0145 kg/(m2·s), vl = 0,0108 m/s, for a building of the III-IV fire resistance level ψs = 0,0344 kg/(m2·s), vl = 0,0465 m/s, μm, N/m. Similar data is presented in Table 3 for linear distribution of flames with a strip with the width b=0,4 m for the same values that are included in the formula (3).  the gases that are formed during the gas combustion will pushed the gas mass which is already on fire out of the premises with the fire epicenter through the openings into the adjoining premises (halls, adjoining rooms, etc.). As it has a higher temperature, this mix will go up to the ceiling and fill the adjoining premises while going down to the floor. The consumption of Gm coming out of the premises with the epicenter of the combustion of the gases at the moment in time in question takes up some second volume Vs (m3/s), with In [1] considering the hypotheses for the initial stage of a fire, the equation of the energy of a fire that is identified based on the first law of thermodynamics is reduced to the algebraic form Hence we get the expression for the second consumption of the gases that are pushed out during combustion: In residential, household, administrative, medical, sports, cultural spaces due to openings an average pressure of the environment remains almost the same and equals that of the air outside. Then according to the algebraic equation of the averaged state of the gaseous medium in the premises Considering (7) the equality (6) can be presented as: Comparing the equalities (4) and (8) Then over the final time period τ the mass of the pushed gases Мτ fills some volume Vτ that is given by the equality The function ψ in the sign of the integral has a different form depending on how a fire spreads out [2]. 
If a fire spreads out along the surface of solid combustible materials in a circle, then The mass of the combustible materials that are burnt by a moment τ is If flames distribute along a strip with the width b m, then Let us present the formulas (12) and (14) with the same equality The equality (16) allows one to identify a time period when a gas mass with a critical density of smoke fills a critical volume Vcr of all the premises adjoining a room on fire (in a floor, in a section, etc.). This volume is calculated as the sum of the area SΣ of all the adjoining premises by the calculation height hc  ; A and n are determined using the equality (15); in the first subradical expression of the right part V is a volume of the premises with an immediate fire epicenter. The equality (18) is obtained in the assumption of filling a critical volume of all the premises adjoining the fire epicenter with a dense smoke screen from top to bottom to a critical distance to the floor level taking no account of a chaotic movement of a crowd being evacuated. It is obvious that the movement causes the upper and lower air level which is not so full of smoke to mix and thus the time V cr  to increase. An analytical consideration of a chaotic movement of people that causes chaotic convective shift of the gaseous mix respectively is almost not possible. Therefore it is possible for a critical time of evacuation according to visibility loss to be determined using the above formula with a sufficient degree of accuracy. Conclusions Hence in this paper we presented an analytical dependence (19) for determining a critical time period when a dense smoke screen fills the entire volume of the premises adjoining those with the fire epicenter down to a critical distance to the floor level with no account of chaotic movement of a crowd being evacuated. The obtained analytical formula can be employed in engineering calculations for safe evacuation from premises in the event of a fire without using special PC software as well as for obtaining the original data for calculating an anti-smoke ventilation system.
2,871.4
2018-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Virtual Savant for the Knapsack Problem: learning for automatic resource allocation This article presents the application of Virtual Savant to solve resource allocation problems, a widelystudied area with several real-world applications. Virtual Savant is a novel soft computing method that uses machine learning techniques to compute solutions to a given optimization problem. Virtual Savant aims at learning how to solve a given problem from the solutions computed by a reference algorithm, and its design allows taking advantage of modern parallel computing infrastructures. The proposed approach is evaluated to solve the Knapsack Problem, which models different variant of resource allocation problems, considering a set of instances with varying size and difficulty. The experimental analysis is performed on an Intel Xeon Phi many-core server. Results indicate that Virtual Savant is able to compute accurate solutions while showing good scalability properties when increasing the number of computing resources used. Introduction Resource allocation refers to the assignment of a number of available resources or assets to different issues or items. Resource allocation is an important concept that models several situations and problems arising in economics, strategic planning, project management, scheduling, logistics, production, engineering, and many other related areas [1]. Many resource allocation problems are modeled by the general framework formulated by the Knapsack Problem (Knapsack Problem) [2]. Knapsack Problem is a combinatorial optimization problem that, given a set of items with associated weights and profits, proposes determining the number of each item to include in a collection (i.e., the knapsack) in order to maximize the total profit while ensuring that the total weight is less than or equal to a given limit (i.e., the knapsack capacity). Different allocation problems are modeled by considering the capacity of the knapsack as the available amount of a given resource and the items as activities to which the resource can be allocated. This article describes a generic paradigm that proposes applying a computational intelligence approach to find accurate solutions to resource allocation problems modeled by the 0/1 Knapsack Problem in short computation times. 0/1 Knapsack Problem is a binary version of the Knapsack Problem where each item is considered as an atomic unit, i.e., each item can be included in the knapsack as a unit or discarded (i.e., it cannot be split to fill the knapsack). This binary version of the Knapsack Problem allows modeling interesting resource allocation problems such as activities in project management, scheduling and location problems, feature selection, among others. The Virtual Savant paradigm is applied to solve the 0/1 Knapsack Problem, which models allocation problems. Virtual Savant is a novel method that uses machine learning techniques to learn how a reference algorithm solves a given problem [3]. Virtual Savant is inspired by the savant syndrome, a rare condition in which a human demonstrates mnemonic or computing abilities far superior to what would be considered normal. As an example, some patients with savant syndrome (savants) are able to enumerate and identify huge prime numbers without the underlying knowledge of what a prime number is, or accurately determine the day of the week of a given date extremely fast. Reported evidence suggests that patients with savant syndrome use pattern recognition in order to efficiently solve problems [4,5,6]. 
The Virtual Savant paradigm proposes applying a learning approach using computational intelligence to predict the results computed by a reference algorithm that solves a given problem [7,3]. Virtual Savant receives as input a set of problem instances and the results computed by the reference algorithm, which is used to train a machine learning classifier. Once the training phase is completed, Virtual Savant can be applied to solve new, unknown, and even larger problem instances. In this way, the Virtual Savant paradigm aims at learning the behavior of a given resolution algorithm in order to generate a completely different program that reproduces an analogous but unknown process to compute accurate results for the same problem. Furthermore, the resulting generated program is lightweight and can take advantage of modern massively parallel computing architectures to provide a fast and powerful problem solving schema. Following previous works [8,9], this article describes a deeper study on how to solve the 0/1 Knapsack Problem using Virtual Savant. The first evaluation of Virtual Savant in a parallel environment (Intel Xeon Phi 7250 server) to solve the 0/1 Knapsack Problem is presented. The accuracy of the proposed approach is studied as well as its parallel capabilities and performance on a many-core computing environment. Experimental results when solving 0/1 Knapsack Problem instances of varying size and difficulty suggest that the proposed approach is able to compute competitive solutions while showing good scalability properties when increasing the number of processing elements. The article is organized as follows. Section 2 presents the 0/1 Knapsack Problem formulation, introduces Virtual Savant and presents an overview of the related literature. Section 3 outlines the application of Virtual Savant to the 0/1 Knapsack Problem. Section 4 presents the experimental evaluation of the proposed approach and, finally, Section 5 presents the conclusions and main lines of future work. Problem and method This section introduces the 0/1 knapsack problem, describes the Virtual Savant paradigm, and presents a review of the related literature. 0/1 Knapsack Problem formulation The 0/1 Knapsack Problem is a classic combinatorial optimization problem which is proven to be NP-hard [10]. The mathematical formulation is as follows. Given a set of items, each with a profit and a weight , the 0/1 Knapsack Problem consists in finding a subset of items that maximizes the total profit, without exceeding the weight capacity of the knapsack. Eq. 1 shows the problem formulation, where ∈ 0,1 indicates whether item is included or not in the knapsack. Despite its straightforward formulation, the 0/1 Knapsack Problem has a large solution space and is frequently used as a benchmark to evaluate optimization algorithms. Additionally, the 0/1 Knapsack Problem can be used to model several optimization problems with direct real-world applications in many fields. In the context of this work, the 0/1 Knapsack Problem is useful to evaluate the Virtual Savant paradigm for several reasons: i) it is a NP-hard optimization problem; ii) it allows studying the behavior of Virtual Savant in problems with binary variables and simple constraints; iii) a large dataset of problem instances is publicly available with varying size and difficulty. Virtual Savant Virtual Savant is a novel paradigm to automatically generate programs that solve optimization problems in a massively parallel fashion [11]. 
The paradigm is inspired by the savant syndrome, a rare condition in which a person with significant mental disabilities has certain abilities far in excess of what would be considered normal [5]. People with this condition (savants) usually excel at one specific skill such as art, memory, rapid calculation, or musical abilities. The methods used by savants to solve problems are not fully understood due to the difficulties in communicating with them, since the syndrome is usually associated with autism. The main hypothesis states that savants learn through pattern recognition [4]. This mechanism allows savants to solve a given problem without understanding the underlying principles (e.g., being able to enumerate prime numbers without understanding what a prime number is). In analogy to the savant syndrome, Virtual Savant consists in training a machine learning classifier to automatically learn how to solve an optimization problem from a set of observations, which are usually obtained from a reference algorithm that solves the same problem. Once the training phase is completed, Virtual Savant can emulate the reference algorithm to solve new, unknown, and even larger problem instances, without the need of any further training. The Virtual Savant paradigm consists of two phases: classification, where results for unknown problem instances are predicted, and improvement, where predicted results are further improved using specific search procedures. Related work The 0/1 Knapsack problem has been widely studied in the operations research field. Nemhauser and Ullman [12] presented an exact algorithm to solve the 0/1 Knapsack Problem based on dynamic programming. The proposed algorithm was devised to solve capital allocation problems with constrained budgets, in the field of economics. Later, an optimized implementation of the original Nemhauser-Ullman algorithm was proposed by Harman et al. [13]. This version was applied to solve instances of the Next Release Problem, an optimization problem from software engineering where the goal is to determine the features to include in a new release of a given software product [14]. The optimized implementation by Harman et al. is used in our work to train the proposed Virtual Savant for 0/1 Knapsack Problem. Few articles were found in the related literature applying machine learning techniques to solve optimization problems, in line with the Virtual Savant proposal. Vinyals et al. [15] introduced Pointer Networks (ptr-nets), a model based on recurrent neural networks. Similarly to the approach applied in Virtual Savant, ptr-nets are trained by observing solved instances of a given problem and the proposed scheme is also able to deal with variable size outputs. The proposed model was applied to solve three different discrete combinatorial optimization problems: finding planar convex hulls, computing Delaunay triangulations, and solving the planar Travelling Salesman Problem. Experimental results indicated that the trained models were able to address problem instances larger than those seen during training and find competitive results for the studied problems. More recently, Hu et al. [16] applied a similar approach to the one proposed by Vinyals et al. to the three-dimensional bin packing problem, a specific variant of an allocation problem. 
A deep reinforcement learning approach is used to decide the sequence to pack items in a bin, while the empty space and the spatial orientation in which the items are placed inside the bin are calculated by heuristic methods. The reported experimental results showed that the proposed approach outperformed a specific heuristic for the problem. Improvements of 5% on average over the baseline results were obtained for the problem instances studied. Our previous works were able to obtain promising results when applying Virtual Savant to a task scheduling problem [17,18,7,11]. The application of Virtual Savant to the 0/1 Knapsack Problem has been previously studied in [8,9]. This article extends those two previous works by evaluating the parallel capabilities of the Virtual Savant model in a many-core parallel infrastructure. Virtual Savant for the 0/1 Knapsack Problem This section describes the application of the Virtual Savant paradigm to the 0/1 Knapsack Problem. The Virtual Savant implementation for the 0/1 Knapsack Problem uses Support Vector Machines (SVMs) for the classification phase. SVMs are trained using Nemhauser-Ullmann as a reference algorithm, which computes exact solutions for the 0/1 Knapsack Problem [13]. Each item of the problem instance is considered individually during the training phase of Virtual Savant. Therefore, each feature vector holds the weight and profit of the item, along with the capacity of the knapsack. The classification label is 0/1, indicating whether the reference algorithm included (or not) the item in the knapsack. Thus, a single solution of the reference algorithm provides as many observations as the number of items in the instance. The LIBSVM framework with a Radial Basis Function kernel was used [19]. A specific fork of the LIBSVM package was designed to improve training times on many-core architectures [20]. Fig. 1 outlines the training scheme for Virtual Savant to solve the 0/1 Knapsack Problem. Once the learning process is completed, Virtual Savant uses (in parallel) multiple instances of the trained SVM to predict whether or not to include each item in the knapsack. These decisions are independent for each item, providing Virtual Savant with a high degree of parallelism. The output of the classification phase is a vector that holds, for each item, the probability of including it in the knapsack. Since the length of the training vectors is fixed (3 features + 1 label), there is no need to re-train the SVM to solve problem instances of different size (i.e., with varying number of items). This allows Virtual Savant to easily scale to problem instances of larger dimensions, without requiring any additional training process. The improvement phase takes as input the resulting vector of probabilities computed in the prediction phase. One candidate solution is generated per computing resource available, by randomly sampling according to the probabilities of including each item. Finally, a local search heuristic is applied over each generated solution. The local search operator considered in this work is very simple, just performing random modifications on the items to include or not. On each step of the local search, a randomly-chosen bit in the solution is flipped, the new solution is evaluated, and the local search continues from that solution if an improvement is made. 
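A minimal sketch of the classification phase just described, using scikit-learn's SVC (which wraps LIBSVM): one observation per item with features (item weight, item profit, knapsack capacity) and a 0/1 label from the reference algorithm. The random data and the stand-in "reference solution" below are placeholders (in the paper the labels come from the exact Nemhauser-Ullmann solutions); the RBF hyperparameters are the values quoted in the experimental section.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def item_features(weights, profits, capacity):
    """One row per item: (item weight, item profit, knapsack capacity)."""
    return np.column_stack([weights, profits, np.full(len(weights), capacity)])

# Placeholder training instance; in the paper the 0/1 labels are the decisions of the
# exact Nemhauser-Ullmann reference algorithm on the training instances.
w = rng.integers(1, 100, size=500).astype(float)
p = rng.integers(1, 100, size=500).astype(float)
capacity = 0.5 * w.sum()
labels = (p / w > np.median(p / w)).astype(int)   # stand-in for the reference solution

svm = SVC(kernel="rbf", C=8192, gamma=0.5, probability=True)
svm.fit(item_features(w, p, capacity), labels)

# Prediction phase: per-item inclusion probabilities for a new (possibly larger) instance.
w_new = rng.integers(1, 100, size=1000).astype(float)
p_new = rng.integers(1, 100, size=1000).astype(float)
cap_new = 0.5 * w_new.sum()
include_prob = svm.predict_proba(item_features(w_new, p_new, cap_new))[:, 1]
```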
Algorithm 1 describes the method used to evaluate the score of a solution in the local search procedure, considering a solution x with profit p(x), weight w(x) and overweight o(x) = max(0, w(x) − W), where W is the knapsack capacity and each decision variable xi ∈ {0,1}. The profit, weight and overweight are scaled using the minimum and maximum weight and profit values in the problem instance. The improvement phase, as well as the prediction phase, is massively parallel, since more local searches can be spawned as more computing resources become available. Two correction schemes are included in the improvement phase in order to ensure that the returned solution satisfies the knapsack capacity restriction:
• Greedy correction by profit (CP): iteratively removes the item with the lowest profit until the total weight is lower than, or equal to, the knapsack capacity.
• Greedy correction by weight (CW): iteratively searches for the items with weight higher than, or equal to, the overweight of the solution and removes the one with the lowest weight among them. If no item satisfies this condition, it removes the one with the highest weight.
The corrections are applied to each tentative solution after the local search, to ensure that the returned solution satisfies the knapsack capacity constraint. After all local searches and corrections are completed, the overall best solution found is returned.

Experimental analysis

This section reports the experimental analysis of the proposed Virtual Savant for the 0/1 Knapsack Problem.

Problem instances

The evaluation was performed over benchmark problem instances with different sizes and correlations between the weight and profit of items. The correlation is related to the difficulty of solving an instance [13]. The benchmark includes 50 datasets, each with instances of size 100 to 1500 items (step size: 100). For each problem size, the correlation varies from 0.0 to 1.0 (step size: 0.05). The benchmark, including a total of 15,750 problem instances, is publicly available at ucase.uca.es/nrp.

SVM training

The training phase was performed using dataset 1 to evaluate three different feature configurations. Results show that the best accuracy was achieved when using item weight, item profit, and knapsack capacity. Regarding the size of the training set, results show that training with 15% of dataset 1 achieves good accuracy metrics; increasing the number of observations results in marginal accuracy improvements while significantly increasing training times. The parameters for the SVM (C) and the RBF kernel (γ) were configured prior to the experimental evaluation. Cross-validation was performed over a set of 5,000 samples randomly selected from dataset 1. Results suggest that the best results are computed with C = 8192 and γ = 0.5. The average accuracy over all datasets increased from 89.35% to 90.48% after parameter configuration. For the improvement phase, the parameters of the score assignment function in the local search were configured to m = 0.2 and k = 2 and the stopping criterion was set to 1000 iterations.

Experimental results

After configuration, the trained SVM was used to evaluate the complete Virtual Savant model on datasets 2 to 5. These datasets are completely new for the algorithm, as they were not used during the training phase.
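Before turning to the results, the following sketch illustrates the improvement phase described above: sampling a candidate from the per-item inclusion probabilities, the bit-flip local search, and the two greedy correction schemes (CP and CW). The exact scaled score of Algorithm 1, with its parameters m and k, is not reproduced here; a simple penalized-profit stand-in is used instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_candidate(include_prob):
    """Draw a tentative 0/1 solution from the per-item inclusion probabilities."""
    return (rng.random(include_prob.size) < include_prob).astype(int)

def score(x, w, p, capacity):
    """Stand-in for the score of Algorithm 1: profit penalized by overweight."""
    overweight = max(0.0, float(w @ x) - capacity)
    return float(p @ x) - 10.0 * overweight

def local_search(x, w, p, capacity, iters=1000):
    """Flip one random bit per step and keep the change only if the score improves."""
    best, best_score = x.copy(), score(x, w, p, capacity)
    for _ in range(iters):
        cand = best.copy()
        cand[rng.integers(cand.size)] ^= 1
        s = score(cand, w, p, capacity)
        if s > best_score:
            best, best_score = cand, s
    return best

def correct_by_profit(x, w, p, capacity):
    """CP: drop the packed item with the lowest profit until the solution is feasible."""
    x = x.copy()
    while w @ x > capacity:
        packed = np.flatnonzero(x)
        x[packed[np.argmin(p[packed])]] = 0
    return x

def correct_by_weight(x, w, p, capacity):
    """CW: drop the lightest packed item whose weight covers the overweight, if any,
    otherwise drop the heaviest packed item, until the solution is feasible."""
    x = x.copy()
    while (over := w @ x - capacity) > 0:
        packed = np.flatnonzero(x)
        heavy = packed[w[packed] >= over]
        x[heavy[np.argmin(w[heavy])] if heavy.size else packed[np.argmax(w[packed])]] = 0
    return x
```

In the full method, one such sampled candidate and local search would be launched per available computing resource, a correction applied to each result, and the best corrected solution over all searches returned.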
The experimental evaluation focused on both the quality of the solutions and the performance and scalability when using a massively parallel computing infrastructure. Hardware platform A many-core computing infrastructure was used in the experimental analysis, in order to evaluate the capabilities of Virtual Savant to compute accurate results over a massively parallel platform. A typical many-core computing infrastructure consists of tens or thousands of simpler independent cores. The use of many-core processors has been increasing in the past years, with extensive applications in embedded systems and high-performance computing platforms [21]. Many-core architectures can be programmed using the standard CPU model without needing specific knowledge about the underlying parallel hardware. Even without including platformspecific features, many-core systems offer support for serial legacy code [22]. The evaluation of Virtual Savant for the 0/1 Knapsack Problem was performed on an Intel Xeon Phi 7250 processor with 68 cores and 64GB RAM. Scalability Virtual Savant approach is elastic and adapts to the underlying hardware platform: if more computing resources are available, Virtual Savant can use them on both the prediction and the improvement phase. In the prediction phase, the computational load of predicting whether each item is included or not in the knapsack is balanced among the computing resources available. In the 28 improvement phase, Virtual Savant takes advantage of available resources to execute more local searches on tentative solutions, thus increasing the probability of computing more accurate results. The scalability of Virtual Savant when using a varying number of computing elements was evaluated for the prediction and improvement phases. Fig. 3 reports the average execution time (in seconds) for all problem instances studied when varying the number of threads. Fig. 3. Execution time varying the number of threads Results show that Virtual Savant scales very well when increasing the number of threads up to the number of cores available. When more threads are spawned, the performance starts degrading due to threads sharing resources. Consequently, the remainder of the experimental evaluation was performed using 68 threads. These results confirm the good scalability properties of Virtual Savant. Virtual Savant: prediction phase accuracy Boxplots in Figs. 4 and 5 correspond to the accuracy achieved during the prediction phase of Virtual Savant grouping problem instances by size and weight/profit correlation, respectively. The median prediction accuracy of the SVM is larger than 90% for all problem sizes studied. No significant differences are noticed among instances of different sizes. On the other hand, significant differences can be observed in the accuracy of the prediction phase on instances with varying weight/profit correlation. Instances with weight/profit correlation of 0.5 are the simplest to predict for the SVM, with a median accuracy value of over 97%. Additionally, in the worst case, the median accuracy of the SVM is larger than 80%. Virtual Savant: quality of solutions Results computed by Virtual Savant were compared with the known optima for the studied instances, to evaluate the efficacy of the proposed approach. Table 1 reports the average ratio to the optima for problem instances grouped by size. Table 2 reports the average ratio to the optimum, grouping instances by the correlation between weight and profit of items. 
Results achieved by Virtual Savant grouped by instance size differ from the known optima in just 2-4% on average for all problem instances studied. This is an encouraging result considering that the improvement phase of Virtual Savant consists in a straight-forward local search which does not incorporate any specific knowledge of the problem, thus making it potentially extensible to other related optimization problems. When looking at results grouped by weight/profit correlation, Virtual Savant allows computing accurate results for all problem instances studied. In the worst case, Virtual Savant differs from the optimum in 6% on average (for instances with no correlation between weight and profit).
4,552.8
2019-01-01T00:00:00.000
[ "Computer Science" ]
Characterizing Malignant Melanoma Clinically Resembling Seborrheic Keratosis Using Deep Knowledge Transfer Simple Summary Malignant melanomas (MMs) with aypical clinical presentation constitute a diagnostic pitfall, and false negatives carry the risk of a diagnostic delay and improper disease management. Among the most common, challenging presentation forms of MMs are those that clinically resemble seborrheic keratosis (SK). On the other hand, SK may mimic melanoma, producing ‘false positive overdiagnosis’ and leading to needless excisions. The evolving efficiency of deep learning algorithms in image recognition and the availability of large image databases have accelerated the development of advanced computer-aided systems for melanoma detection. In the present study, we used image data from the International Skin Image Collaboration archive to explore the capacity of deep knowledge transfer in the challenging diagnostic task of the atypical skin tumors of MM and SK. Abstract Malignant melanomas resembling seborrheic keratosis (SK-like MMs) are atypical, challenging to diagnose melanoma cases that carry the risk of delayed diagnosis and inadequate treatment. On the other hand, SK may mimic melanoma, producing a ‘false positive’ with unnecessary lesion excisions. The present study proposes a computer-based approach using dermoscopy images for the characterization of SΚ-like MMs. Dermoscopic images were retrieved from the International Skin Imaging Collaboration archive. Exploiting image embeddings from pretrained convolutional network VGG16, we trained a support vector machine (SVM) classification model on a data set of 667 images. SVM optimal hyperparameter selection was carried out using the Bayesian optimization method. The classifier was tested on an independent data set of 311 images with atypical appearance: MMs had an absence of pigmented network and had an existence of milia-like cysts. SK lacked milia-like cysts and had a pigmented network. Atypical MMs were characterized with a sensitivity and specificity of 78.6% and 84.5%, respectively. The advent of deep learning in image recognition has attracted the interest of computer science towards improved skin lesion diagnosis. Open-source, public access archives of skin images empower further the implementation and validation of computer-based systems that might contribute significantly to complex clinical diagnostic problems such as the characterization of SK-like MMs. Introduction Malignant melanomas (MMs) with atypical clinical presentation constitute a diagnostic pitfall, and false negatives carry the risk of a diagnostic delay and improper disease management [1,2]. Among the most common, challenging presentation forms of MMs are those that clinically resemble seborrheic keratosis (seborrheic keratosis-like MMs, SK-like MMs) [3]. SK is one of the most frequently diagnosed benign skin tumors in everyday clinical practice. It is a hallmark of aged, chronically sun-exposed skin of older individuals, with well-characterized, in most cases, diagnostic clinical features. The patients are usually alarmed about the sometimes rapidly growing exophytic lesions; however, in most cases, they can be assured that their growths are benign simply based on the clinical examination and without the need for histologic confirmation. 
Moreover, in many clinically doubtful cases, an additional dermoscopic assessment of the suspect lesion enables a clear-cut diagnosis of the condition based on a series of well-elaborated, typical dermoscopic features [4]. However, none of the SK dermoscopic findings is specific to SK [4], as they can be observed in other skin tumors, including malignant ones, among which are also distinct MM cases (SK-like MMs) [5]. The true incidence of SK-like MM is largely unknown since many of these lesions are misdiagnosed as SK on the basis of the clinical and dermoscopic examination and are not biopsied at this stage [3]. Izikson et al. [6], in a retrospective study covering ten years (1992 to 2001), retrieved 9204 pathology reports of material admitted with the clinical differential diagnosis SK. Melanoma was confirmed by histological examination in 61 of these cases (0.66%). SK-like melanoma shares clinical and dermoscopic features of SK and melanoma, making the diagnosis challenging. A somewhat regular shape and the presence of benign dermoscopic patterns suggestive of an SK lead to underestimating the true malignant nature of this type of lesion. This ambiguity in the diagnosis was highlighted in a study by Carrera et al. [7] in which 54 dermatologists with about ten years of clinical practice clinically misdiagnosed 40% of 134 SK-like melanomas as benign lesions. An additional dermoscopic evaluation could improve the overall diagnostic accuracy from 60.9 to 68.1%, i.e., not more than by about 20%. Additionally, in the largest dermoscopic study of SK-like melanomas to date, the dermoscopy score and the seven-point checklist score showed benignity range with values 4.2 and 2 [5]. In the same study, Carrera et al. found that the most helpful criteria in correctly diagnosing SK-like MMs, despite the presence of other SK features, were the identification of blue-white veil, streaks, and a pigmented network [5]. Noninvasive optical methods, such as reflectance confocal microscopy (RCM) and optical coherence tomography can be employed to improve accuracy in melanoma diagnosis [8][9][10]. However, in SK-like MMs, the application has been limited due to frequent clinical, dermoscopic misdiagnosis [3]. The diagnostic grey zone between SK and MM becomes even broader as SK mimicking melanomas (MMs-like SK) have also been reported, with an increased risk of false MM diagnoses [11][12][13][14]. Dermoscopy of typical SK is characterized by milia-like cysts, comedolike openings, and brain-like and finger-like structures [4]. However, pigmented SK can sometimes present dermoscopic patterns that mimic melanocytic lesions, the most frequent of which is the so-called false pigmented network. Dermoscopic evaluation of 402 lesions indicated that pigmented SK could show at least one of the criteria most predictive of melanocytic proliferations [11]. Recent studies have highlighted the contribution of RCM in characterizing MM-like SK. Farnetani et al. [15] retrospectively evaluated RCM images of atypical SK lesions suspicious of MM at dermoscopy to identify a diagnostic approach able to minimize surgical biopsies or excisions. They assessed 111 facial lesions with histological SK diagnosis. By dermoscopy, most lesions (n = 83 lesions, 75%) were classified as melanocytic-like. With RCM, only 16% were classified as suspicious of malignancy, with the remaining 84% considered 'SK-like'. 
The presence of RCM features associated with typical SK, the rare presence of melanomaassociated features, and the absence of medusa head-like structures seem to be the most sensitive indicators for atypical SK facial lesions. In another retrospective study, Pezzine et al. [16], applied RCM to analyze excised skin lesions with a ≥1 score of the revisited seven-point dermoscopy checklist [17]. Their objective was to evaluate the agreement of RCM classification and histological diagnoses and the reliability of well-known RCM criteria for SK in identifying SK with atypical dermoscopy presentation. An excellent agreement (97%) was confirmed for RCM and histopathologic examination for SK with atypical dermoscopy presentation, allowing an effective noninvasive differential diagnosis. More importantly, RCM features in this group of atypical lesions were similar to those described for typical SK cases. Recently, computer-aided diagnosis (CAD) systems are increasingly combined with various noninvasive imaging techniques to encompass advanced image processing and enable the application of artificial intelligence (AI) methods to improve diagnostic accuracy [18][19][20]. In the field of quantitative noninvasive optical techniques, Bozsànyi et al. [21] assessed the usefulness of spectral reflectance and autofluorescence measurements of MM and SK for their accurate differentiation. Using image analysis, they have extracted quantitative autofluorescence intensity measures and created a multiparameter descriptor-the SK index. High values of SK index (resulting from high fluorescence intensity values and the number of highly autofluorescent particles detected in the lesion area) were associated with SK lesions and were mainly caused by the milia-like cysts and comedo-like opening, which are primarily filled with keratin. On the other hand, compared with SK, the melanomas exhibited significantly lower intensity values. The authors used a threshold value of SK index and discriminated SK (n = 319) from MM (n = 161) with a sensitivity of 91.9% and specificity of 57.0%. It is worth noting that their data set included six image sets of MM-like SK and 52 image sets of SK-like MM; however, they did not clarify the clinical or dermoscopic atypia criteria of these latter cases. In the same context, Wang et al. [22] developed a support vector machine (SVM) classification model fed with speckle patterns estimated from image histogram of copolarized and cross-polarized speckle images and a depolarization ratio image D to differentiate between MM and SK. Using a data set of 143 patients (MM n = 37, SK n = 106), they could discriminate SK from MM with this approach with a sensitivity of 87.63% and a specificity of 85.74%. The increasing worldwide integration of dermoscopy in clinical dermatology practice [23,24], the evolving efficiency of deep learning algorithms in image recognition, and the availability of extensive image archives have greatly accelerated the development of advanced CAD systems for melanoma detection [25][26][27][28][29][30]. Earlier efforts were mainly concentrated on discriminating benign melanocytic lesions from MM. However, with the availability of large image datasets, the interest has shifted towards a more sophisticated categorization of skin tumors. Today, the largest, publicly available dataset of dermoscopic images is the International Skin Image Collaboration (ISIC) archive [31]. 
ISIC promotes CAD-based research by sponsoring annual related challenges for the computer science community in association with leading computer vision conferences. Thus in recognition of the immense clinical impact of differentiating between MM and SK, in 2017 ISIC released a focused dataset with a three-task challenge: lesion segmentation, visual dermoscopic features detection, and lesion discrimination firstly between melanoma vs. nevus and seborrheic keratosis (malignant vs. benign lesions), and secondly between seborrheic keratosis vs. nevus and melanoma (nonmelanocytic vs. melanocytic lesions) [32]. In the present study, we used image data from the ISIC archive to investigate the discrimination efficiency of image embeddings derived from pretrained convolutional network VGG16 to differentiate between MM and SK in the challenging diagnostic task of the preinvasive diagnosis of SK-like MMs. To the best of our knowledge, this study is the first effort exploring the capacity of deep knowledge transfer in refined complexity diagnostic tasks of clinically atypical skin tumors. Data Set Description Our data set comprised 978 dermoscopic images (malignant melanoma, MM, n = 550; seborrheic keratosis, SK, n = 428) retrieved from the International Skin Image Collaboration archive [31]. Patients' metadata are summarized in Table 1. The clinical diagnosis of all MM cases and of 310 SK cases (72.4%) was confirmed by histological examination. A large part of the images came from ISIC 2017 challenge [32]. This database provides ground truth lesion images with annotation of the lesion area and the dermoscopic patterns. To enhance our training set, we retrieved 200 additional images (n = 100 MM, n = 100 SK; the BCN_20000 dataset, Hospital Clínic de Barcelona) from the ISIC archive. For the remaining images (BCN_2000 dataset), the lesion area was annotated manually by our experts. The study did not include images in which hair (or another type of noise such as bubbles) substantially corrupted the lesion area. The image resolution in the dataset ranged from 639 × 602 to 6720 × 4461 pixels. To train our system, we used n = 349 cases of MM and n = 318 cases of SK. The inclusion criteria of dermoscopic images in the test set (MM n = 201, SK n = 110) were the presence of at least one atypical dermoscopy pattern. For MM, this is the absence of pigmented network or the presence of milia-like cysts (or both). On the other hand, atypical SK lacked milia-like cysts or had a pigmented network (or both) (Figures 1 and 2). Feature Extraction Using Deep Knowledge Transfer The objective of machine learning in CAD systems is to extract patterns from images and use these patterns to make diagnostic predictions. These patterns are feature vector representations of input images, also called embeddings. From the deep learning perspective, using pretrained embeddings to encode images into feature vectors is known as transfer learning [33]. A typical example is to repurpose pretrained embeddings trained on a large corpus of millions of images [34] for a large-scale classification task to implement a classification model for a different classification task, with much fewer data available. Several studies have indicated that embeddings extracted from deep convolutional neural networks (CNNs) are powerful for various visual recognition tasks [35][36][37]. 
Their outstanding performance as image representation learners has driven the trend of utilizing them as optimized feature generators for skin lesion classification [38][39][40][41][42][43]. Our work, aligned with previous research evidence, explores the efficiency of a pretrained CNN, namely VGG16 [44], as the starting point for the generation of image embeddings in order to discriminate between cases of atypical MM and atypical SK. As a conventional deep CNN, VGG16 is a 16-layer architecture that consists of convolutional and fully connected parts. VGG16 pretrained on ImageNet is a classifier architecture for distinguishing a large number of object classes [34]. This goal is achieved gradually by learning image representations in a hierarchical order (Figure 3). Top layers capture more abstract, high-level semantic features. They are robust at distinguishing objects of different classes (e.g., flowers, dogs) even under significant appearance changes or in the presence of a noisy background. Still, they are less discriminative for objects of the same category (e.g., differentiating between species of flowers). Moreover, several studies confirmed that the fully connected layers of the CNN, whose role is primarily that of classification, tend to exhibit relatively worse generalization ability and transferability [45]. In contrast, the lower convolutional layers provide more detailed spatial representations. They are more helpful for localizing fine-grained details and distinguishing a target object from its distracters (other objects with similar appearance, e.g., distinguishing between bird species). However, they are less robust to appearance changes. The convolutional layers, acting progressively from fine, spatial to coarse, abstract representations, generally transfer well [33,37,45,46] to diverse classification tasks. Based on this evidence, in the present work, we aimed to find the optimal transition point in the convolutional layers to mine high-capacity image representations for the challenging diagnostic task of SK-like MM characterization. We exploited image representations from the layers "pool2"-"pool5". For comparison purposes, we also extracted the fully connected layers' "FC6" and "FC7" feature maps so that we could contrast the behavior of the convolutional and fully connected layers (Figure 3). Finally, the efficiency of the VGG16 representations was compared with hierarchical feature embeddings from the ResNet50 convolutional network [47]. Image encoding from fine spatial to coarse abstract representations was explored using the layers ReLU_10, ReLU_22, ReLU_40, and ReLU_49. The image representation of a convolutional layer (activation) forms a tensor of H × W × d, consisting of d feature maps of size H × W. Each activation is reduced by global average pooling (each of the d feature maps is averaged to a single value) to produce a d-dimensional feature vector. Table 2 summarizes the different VGG16 and ResNet50 layers' representations and their resulting feature vectors for an input image of 224 × 224 × 3 pixels. Implementation and Evaluation of the Diagnostic Model The extracted deep feature vectors (Table 2) were used to train different binary SVM classifiers. SVM is the classifier of choice for assessing representations from pretrained CNNs [35,36]. For all SVM models, optimal hyperparameter selection (box constraint, kernel function, kernel scale, polynomial order) was carried out using the Bayesian optimization method [48] that minimizes the k-fold (k = 5) cross-validation classifier error.
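To make the feature-extraction step concrete, the following is a minimal sketch, not the authors' implementation (which followed a MATLAB-style workflow with Bayesian hyperparameter search): it pulls a global-average-pooled embedding from an intermediate VGG16 layer and feeds it to a binary SVM. The Keras layer name 'block3_pool' is assumed to correspond to the 'pool3' layer referred to in the text, and X_train, y_train, X_test are placeholders for the dermoscopic images and MM/SK labels.

# Minimal sketch (assumed workflow, not the authors' code): VGG16 intermediate-layer
# embeddings with global average pooling, classified by a binary SVM.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
# Keras names the pooling layers 'block2_pool' ... 'block5_pool'; 'block3_pool' is taken
# here as the analogue of the 'pool3' layer discussed in the text.
feature_model = tf.keras.Model(inputs=base.input,
                               outputs=base.get_layer("block3_pool").output)

def embed(images):
    # images: float array of shape (N, 224, 224, 3), RGB, values in 0-255
    x = tf.keras.applications.vgg16.preprocess_input(np.array(images, dtype="float32"))
    fmaps = feature_model.predict(x, verbose=0)   # (N, H, W, d) activation tensor
    return fmaps.mean(axis=(1, 2))                # global average pooling -> (N, d)

# The study tuned the SVM (kernel, box constraint, ...) by Bayesian optimization with
# 5-fold cross-validation; a plain RBF SVM is shown here as a stand-in.
# clf = SVC(kernel="rbf").fit(embed(X_train), y_train)
# y_pred = clf.predict(embed(X_test))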
For each model, the accuracy performance was evaluated on an independent data set of challenging cases of MM and SK in terms of sensitivity, specificity, and overall accuracy: Sensitivity = TP/(TP + FN), Specificity = TN/(TN + FP), and Accuracy = (TP + TN)/(TP + TN + FP + FN), where TN is the number of SK correctly identified, FN is the number of MM incorrectly identified as SK, TP is the number of MM correctly identified, and FP is the number of SK incorrectly identified as MM. The models' accuracies were assessed with the McNemar test to detect whether the differences in misclassification rates between any two models were statistically significant or not [49,50]. Image Preprocessing Before being used as input to the pretrained CNNs, all images were preprocessed following a standard pipeline of color normalization, cropping, and resizing (Figure 4). To achieve color constancy across the whole data set, we used the Grey world color constancy method [51], initially used by [52] and followed by many researchers in automated skin classification works [53][54][55]. Finally, the exact lesion dimensions were used to crop the images, as proposed in [55]. Results Bayesian optimization was run for 100 iterations, and different image embeddings from pretrained VGG16 and ResNet50 layers resulted in different classification models, with noticeable differences in test classification accuracies (Table 3). The SVM model with a Gaussian kernel using feature vectors from the 'pool3' layer exhibited the best overall accuracy of 80.7% (251/311 cases) and a sensitivity and specificity of 78.6% (158/201 cases) and 84.5% (93/110 cases), respectively. The highest specificity, 90.9% (100/110 cases), was achieved by a linear SVM classifier and features from the convolutional layer 'pool4'. Considering the ResNet50 approach, the SVM model with a Gaussian kernel using feature vectors from the 'ReLU_22' layer likewise exhibited the best overall accuracy, 79.4%, with a sensitivity and specificity of 76.1% and 85.4%, respectively. More detailed comparison results are illustrated in Table 4, where the statistical significance (McNemar test) of the differences in the observed accuracies is displayed. Considering the VGG16 embeddings, layer 'pool3' produced significantly better sensitivity and overall accuracy with more than a 99.9% confidence level. The 'pool4' layer outperformed the sensitivity and overall accuracy of the pool5 and FC7 layers with a confidence of more than 95%, and those of layers pool2 and FC6 with a confidence of more than 99.9%. The fully connected layer FC7 outperformed the FC6 layer in sensitivity with more than 95% confidence. Table 4. Cross-comparison of the classifiers' accuracies (McNemar test). The arrowheads point to the classifier with the highest accuracy, and the lines denote comparable accuracies. The overall accuracy, sensitivity, and specificity results are denoted with dark, red, and blue colors. For example, comparing the performance of the FC6 and FC7 layers' representations, the FC7 layer exhibited statistically higher sensitivity with a confidence level of more than 95%. Only p-values of significantly different outcomes are displayed. Discussion The importance of the timely diagnosis of difficult-to-recognize melanomas that can clinically resemble benign tumors, such as the SK-like MMs, has been emphasized in previous studies [3,5,7,55,56]. Carrera et al. have indicated specific dermoscopic criteria for correctly identifying such challenging SK-like MM cases [5].
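As an aside before continuing the discussion, the evaluation metrics and the McNemar comparison used above can be illustrated with a short sketch (not the authors' code). The example counts are the 'pool3' test-set numbers quoted in the Results (158/201 MM and 93/110 SK correct); the paired-prediction helper assumes two models evaluated on the same cases.

# Illustration of the evaluation metrics and the McNemar test used to compare models.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                    # MM correctly identified
    specificity = tn / (tn + fp)                    # SK correctly identified
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# 'pool3' model: 158 of 201 MM and 93 of 110 SK correct -> (0.786, 0.845, 0.807)
print(metrics(tp=158, fn=201 - 158, tn=93, fp=110 - 93))

def mcnemar_pvalue(y_true, pred_a, pred_b):
    # 2x2 table of agreement/disagreement between two classifiers on the same cases
    a_ok, b_ok = (pred_a == y_true), (pred_b == y_true)
    table = [[np.sum(a_ok & b_ok), np.sum(a_ok & ~b_ok)],
             [np.sum(~a_ok & b_ok), np.sum(~a_ok & ~b_ok)]]
    return mcnemar(table, exact=False, correction=True).pvalue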
On the other hand, given their larger numbers and significant dermoscopic variability, SK may, at times, mimic melanoma, contributing to clinical MM overdiagnosis [14,15]. RCM may help diagnose challenging cases [3], and recent studies have highlighted the ability of RCM patterns to identify SK with atypical dermoscopy presentation [15,16]. However, there is a lack of related RCM studies focusing on SK-like MM [3]. Moreover, these latter studies [5,15,16] have unilaterally highlighted the diagnostic accuracy of dermoscopic and RCM features. The dermoscopic features that assist experts in characterizing SK-like MM have not been employed to assess atypical cases of SK, and the specific RCM patterns were not evaluated in SK-like MM cases. Moreover, the use of RCM is time-consuming, and the increased cost of the equipment restricts the wide availability of this technology. Today, with the rapid advancement of deep learning methods and the publicly available data sets, dermoscopic images dominate the research interest in CAD skin lesion systems. Numerous breakthrough studies, mainly from the field of computer science, have demonstrated high (expert-level) accuracy in melanoma detection. These high accuracy rates relate either to binary classification tasks such as benign vs. malignant or to multidifferential diagnosis tasks. In this study, we explored the potential of deep knowledge transfer to approach the challenging 'grey zone' of atypical cases of MM and SK. Studying the different image representation transfer results from the well-known VGG16 architecture and following a standard workflow, we achieved a sensitivity of 78.6% and a specificity of 84.5% using the convolutional layer 'pool3' as a feature extractor. Our results confirm that meaningful feature reuse is concentrated at the convolutional layers rather than at the higher, fully connected layers [33,36]. We also tested the ResNet50 network and verified the existence of an optimal transition from fine spatial to coarse semantic features through the deeper convolutional blocks of ResNet. However, since the discriminating image embeddings are located at the middle layers, the middle-level image embeddings from ResNet50 are of comparable capacity to those of middle-level VGG16. Moreover, a meta-analysis of 70 studies on CAD systems published between 2002 and 2018 [19] gave a melanoma sensitivity of 0.74 (95% CI, 0.66-0.80) and a specificity of 0.84 (95% CI, 0.79-0.88), indicating that we have tackled the challenging discrimination of SK-like MMs with comparable accuracies. In future work, aggregation methods that combine embeddings from the middle convolutional layers of the same network, or of different networks, into a global, dense image representation might further boost the system's accuracy (a minimal sketch of this idea is given below). However, the availability of annotated and high-quality image data remains the key contributor to improving accuracy. Our present contribution is thus twofold: firstly, the comprehensive evaluation of the transferability of features from different layers of pretrained VGG16 and ResNet50 unveiled the excellent efficiency and generalization properties of the middle-level convolutional layers; secondly, we targeted a challenging diagnostic task where key dermoscopic patterns of either condition are shared between benign and malignant lesions.
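The aggregation idea raised above as future work is not evaluated in this study; purely as an illustration, one possible way to fuse global-average-pooled embeddings from several middle layers (or from VGG16 and ResNet50) into a single dense representation is sketched below. The input arrays are assumed to come from feature extractors such as the one outlined earlier.

# Hedged sketch of a possible embedding-aggregation scheme (not part of the study).
import numpy as np

def aggregate(embedding_blocks):
    # embedding_blocks: list of (N, d_i) arrays from different layers or networks.
    # L2-normalise each block so that no single layer dominates, then concatenate.
    normed = [b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-12)
              for b in embedding_blocks]
    return np.concatenate(normed, axis=1)

# e.g. fused = aggregate([vgg_pool3, vgg_pool4, resnet_relu22])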
It is worth noting that the CAD system proposed herein is aligned with recent technological advances in smartphone-based teledermatology that promise to enhance diagnostic efficacy at the clinical level [57]. The main limitation of this study is that feature extraction from pretrained image embeddings acts more like a black box. The exploited image patterns generate little human-interpretable evidence for the lesion diagnosis. The effectiveness of this algorithm in prediagnosed cases is within the scope of a future prospective study. Conclusions Deep learning has boosted the efficiency of CAD systems significantly. With the publicly available data collections, the computer science community now has the opportunity to test the accuracy of these systems in melanoma diagnosis. Moreover, when these systems clearly focus on a specific diagnostic task and are trained and tested sufficiently, they may support dermatologists in challenging diagnostic tasks. Author Contributions: Conceptualization, P.S., I.B. and G.G.; methodology and software implementation, P.S. and A.L.; writing-original draft preparation, P.S.; writing-review and editing, I.B., G.G. and A.L. All authors have read and agreed to the published version of the manuscript. Funding: The authors did not receive support from any organization for the submitted work. Institutional Review Board Statement: Ethical review and approval were waived for this study because the patient images were retrieved from a publicly accessible database. Informed Consent Statement: Patient consent was waived because the employed figures were retrieved from a public database. Data Availability Statement: Publicly available datasets were analyzed in this study. These data can be found in the ISIC 2017 challenge archive: https://challenge.isic-archive.com/data/ (last accessed 9 December 2021). Conflicts of Interest: The authors declare no conflict of interest.
5,299
2021-12-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Upregulation of Zip14 correlates with induction of endoplasmic reticulum stress (ERS) in hypertrophied hearts of Dahl salt-sensitive rats Zinc is a trace element involved in maintaining cellular structure and function. Although zinc is associated with left ventricular hypertrophy (LVH), there have been few reports on this association. This study aimed to evaluate the correlation between Zip14 and the expression of endoplasmic reticulum stress (ERS)-associated molecules in hypertrophied hearts of rats. Dahl salt-sensitive rats were fed a high salt diet to establish a left ventricular hypertrophy (LVH) rat model. RT-PCR was used to determine Zip14, activating transcription factor 4 (ATF4), ATF6, x-box-binding protein 1 (xBP1), C/EBP homologous protein (CHOP) and immunoglobulin-binding protein (BiP) mRNA expression. Western blotting was used to evaluate Zip14, BiP, CHOP and GAPDH expression. Zinc levels were measured by Inductively Coupled Plasma Optical Emission Spectroscopy. The results indicated that, compared with the Control group, Zip14 mRNA and protein expression in LVH rat hearts were markedly increased (P < 0.01). Zinc content in rat heart tissue was significantly increased in the LVH group compared with the Control group (P < 0.05). ATF4, ATF6 and xBP1 mRNA expressions were increased in LVH rat hearts compared with Control hearts (P < 0.001). Compared with the Control group, CHOP and BiP mRNA and protein expression were markedly increased in LVH rat hearts (P < 0.05, P < 0.01). Linear regression models showed that Zip14 mRNA expression was positively correlated with zinc concentration and with ATF4 and ATF6 mRNA expression in Control hearts (P = 0.0005, P = 0.0052, P = 0.0026, respectively) and LVH rat hearts (P < 0.0001, P = 0.0119, P = 0.0033, respectively). In conclusion, upregulation of Zip14 in LVH rat hearts correlated with zinc accumulation and induction of ERS. Introduction Zinc is a trace element involved in maintaining cellular structure and function (Ryul et al., 2015). High zinc levels have irreversible effects on proteins and lead to the dysfunction of many of them. Low levels of zinc are also detrimental to cells because zinc is a cofactor for more than 300 enzymes and 2000 transcription factors, as well as mediating cell signaling (Roshanravan et al., 2015; Huang et al., 2017). Therefore, the balance of the intracellular zinc concentration, termed zinc homeostasis, is critical. Under physiological conditions, the regulation of zinc homeostasis mainly depends on zinc transporters, zinc-binding molecules, and zinc sensors (Foster and Samman, 2010). Zinc transporters are divided into two major families, Zip and ZnT. The 14 zinc transporters of the Zip family are responsible for transferring extracellular zinc into cells, while the 10 zinc transporters of the ZnT family have the opposite role (Lichten and Cousins, 2009). Zip14 is located in the cell membrane and promotes the entry of extracellular zinc into the cytosol, increasing the zinc concentration in the cytoplasm (Taylor et al., 2007). The endoplasmic reticulum (ER) is an important organelle widely present in cells and an important site for the folding, assembly, and modification of protein molecules (Ron and Walter, 2007; Kim et al., 2008). When intracellular zinc is deficient, ER stress (ERS) occurs, causing dysfunction of the ER. Therefore, zinc is necessary to maintain normal ER function.
In addition, ERS and cell dysfunction can be induced by oxidative stress and acute ischemia-reperfusion (IR) injury (Zhang, 2010; Zhang et al., 2014a; Zhang et al., 2014b). Zinc is associated with a variety of cardiovascular diseases, such as atherosclerosis and thrombosis after atherosclerotic plaque rupture, diabetic cardiomyopathy, arrhythmia, myocardial infarction, and congestive heart failure. Although zinc is associated with left ventricular hypertrophy (LVH), there have been few reports on this association (Little et al., 2010). LVH is characterized by pathological remodeling of the heart and is a good predictor of cardiovascular diseases such as myocardial infarction, congestive heart failure, sudden cardiac death, stroke, and overall CVD mortality (Desai et al., 2012). Our previous studies reported that serum zinc ion concentrations were significantly lower in patients with LVH compared with normal patients (Huang et al., 2017). Furthermore, zinc trafficking and the activity of the Zip14 transporter are important for adapting to ERS-associated chronic metabolic disorders, and the Zip14-mediated transport of zinc is necessary for adaptation to ERS (Kim et al., 2017). Olgar et al. (2018a) also reported that Zn2+ correlates with the induction of ERS through altered expression of the Zn2+ transporter Zip14 in heart failure. Therefore, there might be a correlation between Zip14 and ERS. The present study aimed to investigate the roles of the zinc transporter Zip14 and endoplasmic reticulum stress in the development of LVH in rats. Establishment of the LVH rat model To establish the LVH rat model, we chose Dahl salt-sensitive rats and fed them a high salt diet. The control group was fed a normal salt diet. The sodium chloride content of the high salt feed was 8%, while that of the normal diet was 0.3%. We purchased 5-week-old rats of SPF grade from Beijing Weitong Lihua Experimental Animal Technology Co. Ltd. (license no. SCXK (Beijing) 2012-0001). Blood pressure and heart rate were measured with a noninvasive blood pressure detector (BP-2010AUL Softron, Tokyo, Japan) every two weeks. All animals were exposed to a 12-h light-dark cycle and were given free access to tap water and standard chow daily. The rats in this study were randomly divided into the Control group (N = 14, 7 males and 7 females) and the LVH group (N = 22, 11 males and 11 females). At weeks 6, 12, and 18, rats were anesthetized by the intraperitoneal injection of 30 mg/kg 10% chloral hydrate. The rats were then fixed in a supine position, the neck and chest hair was removed, and an appropriate coupling agent was applied. Mild sedation was maintained through a nasal tube with a continuous low flow of oxygen. The heart rate was maintained at about 300-350 bpm and was monitored with an animal-specific ultrasound system (VEVO 2100 Imaging System, Toronto, Canada). LVH was successfully induced in high salt-fed Dahl rats at about 18 weeks. Left ventricular myocardium specimens were removed from successful LVH model rats and from the Control group at various time points, immediately frozen in liquid nitrogen, and stored at −80°C until use. RT-PCR assay Total RNA was extracted from myocardial tissues using Trizol Reagent (Invitrogen) following the manufacturer's recommendations. RNA concentrations were determined by spectrophotometry at 260 nm and 280 nm, and RNA (3 μg) was reverse transcribed to obtain complementary DNA (cDNA) in a 20 μL reaction mixture with the Promega Reverse Transcription System (Sigma-Aldrich)
according to the manufacturer's recommendations. Primers used for RT-PCR amplification were designed with Primer 5.0 software and synthesized by Invitrogen (Beijing, China). Real-time PCR was performed with the primer sequences listed in Table 1 using a CFX96 Real-Time PCR System (Bio-Rad, Singapore) and a SYBR® Green PCR Kit (Invitrogen) according to a standard application protocol and the manufacturer's instructions. The cycling parameters were as follows: 95°C for 10 min, followed by 40 cycles of 95°C for 15 s and annealing/extension at 60°C for 60 s. All samples were assayed in triplicate. The control gene GAPDH was used to normalize the results. The comparative threshold cycle method was used for data analyses. For western blotting, the binding of the primary antibody was detected with secondary antibodies (anti-rabbit, 1:2500) and visualized by the ECL method. The intensities of the bands were analyzed using ImageJ software. Measurement of zinc concentrations Fifty milligrams of myocardial tissue and 1 mL of 65% concentrated nitric acid were added to a small beaker for digestion and placed on a graphite-controlled thermoelectric plate at 120°C for 30 min. Finally, the nitric acid volume was made up to 1 mL, and 4 mL of distilled water was added. The zinc levels were measured by Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES Optima 8300, Perkin Elmer, MA, USA). Statistical analysis Data are expressed as the mean ± SEM and were obtained from 6 to 10 separate experiments. Statistical analysis of the experimental data was performed using SPSS 22.0 software (IBM Corp., Armonk, NY, USA). Statistical significance was determined using Student's t-test or repeated-measures analysis of variance. Linear regression analysis was used to assess relationships between variables. A value of P < 0.05 was considered statistically significant. Hemodynamic and echocardiographic parameters of rats at different ages The parameters of the rats are presented in Table 2. There was no significant difference in any index between the LVH and Control group rats at week 6. The mortality rate of the LVH group rats was 22.7% from weeks 6 to 12. The mean systolic blood pressure (SBP) and the late systolic thickness of the posterior wall of the left ventricle (LVPWs) in the LVH group were significantly higher than in the Control group (both P < 0.05), whereas the body weight (BW) was lower in the LVH group than in the Control group (P < 0.05) at week 12. The mean systolic blood pressure (SBP) and mean diastolic blood pressure (DBP) were significantly increased in the LVH group compared with the Control group (both P < 0.05) at week 18. SBP also rose in the 12-week and 18-week-old rats of the Control group, which might be caused by the increased body weight or obesity. The interventricular septum thickness at end-diastole (IVSd), left ventricular end-diastolic diameter (LVDd), end-diastolic thickness of the left ventricular posterior wall (LVPWd), and late systolic thickness of the posterior wall of the left ventricle (LVPWs) in the LVH group were significantly higher than in the Control group (all P < 0.05). However, body weight (BW) was lower in the LVH group compared with the Control group (P < 0.05) at week 18. Successful establishment of an LVH rat model As shown in Fig. 1, the left ventricular mass (LVM) and left ventricular hypertrophy index (LVM/BW) were significantly higher in the LVH group compared with the Control group at week 18 (744.83 ± 104.74 vs. 635.83 ± 119.06 mg, 2.29 ± 0.34 vs.
1.56 ± 0.32, respectively, P < 0.05). In addition to being a LVH rat model, it is also a classic rat model of hypertension. Expression of Zip14 and zinc contents are increased in LVH rat hearts As shown in Fig. 2A, Zip14 mRNA expression was markedly higher in LVH compared with Control rats (P < 0.01). A representative western blotting band for Zip14 is shown in Fig. 2B. Western blot analysis demonstrated that the expression of Zip14 proteins was increased in the LVH rat heart compared with Control hearts (P < 0.01) (Fig. 2B). In addition, as shown in Fig. 2C, the zinc concentration was significantly increased in LVH rat hearts compared with Control hearts (1.28 ± 0.29 vs. 1.00 ± 0.23 mg/L, P < 0.05). Induction of ERS in LVH rat hearts When ERS occurs, plenty of molecules are involved in this process. Therefore, to demonstrate the occurrence of ERS, we detected signal transduction molecules by western blotting and RT-PCR. As shown in Figs. 3A-3C, the mRNA expressions of activating transcription factor 4 (ATF4), activating transcription factor 6 (ATF6), and x box-binding protein 1 (xBP1) were increased in the LVH rat heart compared with Control hearts (P < 0.001). As shown in Fig. 3D, CHOP mRNA expression was markedly higher in LVH hearts compared with Control hearts (P < 0.001). A representative western blotting band for CHOP is shown in Fig. 3E. Western blot analysis demonstrated that the expression of CHOP proteins was increased in the LVH rat heart compared with Control hearts (P < 0.05) (Fig. 3E). As shown in Fig. 3F, BiP mRNA expression was markedly higher in LVH hearts compared with Control hearts (P < 0.05). A representative western blotting band for BiP is shown in Fig. 3G. Western blot analysis demonstrated that the expression of BiP proteins was increased in the LVH rat heart compared with Control hearts (P < 0.05) (Fig. 3G). Zip14 mRNA expressions are positively correlated with zinc contents, ATF4, and ATF6 mRNA expression To confirm the relationship between Zip14 and zinc, ATF4, and ATF6, we performed linear regression analyses. As shown in Figs. 4A and 4B, the linear regression models showed a significant positive relationship between Zip14 mRNA expressions and zinc concentration in Control hearts (R 2 = 0.8027, P = 0.0005) and LVH rat hearts (R 2 = 0.8769, P < 0.0001). Discussion Our research provides new and interesting insights into the complex relationship between zinc and LVH. A model of LVH by feeding Wistar germline Dahl salt-sensitive rats a high salt diet has been successfully established. At week 18, the left ventricular mass and left ventricular hypertrophy index of the experimental group were significantly higher than in the control group. We also detected a marked increase in zinc concentration and a remarkable upregulation of zinc transporter Zip14 expression in myocardial tissues of LVH rats. Furthermore, the mRNA expressions of ATF4, ATF6, CHOP, BiP, and xBP1 in LVH rats under ERS were significantly upregulated and partially related to CHOP protein levels in the myocardial tissues of LVH rats. A marked increase in the expression of BiP confirmed the occurrence of ERS in the myocardial tissues of LVH rats. Zinc plays an important role in cardiomyocyte protection by involving lots of signaling pathways, such as the cGMP/PKG pathway (Jang et al., 2007), NO/cGMP/PKG and glycogen synthase kinase-3β (GSK-3β) signaling pathway (Xi et al., 2010). 
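As an aside on the quantitative analyses reported above, the comparative threshold cycle normalization and the Zip14-zinc regression can be written in a few lines. The sketch below is not the authors' analysis script, and the numeric arrays are placeholders for the per-animal measurements, which are not tabulated here.

# Illustrative sketch of the 2^-ddCt normalization and the linear regression analysis.
import numpy as np
from scipy import stats

def fold_change(ct_target, ct_gapdh, mean_dct_control):
    # Comparative threshold cycle (2^-ddCt) relative expression, normalized to GAPDH.
    return 2.0 ** -((ct_target - ct_gapdh) - mean_dct_control)

# Placeholder per-animal values: relative Zip14 mRNA levels and ICP-OES zinc (mg/L).
zip14_mrna = np.array([1.00, 1.21, 0.92, 1.10, 1.33, 0.85])
zinc_conc  = np.array([0.95, 1.06, 0.90, 1.01, 1.12, 0.88])

res = stats.linregress(zip14_mrna, zinc_conc)
print(f"R^2 = {res.rvalue ** 2:.3f}, P = {res.pvalue:.4g}")
# compare with R^2 = 0.8027, P = 0.0005 (Control) and R^2 = 0.8769, P < 0.0001 (LVH)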
Moreover, the zinc transporter Zip14 is closely related to inflammation and the production of proinflammatory cytokines (Aydemir et al., 2012; Eizirik et al., 2012; Troche et al., 2016). Zip14 is also expressed in cardiac tissue and located in the plasma membrane, where it promotes the entry of extracellular zinc and increases the cytosolic zinc concentration (Taylor et al., 2007; Olgar et al., 2018a). We observed higher expression levels of Zip14 and higher ICP-measured myocardial tissue zinc content in the LVH rat model compared with the Control group. Similarly, Olgar et al. (2018b) used a rat hypertrophic heart model induced by transverse aortic coarctation (TAC) to show that the expression of Zip14 and the concentration of zinc ions in cardiomyocytes were increased compared with the SHAM group. Through linear regression analysis, we found that Zip14 plays an important role in the increased zinc concentration in the myocardium of LVH rats. ERS occurs when cells are affected by external factors such as inflammation, oxidative stress, and disturbed intracellular zinc homeostasis. ERS refers to the misfolding and folding disorders of newly synthesized proteins that lead to unfolded and misfolded proteins accumulating in the ER, affecting the normal function of the ER (Schroder and Kaufman, 2005; Kim et al., 2017). Activation of the UPR-related signaling pathway increases the expression of ER chaperones such as BiP and can independently induce apoptosis (Walter and Ron, 2011); therefore, we determined the expression of the BiP molecule in our study. Recently, Kim et al. (2017) confirmed that Zip14 is closely related to the adaptation of the ER to liver-mediated stress and reduces hepatic steatosis and apoptosis by activating ATF4, ATF6α and the UPR (Kim et al., 2017). Therefore, when ERS occurs in cells, increased intracellular Zn2+ concentrations play an important role in cell protection. Our findings showed that the expressions of ATF4, ATF6, xBP1, and BiP were increased in LVH rat hearts compared with Control hearts. Meanwhile, ERS markers were significantly upregulated and partially related to CHOP protein levels in the myocardial tissues of LVH rats. All these results suggest that Zip14 is associated with ERS in the LVH rat model. Moreover, according to previous studies, many key biomarkers (Rutkowski et al., 2006; Wang et al., 2016), such as PKR-like ER kinase (PERK) and eukaryotic initiation factor 2α (eIF2α), and signaling pathways, such as the p-eIF2α/ATF4/CHOP pathway (Rutkowski et al., 2006), are also involved in ERS and the UPR. Meanwhile, the Zn2+-related metabolism of cardiomyocytes is also associated with ERS, involving Zip14, Zip7, and other molecules (MacDonald, 2000; Yoshida, 2007; Murakami and Hirano, 2008; Wang et al., 2016; Tuncay et al., 2017; Xing et al., 2017). Therefore, the association between Zip14 (or other Zn2+ transporters) and ERS in LVH animal models should be verified in future investigations. Furthermore, linear regression analysis indicated that Zip14 mRNA expression in the myocardial tissues of the Control and experimental groups was positively correlated with zinc concentration. The mRNA expressions of Zip14, ATF4, and ATF6 in the myocardial tissues of the Control and experimental groups were also positively correlated. However, the correlation between Zip14 expression and CHOP or BiP was not verified in our study and needs to be examined in further studies.
Therefore, we hypothesized that ERS in the myocardium of LVH rats, induced by external factors such as inflammation and oxidative stress, upregulates the expression of ATF4 and ATF6 by activating UPR-related or other pathways, which in turn increases the expression of Zip14 and thereby the zinc concentration in cardiomyocytes. A short-term increase in zinc concentration allows cells to adapt to ERS and protects cardiomyocytes. Sustained ERS and a long-term increase of the zinc concentration in cardiomyocytes may cause myocardial cell apoptosis and dysfunction, which might be a pathophysiological mechanism in the development of LVH. There were some limitations in this study. Firstly, in the process of studying how Zip14 regulates zinc homeostasis, there was no intervention study of zinc and Zip14; thus, it was not clear which was the originating factor. To clarify the role of zinc in the pathogenesis of LVH, we will establish Zip14 gene knockout LVH rats fed with different concentrations of zinc and will use the ER stress inhibitor TUDCA to investigate the relationship between the Zip14-mediated regulation of zinc homeostasis and ERS. In following studies, we will also examine whether ERS is inhibited by zinc supplementation in the LVH rat model. Secondly, this study mainly focused on the correlation between Zip14 and ERS induction in the LVH animal model; however, the unfolded protein response (UPR) pathway and its associated sensors, such as IRE1 phosphorylation, PERK (or eIF2α) phosphorylation or ATF6 cleavage, were not determined here. In a future study, we will examine additional components, such as the PERK/eIF2α, cleaved ATF6 and spliced XBP1 signaling pathways and their associated molecules, to enrich the findings and conclusions. Thirdly, the link between ERS and Zip upregulation has not been clearly investigated in this study. A previous study reported that under pharmacologically induced ERS or chemically alleviated protein misfolding, Zip14 and the zinc content are upregulated (Kim et al., 2017). Therefore, knocking down the Zip14 gene, for example by siRNA targeting, would be preferable for clarifying the link between Zip14 and ERS. Meanwhile, the potential common transcription factor binding site (TFBS) clusters in the promoter regions of the Zip14, ATF6 and ATF4 genes might be significant for demonstrating the correlation between Zip14 and ATF6 or ATF4. Fourthly, during modeling, the body weight of the rats was lower in the LVH group than in the Control group, which is another limitation of our study. We speculate that the body weight loss might be caused by the high-salt feeding (which might affect the appetite of the rats). Finally, the Zn2+ content might affect the effects of Zip14; however, Zn2+ contents in the blood of rats in both groups were not determined. Conclusions Zinc accumulation and the upregulation of Zip14 and ERS were observed in the myocardial tissues of rats with LVH. The upregulation of Zip14 in LVH rat hearts correlated with zinc accumulation and induction of ERS. However, the exact mechanism of these interactions needs to be investigated further. Acknowledgement: We thank the team of Professor Xu Zhelong from the School of Basic Medicine, Tianjin Medical University, for their support and help. Availability of Data and Materials: All data generated or analyzed during this study are included in this published article (and its supplementary information files).
Author Contribution: The authors confirm contribution to the paper as follows: study conception and design: QY and YS; data collection: JH, TT, BB, YX, LH, ZX; analysis and interpretation of results: JH, LH, ZX, YS; draft manuscript preparation: JH, QY and YS. All authors reviewed the results and approved the final version of the manuscript. Ethics Approval: All animal treatments were strictly in accordance with international ethical guidelines and the National Institute of Health Guide concerning the Care and Use of Laboratory Animals. The experiments were carried out with the approval of the Committee of Experimental Animal Administration of the University (ethical approval code: ZYY-IRB-SOP-016(F)-002-02, date of approval: 30th, April, 2015). Funding Statement: This work was supported by the key projects of Tianjin Natural Science Foundation (Grant No. 17JCZDJC34800). Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
4,761.4
2022-01-01T00:00:00.000
[ "Biology" ]
Monolithic integration of embedded III-V lasers on SOI Silicon photonic integration has gained great success in many application fields owing to the excellent optical device properties and complementary metal-oxide semiconductor (CMOS) compatibility. Realizing monolithic integration of III-V lasers and silicon photonic components on a single silicon wafer is recognized as a long-standing obstacle for ultra-dense photonic integration: it would provide economical, energy-efficient and foundry-scalable on-chip light sources, but it has not been reported yet. Here, we demonstrate embedded InAs/GaAs quantum dot (QD) lasers directly grown on a trenched silicon-on-insulator (SOI) substrate, enabling monolithic integration with butt-coupled silicon waveguides. By utilizing patterned grating structures inside pre-defined SOI trenches and a unique epitaxial method via hybrid molecular beam epitaxy (MBE), high-performance embedded InAs QD lasers with a monolithically out-coupled silicon waveguide are achieved on such a template. By resolving the epitaxy and fabrication challenges of this monolithically integrated architecture, embedded III-V lasers on SOI with continuous-wave lasing up to 85 °C are obtained. A maximum output power of 6.8 mW can be measured from the end tip of the butt-coupled silicon waveguides, with an estimated coupling efficiency of approximately -6.7 dB. The results presented here provide a scalable and low-cost epitaxial route to on-chip light sources that couple directly to silicon photonic components for future high-density photonic integration.
There is fast-growing demand for integrated silicon photonic chips incorporating on-chip lasers, which can lead to power-efficient and densely integrated optical interconnects and high-speed optical communications [1][2][3][4]. Various externally laser-coupled silicon integrated chips have been demonstrated on 200 mm silicon-on-insulator (SOI) wafers; however, on-chip lasers remain the stumbling block for further development of ultra-dense silicon photonic integration 5,6. Over the past decade, III-V/Si heterogeneous integration via bonding techniques has been academically and commercially recognized as a promising path towards the realization of on-chip light sources [7][8][9][10]. With the rapid progress of silicon integrated photonics in applications such as artificial intelligence, hyperscale data centers, high-performance computing, light detection and ranging (LIDAR) and microwave photonics 11, monolithically integrated light sources are increasingly preferred as an alternative technical route for compactness and low power consumption 12. As monolithic integration of III-V lasers and silicon photonic components has always been a much-desired functionality, direct epitaxial growth of III-V quantum-dot (QD) lasers on silicon substrates has been extensively investigated, with dramatic progress in recent years [13][14][15][16][17][18][19][20]. Benefiting from the technical development of high-quality III-V material growth on silicon [21][22][23][24][25], various silicon-based laser structures have been demonstrated with outstanding performance, including distributed feedback (DFB) lasers [26][27][28][29], microcavity lasers [30][31][32] and mode-locked lasers [33][34][35][36]. From the silicon photonic integration perspective, all active and passive silicon photonic components should be on the same SOI platform; therefore, there is a strong need to develop an SOI-based monolithic laser integration solution that efficiently couples the light into silicon waveguides (WGs). To date, there has been relatively limited research focusing on direct growth of SOI-based lasers [37][38][39][40], and usually the lasers are located on the top silicon of SOI with thick III-V buffer layers, which prevents coupling to passive waveguides. In the case of the heterogeneous bonding techniques mentioned above, the coupling from bonded lasers to silicon WGs has been well established by evanescent coupling 7. On the other hand, although directly grown III-V lasers have been systematically investigated, monolithic integration between III-V lasers and silicon photonic components remains absent in the field. In this work, we have demonstrated the first embedded QD lasers on an SOI substrate with pre-patterned laser trenches and silicon waveguides, with monolithic coupling to silicon passive waveguides, which offers great potential for achieving a fully monolithic silicon photonic integrated chip. Fig. 1a depicts the schematic of the monolithically integrated III-V QD lasers edge-coupled with silicon waveguides on the SOI platform, and the fabricated laser arrays and integrated chip are shown in Fig. 1b and c. Starting with an 8-inch SOI wafer, silicon waveguides are pre-patterned on the top silicon layer of the SOI substrate (Fig. 1d). The laser trench is then produced via a dry etching process through the buried oxide (BOX) layer into the bulk silicon substrate for III-V laser growth, as shown in Fig. 1e. Inside the laser trench, periodic silicon grating structures are then patterned with a duty cycle of approximately 40% (146 nm slab width with a 209 nm gap), as the surface SEM images in Fig. 1f show.
The grating structures cover the entire laser trench for high-quality direct epitaxial growth of III-V material. The zoomed-in SEM image of the silicon gratings is displayed in Fig. 1g. Results and discussion Design and fabrication of SOI template. Fig. 2a presents the fabrication process of the SOI devices. The edge coupler and patterned trench are manufactured on an SOI wafer with a 220 nm thick top Si layer and a 3 μm thick buried SiO2 layer. The fork-shaped coupler and interconnecting waveguide are defined through an E-beam lithography (EBL) process. The resist pattern was fully etched using an ICP-RIE process. Subsequently, a 3.5 μm thick SiO2 cladding layer is deposited by plasma-enhanced chemical vapor deposition (PECVD) after removing the e-beam photoresist. The detailed fabrication information of the edge coupler can be found in the Methods. Then, the cladding, top Si, BOX, and 1.5 μm of the substrate layer are etched to form the laser trench. Finally, the silicon gratings are fabricated by EBL and ICP-RIE etching for III-V laser growth. Compared to the common inverse-taper edge coupler with a single tip, edge couplers with multiple taper tips can have higher coupling efficiency and better alignment tolerance for a laser butt-coupling scheme, because their elliptical spot size is more comparable to the laser's mode profile 41,42. Hence, considering both the eminent performance and the tractable fabrication process, fork-shaped edge couplers with double tips are proposed 43,44. Besides the tip width, the fork shape offers an extra design parameter, while the gap between the two tips can also be used to expand the mode field. The structure of the fork-shaped coupler is shown in Fig. 2b. The same design methods are followed from our previous work 41,42 with further optimized structures. In order to verify the performance of the edge couplers used in the monolithically integrated embedded InAs QD lasers on the SOI wafer, we separately fabricated silicon edge couplers with the same parameters and measured them with a single mode fiber (SMF) and an InAs QD laser grown on a silicon substrate in a butt-coupling design. The tip width (w) is set as 100 nm, and the gap (g) between the two tips is 3.4 μm. The length of the first fork stage (L1), which converts the fiber mode to the slot waveguide mode, is 64 μm. The second fork stage, converting the slot waveguide mode to the strip waveguide mode, is 11 μm long. Here, Fig. 2c shows the simulated electric field distribution of this edge coupler. The SMF or the monolithically integrated laser modes, which both exhibit a relatively large mode profile, transfer preferentially into the edge coupler facet with large modal overlap, and then adiabatically convert to the strip waveguide mode. The edge coupling loss is mainly determined by the mode similarity, also called the mode overlap, between the input light and the coupler facet 41. The mode overlap between the SMF with a mode field diameter (MFD) of 10 μm and the chip facet is 85%. In comparison, the mode overlap between the InAs QD laser and the fork-shaped coupler is approximately 76%, which is somewhat lower than that of the SMF. Notably, the total length of the coupler is only 75 μm, which is advantageous for area-efficient photonic integration.
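The mode-overlap percentages quoted above come from full electromagnetic simulations of the coupler and laser modes. Purely as an illustration of the overlap-integral definition, the sketch below evaluates the overlap of two Gaussian field profiles numerically; the Gaussian shapes and the elliptical facet-mode size used here are assumptions for illustration, not the simulated modes of this work.

# Toy numerical evaluation of the mode-overlap integral
# eta = |sum(E1*E2)|^2 / (sum(E1^2) * sum(E2^2)) for two real Gaussian profiles.
import numpy as np

def gaussian_mode(X, Y, mfd_x, mfd_y):
    wx, wy = mfd_x / 2.0, mfd_y / 2.0      # 1/e^2 radii from mode-field diameters
    return np.exp(-(X / wx) ** 2 - (Y / wy) ** 2)

x = np.linspace(-15e-6, 15e-6, 601)
X, Y = np.meshgrid(x, x)

E_fiber = gaussian_mode(X, Y, 10e-6, 10e-6)   # SMF with ~10 um MFD
E_facet = gaussian_mode(X, Y, 8e-6, 5e-6)     # assumed elliptical facet mode

eta = np.abs(np.sum(E_fiber * E_facet)) ** 2 / (np.sum(E_fiber ** 2) * np.sum(E_facet ** 2))
print(f"mode overlap ~ {eta:.2f}")            # ~0.78 for these assumed widths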
Monolithic epitaxial growth and fabrication of III-V lasers on trenched SOI substrate. Fig. 2d shows the entire monolithic process of the on-chip lasers, including the embedded growth of the III-V gain materials in the grating-patterned SOI trench and the subsequent laser fabrication. As depicted in Fig. 2d, the pre-patterned silicon waveguides are protected with a PECVD SiO2 top cladding. The SOI template with pre-defined silicon waveguides is first prepared in order to create the laser trench on the bottom silicon of the SOI substrate for laser material deposition. Note that, as both the horizontal and height alignments to the silicon waveguides are critical, the depth of the laser trench needs to be precisely designed in line with the active-region height of the InAs QD laser on silicon; the exact alignment can still be finely adjusted during the epitaxial growth, which can normally be controlled with nanometer-scale accuracy. In order to avoid anti-phase domains (APDs), threading dislocations (TDDs) and thermal-mismatch-induced cracks 45,46, homoepitaxially formed (111)-faceted Si V-grooves over the top of the pre-defined silicon gratings are introduced here to suppress defects generated during hetero-epitaxial growth, as shown in steps 1 and 2 of Fig. 2d. These techniques will be discussed explicitly in the Methods, while the entire epitaxial structure will be described in the following section. There is one major issue here, as observed in step 3 of Fig. 2d: during the hybrid III-V/IV growth process via molecular beam epitaxy, both silicon and III-V materials are also deposited outside the laser trench in a semi-amorphous form. This normally leads to a relatively large height contrast inside and outside the laser trench, which can result in uneven photoresist stacking at the edge of the trench during the laser-waveguide alignment process. To solve this problem, an H3PO4 : H2O2 : H2O (1 : 2 : 20) wet etching solution is selected here to remove the excessive semi-amorphous III-V material outside the trench region before the laser fabrication process. The embedded III-V laser is then processed with a one-side cleaved facet (step 5 of Fig. 2d). As previously mentioned, the alignment precision between the laser and the silicon waveguide significantly affects the coupling efficiency, especially in the vertical direction. Meanwhile, the accurate control of the alignment in the horizontal direction is determined by photolithography during the laser device process. The alignment deviation between the central axis of the silicon waveguide and the laser ridge can be controlled within ±250 nm here, which is within the tolerance of the unique multiple-tip taper design. To ensure precise alignment between the laser ridge and the silicon waveguide, the removal of the semi-amorphous silicon and III-V materials above the waveguide has to be treated properly; otherwise, the blurred silicon waveguide structure will reduce the accuracy of the laser-to-waveguide alignment. In addition, the semi-amorphous material removal process mentioned above also acts as facet etching for the III-V/Si interface at the laser output side, as the gap area between the laser and the silicon waveguide remains unprotected during the removal process. The integrated device is finalized by applying a high-reflection coating to the as-cleaved facet at one side, while implementing a focused ion beam (FIB) milling process at the other side for the laser output facet, as shown in step 6 of Fig. 2d.
The ion milling process aims to create a mirror-like facet by removing the semi-amorphous material at the III-V/IV interface. Although the embedded laser can still lase without FIB treatment, facet polishing using FIB can further improve the laser performance significantly. The performance differences between wet-etched facets and FIB-etched facets will be compared and discussed in the laser characterization section. In the case of the monolithic epitaxy of the embedded III-V lasers, the growth details are described in the Methods. The overall schematic of the laser epi-structure on the trenched SOI template is displayed in Fig. 3a. Here, a 10 nm-thick AlAs nucleation layer was first deposited to optimize the GaAs/Si (111) interface and suppress APD formation, as marked in Fig. 3a. After approximately 2100 nm-thick III-V buffer layers, which include InGa(Al)As/GaAs quantum well dislocation filters (DLFs) and GaAs/AlGaAs superlattice (SL) layers 24, a smooth and APD-free GaAs surface can be achieved on the trenched SOI region, as verified by the atomic force microscope (AFM) measurement shown in Fig. 3b. The 5 × 5 μm² AFM image shows a root-mean-square (RMS) roughness of only 0.8 nm. Fig. 3c shows the surface electron channeling contrast imaging (ECCI) result of the trenched GaAs/SOI template, indicating a surface threading dislocation density (TDD) of 2.6 × 10^7 /cm² for the template. The bright-field cross-sectional transmission electron microscopy (TEM) image at the GaAs/Si(111) interface is also presented in Fig. 3d, which indicates the same defect suppression effect of the homo-epitaxially formed Si(111) sawtooth structures reported previously 32,34. Based on these high-quality trenched GaAs/SOI templates, standard InAs/GaAs QD laser diode structures 39,47 are grown, as shown in Fig. 3a. The laser structure consists of a 7-layer InAs QD active region, which is sandwiched between 400 nm n-/p-doped GaAs contact layers and 1400 nm n-/p-doped Al0.4Ga0.6As cladding layers. On either side of each AlGaAs cladding layer, step-graded AlxGa1-xAs (0.1 < x < 0.4) transitional layers are deposited in order to increase the current injection efficiency of the device. As Fig. 3a shows, semi-amorphous III-V materials will form on the SiO2 cladding layers outside the trenched region, which can be removed by the wet-etching process before device fabrication, as mentioned above. To clarify the optical gain properties of the InAs QDs on the trenched GaAs/SOI substrate, identical 7-layer InAs QD stacks were grown on the trenched GaAs/SOI and on a GaAs (001) substrate, respectively, and the room-temperature photoluminescence (PL) spectra of the two samples are shown in Fig. 3e. Typical O-band PL emission with a full width at half maximum (FWHM) of 33 nm is obtained from the InAs QDs on trenched GaAs/SOI, which is interestingly smaller than that (FWHM: 42 nm) on the GaAs (001) substrate. Almost the same PL peak intensity is observed for the two samples. The inset in Fig. 3e shows the 1 × 1 μm² AFM image of the surface InAs QDs on trenched GaAs/SOI, indicating a dot density of 5.1 × 10^10 /cm² and good dot uniformity. Notably, the offset in the PL peaks of the two samples (trenched SOI: 1293 nm; GaAs: 1278 nm) is caused by the difference in the real temperatures of the two substrates, which will be discussed in the Methods.
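The areal densities quoted above follow from counting features within a known scan area. The short sketch below illustrates that unit conversion; the counts and the ECCI field size used here are back-calculated for illustration and are not the actual measured counts.

# Converting feature counts in a scan area (um^2) to areal density (cm^-2).
def areal_density_per_cm2(count, width_um, height_um):
    area_cm2 = (width_um * 1e-4) * (height_um * 1e-4)   # 1 um = 1e-4 cm
    return count / area_cm2

print(f"{areal_density_per_cm2(510, 1, 1):.1e}")    # ~510 dots in 1 x 1 um^2 -> 5.1e10 cm^-2
print(f"{areal_density_per_cm2(26, 10, 10):.1e}")   # ~26 dislocations in 10 x 10 um^2 -> 2.6e7 cm^-2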
Fig. 4a shows the tilted-view SEM image of the fabricated embedded InAs QD lasers with precise alignment to the pre-patterned silicon WGs. As previously mentioned, the horizontal offset between the laser ridges and the silicon waveguides is less than 500 nm along the arrays of waveguide lasers (Fig. 4b). At the coupling tip of the silicon WGs, a fork-like spot-size converter is implemented to increase the dimensional tolerance to laser-waveguide alignment offsets, as shown in Fig. 4c. As the laser facet close to the silicon waveguide tends to accumulate some semi-amorphous material, it is normally difficult for the wet-etched facet to achieve a mirror-like sidewall. FIB milling is therefore also utilized here to further polish the facet for high-gain cavity formation, as shown in Fig. 4d. Fig. 4e shows the zoomed-in optical microscope image of the monolithically integrated device. The smoothness of the cavity facet is one of the most essential factors that affect the laser performance. Therefore, a FIB etch with a large current is implemented to initially separate the III-V and silicon materials. A smaller-current FIB is then applied to the sidewall for fine polishing, in order to obtain a shiny laser facet. Here, we compare the wet-etched laser facet with the FIB-polished laser facet in Fig. 4f, where the upper SEM image shows the wet-etched laser facet with an approximately 5 μm wide coupling gap between the laser and the silicon WGs. As observed, both the steepness and the roughness of the wet-etched laser facet are imperfect. After FIB fine polishing, in the lower SEM image of Fig. 4f, the laser facet appears ultra-smooth, similar to an as-cleaved facet. In Fig. 4g, white light interferometric imaging is performed to show the entire structure of the integrated device, where the left side is the embedded InAs QD laser and the right side is the pre-patterned silicon waveguides. Characterizations of on-chip integrated lasers. In order to examine the effective coupling efficiency from the laser to the silicon WGs, we select a single laser bar directly grown inside an SOI trench with both facets cleaved as a reference laser, which is characterized with standard light-current (L-I) measurements (top left of Fig. 5a). Here, the reference laser serves as an analogue of an InAs QD laser directly grown on a silicon substrate. In the case of the silicon WG coupled on-chip laser, both the L-I curves and the optical spectra are collected from the silicon WG side, as shown in Fig. 5a. The temperature-dependent L-I curves of the embedded laser without a silicon WG (referred to as the reference laser) are measured in Fig. 5b; it manages to lase up to 95 °C in continuous-wave (CW) current operation. The threshold current at room temperature is approximately 50 mA. The maximum output power is 37 mW at an injection current of 250 mA. The silicon WG edge-coupled on-chip laser is then characterized with a slightly higher threshold current of 65 mA at room temperature under CW operation, while the maximum operating temperature is reduced by 10 °C to 85 °C (Fig. 5c). The relatively higher threshold current and lower maximum operating temperature are attributed to increased thermal accumulation inside the laser trench with the surrounding BOX layer. The L-I performance of the embedded laser with a single-side wet-etched facet and with a FIB-etched facet is compared in the inset of Fig. 5c.
5c.With additional FIB facet polishing, the threshold current is reduced from 92 mA to 65 mA, while the out-coupled optical power through silicon WG is also improved from 5.3 mW to 6.8 mW at injection current of 210 mA.Due to relatively large divergence angle of InAs QD laser and slightly height mismatch induced during the gain material growth, the detected output power from silicon WG side is lower than the actual laser power.The overall coupling loss is here estimated to be -7.35dB.Furthermore, in order to extract characteristic temperature (To) and slope efficiency of both as-cleaved laser and on-chip integrated laser, threshold current and output power at different temperatures are analyzed as shown in Fig. 5d In summary, monolithic integrated III-V lasers on SOI substrate with silicon waveguide output have been realized by directly growing InAs QD lasers inside pre-patterned SOI trenches.Homoepitaxial formation of (111)-faceted Si V-grooves and heteroepitaxial growth of InGaAs/GaAs defect trapping techniques are implemented in this work to achieve high quality III-V gain materials on SOI.Our results demonstrate that monolithic integration of III-V laser with silicon photonic components will no longer be a design-level hypothesis.Overall, the monolithically integrated lasers can operate over 85 o C with low threshold current of 65 mA at room temperature and silicon WG coupled maximum output power of 6.8 mW.One more step forward, the performance of on-chip integrated InAs QD lasers can be further improved by including advanced silicon spot size converter with accurate control of laser-waveguide coupling distance during process.Once the coupling efficiency issue is resolved, many selections of silicon photonic components can all be integrated monolithically on a single wafer, such as modulators, wavelength de-multiplexers and photodetectors, just to name a few.We believe that this monolithic integration techniques of on-chip lasers would offer a promising approach towards high-density and large-scale silicon photonic integration, especially in the application fields such as on-chip optical interconnect and integrated optical ranging. Fig. 1 Fig. 1 Monolithically integrated embedded InAs QD lasers on SOI characterizations in this work.a Schematic of monolithic integration of III-V QD laser edge coupled with silicon waveguide on SOI platform.b Top-view SEM image of InAs QD laser arrays grown in pre-patterned laser trenches, with passive silicon WGs.c Optical microscope image of entire integrated chip.d 8-inch SOI wafer with prepatterned laser trenches and silicon waveguides.e Zoomed-in optical microscope image of laser trenches with aligned silicon waveguide arrays on SOI substrate.f Top-view SEM image of patterned silicon grating structures inside laser trenches for III-V growth.g Magnified silicon gratings with slab width of 146 nm and gap width of 209 nm.The duty cycle of this grating is approximately 40% inside the laser trench. Fig. 1a depicts Fig.1adepicts the schematic of monolithically integrated III-V QD lasers edge-coupled with Fig. 2 Fig. 
Furthermore, in order to extract the characteristic temperature (T0) and the slope efficiency of both the as-cleaved laser and the on-chip integrated laser, the threshold current and output power at different temperatures are analyzed, as shown in Fig. 5d. The as-cleaved laser on trenched SOI exhibits T0 values of 282.4 K and 51 K in the temperature ranges of 20-40 °C and 45-95 °C, respectively. In comparison, the on-chip integrated laser (silicon WG output) shows slightly degraded T0 values of 108.1 K and 33.7 K in the temperature ranges of 20-65 °C and 70-85 °C, respectively. Moreover, as Fig. 5c shows, the reference laser presents a higher slope efficiency of 0.148 W/A at 20 °C, while the silicon-WG-coupled on-chip laser yields 0.025 W/A at 20 °C; the difference is attributed to the non-optimized edge-coupling efficiency and the slight height-mismatch-induced misalignment between the embedded laser and the silicon WG.
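The characteristic temperatures quoted above follow from the usual empirical relation I_th(T) = I_0 exp(T/T0), so that T0 is the inverse slope of ln(I_th) versus temperature, fitted separately over each temperature range (this is what Fig. 5d plots). The sketch below shows such a fit on hypothetical threshold data; the values are invented and only chosen to give a T0 of the right order of magnitude for the waveguide-coupled laser in its lower temperature range.

```python
import numpy as np

# T0 from the empirical relation I_th(T) = I_0 * exp(T / T0):
# ln(I_th) is linear in T, and T0 is the inverse of the fitted slope.
# Hypothetical thresholds for the waveguide-coupled laser, 20-65 °C range.
temperature_C = np.array([20, 30, 40, 50, 60, 65])
threshold_mA  = np.array([65, 71, 78, 86, 95, 100])

slope, _ = np.polyfit(temperature_C, np.log(threshold_mA), 1)
T0 = 1.0 / slope   # a temperature interval, so Celsius data give T0 in kelvin
print(f"characteristic temperature T0 ~ {T0:.0f} K")
```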
A fiber collimator is implemented here to collect the light from the silicon WG side for optical spectral analysis. Current-dependent spectral measurements from 70 mA to 160 mA at room temperature are shown in Fig. 5e. The spectral evolution of the on-chip integrated laser operating in the temperature range from 20 °C to 70 °C at a fixed injection current of 175 mA is shown in Fig. 5f. To the best of our knowledge, this is the first demonstration of an InAs/GaAs QD laser epitaxially grown on a trenched SOI template with a butt-coupled silicon waveguide.

In summary, monolithically integrated III-V lasers on an SOI substrate with silicon waveguide output have been realized by directly growing InAs QD lasers inside pre-patterned SOI trenches. Homoepitaxial formation of (111)-faceted Si V-grooves and heteroepitaxial InGaAs/GaAs defect-trapping layers are implemented in this work to achieve high-quality III-V gain material on SOI. Our results demonstrate that monolithic integration of III-V lasers with silicon photonic components is no longer a design-level hypothesis. Overall, the monolithically integrated lasers operate above 85 °C with a low threshold current of 65 mA at room temperature and a silicon-WG-coupled maximum output power of 6.8 mW. As a next step, the performance of the on-chip integrated InAs QD lasers can be further improved by including an advanced silicon spot-size converter with accurate control of the laser-waveguide coupling distance during processing. Once the coupling-efficiency issue is resolved, a wide range of silicon photonic components can be integrated monolithically on a single wafer, such as modulators, wavelength de-multiplexers and photodetectors, to name a few. We believe that this monolithic integration technique for on-chip lasers offers a promising approach towards high-density and large-scale silicon photonic integration, especially in application fields such as on-chip optical interconnects and integrated optical ranging.

Fig. 1 Monolithically integrated embedded InAs QD lasers on SOI characterized in this work. a Schematic of the monolithic integration of a III-V QD laser edge-coupled with a silicon waveguide on the SOI platform. b Top-view SEM image of InAs QD laser arrays grown in pre-patterned laser trenches, with passive silicon WGs. c Optical microscope image of the entire integrated chip. d 8-inch SOI wafer with pre-patterned laser trenches and silicon waveguides. e Zoomed-in optical microscope image of laser trenches with aligned silicon waveguide arrays on the SOI substrate. f Top-view SEM image of patterned silicon grating structures inside the laser trenches for III-V growth. g Magnified silicon gratings with a slab width of 146 nm and a gap width of 209 nm; the duty cycle of this grating is approximately 40% inside the laser trench.

Fig. 2 Schematic diagram of the patterned SOI trenches and design of the edge coupler. a Fabrication flow of the patterned SOI template with a 3 μm BOX layer and a 220 nm top Si layer. b Layout and parameters of the silicon fork coupler. c Electric field distribution of the edge coupler. d Schematic diagram of the embedded-laser process on trenched SOI. Step 1: the exposed silicon substrate patterned with silicon gratings. Step 2: homoepitaxially formed Si V-groove structures over the top of the silicon gratings. Step 3: InAs/GaAs QD laser epi-structures directly grown inside the SOI trench. Step 4: chemical removal of unwanted III-V materials outside the SOI trench. Step 5: fabricated narrow-ridge laser with one as-cleaved facet.

Fig. 3 Epitaxial growth structures and material characterization. a Schematic of the laser epi-structure. b Surface AFM image of the 2100 nm thick III-V buffer layers grown on trenched SOI before epitaxial growth of the laser structures (RMS ~ 0.8 nm). c TDD estimated from the ECCI image (2.6 × 10^7 cm^-2). d Cross-sectional TEM image of the as-grown GaAs/Si interface in the trenched region of the SOI substrate. e PL spectra comparison between InAs QDs grown on the trenched SOI substrate and on a standard GaAs substrate under identical conditions. Inset: AFM image of surface InAs QDs on trenched SOI.

Fig. 4 Fabricated monolithically integrated InAs QD lasers coupled to silicon waveguides. a Tilted SEM image of the entire integrated chip. b SEM image of a fabricated narrow-ridge laser directly coupled with silicon waveguides using the fork-like mode converter. c SEM image of the silicon waveguide mode converter. d Magnified SEM image of a single embedded laser edge-coupled with a silicon waveguide. e Optical microscope image of the monolithically integrated laser and silicon WG. f SEM images of the embedded lasers with wet-etched and FIB-etched facets, respectively. g White-light interferometric image of the finalized integrated device.

Fig. 5 Continuous-wave characterizations of the embedded InAs QD laser on SOI with and without coupling into the silicon waveguide. a Schematics of the L-I and optical spectral measurements; top left: L-I measurement of the double-side-cleaved III-V laser inside the SOI trench; top right: L-I measurement of the embedded InAs QD laser from the silicon waveguide output; bottom: optical spectral measurement from the silicon waveguide. b Continuous-wave temperature-dependent L-I measurements of the double-side-cleaved III-V laser inside the SOI trench as the reference laser. c Continuous-wave temperature-dependent L-I measurements of the integrated laser with one cleaved facet and the other facet FIB-etched; inset: room-temperature L-I comparison between the single-side wet-etched facet and the FIB-etched facet. d Plots of the natural logarithm of the threshold current and of the slope efficiency versus operating temperature; characteristic temperatures (T0) are fitted for the double-side-cleaved embedded laser (red dots) in the temperature ranges of 20-40 °C and 45-95 °C, respectively, and for the embedded laser with integrated silicon WG (red stars) in the temperature ranges of 20-65 °C and 70-85 °C, respectively. e Optical spectral analysis of the integrated laser versus increasing injection current. f Optical spectral analysis of the integrated laser versus temperature variation.