Dataset columns:
text — string, lengths 1.23k to 293k characters
tokens — float64, 290 to 66.5k
created — date string, 1-01-01 00:00:00 to 2024-12-01 00:00:00
fields — list, lengths 1 to 6
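A minimal sketch of how a corpus with this schema could be loaded and filtered with the Hugging Face datasets library. The dataset identifier below is a placeholder, not the actual name of this corpus; only the column names ("text", "tokens", "created", "fields") are taken from the schema above.

```python
# Hypothetical usage sketch: load a corpus with the columns summarised above
# and keep only physics rows. The dataset id is a placeholder.
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")  # placeholder id

physics = ds.filter(lambda row: "Physics" in row["fields"])
long_docs = physics.filter(lambda row: row["tokens"] > 4000)

print(len(physics), len(long_docs))
print(long_docs[0]["text"][:200])  # first 200 characters of the first matching document
```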
Integral transforms of an extended generalized multi-index Bessel function
1 Department of Mathematics, University of Sargodha, Sargodha, Pakistan 2 Department of Mathematics, University of Lahore, Sargodha, Pakistan 3 Department of Mathematics, Shaheed Benazir Bhutto University, Sheringal, Upper Dir, 18000, Pakistan 4 Department of Mathematics and General Sciences, Prince Sultan University, Riyadh, KSA 5 Department of Medical Research, China Medical University, Taichung 40402, Taiwan 6 Department of Mathematics, College of Arts and Sciences, Wadi Aldawser, 11991, Prince Sattam bin Abdulaziz University, Saudi Arabia
Introduction
The Bessel function [1][2][3][4][5][6][7][8] is of great importance in mathematics, physics and engineering because of its applications. Researchers and mathematicians have developed new classes of Bessel functions in the sense of multi-index functions, which motivate further work in special functions and fractional calculus. The theory of multi-index, multi-variable Bessel functions was discussed by Dattoli et al. [9] in 1997. The extended generalized multi-index Bessel function (E1GMBF) considered here is defined in Eq. (1.17). Remark 1.1. The E1GMBF can also be written in the alternative form (1.18).
Particular special cases
In this section we establish some particular special cases of the E1GMBF: if we set p = 0, the E1GMBF reduces to the extended multi-index Bessel function (2.1).
Results for the E1GMBF
In this section we investigate the E1GMBF and record some important observations; in particular, we derive integral and differential representations of the E1GMBF in the form of theorems. Theorem 3.1. The E1GMBF admits an integral representation for α_j, β_j, b, δ, γ, c ∈ C (j = 1, 2, ..., m). Proof. Using the definition of Eq. (1.8) in (1.17), changing the order of summation and integration, and simplifying Eq. (3.2), we obtain the stated integral representation. Theorem 3.2. If the relevant parameter in Theorem 3.1 is specialised to 1 + r, then the relation (3.5) holds. Proof. Considering the definition (1.17) for j = 1 and the right-hand side of Eq. (3.6), the result follows. Theorem 3.3. For the E1GMBF with δ = 1, the following higher-order derivative formula holds. Proof. Differentiating Eq. (1.17) with respect to z, rewriting the Pochhammer symbols as in (3.10), and substituting Eq. (3.10) into Eq. (3.9), we differentiate Eq. (3.9) with respect to z again and continue this procedure up to n times to obtain the result stated in Theorem 3.3. Proof (of the companion derivative formula). Replacing z by λ z^{α_1···α_j}, setting b = −1 and δ = 1 in Eq. (1.17), taking the product with z^{β_1···β_j}, and differentiating with respect to z up to n times, we obtain the required result.
Integral transforms of the E1GMBF
In this section we establish some integral transforms (Euler, Mellin and Laplace transforms) of the E1GMBF in the form of theorems and also discuss their special cases.
Relation of the E1GMBF with the Laguerre polynomial and Whittaker function
In this section the authors represent the E1GMBF in terms of the Laguerre polynomial and the Whittaker function in the form of theorems.
Conclusions
In this work we described an extension of the extended generalized multi-index Bessel function (E1GMBF) and developed results involving the Laguerre polynomial and the Whittaker function, an integral representation, derivatives, and integral transforms (beta transform, Laplace transform, Mellin transform). Moreover, we discussed the composition of the generalized fractional integral operator with the Appell function as a kernel with the E1GMBF and obtained results in terms of Wright functions.
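Because the defining series (1.17) and the transform formulas are not reproduced in this excerpt, the following is only an illustrative numerical sketch: a truncated multi-index Bessel-type series of the general shape used in this literature (not necessarily the paper's exact E1GMBF), together with a brute-force numerical Laplace transform of it, the kind of quantity that the paper's Laplace-transform theorems express in closed form. All parameter names and defaults below are assumptions for the illustration.

```python
# Illustrative sketch (not the paper's exact Eq. (1.17)): a truncated
# multi-index Bessel-type series and a numerical Laplace transform of it.
import math
from scipy.integrate import quad

def pochhammer(x, n):
    # (x)_n = Gamma(x + n) / Gamma(x)
    return math.gamma(x + n) / math.gamma(x)

def mib(z, alphas=(1.0, 1.0), betas=(1.0, 1.0), gamma_=1.0, delta=1.0, c=1.0, b=1.0, terms=60):
    """Truncated series sum_k (gamma)_{delta k} c^k z^k / (k! * prod_j Gamma(alpha_j k + beta_j + (b+1)/2))."""
    total = 0.0
    for k in range(terms):
        denom = float(math.factorial(k))
        for a, be in zip(alphas, betas):
            denom *= math.gamma(a * k + be + (b + 1) / 2)
        total += pochhammer(gamma_, delta * k) * (c ** k) * (z ** k) / denom
    return total

def laplace_transform(f, s, upper=80.0):
    """Numerical Laplace transform: integral_0^upper exp(-s t) f(t) dt."""
    value, _ = quad(lambda t: math.exp(-s * t) * f(t), 0.0, upper)
    return value

if __name__ == "__main__":
    F = laplace_transform(lambda t: mib(-t), s=2.0)
    print("numerical Laplace transform at s = 2:", F)
```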
tokens: 835.8 | created: 2020-09-20 | fields: ["Mathematics"]
Monitoring radiation damage in the LHCb Tracker Turicensis
This paper presents the techniques used to monitor radiation damage in the LHCb Tracker Turicensis from 2011 to 2017. Bulk leakage currents in the silicon sensors are monitored continuously, while the full depletion voltage of the sensors is estimated at regular intervals by performing dedicated scans of the charge collection efficiency as a function of the applied bias voltage. Predictions of the expected leakage currents and full depletion voltages are extracted from the simulated radiation profile, the luminosity delivered by the LHC, and the thermal history of the silicon sensors. Good agreement between measurements and predictions is found.
Introduction
The LHCb experiment [1] at the LHC was designed to study decays of hadrons containing beauty and charm quarks. Its main goal is to perform precision measurements of CP-violating observables, and to search for signatures of potential physics beyond the Standard Model in rare decays of such hadrons. The layout of the LHCb detector is illustrated in Fig. 1. The detector is a single-arm forward spectrometer that covers polar angles from about 15 mrad to 250 mrad and consists of a vertex locator, a tracking system comprising a single planar tracking station upstream of the spectrometer dipole magnet and three tracking stations downstream of this magnet, two Ring Imaging Cherenkov detectors, a calorimeter system comprising electromagnetic and hadronic calorimeters, and a muon system. The Silicon Tracker (ST) comprises the tracking station upstream of the dipole magnet, called Tracker Turicensis (TT), and the central part in each of the three tracking stations downstream of the dipole magnet, called Inner Tracker (IT); the outer part of the downstream tracking stations is formed by a straw-tube detector. The TT employs conventional AC-coupled float-zone p+-on-n silicon micro-strip sensors that are 500 µm thick and have 512 strips with a pitch of 183 µm and a length of 96 mm. (Fig. 2 shows the four different types of readout sectors employed in each of the detection layers, indicated by different shading: readout sectors close to the beam pipe consist of a single silicon sensor, while other readout sectors consist of two, three or four silicon sensors connected together in series.) The total silicon volume per sensor is 4512 mm³. The initial full depletion voltages of the sensors were determined from measurements of the bulk capacitance as a function of applied bias voltage and were found to range between 135 and 275 V. The TT consists of one detector box containing four detection layers arranged in two pairs: the first two layers are centered around z = 232 cm, and the last two around z = 262 cm. An ambient temperature of about 8 °C is maintained inside the box. The box is flushed with dry gas (N2) to avoid condensation on cold surfaces. Silicon sensors within each detection layer are electronically grouped into readout sectors consisting of one, two, three or four sensors, as illustrated in Fig. 2. All sensors within a readout sector are connected in series to a front-end readout hybrid that carries four 128-channel Beetle chips [2]. The Beetle front-end readout chip amplifies and shapes the signals from the readout strips and samples the signal amplitude every 25 ns, corresponding to the LHC bunch-crossing frequency of 40 MHz. The time offset between the LHC bunch clock and the Beetle sampling time is an adjustable parameter.
The output data of the Beetle chips are multiplexed, digitised and transmitted optically to the LHCb TELL1 readout boards [3]. In the TELL1 boards, algorithms for common-mode subtraction, cluster finding and zero-suppression are executed during normal data taking. It is, however, also possible to store the raw sampled amplitudes from each input channel of the Beetle chip for offline analysis. This non-zero-suppressed readout mode is employed for the studies discussed in this paper. LHCb collected an integrated luminosity of about 3 fb−1 at proton-proton collision energies of √s = 7 and 8 TeV during Run 1 of the LHC (2010-2012), and about 3.7 fb−1 at √s = 13 TeV in 2015-2017. About 2 fb−1 more are expected to be collected in the last year of data taking during Run 2 (2018) at √s = 13 TeV. The evolution of the integrated luminosity is shown in Fig. 3 (left) for each year of data taking. Expected particle fluences at collision center-of-mass energies of √s = 7, 8 and 14 TeV have been derived from simulation using Fluka [4], a general purpose tool for calculations of particle transport and interactions with matter, and a detailed description of the geometry of the LHCb experimental area [5,6]. Maps of the particle fluence at √s = 7 and 8 TeV are scored with a resolution of [1,1] cm, while the map at √s = 14 TeV is scored with a resolution of [2.5, 2.5] cm and subsequently interpolated to obtain a map with a resolution of [1,1] cm. All maps are produced at a z-coordinate corresponding to the location of the TTaU layer. The statistical uncertainty on the expected fluence is modeled as a power-law function of the radial position from the beam pipe, a0 × (r/r0)^α, where a0 = 0.7 × 10−2 cm−2, r0 = 1 cm, and α = 0.918. The expected 1-MeV neutron equivalent fluence per proton-proton collision at a center-of-mass energy of √s = 14 TeV at the location of one of the detection layers of the TT is shown in Figure 3 (right). The highest expected 1-MeV neutron equivalent fluence, in the innermost region of the TT, corresponds to about 10^13 cm−2 per 1 fb−1 of integrated luminosity collected at a proton-proton collision energy of √s = 14 TeV. Since particles are primarily produced in the forward direction, the expected fluence falls off by almost three orders of magnitude from the innermost to the outermost region of the TT detector. Two methods are employed to monitor the radiation damage to the silicon sensors and its impact on the detector performance. The first method, discussed in Section 2, is based on the continuous measurement of detector leakage currents. These measurements are compared to the expected radiation-induced increase of the bulk leakage current as a function of received fluence at an ambient temperature of 8 °C. The second method, discussed in Section 3, is based on the analysis of collision data accumulated during dedicated Charge Collection Efficiency (CCE) scans. These CCE scans are performed at regular intervals and allow an estimation of the full depletion voltage of the sensors. The results of these measurements are compared to predictions derived from the Hamburg model [7], which parametrises the evolution of the effective doping concentration of the silicon bulk as a function of received fluence and temperature. Section 4 presents a summary of the different studies and discusses the expected evolution of the TT performance in terms of radiation damage.
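As a small illustration of the parametrisation quoted above, the sketch below evaluates the radial power-law model of the statistical uncertainty on the simulated fluence, a0 (r/r0)^α, and scales the quoted peak fluence linearly with integrated luminosity. The numerical constants are the ones given in the text; the function names and the example calls are ours.

```python
# Sketch of the radial power law quoted for the statistical uncertainty of the
# simulated fluence map, and a linear scaling of the quoted peak fluence.
A0 = 0.7e-2    # cm^-2, normalisation quoted in the text
R0 = 1.0       # cm, reference radius
ALPHA = 0.918  # power-law exponent

PEAK_FLUENCE_PER_FB = 1e13  # 1-MeV n_eq / cm^2 per fb^-1 at 14 TeV (innermost TT region)

def fluence_stat_uncertainty(r_cm):
    """Statistical uncertainty of the simulated fluence at radius r from the beam pipe."""
    return A0 * (r_cm / R0) ** ALPHA

def peak_fluence(int_lumi_fb):
    """Expected peak 1-MeV neutron-equivalent fluence after int_lumi_fb of 14 TeV data."""
    return PEAK_FLUENCE_PER_FB * int_lumi_fb

if __name__ == "__main__":
    for r in (2.0, 10.0, 50.0):
        print(r, fluence_stat_uncertainty(r))
    print(peak_fluence(5.0))  # fluence after 5 fb^-1
```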
Leakage current measurements
The bias voltage for the TT is supplied by a commercial high-voltage (HV) system 1 with 152 channels supplying up to 500 V at a maximum current of 10 mA per HV channel. The innermost six silicon sensors in each detection layer, which receive the highest fluence and therefore experience the largest increase in leakage current, are connected to individual HV channels. In the outer regions of the detection layers, groups of two, three, four, nine or twelve silicon sensors are connected to a common HV channel. The current drawn by each HV channel is monitored with a resolution of 1.0 µA, with a maximum interval of 120 minutes allowed between consecutive readings. For the purpose of this analysis, the leakage current, I_leak, for a given HV channel is determined for each LHC fill as the maximum current observed in that fill 2. Measurements from fills with special detector configurations (e.g. higher or lower bias voltage) are discarded in the leakage current measurements. The leakage current has a temperature-dependent behaviour given by
I_leak(T_2)/I_leak(T_1) = (T_2/T_1)^2 exp[-(E_g/2k_B)(1/T_2 - 1/T_1)],   (1)
where T_1,2 are different temperatures, k_B is the Boltzmann constant and E_g = 1.21 eV is the band-gap energy of silicon. All measured currents are normalised to a nominal temperature of 8 °C, using measurements of the ambient temperature inside the detector box and the well-known temperature dependence of the bulk leakage current in silicon [8]. The normalised leakage currents, scaled to 8 °C and per silicon volume, are shown as a function of integrated delivered luminosity and time in Fig. 4 for the innermost sensors in the second and third detection layers, indicated as U and V layers, respectively. The change ∆I_leak in the leakage current is expected to be linear as a function of the fluence, Φ, that the sensor has been exposed to, for a range between 10^11 and 10^15 1-MeV n_eq/cm^2 [7]. So ∆I_leak can be described as
∆I_leak = α Φ V,   (2)
where α is the current-related damage rate and V the silicon volume. The coefficient α shows a behaviour dependent on temperature, T_a, and on time, t, as the irradiation occurs alternating with annealing, i.e. periods without collisions [9]. Short-term annealing (on the order of a few days to a month) is described by the sum of a constant term, which describes the effects of stable damage, and an exponential term; an additional logarithmic term is used to describe annealing effects for time periods longer than a year [10],
α(t, T_a) = α_0 + α_I exp(-t/τ_I(T_a)) - β ln(t/t_0).   (3)
The temperature dependence of the time constant τ_I in Eq. (3) can be described by the Arrhenius relation [11],
1/τ_I = k_{I,0} exp(-E_{a,I}/(k_B T_a)),   (4)
where k_{I,0} is the frequency factor and E_{a,I} the activation energy of the process. The values of these parameters are listed in Table 1. Figure 4 also shows the predicted evolution of I_leak based on this formalism using the actual running conditions in LHCb (i.e. instantaneous luminosity, duration of the fills, ambient temperature) and the 1-MeV neutron equivalent fluence computed for a given sector by integrating the simulated Fluka radiation map over the sensor area. The predictions are in good agreement with the evolution of the leakage currents in data. Significant decreases in the currents are observed, as expected, in the data during periods with no irradiation, i.e. in the long shutdown, LS1, of the LHC in 2013 and 2014, extended stops of the accelerator during the winter season and short technical stops throughout the year (one week about every three months).
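A minimal sketch of the two relations above: the temperature scaling of the bulk current, Eq. (1), and the fluence-induced increase ∆I_leak = αΦV, Eq. (2). The example values for the measured current, temperature and damage rate are illustrative assumptions, not LHCb calibration constants; the sensor volume is the 4512 mm³ quoted earlier.

```python
# Sketch of Eq. (1) and Eq. (2) from this section; example values are illustrative.
import math

K_B = 8.617e-5  # Boltzmann constant [eV/K]
E_G = 1.21      # band-gap energy used for silicon leakage currents [eV]

def scale_leakage_current(i_meas, t_meas_k, t_ref_k=281.15):
    """Scale a measured bulk current from T_meas to T_ref (default 8 degC), Eq. (1)."""
    ratio = (t_ref_k / t_meas_k) ** 2 * math.exp(-(E_G / (2 * K_B)) * (1 / t_ref_k - 1 / t_meas_k))
    return i_meas * ratio

def delta_leakage_current(alpha, fluence_neq_cm2, volume_cm3):
    """Radiation-induced current increase Delta I = alpha * Phi * V, Eq. (2)."""
    return alpha * fluence_neq_cm2 * volume_cm3

if __name__ == "__main__":
    # 10 uA measured at 20 degC, scaled to the 8 degC reference temperature
    print(scale_leakage_current(10e-6, t_meas_k=293.15))
    # alpha ~ 4e-17 A/cm is an illustrative order of magnitude, not the paper's value;
    # 4.512 cm^3 is the silicon volume of one TT sensor quoted in the text
    print(delta_leakage_current(4e-17, 1e13, 4.512))
```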
Depletion voltage measurements
Non-ionizing energy loss (NIEL) damage in the silicon bulk leads to a change in the effective doping concentration, n_eff, of the bulk and therefore to a change in the full depletion voltage,
V_depl = q |n_eff| D^2 / (2 ε ε_0),
where q is the electron charge, ε ε_0 the effective permittivity of silicon and D the full thickness of the sensor. As mentioned in Section 1, the initial full depletion voltage of all silicon sensors was determined in C−V scans at the foundry and during the construction of the detector. For a subset of the installed detectors, the evolution of V_depl during operation has then been measured from scans of the charge-collection efficiency as a function of the applied bias voltage (CCE scans). Such CCE scans require particles from LHC collisions to be recorded using special data taking settings. They are performed about two to four times a year, in particular once at the beginning of LHC operation in spring and once at the end of operation before Christmas. Table 2 summarises the CCE scans performed between 2011 and 2017. While in the nominal data taking configuration the applied bias voltage V_bias is 300 V for all TT readout sectors, during these CCE scans the V_bias of one detection layer (TTaU) is scanned in the range from 60 to 400 V. The other detection layers are operated at the nominal bias voltage. The data from these layers and the other detectors of the LHCb tracking system are used to reconstruct the trajectories of charged particles produced in the proton-proton collisions. Tracks reconstructed by the rest of the LHCb tracking system, including the TT layers operating in nominal conditions, are used to interpolate the hit position in the scanned layer. Track quality requirements are applied to remove tracks with poor track fit quality or originating from random combinations of track segments from different parts of the tracking system. The pedestal-subtracted ADC values of the three readout strips closest to this interpolated hit position are then summed to give an estimate of the collected charge. As an example, Fig. 5 shows the signal height distribution in a TT readout sector for V_bias = 100 V and 400 V with δt = 0.00 ns. The incomplete depletion of the silicon bulk at V_bias = 100 V induces the clearly visible decrease in signal amplitude. The most probable value (MPV) of the collected charge is extracted from a fit of this distribution, using a model consisting of the sum of two Gaussians to describe the residual contribution of noise hits due to wrong track extrapolations (blue line), convolved with the sum of two Landau distributions to describe the contribution from signal hits (red line). The second Landau distribution describes a signal component due to photon conversions in the beam pipe and detector material upstream of the TT, creating e+e− pairs that cross the TT at the same position and deposit twice as much charge as a single track. The MPV and the width of this second Landau distribution are fixed to twice the corresponding values of the first Landau. The MPVs extracted for different sampling times δt at a given bias voltage are then fitted with the empirical function describing the semi-Gaussian signal shape output by the pulse shaping circuit employed in the Beetle chip, which is composed of a differentiator (CR) followed by an integrator (RC) [12,13]. The shaping time τ, time offset t_0 and amplitude A are free parameters of the fit.
As an example, Figure 6 shows the results of these fits for one of the innermost sensors and for V_bias = 100 V and 400 V. The integral of the fitted function between its two zeroes t_0 and t_0 + 3τ is taken as a measure of the collected charge at the given bias voltage. This quantity, referred to as "integrated charge", is shown as a function of V_bias in Fig. 7. The three highest V_bias data points are fit with a constant to determine the plateau value. The data points are interpolated with a fifth-order spline [14] S and the depletion voltage is determined as the bias voltage at which S(V_bias) reaches the fraction r = 94% of its value in the plateau. The value r = 94% was chosen based on an analysis of the earliest CCE scan, taken on 2011-07-14. First, a value of the ratio r was determined for each readout sector by comparing the spline to the depletion voltage expected at the time of that scan; the resulting values for all analysed readout sectors are shown in Fig. 8(a). The values of r obtained for the different readout sectors were then averaged to determine r. The systematic uncertainty on V_depl due to the choice of r was estimated by applying the full method to the first CCE scan and comparing the values of V_depl extracted from the spline fits with r = 94% to the values V_depl^2011-07-04 extracted from the initial C−V scans and the Hamburg model. The residual distribution of V_depl and V_depl^2011-07-04 is shown in Fig. 8(b); the spread of this distribution is taken as systematic uncertainty. A second source of systematic uncertainty is the choice of a fifth-order spline to interpolate the measurements. To estimate this uncertainty, the entire procedure was repeated using a simple linear interpolation between consecutive measurements instead of the spline fit. The distribution of the differences in V_depl obtained using the two types of interpolation is shown in Fig. 8(c) for all analysed readout sectors; the spread of this distribution is taken as systematic uncertainty. These two uncertainties are summed in quadrature. The change ∆n_eff in the effective doping concentration is described in the Hamburg model by three different mechanisms, described in detail in Refs. [7,15,16,17]:
1. a contribution associated with the effective (incomplete) removal of donor atoms and the addition of stable acceptor atoms due to radiation-induced changes in the band structure of the silicon crystal. This contribution, referred to as "stable damage", is described by ∆n_c(Φ_1MeV-n,eq) = n_c,0 [1 − exp(−c Φ_1MeV-n,eq)] + g_c Φ_1MeV-n,eq;
2. a temperature-dependent contribution due to the annealing of the induced defects, which can be described by ∆n_a(Φ_1MeV-n,eq, t, T_a) = Φ_1MeV-n,eq g_a exp(−t/τ_a(T_a));
3. a contribution due to the combination of individual defects in the silicon lattice, leading to stable defects ("reverse annealing"), described by ∆n_r(Φ_1MeV-n,eq, t, T_a) = Φ_1MeV-n,eq g_r [1 − 1/(1 + t/τ_r(T_a))].
The sum of the three contributions describes the total change of n_eff: ∆n_eff = ∆n_c + ∆n_a + ∆n_r.
Figure 9: Measured effective depletion voltage (red points) in the area of two TT sensors: one just above the beam pipe (a) and one further away (b), whose positions are indicated in Figure 11(a). The black point and error bars correspond to the CCE scan used to calibrate the ratio r. The statistical contribution to the total uncertainty is indicated by the solid error bars. The predicted evolution of the depletion voltage, based on the initial depletion voltage measured after sensor production, the running conditions and the Hamburg model, is shown as a solid black line.
The grey bands show the uncertainty on the predicted evolution of V_depl, while the black dashed lines account for the ±2.5 V uncertainty on the measurement of the initial depletion voltage V_depl^0 in C−V scans.
The values of the model parameters are listed in Table 3. Their values were measured in dedicated irradiation campaigns and can be found in the literature [7]. The parameters τ_a and τ_r have a temperature dependence described by the Arrhenius relation, cf. Eq. (4), with parameters E_aa and k_a,0 and E_ar and k_r,0, respectively, and the ambient temperature T_a is taken from sensors placed inside the detector boxes. Figure 9 shows the measured depletion voltages for a one-sensor and a two-sensor TT readout sector, highlighted in Figure 11(a), and their predicted evolution based on the Hamburg model described above. This calculation uses the fluence estimated from the Fluka simulations 3, the actual running conditions and the temperature measurements in the detector boxes. For the readout sectors closest to the LHC beam pipe, dedicated measurements including only tracks traversing the sensors within 75 mm of the beam axis are performed. Fig. 10 shows the measured values of V_depl of the different TT readout sectors in the different CCE scans as a function of the 1-MeV neutron equivalent fluence estimated from the Fluka simulations and the integrated luminosity collected by LHCb. It also shows the expected evolution of V_depl based on the stable damage contribution n_c. Fig. 11(b) shows the absolute change of V_depl in the innermost part of the detector between July 2011 and September 2017. Good agreement between the measurements and the predicted evolution of V_depl is observed. From the measurements it can be concluded that type inversion in the silicon sensors of the TT is not expected until the end of the current LHCb data-taking period (2019).
Conclusion
The evolution of the radiation damage in the LHCb Tracker Turicensis has been monitored using measurements of leakage currents and effective depletion voltages. The latter are performed with data collected in dedicated charge collection efficiency scans. The obtained results show good agreement with predictions based on phenomenological models. At the end of 2017, the innermost sensors, which experience the highest fluence, have not yet reached the point of type inversion, and no modifications to the operation procedure of the detector are expected in its last year of operation. The detector will be replaced as part of the LHCb upgrade [18] during the long shutdown LS2 of the LHC.
Figure 11: Sketch highlighting the TT sectors analysed during the various CCE scans (a). The greyed-out sectors were excluded due to poor statistics. The two sectors highlighted with dashes, just above the beam pipe and below the beam pipe on the left-hand side, are those whose evolution is shown in Figure 9(a) and Figure 9(b), respectively. Absolute change in V_depl in the innermost region of the TT in September 2017 (b).
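A minimal numerical sketch of the depletion-voltage formalism above: the Hamburg-model change in effective doping concentration (stable damage, beneficial annealing and reverse annealing) converted to a depletion voltage through V_depl = q|n_eff|D²/(2εε0). The parameter values used below (g_c, g_a, g_r, c, the time constants and the initial n_eff) are placeholders standing in for the Table 3 entries, which are not reproduced here.

```python
# Sketch of the Hamburg-model Delta n_eff and the conversion to a full depletion
# voltage. All model parameters are illustrative placeholders, not Table 3 values.
import math

Q_E = 1.602e-19      # electron charge [C]
EPS_SI = 1.05e-12    # effective permittivity of silicon, eps*eps0 [F/cm]
THICKNESS_CM = 0.05  # 500 um sensor thickness [cm]

def delta_n_eff(phi_neq, t_s, tau_a_s, tau_r_s,
                n_c0=1e12, c=1e-13, g_c=1.5e-2, g_a=1.8e-2, g_r=5e-2):
    """Stable damage + beneficial annealing + reverse annealing [cm^-3]."""
    dn_c = n_c0 * (1.0 - math.exp(-c * phi_neq)) + g_c * phi_neq
    dn_a = phi_neq * g_a * math.exp(-t_s / tau_a_s)
    dn_r = phi_neq * g_r * (1.0 - 1.0 / (1.0 + t_s / tau_r_s))
    return dn_c + dn_a + dn_r

def v_depl(n_eff_cm3, d_cm=THICKNESS_CM):
    """Full depletion voltage V_depl = q*|n_eff|*D^2 / (2*eps*eps0)."""
    return Q_E * abs(n_eff_cm3) * d_cm ** 2 / (2.0 * EPS_SI)

if __name__ == "__main__":
    n_eff_initial = 8e11                       # placeholder initial doping [cm^-3]
    dn = delta_n_eff(phi_neq=1e13, t_s=3.0e7,  # ~1 year after 1e13 n_eq/cm^2
                     tau_a_s=5e5, tau_r_s=3e7)
    print("initial V_depl  ~", v_depl(n_eff_initial), "V")
    print("Delta n_eff     ~", dn, "cm^-3")
```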
tokens: 4,891.6 | created: 2018-09-13 | fields: ["Physics"]
A New Quasi One-Dimensional Compound Ba3TiTe5 and Superconductivity Induced by Pressure We report systematical studies of a new quasi-one-dimensional (1D) compound Ba3TiTe5 and the high-pressure induced superconductivity therein. Ba3TiTe5 was synthesized at high pressure and high temperature. It crystallizes into a hexagonal structure (P63/mcm), which consists of infinite face-sharing octahedral TiTe6 chains and Te chains along the c axis, exhibiting a strong 1D characteristic structure. The first-principles calculations demonstrate that Ba3TiTe5 is a well-defined 1D conductor and thus, it can be considered a starting point to explore the exotic physics induced by pressure via enhancing the interchain hopping to move the 1D conductor to a high dimensional metal. For Ba3TiTe5, high-pressure techniques were employed to study the emerging physics dependent on interchain hopping, such as the Umklapp scattering effect, spin/charge density wave (SDW/CDW), superconductivity and non-Fermi Liquid behavior. Finally, a complete phase diagram was plotted. The superconductivity emerges from 8.8 GPa, near which the Umklapp gap is mostly suppressed. Tc is enhanced and reaches the maximum ~6 K at about 36.7 GPa, where the spin/charge density wave (SDW/CDW) is completely suppressed, and a non-Fermi Liquid behavior appears. Our results suggest that the appearance of superconductivity is associated with the fluctuation due to the suppression of Umklapp gap and the enhancement of Tc is related with the fluctuation of the SDW/CDW. Introduction The 1D system has attracted much attention due to its novel physics and unique phenomena, which are dramatically different from two-or three-dimensional systems 1,2 . When the motion of electrons is confined within 1D, the electrons cannot move without pushing all the others, which leads to a collective motion and thus, spin-charge separation. In such a case, the concept of "quasi-particles" with charge and spin degrees of freedoms is replaced with the collective modes, and the electronic state in 1D system is predicted by Tomonaga-Luttinger liquid (TLL) theory. Pressure is a unique technique to tune the interchain hopping to gradually transform the 1D conductor to high dimensional metal (HDM), during which many interesting physical phenomena emerge, such as superconductivity. The properties of quasi-1D conductors dependent on the strength of interchain coupling have been extensively explored in the organic compounds, such as (TMTTF) 2 X and (TMTSF) 2 X salts, which exhibit a 1D conducting characteristic with the overlap integrals ratio of different axis t a :t b :t c ~ 200:20:1 [3][4][5][6][7] . However, the pressure-temperature phase diagrams were only plotted in a narrow range of pressure (generally less than 2 GPa) for each quasi-1D organic compound. For (TMTTF) 2 X salts, the conducting chains along the a axis are less coupled. When temperature decreases, they successively undergo metal-insulator transition induced by Umklapp scattering (U-MIT) and ordered phase transition such as charge order, both of which can be gradually suppressed by pressure. While for (TMTSF) 2 X salts, the enhancement of interchain coupling t b completely suppresses the U-MIT. Within the perspective, (TMTSF) 2 X can be considered as the high pressure phase of (TMTTF) 2 X. At low temperature, (TMTSF) 2 X salts exhibit HDM behavior due to the single-particle interchain hopping. For (TMTSF) 2 PF 6 , a spin density wave (SDW) state forms in the HDM region 8 . 
With applying pressure, the SDW transition is suppressed gradually and superconductivity is induced [9][10][11] . When the interchain coupling further increases, such as in (TMTSF) 2 ClO 4 , superconductivity appears with the complete suppression of the SDW 12 . Besides the organic system, the inorganic quasi 1D conductors have also received considerable attention. For example, in the compound Li 0.9 Mo 6 O 17 the ratio of conductivity along the b-, c-, and a-axes is about 250: 10: 1 13 . It has been evidenced to be a TLL state in high-temperature region [14][15][16] . Li 0.9 Mo 6 O 17 undergoes a dimensional crossover from a 1D conductor to 3D metal at ~24 K. The dimensional crossover destabilizes the TLL fixed point, induces an electronic SDW/CDW and thus, leads to a crossover from metal to semiconductor 17 . By decreasing the temperature further, Li 0.9 Mo 6 O 17 exhibits a superconducting transition at 1.9 K 13, [17][18][19][20] . M 2 Mo 6 Se 6 (M=Rb, Na, In and Tl) is another interesting quasi-1D system with 4d transition metal, which consists of conducting (Mo 6 Se 6 ) chains along the c axis, and the chains are separated by M cations in the ab-plane [21][22][23][24] . The interchain coupling is controlled by the size of M cations and increases with the sequence of Rb, Na, In and Tl. Rb 2 Mo 6 Se 6 undergoes a CDW transition at about 170 K; while for Na 2-δ Mo 6 Se 6 , In 2 Mo 6 Se 6 , and Tl 2 Mo 6 Se 6 , superconductivity appears with T c about 1.5 K, 2.8 K, and 4.2 K, respectively 21,24 . Recently, the remarkable quasi 1D superconductor of K 2 Cr 3 As 3 with T c~6 .1 K and its related superconductors of Cr-233-type (Na/Rb) 2 Cr 3 As 3 and Cr-133-type (Rb/K)Cr 3 As 3 have been reported [25][26][27][28][29] . Their conducting chains are double-walled subnanotubes [(Cr 3 As 3 ) 2-] ∞ along the c axis and separated by alkali metal. It is interesting that the superconducting T c decreases monotonously with the increase of the distance between the adjacent conducting chains of [(Cr 3 As 3 ) 2-] ∞ in Cr-233-type materials, which is tuned by the radius of the cations of alkali metal 27 . Anyway, for inorganic quasi 1D conductor, the superconductivity dependent on interchain coupling has been less studied. Another important series is R 3 TiSb 5 (R = La, Ce) due to its strong 1D structure characteristics, which consist of face-sharing octahedral TiSb 6 chains and Sb-chains, and these chains are separated by R atoms [30][31][32] . If ignoring the contribution of the La atoms, the band structure calculation on the [TiSb 5 ] 9substructure of La 3 TiSb 5 compound suggests a well-defined 1D conductor 30 . However, the complementary calculation proves that there are non-negligible contributions of La to the density of state (DOS) at the Fermi level. The La 3+ ions in La 3 TiSb 5 are not perfectly ionic, which leads to a 3D band structure 32 . It is strongly indicated that La atoms do not play the role of interchain separation but serve as a bridge for electrons hopping among the chains. It is important to find an ideal 1D conductor with a simple structure to systematically explore the interchain hopping modulated Umklapp gap, the emerging SDW/CDW and superconductivity. Here, we used the metastable compound Ba 3 TiTe 5 via the substitution of the rare-earth metal La in La 3 TiSb 5 by alkali earth metal Ba, and for the charging compensation, the group V A element of Sb was replaced with the group VI A element of Te, which was synthesized under high-pressure and high-temperature (HPHT) conditions. 
The structure of Ba3TiTe5 consists of infinite octahedral TiTe6 chains and Te chains along the c axis, and our calculation proved that its band structure has a well-defined 1D conducting characteristic. On this basis, to explore the sequence of emergent phenomena, we further employed high pressure, which is clean, free of impurity disturbance, and can effectively and continuously tune the interchain hopping. Finally, we plotted a complete phase diagram for the single material Ba3TiTe5 within a wide pressure range to present the evolution from 1D conductor to HDM and the emergent physics, during which pressure-induced superconductivity was observed. Our results indicate that the fluctuation due to the suppression of the Umklapp gap is responsible for the appearance of superconductivity, while the fluctuation of the SDW/CDW is responsible for the enhancement of Tc.
Materials and Synthesis
The lumps of Ba (Alfa, immersed in oil, >99.2% pure), Te powder (Alfa, >99.99% pure) and Ti powder (Alfa, >99.99% pure) were purchased from Alfa Aesar. The precursor BaTe was prepared by reacting the lumps of Ba and Te powder in an evacuated quartz tube at 700 °C. Ba3TiTe5 was synthesized under HPHT conditions. The obtained BaTe powder, Te, and Ti powder were mixed according to the elemental ratio of stoichiometric Ba3TiTe5, then ground and pressed into a pellet. The pre-pressed pellet was treated in a cubic anvil high-pressure apparatus at 5 GPa and 1300 °C for 40 min. After the high-pressure and high-temperature process, the black polycrystalline sample of Ba3TiTe5 was obtained.
Measurements
The ambient X-ray diffraction was conducted on a Rigaku Ultima VI (3 kW) diffractometer using Cu Kα radiation generated at 40 kV and 40 mA. The in situ high-pressure angle-dispersive X-ray diffraction was collected at the Beijing Synchrotron Radiation Facility at room temperature with a wavelength of 0.6199 Å. A diamond anvil cell with a 300 μm culet was used to produce high pressure, and silicone oil was used as the pressure medium. The Rietveld refinements of the diffraction patterns were performed using the GSAS software package 33. The crystal structure was plotted with the software VESTA 34. The dc magnetic susceptibility measurement was carried out using a superconducting quantum interference device (SQUID). The resistance was measured by the four-probe electrical conductivity method in a diamond anvil cell made of CuBe alloy using a Maglab system. The diamond culet was 300 μm in diameter. A plate of T301 stainless steel covered with c-BN powder was used as a gasket, and a hole of 150 μm in diameter was drilled in the pre-indented gasket. Fine c-BN powders were pressed into the hole and further drilled to 100 μm, serving as the sample chamber. Then, NaCl powder was put into the chamber as a pressure-transmitting medium, on which the Ba3TiTe5 sample with dimensions of 60 × 60 × 15 μm³ and a tiny ruby were placed. The pressure was calibrated using the ruby fluorescence method. At each pressure point, the anvil cell was loaded into the Maglab system for the transport measurements with automatically controlled temperature and magnetic field.
Calculations
The first-principles calculations based on density functional theory implemented in VASP were carried out within a primitive cell with an 8×8×16 k-point grid 35. The projector augmented wave pseudopotentials with Perdew, Burke, and Ernzerhof (PBE) exchange-correlation and a 450 eV energy cutoff were used in our calculation 36,37.
The experimental lattice parameters obtained from XRD were adopted. Results and Discussion The X-ray diffraction of Ba 3 TiTe 5 at ambient condition is shown in Fig. 1(a). All the peaks can be indexed by a hexagonal structure with lattice constants a = 10.1529 Å and c = 6.7217 Å, respectively. The space group of P6 3 /mcm (193) is used according to the systematic absence of hkl. Finally, the Rietveld refinement was performed by adopting the crystal structure of R 3 TiSb 5 (R = La, Ce) as the initial model 30,31 , which smoothly converged to χ 2 = 2.0, Rp = 3.1% and Rwp = 4.4%. The summary of the crystallographic data is listed in Table I. The schematic plot of the crystal structure is shown in Fig. 1(b, c). Fig. 1(b) is the top view with the projection along the c axis, displaying the triangular lattice form; while Fig. 1(c) is the side view to show the chain geometry. From Fig. 1(b, c), we can see that the crystal structure of Ba 3 TiTe 5 consists of infinite face-sharing octahedral TiTe 6 chains and Te chains along the c axis, which are separated by Ba cations. The distance between the adjacent TiTe 6 chains is given by the lattice constant of a=10.1529 Å, which is significantly large, and responsible for exhibiting a quasi-1D structure characteristic. To explore the 1D characteristics of Ba 3 TiTe 5 from a band structure perspective, we calculated the band structure, partial density of state (PDOS) and Fermi surface via first-principles calculations for Ba 3 TiTe 5 under ambient pressure, as shown in Fig.1(d-f). The hallmark of the band structure is that the bands with the k-point path parallel to the k z -direction intercept the Fermi level; while for k-point path perpendicular to the k z -direction, the band dispersion is always gapped. Thus, Ba 3 TiTe 5 is a well-defined 1D conductor with the conducting path along the z-direction. Additionally, from the PDOS, it can be seen that the DOS near the Fermi level is dominated by the 3d-orbitals of Ti. The 5p-orbitals of both Te(1) and Te(2) from the TiTe 6 chains and Te chains, respectively, contribute to the DOS near the Fermi level as well, which proposes that both the TiTe 6 chains and Te chains are conducting chains. The DOS from Ba 6s-orbital, presented in Fig. 1(e) as well, is close to zero at Fermi level and can be ignored, which suggests that the conducting chains are well separated by Ba 2+ ions. Fig. 1(f) displays the calculated Fermi surfaces. There are four sheet-like Fermi surfaces perpendicular to the k z -direction, and the bottom sheet can be shifted by the wave vectors k 1 and k 2 to nest with the above two sheets, respectively. Therefore, the Fermi surfaces are unstable and the transport property of 1D conductor should be described by TLL theory. The resistivity measurement at ambient pressure for Ba 3 TiTe 5 was carried out, as shown in Fig. 1(g). The resistivity increases as the temperature decreases, exhibiting a semiconducting behavior. The inset is the ln(ρ) versus reverse temperature. By fitting the ln(ρ)-1/T curve according to the formula of ρ ∝ exp(Δ g /2k B T), where k B is the Boltzmann's constant, the band gap ofΔ g can be estimated to be 232 meV. For a 1D conducting system, Umklapp scattering has important influence on the electron transfer, which usually results in a correlation gap and insulating state [38][39][40][41] . Beside the Umklapp scattering effect, non-zero disorder in 1D conducting system tends to localize the electrons as well. 
If the disorder dominates the localization, the electron transport should be described by the model of various range hopping. Here, the resistivity following the Arrhenius law within the measured temperature range proves that the Umklapp scattering effect should play the role for the metal-insulating transition, since the Umklapp process produces a real correlation gap. Therefore, the inconsistency between the measured resistivity and the calculated results should arise from the Umklapp scattering effect. In addition, the magnetic susceptibility measurement shows that Ba 3 TiTe 5 is non-magnetic in the measured temperature range from 2 K to room temperature, as shown in Fig. S1. High pressure is an effective way to tune the lattice of a crystal structure. In the 1D system, it can significantly decrease the distance between adjacent conducting chains, thus, enhance the interchain hopping and move the 1D conductor to HDM, during which rich interesting physics are induced, such as the SDW/CDW and superconductivity. Therefore, continually compressing a 1D conductor can provide a potential pathway to understanding the rich phase diagram and the fundamental underlying mechanism. Here, high-pressure X-Ray diffraction experiments for Ba 3 TiTe 5 were performed first to study the structural stability and the pressure dependence of the lattice parameters, as shown in Fig. S2-S8. Within the highest measured pressure of 50.6 GPa, the hexagonal structure of Ba 3 TiTe 5 is stable, and the distance between the adjacent conducting chains decreases gradually by 12.1%, which is ideal to explore the exotic emergent physics dependent on the interchain hopping tuned by pressure. Therefore, we carried out the resistance measurements under high pressure, as shown in Fig. 2(a,b). Although the resistance decreases with increasing pressure when the pressure is lower than 7 GPa, it is still very high (~10 6 Ω at 6.7 GPa and 2 K). At the pressure of 8.8 GPa, the resistance drops dramatically by four orders of magnitude down to ~10-100 Ω, which suggests that the Umklapp scattering induced gap should be mostly suppressed. A closer view at this pressure shows there is a downward trend in the low-temperature region of the resistance curve (seen in Fig. S9(a, b)). The downward transition temperature increases from ~5 K to ~7.5 K with initial pressure increases, and then decreases to ~5.4 K at 15.9 GPa. When increasing pressure further, the downward transition temperature begins to increase again, and the downward behavior gradually develops into a superconducting transition, which persists to the highest measured pressure of 58.5 GPa. Therefore, we suggest that the superconductivity appearance is associated with the fluctuation due to the suppression of the Umklapp gap. In addition, there is an unknown hump independent of pressure at around 150 K, as shown in Fig. 2(b), which has nothing to do with the superconducting transition and therefore, was not discussed in this work. Fig. 2(c) displays the superconducting transition dependent on the magnetic field at 17.3 GPa. At zero field, the onset transition temperature is about 3.9 K, and the resistance drops to zero at ~2.8 K. Upon applying the magnetic field, the transition is gradually suppressed. Taking the criterion of the onset transition as the superconducting transition temperature, the curve of H c2 versus T c is plotted in the inset of Fig. 2(c), where the slope of -dH c2 /dT| Tc is ~3.07 T/K. 
Using the Werthamer-Helfand-Hohenberg formula μ0Hc2^orb(0) = −0.69 (dHc2/dT)|_Tc Tc and taking Tc = 3.8 K 42, the upper critical field limited by the orbital mechanism is estimated to be μ0Hc2^orb(0) ~ 8 T. Another mechanism determining the upper critical field is the Pauli paramagnetic effect, for which the upper critical field is estimated with the Pauli-limit formula μ0Hc2^P(0) ≈ 1.86 Tc. The enlarged view of the resistance curves is plotted in Fig. 3(a,b) and Fig. 3(d,e) to display more detailed information. The resistances below 36.7 GPa are normalized by R(150 K). Above 18.9 GPa, besides the superconducting transition, a metallic state starts to develop at relatively high temperature and is then followed by a resistance upturn as temperature decreases, forming a resistance minimum at T_m, marked by the arrow shown in Fig. 3(a). T_m shifts to lower temperature with increasing pressure. At 28.7 GPa, the metal-semiconductor crossover (MSC) temperature T_m was determined by the minimum value of the temperature derivative of resistance dR/dT (shown in Fig. S10). Although the crossover temperature cannot be unambiguously determined when the pressure exceeds 28.7 GPa, it is clear that the upturn is completely suppressed at 36.7 GPa. Fig. 3(b) shows the superconducting transition below 36.7 GPa. The Tc increases from 4.3 K to 6.4 K when pressure increases from 18.9 GPa to 36.7 GPa. Tc and T_m as a function of pressure are plotted in Fig. 3(c); when pressure increases, T_m decreases while Tc increases. The Tc reaches its maximum at the critical pressure of 36.7 GPa, where the MSC is completely suppressed. The phenomenon of MSC has been reported in the quasi-1D compounds Li0.9Mo6O17 and Na2−δMo6Se6 15,17-19,21,44. Several mechanisms can cause the MSC or MIT, such as Mott instability, SDW/CDW formation, and disorder-induced localization. For Li0.9Mo6O17, the MSC can be gradually suppressed and tuned to be metallic by the magnetic field, suggesting the MSC is the consequence of the formation of an SDW/CDW gap (less than 1 meV) 19. Above T_m, Li0.9Mo6O17 is evidenced to exhibit TLL behavior 15,16. A dimensional crossover happens at T_m and causes the destabilization of the TLL fixed point, leading to the formation of an electronic SDW/CDW, which is suggested to be the origin of the MSC in Li0.9Mo6O17 17. For Na2−δMo6Se6, on the other hand, the MSC temperature T_m is sample dependent and ranges from 70 K to 150 K due to a small variation of Na stoichiometry. It is speculated that the MSC arises from localization induced by disorder 21. For an ideal 1D conducting system, the Fermi surface is unstable and the system is in a TLL state. When the interchain hopping is increased to move the 1D system towards an HDM, the Fermi surface nesting established in the quasi-1D conducting system can usually give an SDW/CDW transition and open a gap. In the case of Ba3TiTe5 under high pressure, for example at 19.5 GPa, the Fermi surface nesting can be observed, as will be discussed in the following. Therefore, the MSC found in Ba3TiTe5 under high pressure is suggested to arise from the SDW/CDW transition, and the MSC temperature T_m in Fig. 3(a,c) should correspond to the SDW/CDW transition temperature. The Tc increases with SDW/CDW suppression and reaches its maximum when the SDW/CDW transition is suppressed to zero. It is speculated that the superconductivity is enhanced by the fluctuation of the SDW/CDW. Fig. 3(d,e) shows the temperature dependence of resistance with pressure exceeding 36.7 GPa. There is an obvious hump below the onset Tc.
The resistance curve demonstrates a two-step superconducting transition. In fact, the two-step superconducting transition has been reported in other quasi-1D superconductors, where the onset transition is ascribed to the superconducting fluctuation along individual chains, and the hump signifies the onset of transverse phase coherence due to the interchain coupling 21,[45][46][47] . Here, the lower temperature transition should be attributed to the transverse phase coherence since the two-step transition feature becomes more pronounced when the interchain coupling is enhanced by pressure. The T c versus pressure in this pressure region is plotted in Fig. 3(f). The T c monotonously decreases with further increasing pressure. The normal state of resistance between 10 K and 60 K is fitted by the formula R=R 0 +AT n , as shown in Fig. 3(d), where R 0 is the residual resistance, A is the coefficient of the power law, and n is the exponent. The R 0 value ranges from 0.07 Ω to 0.11 Ω. The size of our sample for high pressure measurements is about 60 μm × 60 μm with the height about 15 μm. Thus, the residual resistivity can be estimated to be 1.0-1.6×10 -4 Ω-cm, which is comparable with that reported in (TMTTF) 2 AsF 6 7 . The obtained exponent n dependent on pressure is plotted in Fig. 3(f), which shows that n increases from 0.9 to 1.8 as pressure increases, i.e., the system develops from a non-FL to a FL state. To further demonstrate the crossover from a non-FL to a FL state, the temperature and pressure dependent exponent in the metallic region is plotted in Fig. 5, where the color shading represents the value of exponent n. The n value approaching 2 near 26.2 GPa should be due to the effect of MSC. When the pressure exceeds 36.7 GPa, the Fermi surface nesting should be broken, as will be discussed in the following, and the FL state develops gradually as pressure increases. It is interesting that the non-FL behavior appears at the critical pressure where the SDW/CDW is wholly suppressed. Therefore, the non-FL behavior may be caused by the SDW/CDW fluctuation. When the system is turned away from the instability of SDW/CDW, the FL state develops gradually. Although the non-Fermi liquid behavior is generally observed in two-dimensional system, it was also reported in organic TMTSF salts, an archetypal quasi-1D system 7,48,49 . In fact, the superconductivity of TMTSF salts share the common border with SDW and the magnetic fluctuation gives rise to the linear temperature dependence of resistivity at low temperature 49 . To help understand the above emergent phenomena, we carried out the calculations of band structure, PDOS and Fermi surface for Ba 3 TiTe 5 under different high pressures, which are presented in Fig. 4(a-c) for 19.5 GPa and Fig. 4(d-f) for 42.2 GPa, respectively. For the pressure of 19.5 GPa, the main difference from ambient pressure is that the conduction band bottom around Γ and M points sinks down and just touch the Fermi level and thus, produce small electronic Fermi pockets, which means that the electrons have coherent interchain hopping. These conduction bands are very flat, suggesting the interchain electron mobility is small. Besides the newly formed Fermi pockets, the four Fermi sheets warp slightly, so that the bottom sheet can only roughly nest with the second sheet from the top with the vector k 2 . 
According to the electron response function
χ(q) = Σ_k (f_k − f_{k+q}) / (E_{k+q} − E_k),
where χ(q) is the generalized susceptibility, f_k is the occupation function of the single-particle states and E_k the single-particle energy, if part of the Fermi surfaces nest, the susceptibility χ(q) increases significantly and thus the Fermi surfaces lose their stability, which generally induces the formation of an SDW/CDW. Therefore, the MSC observed experimentally in the pressure range of 15-30 GPa should arise from the formation of an SDW/CDW induced by Fermi surface nesting. For the higher pressure of 42.2 GPa, the band near the Γ point sinks down further and crosses the Fermi level, which displays a more 3D-like metallic character. The Fermi surfaces become more complex. The Fermi surface around the Γ point is more quasi-1D-like, while the Fermi surfaces around the A point are between 2D and 3D. Thus, the Fermi surfaces lose the nesting and the SDW/CDW is completely suppressed at this pressure, which agrees well with the experimental results. Overall, the ambient 1D electronic state is gradually changed to higher dimensionality, but still with an anisotropic band structure, as the pressure increases. The above emergent phenomena induced by pressure are intrinsic to Ba3TiTe5. First, the possibility of the superconductivity coming from other impurities can be ruled out. Within the X-ray resolution limit, no discernible impurity phase was found in the specimen, even in the X-ray diffraction pattern re-plotted with the intensity on a logarithmic scale, as shown in Fig. S11. What is more, if there were any impurity containing Ba, Ti, or Te, only Te is superconducting under high pressure, and the pressure dependence of Tc for Te is totally different from that of Ba3TiTe5 50,51. Second, the pressure dependence of the superconductivity, the MSC associated with the SDW/CDW, and the non-Fermi-liquid behavior can be reproduced, as shown in Fig. S12(a-b), which confirms the intrinsic properties of Ba3TiTe5. Based on the above experiments and discussions, the final temperature-pressure phase diagram of Ba3TiTe5 is plotted in Fig. 5. At ambient pressure, the quasi-1D conductor Ba3TiTe5 exhibits semiconducting behavior with a gap of about 232 meV due to the U-MIT. After the suppression of the U-MIT, an SDW/CDW emerges due to the Fermi surface nesting, which leads to an MSC. Subsequently, the SDW/CDW is gradually suppressed by pressure. Superconductivity appears at 8.8 GPa, where the Umklapp gap has been suppressed completely, and the Tc increases with the suppression of the SDW/CDW. It reaches the maximum of ~6 K at 36.7 GPa, where the normal state of the resistance presents a non-FL behavior due to the SDW/CDW fluctuation. With further increasing pressure, the system develops from a non-FL to an FL state since it moves away from the SDW/CDW instability. Our results suggest that the pressure-induced superconductivity in the quasi-1D conductor Ba3TiTe5 is initiated by the fluctuation due to the suppression of the Umklapp gap and enhanced by the fluctuation of the SDW/CDW.
Conclusions
The novel quasi-1D conductor Ba3TiTe5 was synthesized and extensively studied at high pressure. The conducting paths are the TiTe6 and Te chains, which are separated by Ba cations and thus present a quasi-1D conducting characteristic. For Ba3TiTe5, a complete temperature-pressure phase diagram was obtained within a wide pressure range, which presents the evolution from 1D conductor to HDM and the emergent physics.
During the increase of pressure, the increased interchain coupling transformed the ambient 1D conductor to high-dimensional metal, during which the pressure induced SDW/CDW, superconductivity, and non-FL behavior appeared. The superconducting transition temperature T c reaches the maximum accompanied by non-FL behavior when the SDW/CDW gap is suppressed to zero. The superconductivity emergence is closely associated with the suppression of the Umklapp gap and is enhanced by the fluctuation of the SDW/CDW.
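As a small numerical check of the critical-field estimates quoted earlier (the values at 17.3 GPa), the sketch below evaluates the Werthamer-Helfand-Hohenberg orbital limit from the measured slope −dHc2/dT at Tc and, for comparison, the standard weak-coupling Pauli paramagnetic limit; the 1.86 T/K Pauli prefactor is our assumption for the truncated formula in the text, not a value fitted by the authors.

```python
# Sketch: orbital (WHH) and Pauli-limit estimates of the upper critical field
# from the values quoted in the text for 17.3 GPa.
def whh_orbital_limit(slope_t_per_k, tc_k):
    """mu0*Hc2_orb(0) = 0.69 * (-dHc2/dT at Tc) * Tc."""
    return 0.69 * slope_t_per_k * tc_k

def pauli_limit(tc_k):
    """mu0*H_P(0) ~= 1.86 T/K * Tc (weak-coupling BCS Pauli limit; assumed prefactor)."""
    return 1.86 * tc_k

if __name__ == "__main__":
    slope = 3.07  # T/K, -dHc2/dT|_Tc from the inset of Fig. 2(c)
    tc = 3.8      # K
    print("orbital limit ~", round(whh_orbital_limit(slope, tc), 1), "T")  # ~8 T, as quoted
    print("Pauli limit   ~", round(pauli_limit(tc), 1), "T")
```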
tokens: 6,630.4 | created: 2019-10-25 | fields: ["Physics", "Materials Science"]
Efforts to Improve the Achievement of Science Learning Outcomes in Grade IV Elementary School are Assisted Through Guessing Games
Despite the usage of learning media, science learning outcomes remain low. The students' learning outcomes, which are still below the KKM, demonstrate this fact. Therefore, the goal of this study is to enhance class IV students' science learning outcomes by using picture guessing games. In order to improve science learning outcomes relating to the properties of light in class IV at SD 1 Sungai Pedada, Kec. Tulung Selapan, Kab. OKI, in Semester I of the 2023-2024 academic year, the researcher conducted research in class IV at SD 1 Sungai Pedada by adopting an image guessing game. This study, which encompasses two cycles of classroom action research, uses observation techniques to measure increases in teacher and student engagement, exams to measure increases in student learning outcomes, and the gathering of research-related data. Thirteen male and twelve female fourth-graders made up the research subjects. Based on the data analysis, learning outcome scores of 71.20 in cycle I and 86.48 in cycle II were obtained, a difference of 15.28 points in the average score. There was a 5.08% rise based on the comparison of the pre-cycle results with cycles I and II. It is concluded that using the picture guessing game in science classes in class IV of SD 1 Sungai Pedada can enhance student learning outcomes.
INTRODUCTION
According to Triwiyanto (2014), education is the process of teaching and learning that occurs throughout a person's life with the goal of enhancing the capacity to complete formal, non-formal, and informal learning opportunities. Education can be characterized as a deliberate and consistent endeavor to establish a learning environment that supports a student's learning process. The goal of education is to maximize each student's potential so they can grow into people who can benefit their own community, state, and country. Education is a must in order to satisfy fundamental human needs, such as the capacity to consider how to survive and thrive in this world.
Learning science is a process of discovery that motivates students to participate and become actively involved, rather than only memorizing facts. The learning model requires the application of all learning approaches, strategies, methods, techniques, and tactics (Kelana & Wardani, 2021). A successful learning strategy in the context of Natural Science (IPA) education must inspire students to actively engage in the process of discovery. This implies that, in addition to learning already-known material, students will participate in learning exercises to solve problems and develop a deeper comprehension of science. Each learner will benefit from an efficient and fulfilling learning experience as a result.
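A short arithmetic check of the figures reported in the abstract. The interpretation of the 15.28 figure as the gap between the two cycle means is our reading of the reported numbers, not a statement made by the authors.

```python
# Quick check, assuming 71.20 and 86.48 are the cycle mean scores reported above.
cycle_1_mean = 71.20
cycle_2_mean = 86.48
print(cycle_2_mean - cycle_1_mean)  # 15.28 -> the reported "15.28" gap equals the
                                    # difference between the two cycle means in points
```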
Innovative and enjoyable methods can be used in elementary classrooms to improve the outcomes of science instruction, particularly when it comes to material on the characteristics of light. Using a photo guessing game is one efficient technique. Students can grasp these concepts more actively and enthusiastically when gaming components are included in the learning process. Students can practice their interactive recognition of light qualities by guessing photos. Students may be inspired to collaborate with one another and think creatively as a result of this learning. It is believed that by using a playful approach, pupils will comprehend the subject matter more fully and achieve greater learning outcomes. It can be challenging for educators to present the curriculum in a way that supports high-quality learning. There are still flaws and restrictions in the way natural science is taught in schools, particularly when it comes to the use of games. This will continue to happen as long as teachers of natural science think that using games to teach and learn is not that vital.
As defined by Dewi et al. (2021), Natural Science (IPA) is an ordered body of knowledge derived from reality as it is revealed by natural events and developed through scientific procedures and attitudes. Natural science (IPA) is a journey of discovery as well as the mastery of a body of knowledge made up of concepts, ideas, and facts. It is envisaged that science education will serve as a means of teaching pupils about the natural world and about themselves, with opportunities to apply what they learn to real-world situations.
How well students master the instructional material is determined by looking at their learning results (Kusrini, 2022). What a person learns through their learning activities is the real outcome. These findings demonstrate how someone approaches or participates in their educational endeavors (Johannes, 2021). Science education must help cultivate students' scientific mindsets. An attitude that is flexible, critical, open, creative, meticulous, and environmentally conscious is what is known as a scientific attitude. This idea comes up during the science learning process, but it becomes especially relevant after the understanding and application phase (Kumala, 2016).
Learning outcomes demonstrate the effectiveness of learning activities following student participation. The learning outcomes explain the methodology by which the degree of student success in comprehending the subject matter is assessed. The goals of learning are the same as the accomplishments of learning: the benefits that pupils have acquired after going through the classroom learning process. Students' learning outcomes reflect their capacities during the learning process, which affects their grades. The students' ability to modify their behavior continues even after the learning exercise itself. The benchmark of student learning outcomes is employed to demonstrate that learning objectives are met as planned.
Including games that are appropriate for the child's learning style in the process is a crucial step in raising learning achievement.It is believed that by introducing games into the classroom, kids will become more engaged, active, and enthusiastic about learning.This will foster an enjoyable and productive learning environment and have a good effect on student progress.Thus, it might alter how people learn.It is, nevertheless, the hardest lesson to comprehend because of the kids' disinterest in science.The learning outcomes for students are impacted by this. On February 26, 2024, observations were made at SD 1 Sungai Pedada Kec.Tulung Selapan Kab.OKI .The class IV homeroom teacher observed the application of the lessons, particularly the science lessons.The utilization of the lecture style by educators and a lack of resources to provide students with examples of the subject covered in books are two issues that teachers encounter during the teaching and learning process. Thus, its influence on the learning results of students in scientific classes is minimal.The study concentrated on the qualities of light, a scientific subject that presents challenges in the field, during field observations.This is among the causes of children's lack of confidence. Previous researchers used an image guessing game in class IV to enhance student learning outcomes in plant parts courses.They conducted this investigation.Thus, prior research serves as a guidance for researchers.Consequently, photo guessing games have piqued the interest of researchers who hope to enhance student learning results and boost children's self-confidence. Everyone enjoys playing games, but elementary school kids especially do.Children can learn about the different images seen in the game Guess the Picture in addition to having fun (Izatusholihah, 2021).Playing picture guessing games has an impact on kids' verbal, cognitive, and psychomotor development.Furthermore, because they can exchange expertise and learn new things through this game. A teacher might utilize the photo guessing game as a means of stimulating pupils' interest in the subject matter.By using photo guessing media, teachers can pique students' curiosity about the picture they supplied, which will motivate them to pay attention to the lesson topic (Mufidah & Badrus, 2022).Games are used to present information and can make learning enjoyable.Putri (2022) lists the photo guessing game as one of the activities that people find entertaining to play.Picture guessing games help kids learn new information, enhance their comprehension so they can think more critically, solve issues, ask and answer questions, and assist language development. In the photo guessing game, picture cards may include many sets of images or other items.Each student's playing cards should match the size of the learning resources used in the class.The game's objective is to motivate and inspire children to respond in the expected manner.claims that playing photo guessing games has several benefits for kids' learning, one of which is that they can use the picture to guess the picture that is still a mystery. 
Procedures for the picture guessing game that the researchers employ to enhance learning outcomes are as follows: each student receives the game's rules, or instructions, before the game begins. Each pupil takes a turn coming forward to select the keywords that the teacher has prepared. After selecting the relevant keywords, students mimic the actions shown in the image. Once the class has guessed what the image is, the teacher describes it to them.

Sight is connected to certain characteristics of light. The majority of students mistakenly think that, during the seeing process, our eyes emit the light that strikes an object; in reality, we see an object because light reflected from it reaches our eyes. Enhancing students' capacity for critical, rational, and methodical thought, as well as a disciplined, impartial, and truthful demeanor in daily life, is among the objectives of science education in the classroom (Kurnia, 2022). When learning, pupils do not comprehend the properties of light if they only look at books or pictures, and they easily forget abstract concepts when teachers explain them without using real media. Consequently, educators are forced to teach the material again (Istidah et al., 2022). Light has a wide range of characteristics, including the ability to travel in straight lines, be reflected, pass through transparent objects, be refracted, and be split into several colors (Erfan & Maulyda, 2021).

The aim of this study was to investigate how class IV students at SD 1 Sungai Pedada used the picture guessing game so that they could reach the KKM in science lessons on the properties of light, in accordance with the intended learning outcomes.

RESEARCH METHOD A research study's soundness is guaranteed by a set of procedures known as research methodology. Initial thoughts and impressions from earlier studies are combined to generate the problem formulation. Classroom action research is the method used here. According to Mualimin and Cahyadi (2014), classroom action research is an intentional activity conducted in a classroom with the goal of resolving issues or enhancing the quality of learning. As a result, deliberate actions that take the form of learning activities are observed in the classroom at the same time.

The process of conducting classroom action research involves creating a lesson plan that outlines the activities that both teachers and students will carry out; providing the tools or resources needed, such as instructional aids and visual media; observing students' work and processes; analyzing data from observations and student work; and presenting the design's outcomes. This includes modeling the execution of the steps by taking into account the time required for implementation and the manner in which each action is carried out.

Teachers can enhance classroom learning and school programs overall by implementing classroom action research as a strategic method. To accomplish learning objectives, learning activities can include a variety of tactics or strategies (Pahleviannur et al., 2022). PTK (classroom action research) seeks to apply instructional strategies that are appropriate to the issues at hand and the stage of student development in order to improve learning. Teachers, students, learning resources or media, and the environment are among the components of a learning-management strategy.
Teachers employ various approaches or plans, known as learning strategies, to accomplish specific learning goals. Learning strategies are intended to assist teachers in imparting the necessary knowledge, abilities, and comprehension to their students. The learning challenge is precisely defined by the approach employed. Problems that come up during the classroom learning process must be observed and investigated by teachers; testing, observation, and data collection can assist in achieving this. Teachers can formulate precise and relevant PTK objectives by accurately identifying the issues that arise, and they also need to set specific, well-defined goals. By employing this technique, educators can pinpoint issues within the class, create clear objectives, and focus their efforts on achieving them. As a result, this approach helps educators plan and carry out instruction properly.

Twenty-five class IV pupils at SD 1 Sungai Pedada served as the research sample: twelve female and thirteen male pupils. Class IV at SD 1 Sungai Pedada was selected as the location because many students in this class had not yet attained the KKM and had low Natural Science (IPA) learning results. In the pre-cycle, nine male and eight female students, seventeen in total, had grades that fell short of the KKM. In cycle I there were fourteen such pupils, consisting of eight male and seven female students, compared with three pupils in cycle II, two male and one female. The time frame for this study was February 26, 2024 to March 1, 2024. The process of gathering scientific data for a specific goal is known as research methodology. The research tools used at SD 1 Sungai Pedada included searching for, documenting, and creating the picture guessing game.

Teachers in class IV at SD 1 Sungai Pedada participated in the data analysis for the pre-cycle, which was done on February 17, 2024. Data processing was carried out after the pre-cycle was completed. Cycle I was held on Monday, February 26, 2024, following the pre-cycle; the properties of light were the subject matter covered in cycle I. The second cycle was held on Tuesday, March 1, 2024.
RESULTS AND DISCUSSION In the pre-cycle, before the picture guessing game was introduced, the class average was 61.00. In the first cycle, it was found that students were still awkward at playing the picture guessing game and were unable to determine which picture was suitable or appropriate. The advantage of cycle I learning is that the picture guessing game makes it easier to explain material on the properties of light, so that students understand it more clearly and easily. This results in more active and enthusiastic student involvement in learning, which is reflected in the increase in the average score of cycle I. Based on the results of the table for the pre-cycle, cycle I, and cycle II, learning carried out with a picture guessing game on the properties of light for class IV students at SD 1 Sungai Pedada in Semester I of the 2023-2024 academic year improved student learning outcomes, as shown in detail by, for example, the lowest score in every cycle. Three cycles are displayed: pre-cycle, cycle I, and cycle II. The pre-cycle has the lowest value of 45, cycle I has the lowest value of 55, and cycle II has the lowest value of 67. In addition, it can be seen that the previous average score of 61.00 for Natural Sciences (IPA) lessons is still below the minimum mastery standard of 70. After the first cycle was carried out, the students' average score increased to 71.20, and in cycle II it became 86.48. From these values it is clear that the learning process applying the picture guessing game to the properties of light produces much better results than the usual learning method without the picture guessing game. The average student learning completeness obtained in cycle I was 71.20 and in cycle II 86.48, so the difference between the two cycle averages was 15.28 points. The graph provides information about changes in average scores from cycle to cycle: the average increases from the pre-cycle to cycle I, and then increases again from cycle I to cycle II. This consistent increase in student scores from cycle to cycle indicates that the learning methods and strategies applied in each cycle were effective in increasing student achievement. Based on these results, the use of the picture guessing game for the material on the properties of light in science learning in class IV, Semester I, SD 1 Sungai Pedada can be categorized as improving student learning outcomes well.

CONCLUSION A number of shortcomings in the Natural Sciences learning program were identified during cycle I, one of which is that the students were not accustomed to learning through an image guessing game based on the properties of light. There was a 15.28-point difference between cycle I's average of 71.20 and cycle II's average, which increased to 86.48.
It can be concluded that class IV students at SD 1 Sungai Pedada in Semester I of the 2023/2024 academic year improved their learning outcomes by employing a picture guessing game on the properties-of-light material when studying Natural Sciences (IPA). Games can therefore aid educators in the process of teaching. This suggests that creativity is required to develop game materials that let educators use graphic materials such as images and photos. When used properly, games can enhance learning by bringing excitement, variety, and interest to a range of educational activities (Mufidah & Badrus, 2022).

As already noted, the author would like to offer recommendations for structuring the learning process. Teachers must constantly be creative in order to enhance subsequent learning. Even if innovation happens quickly, it still demands rigorous planning and clear, pre-planned designs. The innovation that is implemented must be guided by the goals to be fulfilled, including the approach and plan for achieving a learning objective. Targets are set for innovation because it spurs further innovation and advancement. It is imperative for educators to offer a diverse range of educational activities to stimulate students' interest in learning, such as using educational games in the classroom.

Because the major objective is to enhance the teacher's instructional process in order to boost student activity and learning outcomes, innovation in solving learning difficulties takes the shape of adopting effective learning models. Such an idea may have been applied earlier by other scholars or practitioners. A researcher then applies a learning strategy through methodologies, media, and other means, for example by integrating or combining two learning models (Haerullah & Hasan, 2021). Using creative and innovative learning strategies, such as picture guessing games, is how innovation is used to meet the KKM-recommended student learning objectives.

Researchers carry out innovation activities during the observation stage in collaboration with colleagues who are experts in science learning. Observations are made of both student activities during the learning process and the teacher activities designed by the researchers to support learning.

Figure 1. Pre-Cycle, Cycle I, and Cycle II Comparison Charts. Judging from the graph, student learning completeness in the pre-cycle was recorded with an average score of 61.00, while cycle I was recorded with an average score of 71.20; from these results there is a difference of 10.20 points between the pre-cycle and cycle I averages.
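The score changes and classical completeness percentages reported above follow from simple arithmetic on the figures quoted in this article. The short Python sketch below reproduces them; it assumes the reported class averages of 61.00, 71.20 and 86.48 and the reported counts of 17, 14 and 3 students below the KKM out of 25 pupils, exactly as stated in the text, and is illustrative only:

# Reported class averages and below-KKM counts (taken from the article text)
averages  = {"pre-cycle": 61.00, "cycle I": 71.20, "cycle II": 86.48}
below_kkm = {"pre-cycle": 17, "cycle I": 14, "cycle II": 3}
n_students = 25

# Gains between consecutive phases (10.20 and 15.28 points)
print("pre-cycle -> cycle I :", round(averages["cycle I"] - averages["pre-cycle"], 2))
print("cycle I   -> cycle II:", round(averages["cycle II"] - averages["cycle I"], 2))

# Classical completeness: share of students at or above the KKM of 70
for phase, below in below_kkm.items():
    print(phase, "completeness:", 100 * (n_students - below) / n_students, "%")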
4,334.4
2024-03-06T00:00:00.000
[ "Education", "Physics" ]
Precipitation behaviour in AlMgZnCuAg crossover alloy with coarse and ultrafine grains

Crossover aluminium alloys have recently been introduced as a new class of coarse-grained age-hardenable alloys. Here, we study the evolution of precipitation of the T-phase, Mg32(Zn, Al)49, in a 5xxx/7xxx crossover alloy with coarse-grained and ultrafine-grained microstructures. Both alloys were examined using differential scanning calorimetry, X-ray diffraction and in situ transmission electron microscopy. The ultrafine-grained alloy revealed significantly different and accelerated precipitation behaviour due to grain boundaries acting as fast diffusion paths. Additionally, the ultrafine-grained alloy revealed high resistance to grain growth upon heating, an effect primarily attributed to intergranular precipitation acting synergistically with transgranular precipitation of the T-phase.

GRAPHICAL ABSTRACT

IMPACT STATEMENT The effect of coarse and ultrafine grains on the T-phase precipitation behaviour in novel aluminium crossover alloys was investigated. Thermal stability of ultrafine grains was achieved through controlled T-phase precipitation.

Introduction To broaden the property profile while simultaneously improving sustainability [1], crossover alloying emerged as a promising strategy for research in the field of aluminium alloys [2]. Such an approach was first developed for a crossover alloy merging the 5xxx and 7xxx [2][3][4][5] aluminium alloy systems and, recently, it has been used to produce new alloys between the 6xxx and 8xxx systems [6]. In the 5xxx/7xxx crossover system, the T-phase, Mg32(Zn, Al)49, was identified as the hardening precipitate. Although the T-phase of the Al-Mg-Zn ternary system has been known for decades [7,8], scientific interest was rather low before the crossover concept was introduced. Since then, however, the T-phase has become the focus of intensive research in materials science. The T-phase is beneficial when used to inhibit grain growth or to generate particle-stimulated nucleation [4], or even to increase resistance to corrosion and hydrogen damage [9][10][11]. The age-hardening potential of adding Cu and/or Ag was investigated, as reported by Stemper et al. [3,5] as well as by other groups [12,13]. Nevertheless, the precipitation sequence in coarse-grained (CG) Al-based crossover alloys is not yet fully explored. Several groups have presented different precipitation sequences, as summarized by Stemper et al. [2]. Cu modifies the precipitation sequence as proposed by Hou et al. [14]: supersaturated solid solution (SSSS) → Guinier-Preston zone, fully coherent (GPI-zone) → T'', fully coherent (GPII-zone) → intermediate T', semi-coherent → equilibrium T, incoherent. Moreover, Tunes et al.
[15] found that the T-phase survives heavy-ion irradiation. Very recently, the resistance of a crossover alloy against irradiation was strongly improved by reducing the grain size [16]. Specifically for these ultrafine-grained (UFG) crossover alloys, which are aimed at applications in extreme environments, the precipitation behaviour has not been studied in detail so far, although it is known that a reduction in grain size to the nanometre scale may significantly change the precipitation behaviour [17,18]. A decrease in grain size affects not only the precipitation sequence of a given alloy but consequently also its final mechanical properties [19][20][21][22]. The process of precipitation in the UFG regime differs from that in the CG counterpart. For instance, where precipitates exist initially, severe plastic deformation (SPD) can result in the fragmentation or even dissolution of these precipitates into the matrix, which can lead to a state resembling an SSSS condition [23]. Conversely, when an SSSS contains no initial precipitates, SPD can induce dynamic precipitation [24]. Accelerated precipitation kinetics, as evidenced by Luo et al. [25], highlight an essential distinction compared with the CG material. This expedited precipitation is attributed to the considerable density of defects adjacent to grain boundaries (GBs), potentially reaching values of up to 10^17 m^-2 [26]. Moreover, the intermediate precipitation steps are often bypassed [17], leading directly to precipitation of the equilibrium phase, even at lower temperatures. Therefore, precipitation sequences known from CG alloys need to be revised and reinvestigated. This study aims at systematically closing the knowledge gap in the precipitation behaviour of 5xxx/7xxx crossover alloys in different grain-size regimes. The evolution of precipitates in both the CG and UFG microstructural regimes is investigated herein through DSC measurements and characterized using X-ray diffraction (XRD) and in situ transmission electron microscopy (TEM). A comprehensive evaluation of the overall thermal stability of the UFG structure is also performed.

Materials and methods The chemical composition of the investigated crossover alloy is Al-4.9Mg-3.7Zn-0.6Cu-0.2Ag (determined via optical emission spectroscopy, in wt.-%). The CG alloys were processed by hot- and cold-rolling from 12 mm to 1.5 mm, followed by solution heat-treatment at 465 °C/35 min and water quenching. High-pressure torsion (HPT) was carried out under a nominal hydrostatic pressure of 4 GPa for 10 revolutions at a rotational speed of 10 min/revolution, using a disk of 12 mm height and 30 mm diameter. The investigations reported in this paper were performed after both the CG and UFG alloys had experienced a storage time of 30 days at RT. It is important to emphasize that all thermal analysis experiments were performed with a linear heating rate of 10 °C min⁻¹. Differential Scanning Calorimetry was carried out using a Netzsch DSC 204 F1 Phönix device. Nitrogen was used both as purge and protective gas (each 20 ml/min).
XRD measurements (Bruker AXS D8 Advance DaVinci diffractometer operating with Cu Kα radiation) of fast-cooled samples were performed to identify the phases that precipitated. All experiments were performed in Bragg-Brentano geometry. Quantitative data on phase fraction were obtained through Rietveld refinement [27], which was performed using the software package Topas 6 by Bruker. Further details of the Rietveld refinement process for phase-fraction estimation can be found elsewhere [28]. In addition, a series of prolonged isothermally aged specimens was measured to obtain information about the near-equilibrium state of the phases. All phase-fraction information provided in this research corresponds to wt.-%.

Scanning Transmission Electron Microscopy (STEM) was carried out using a Thermo Fisher Scientific Talos F200X electron microscope. Thin foils were prepared by twin-jet electro-polishing using a solution of 25 vol.-% nitric acid and 75 vol.-% methanol at a temperature range of −18 to −25 °C and a voltage range of 12 to 14 V. High-angle annular dark field (HAADF), bright field (BF-TEM) and energy-dispersive X-ray spectroscopy (EDX) measurements were used. In situ heating was carried out using a micro-electromechanical system (MEMS) and a Protochips Fusion Select in situ heating/cooling holder with an uncoated e-chip. Preparation for in situ measurements was carried out according to the literature [29]. The material was heat-treated within the TEM using a linear heating rate of 10 °C min⁻¹. Thermodynamic assessments were carried out using Thermocalc 2023a with the database TCAL8 to determine the phase fractions of the alloy system upon heating.

Figure 1(a,b) shows BFTEM images of the CG and UFG alloy, showcasing the difference in their grain sizes. The CG alloy reaches an average grain size of 54.6 ± 2.6 µm, while for the UFG alloy the average grain length reaches 294.3 ± 109.8 nm and the average grain width 91.4 ± 36.6 nm. The precipitation sequence for the CG and UFG alloy determined by DSC is displayed as heat-flow signals in Figure 1(c). The CG alloy shows exothermic peaks at 34, 233, 271, 306 and 445 °C and endothermic peaks at 114 and 385 °C. The development of precipitates in the CG alloy shares similarities with findings in experiments conducted by other research groups [14,30,31]. Defect recovery of the UFG alloy was not observed, but it may be overlaid by the precipitation signal in the DSC heat-flow [32].

Precipitation sequence The first peak at 34 °C has been attributed to the formation of G.P. zones [14,25]. Their partial dissolution can be seen upon further heating to 114 °C. G.P. zones have been shown to enhance the age-hardening behaviour of the crossover alloy [12], even more pronouncedly when Cu and/or Ag is present [3,33,34]. At 233 °C, formation of metastable T''-phase precipitates is expected [14]. A peak at 271 °C is not reported in the literature and is therefore assumed to be related to the nucleation of the metastable T'-phase. It is not clear from the literature whether the shoulder region at 306 °C should be attributed to the transformation T''-phase → T'-phase [14], the transformation T' → T [35], or the nucleation of T-phase [36,37]. The endothermic peak at 385 °C most likely represents the dissolution of small T'-phase particles [14,30]. The dissolution temperature of the T-phase is reported between 430 and 485 °C and was not clearly observable during the DSC runs in our experiments [38][39][40].

Reducing the grain size affects the precipitation behaviour of aluminium alloys [17], but, as already mentioned, no comprehensive investigation for 5xxx/7xxx crossover alloys is known so far. As can be seen in Figure 1, the UFG alloy does not show an exothermic peak at low temperatures. Since UFG microstructures are rich in microstructural defects, which provide fast diffusion paths [41], G.P. zones may have already formed during RT storage, so that only their dissolution becomes visible as an endothermic peak at 110 °C.
Such behaviour was also observed in an UFG AA-7075 alloy [42].The large exothermic peak at 183 • C potentially relates to the maximum formation of a metastable phase.The following shoulder may be the transition to a more stable phase with its dissolution at higher temperature.However, no further insight can be derived from the DSC curves. It is noteworthy that both the CG and UFG alloys exhibit similar characteristics.It is quite feasible that the two exothermic peaks between 200 • C and 300 • C in the CG alloy have merged into one major peak at 183 • C in the UFG alloy.Additionally, the precipitation temperature is shifted to lower temperatures, indicating higher precipitation kinetics in the UFG alloy, which can be attributed primarily to accelerated diffusion.As previously indicated, the type of precipitate strongly depends on the Zn/Mg ratio.As reported in [43,44], when the Zn/Mg ratio is low, only T-phase precipitates are observed [3,45].Hence, given the alloy composition in our study, with a Zn/Mg ratio of 0.72, we expect to find exclusively T-phase particles.To validate these findings alongside our results and to study kinetics of the actual precipitates formed, we conducted XRD measurements at the peaks marked by arrows in Figure 1. Phase evolution To identify the precipitated phases, XRD measurements were performed.After linear heating the samples to the targeted peak temperatures of 34, 114, 233, 271 and 306 • C for the CG alloy and 110, 183 and 233 • C for the UFG alloy, we added an isothermal ageing time of 0, 1, 10 and 100 h, respectively to evaluate kinetics. The X-ray diffractograms of the CG samples are shown in Figure 2(a-c).In addition to the diffractograms, also the standard peak positions of the fcc-Al matrix [46] and the T-phase [47] are shown.Note that Bigot et al. [48] observed that the equilibrium T-phase and its precursor exhibit a hardly distinguishable crystal structure.Consequently, XRD does not allow to distinguish between the precursor and the equilibrium T-phase, but can indicate that T-phase or precursors are present. The lower peak temperatures, in particular 34 and 114 • C, are not shown in Figure 2, because only reflections of the Al matrix were observed.They are thus not displayed in Figure 2.However, with increasing temperature, reflections peaks of T-phase become more distinct from 233 • C/0 h up to 233 • C/100 h of isothermal ageing.Their increase is marked with black arrows within Figure 2.This is also the case for 271 and 306 • C. The increasing intensity of T-phase reflexes with increasing temperature and duration can be interpreted as the progressive formation of precipitates with T-phase structure. The T-phase fraction in the CG alloy as a function of the ageing time is shown in Figure 2(d).When isothermal ageing was conducted at 233, 271 , and 306 • C in the as-heated conditions (0 h), the phase fractions were measured to be 3.3%, 4.1% and 3.5%, respectively.After 100 h of isothermal ageing, the phase fractions at each temperature were 8.8%, 9.5% and 8.9% indicating an approaching (quasi-)equilibrium state.This also fits very well to calculated values of 11.3%, 9.9% and 8.6% for the T-phase fraction from Thermocalc. The X-ray diffractograms of the UFG material are displayed in Figure 2(e-g).Similar to the CG alloy, the UFG alloy shows an intensity increase of reflection peaks of precipitates with T-phase structure with increasing temperature and increasing ageing time. 
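For orientation, phase fractions in Rietveld-based quantification are commonly derived from the refined scale factors via the Hill-Howard relation, w_p = S_p (Z M V)_p / sum_i S_i (Z M V)_i. Whether Topas was configured in exactly this way for the present data is not stated, so the following short Python sketch is purely illustrative, with placeholder values rather than refined parameters from this study:

# Illustrative Hill-Howard weight fractions from Rietveld scale factors.
# S: refined scale factor, Z: formula units per cell, M: formula-unit mass, V: unit-cell volume.
def weight_fractions(phases):
    szmv = {name: s * z * m * v for name, (s, z, m, v) in phases.items()}
    total = sum(szmv.values())
    return {name: x / total for name, x in szmv.items()}

# Placeholder inputs (not refined values or crystallographic data from this work)
example = {
    "Al matrix": (1.0e-4, 4, 26.98, 66.4),   # fcc Al: Z = 4, M = 26.98 g/mol, V ~ 66.4 A^3
    "T-phase":   (2.0e-6, 1, 4000.0, 2900.0) # rough placeholder cell data for the T-phase
}
print(weight_fractions(example))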
The T-phase fraction in the UFG alloy as a function of the ageing time is shown in Figure 2(h).Even in the as-heated state (0 h), volume fraction with a Tphase structure of 1.8%, 3.7% and 7.8% were determined in the relatively low temperature range of 110, 183 and 233 • C. It can be assumed that this observation is due to the fact that precipitates in UFG-structured materials can easily form via the benefits of preferential grain boundary diffusion [49].After 1 h at 110 • C the phase fraction increases to 3.1%.When samples are heated at 183 • C/1 h and 233 • C/1 h, phase fractions of 9.0% and 9.4% were observed.Ageing for 10 h leads to an increase of the phase fraction of the 110 • C sample, resulting in 6.3%.Measurements at 183 and 233 • C showing 10.3% and 9.6%, respectively.However, at 100 h of isothermal ageing, all three samples exhibit again a (quasi-)equilibrium.The values of phase fraction for 110, 183 and 233 • C are 9.1%, 10.0% and 10.5%, respectively. In summary, XRD measurements revealed, apart from Al matrix, the sole presence of phases with T-phase structure in both materials.However, the important question of the local position of the precipitates in the UFG material, at GBs or matrix, cannot be derived from XRD data.This will be answered by STEM investigations in the next section. Precipitate characteristics STEM investigations were carried out at 233, 271 and 306 • C for the CG alloy and at 183 and 233 • C for the UFG alloy in the as-heated condition, as indicated in Figure 1.As displayed in Figure 3(a), the investigations at 233 • C revealed for the CG alloy that the main alloying elements Mg and Zn are present in the T-phase (note that no distinction is made between precursors and equilibrium of the T-phase).The elemental mappings does not clearly show an enrichment of Cu and Ag within the precipitate at this state.The morphology appears in a round shape and shows similarities as reported by Stemper et al. [3].Only very few elongated precipitates can be found as shown in Figure 3(a) in the HAADF image.The size of the particles is in average 6.7 ± 0.7 nm. At 271 • C, the incorporation of Cu and Ag within the T-phase was detected as shown in Figure 3(b).A detailed examination of the HAADF image reveals that the majority of the precipitates exhibit a spherical morphology.The presence of Cu and Ag within the T-phase suggests their potential role in modifying the characteristics and properties of the precipitates.The particle size increased to 10.4 ± 1.4 nm. At 306 • C it is visible that Cu and Ag are clearly involved within the particles (Figure 3(c)).Cu may show a core/shell tendency, but a more detailed investigation is needed to fully clarify this issue.The morphology did not change significantly and their shape is predominantly spherical, but the size increased to 14.9 ± 5.2 nm. The analysis of the UFG alloy at 183 • C reveals the formation of elongated T-phase, consisting mainly of the alloying elemens Mg and Zn, and distributed preferably discontinuously at GBs (Figure 3(d)).This observation is an indication that no fully coherent precipitates participate in the first steps of the precipitation sequence, and that increased diffusion along GBs plays an important role.A similar UFG structure is also reported in literature [50,51].The T-phase thickness (i.e.transverse length) were measured to be 7.1 ± 1.3 nm. 
When the sample is heated to 233 °C, precipitation within the matrix becomes visible in the UFG alloy (Figure 3(e)). Precipitation at GBs still occurs discontinuously, but the elongated T-phase slightly increased in thickness (12.4 ± 2.9 nm). At both precipitation sites, all alloying elements are incorporated into the T-phase. Precipitates within the grains appear to have both elongated and round shapes.

Thermal stability Another important question arises regarding the thermal stability of both the UFG structure, in terms of grain growth (recrystallization), and the T-phase precipitates, in terms of growth and dissolution. We investigated this behaviour using in situ TEM heating experiments; the results are shown in Figure 4. Upon heating to 230 °C, small precipitates can be seen primarily at GBs, which is consistent with the results presented in Figure 3. The UFG microstructure has not changed up to 280 °C, which can presumably be attributed to the pinning effect of T-phase particles on the GBs. At approximately 300 °C, coalescence is noted and precipitates grow in size due to Ostwald ripening [52]. These particles, as indicated with an arrow, gradually diminish in size and eventually disappear completely at 346 °C. It should be noted that the volume fraction of T-phase decreases with increasing temperature (solvus temperature according to the Thermocalc simulation: 450 °C). Consequently, the precipitates partially dissolve, as exemplified in Figure 4. Simultaneously, as the T-phase particles dissolve, the average grain size of the UFG microstructure increases. However, the average grain size remains in the UFG regime. Given the significant amount of energy stored in GBs [53], UFG alloys are known to be susceptible to grain growth.

Conclusion A coarse-grained and a novel ultrafine-grained AlMgZnCuAg crossover alloy were investigated and the major differences were revealed using DSC, XRD and (in situ) TEM techniques. Grain-size effects on the precipitation sequence were unravelled for an UFG aluminium crossover alloy for the first time. The grain size affects the precipitation behaviour, and the following conclusions can be drawn: (1) In both alloys, precipitation is governed by particles with a T-phase structure type. Isothermal ageing at different temperatures for up to 100 h did not change their crystal structure. (2) The kinetics of precipitation differ between the CG and UFG alloys. The UFG alloy reaches the equilibrium T-phase fraction faster and at lower temperature than the CG alloy, most likely due to fast diffusion at GBs. (3) While transgranular precipitation dominates in the CG alloy, the UFG alloy is characterized by discontinuous precipitates at GBs (intergranular) accompanied by precipitates within the matrix. (4) Precipitation of the T-phase at GBs leads to a high thermal stability, reflected by a resistance to grain growth in the UFG alloy up to 280 °C.
Figure 1. BFTEM images of (a) the coarse-grained and (b) ultrafine-grained alloy. (c) DSC heat-flow curves of the coarse-grained (blue continuous line) AlMgCuZnAg crossover alloy after solution heat-treatment (465 °C/35 min) and of the ultrafine-grained (green dashed line) AlMgCuZnAg crossover alloy. The sample was stored at RT for 30 days. The DSC experiments were performed with a linear heating rate of 10 °C min⁻¹.

Figure 2. X-ray diffractograms of the coarse- and ultrafine-grained crossover Al alloy. The samples were heated with a linear heating rate of 10 °C min⁻¹ up to (a) 233 °C, (b) 271 °C, (c) 306 °C and (e) 110 °C, (f) 183 °C, (g) 233 °C, respectively. The plots in (d) and (h) display the phase fraction of T-phase as a function of temperature and duration of isothermal ageing for the CG alloy and the UFG alloy, respectively. Note that the phase fractions were determined by Rietveld refinement. Black arrows point out the increase of the reflection peaks.

Figure 3. STEM HAADF and EDX elemental mappings of the coarse- and ultrafine-grained crossover Al alloy. The samples were heated with a linear heating rate of 10 °C min⁻¹ up to (a) 233 °C, (b) 271 °C, (c) 306 °C for the CG alloy and to (e) 183 °C, (f) 233 °C for the UFG alloy.

Figure 4. BFTEM images illustrating in situ TEM experiments conducted as a function of temperature on the UFG AlMgZnCuAg alloy. The alloy underwent heat treatment using a MEMS heating holder, employing a linear heating rate of 10 °C min⁻¹. Images are extracted from the video file; the scale bar displayed at RT applies to all micrographs.
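As a back-of-the-envelope complement to conclusion (4), the observed grain-growth resistance can be rationalized with a classical Zener-type pinning estimate, d_lim ~ 4r/(3f). The sketch below uses a particle radius of the order reported in this study and treats the measured T-phase weight fraction as a rough proxy for the volume fraction, so it is an order-of-magnitude illustration and not part of the original analysis:

# Zener-type estimate of the limiting grain size pinned by GB particles (illustrative only)
r = 6e-9   # particle radius in m, of the order of the ~12 nm T-phase thickness reported above
f = 0.10   # pinning-phase fraction; weight fraction used here as a rough proxy for volume fraction
d_limit = 4 * r / (3 * f)
print(f"estimated Zener limit: {d_limit * 1e9:.0f} nm")  # ~80 nm, of the order of the reported UFG grain width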
4,634
2023-11-15T00:00:00.000
[ "Materials Science" ]
Empowering educators: A training for pre-service and in-service teachers on gender-sensitive STEM instruction Starting early in life, children, especially girls, experience obstacles when it comes to developing interest in STEM. Although teachers face an important task in promoting girls (and boys) in STEM, they often encounter hurdles in doing so. A three-month-long training for pre-and in-service teachers in elementary education was developed to counter this phenomenon. An important training feature was teaching ideas for STEM classrooms. Teachers ’ evaluation of the training and teaching ideas, changes in their self-concept INTRODUCTION Knowledge, competencies, and reflected attitudes regarding STEM (science, technology, engineering, and mathematics) are prerequisites for successfully coping with global and societal changes in the economic and working worlds (OECD, 2020).New fields of work are emerging that require an integration of mathematics, computer science, natural sciences, and technology.STEM knowledge and competencies reach beyond academic settings and individual needs, impacting society as a whole when it comes to, e.g., health and prosperity (OECD, 2020).In contrast to its importance, studies point to the obstacles hindering the development of students' STEM competencies (Lane et al., 2022;Mazana et al., 2019), or STEM achievement being strongly dependent on children's socio-economic background and gender (Itzlinger-Brunefort, 2020).Furthermore, STEM subjects are not among the mostliked school subjects, with girls often shying away from them (Luttenberger et al., 2018); a majority of girls feel estranged from STEM subjects as early as in their formative elementary years (Luttenberger et al., 2018). With the importance of STEM in mind, teachers are faced with the task of promoting STEM education and supporting students from their early educational years onward (Lange et al., 2022).However, educational systems in Europe face strong hurdles in fulfilling this aim.These range from a low percentage of teachers choosing STEM as their teaching subjects, general teacher shortages, and teachers' critical attitudes toward STEM (Ortiz-Revilla et al., 2023).Given this context, a training concept for pre-and in-service teachers in Austria was developed that focuses on STEM instruction 2 / 13 in elementary education (age range six to ten).The concept aims to address potential reservations of teachers and present teaching ideas that not only convey STEM knowledge in an interesting and gender-sensitive way but also incorporate essential 21 st century skills, making them appealing to both girls and boys. Barriers to & Inequalities for STEM Students in Elementary Education International studies on students' STEM achievements show for a majority of countries that boys outperform girls.This is most notably in the 2019 TIMSS assessment, where in two-thirds of the participating countries boys outperformed girls in mathematics (Suchań et al., 2019).There are mixed results for Austria, where the present study took place.On the positive side, in the 2019 TIMSS assessment, Austrian students in elementary education (age range six to 10 years) scored in the upper performance range in mathematics compared to the EU.However, these same students scored only in the middle range in science.STEM barriers impacted girls more than boys.For instance, in 2019, a significantly higher number of boys than girls were able to solve complex mathematical problems (Itzlinger-Bruneforth, 2020). 
Gender differences are not limited to performance, but also concern personal characteristics and attitudes.They are accompanied by girls' lower academic selfconcept in comparison to boys, more anxiety (especially in mathematics), lower interest in STEM, and a preponderance of gender stereotyped career aspirations (Godec et al., 2024;Luttenberger et al., 2018;Martynenko et al., 2023).A large part of these obstructive attitudes can be attributed to stereotypes about females' abilities in STEM (Bieg et al., 2015;Ertl et al., 2017).Girls internalize stereotypes about lower math abilities, regarding themselves as less gifted than boys.In assessment situations, the internalized stereotype affects the perception of task difficulty, and is related to increased strain and tension, and decreased performance (Ertl et al., 2017).Over the course of childhood and adolescence, self-depreciatory assessment, and anxiety lead to avoidance of STEM, detrimental learning behaviors, and lower performance (Else-Quest et al., 2010). Parents also shape their children's educational values and self-assessments through their attitudes and personal background toward STEM (Eccles, 2005;Wildmon et al., 2024).Parents' beliefs do not necessarily rely on objective assessments, as they for example may maintain stereotypical evaluation patterns (Ertl et al., 2017;Rodriguez-Planas & Nollenberger, 2018).Their views of STEM in turn serve as a frame of reference, meaning that they may transfer their own STEM attitudes to their children.Mothers in particular influence their daughters' attitudes toward mathematics, self-assessments, and mathematics anxiety (Casad et al., 2015).In addition to gender, socioeconomic and family-related factors influence children's developments in STEM from an early age onwards (Jones et al., 2022;Nugent, 2015).Learning to participate in STEM is a form of cultural capital developed in childhood through engagement with parents, as well as out-of-school science experiences over time (Claussen & Osborne, 2013;Ennes et al., 2023).Children's family and social contexts also differ regarding the availability of science resources, or STEM role models such as parents and other adults (Eccles, 2005;Jacobs & Bleeker, 2004;Luttenberger et al., 2018;Rodriguez-Planas & Nollenberger, 2018;Watt et al., 2019).So, children do not start school as blank slates in terms of STEM, but bring with them knowledge, experiences, and attitudes that were taught at home (Eccles, 2005).Studies furthermore point out that from elementary education onwards, students steadily lose interest in STEM fields, and cease seeing these subjects as a viable option for their future or as part of their potential success (Christidou, 2011;Potvin & Hasni, 2014).In this context, teachers face the crucial task of designing STEM instruction that not only imparts knowledge but also integrates 21 st century skills, motivates students, and fosters joy in learning.Positive learning experiences form the basis for promoting children's interests and attitudes toward learning content. 
Barriers to STEM for Teachers in Elementary Education Although teachers are essential in promoting students' STEM interest and knowledge, they often are also affected by barriers to STEM.This is especially seen with elementary education teachers, who themselves are not necessarily among those exhibiting a high level of self-concept, self-efficacy, or a positive attitude toward Contribution to the literature • A comprehensive training concept for pre-and in-service elementary school teachers to promote girls in STEM was designed; it focuses on pedagogical-didactic knowledge and STEM-related attitudes and illustrates gender-sensitive STEM didactics using good-practice examples.• Gender-sensitive STEM teaching ideas were developed and evaluated for a target group that has been neglected in research: Elementary school students in need of support due to deficits in knowledge.• STEM teaching examples that are geared towards promoting girls (e.g., by including female-role models) are also rated positively by boys and enhance the overall quality of learning and teaching in STEM. EURASIA J Math Sci Tech Ed, 2024, 20(6), em2452 3 / 13 STEM.The profession of an elementary education teacher is mostly regarded as one in which social competences, but not STEM competences, are of crucial importance.Therefore, it is not surprising that in many countries, in-service and pre-service teachers in elementary education are largely female, often with very critical attitudes toward, gender stereotypes about (Kollmayer et al., 2018), and even high anxiety levels regarding STEM subjects like mathematics (Foley et al., 2017). In elementary education, teachers have an especially significant influence on their students' formation of attitudes, beliefs, and feelings toward a subject or knowledge domain (Furner & Bermann, 2003).However, teachers themselves may suffer from gender stereotypes (Tiedeman, 2000(Tiedeman, , 2002)).Female elementary school teachers particularly influence girls, and not always in a desired way.For example, studies on mathematics have shown that the teacher's level of math anxiety influences the achievement of the girls in their classes, as well as the girls' formation of self-efficacy and self-concept in mathematics (Beilock et al., 2010).Teachers, of course, may also have a positive influence on their students and promote positive attitudes, high degrees of motivation, and a sense of self-efficacy and self-concept (Bayanova et al., 2023).Teachers' attitudes, beliefs, and feelings are also related to their teaching style and instructional strategies, e.g., the degree to which they include challenging tasks in mathematics instruction (Thompson et al., 2022). Teachers' beliefs, attitudes, and feelings can be altered through training and education (Foley et al., 2017).As the relationship between teachers' attitudes, beliefs, feelings, and classroom practice is dynamic, with each influencing the other (Russo et al., 2020), learning (and applying) effective and motivating teaching skills are especially important. 
Role of STEM in Elementary Education in Austria With the exception of mathematics, the current Austrian elementary education curriculum does not include other specific STEM subjects; teachers can merely integrate STEM topics into their "general studies."The purpose of this subject is to provide students with the opportunity to explore their immediate environments and acquire knowledge about the world.Students here should be supported in understanding their natural, cultural, social, and technical environments (BMBWF, 2023a). The curriculum lists a variety of topics, which can be addressed in this subject, ranging from the cultural heritage of Austria and Europe, everyday knowledge, ethics, geography, and STEM.General studies are not specifically STEM-related, and it is mostly up to teachers to determine how much STEM content they include.In light of the role that international organizations like the OECD (2020) attribute to STEM education, STEM topics are generally underrepresented in the Austrian elementary curriculum.At the same time, the results of school achievement studies on science and mathematics point to gender inequalities in Austria on the elementary education level.Overall, these findings suggest that more attention should be paid to promoting girls (and boys) when it comes to STEM. Development of Education & Further Training Concept "Girls, Go for MINT!" for Elementary Education Teachers In light of the above findings, a training and further education concept and intervention "Girls, go for MINT!" ("MINT" being the German acronym for STEM) was developed for pre-service and in-service teachers in elementary education.Thus, a target group that is still underrepresented in STEM education research (Phuong et al., 2023) was addressed. A model & recommendations for trainings on STEM education With the SciMath-DLL professional development three-component model, Brenneman et al. (2019) developed a model for training educators in STEM.The model builds up on important didactic elements like learning about STEM content and teaching, including reflection on practice and professional development, opportunities for discussion, addressing beliefs and attitudes about STEM.With such didactic elements, teachers' skills of independent teaching, reflection on their own professional behavior, motivation, and positive attitudes should be strengthened (Way et al., 2022).Furthermore, Brenneman et al. (2019) emphasize that such training should offer the participating educators' opportunities to implement their knowledge in pedagogical practice, i.e., in the classroom. As it is described in the next paragraph, the training intervention "Girls, go for MINT!" incorporated features that Brenneman et al. 
(2019) regard as important in their model.A key feature of the training intervention are socalled teaching ideas for different STEM domains.They were developed to appeal to girls (as well as boys) and were implemented by the training participants in their education practice.They incorporate didactic elements that have found to be important for gender-sensitive STEM instruction, e.g., studies have found that STEM content and instruction are more appealing to girls when their relevance for everyday life and importance for society are highlighted; when the use of social skills and cooperative learning forms are emphasized; when active engagement and interest (Häussler & Hoffmann, 1998;Wodzinski, 2009), and self-efficacy are promoted, e.g., by hands-on tasks in which the children experience the outcomes of their actions (Inan & Inan, 2015;Lange et al., 2022). According to Wodzinski (2009), even though such characteristics are important to address girls in STEM, they do not impair boys' learning.Additionally, in each teaching idea a connection between learning contents and specific professions is drawn and female role models who have made special achievements in the corresponding field are presented (see González-Pérez, 2020 on importance of role models). Description of training & further education concept The training "Girls, go for MINT!" encompasses three parts with different learning opportunities for the participants: (1) introductory learning sequences on gendersensitive teaching, motivation and interest, and the role of stereotypes for children's learning and development, (2) teaching ideas for the STEM elementary education classroom, and (3) opportunities for the participants to reflect upon their roles and responsibilities in teaching elementary school children, while reflecting on their own attitudes, beliefs, and emotions concerning STEM. The teaching ideas incorporate the didactic features described above.Additionally, in each teaching idea, female role models are presented.Teaching ideas pertaining to four different STEM fields were developed, and all of them fit well into elementary level curricula from grades one to four (BMBWF, 2023a): 1. Science-Human body & skeleton: The teaching idea relates to the human body, in particular the skeletal system and organs.Illustrative materials including an anatomic model, skeleton, and use of a stethoscope support exploratory activities.The development of scientific thinking, methods of scientific work and exploration, and the examination of the relationship between humans and nature are the focus of the embedded tasks. Technology-Building, constructing, & chain reactions: The teaching idea for technology deals with forces in everyday life, and how these can be integrated into a chain reaction.Competency areas of the elementary-level curriculum such as building and constructing, exploring technology, and the practical use of technology products are addressed.In addition, analytical thinking and understanding of technical principles, functions and operations are included, which can also be found in the field of computer science and mathematics. 
Computer science-Coding: This teaching idea incorporates unplugged programming activities, where children learn coding concepts without using a device, as well as learning with robot devices such as Bee-Bots.Children learn analytical thinking, an understanding of technical principles, and functional and operating methods (Sun et al., 2021).The topic of coding was chosen because it is central to many STEM activities and professions.The concept of STEM explicitly includes mathematics, computer science, natural sciences, and technology, making it obvious to develop a teaching idea for computer science. 4. Mathematics-Measures & sizes: Measures and sizes are illustrated in tasks for measuring the human body.Concepts of size and measures are used throughout our experiences in and outside of school and are fundamental to understanding formal science processes and concepts.The principle of measuring and comparing sizes is an essential basic mathematical skill that plays an important role in everyday life in particular, as well as in the entire field of science and technology.Conceptualizing size and quantity has a developmental progression that is likely affected by multiple cognitive components (Jones et al., 2011). The learning ideas reflected the interdisciplinary and integrated nature of STEM, e.g., by overlapping subject areas or integrating concepts that are fundamental to all STEM fields (for integration of STEM-fields in instruction, see also Seebut et al., 2023;Zhang & Zhu, 2023).Each teaching idea was designed to take up approximately three hours (180 minutes), while allowing teachers large degrees of freedom regarding the time frame, materials, and general design instruction. Research Questions The present study aims at an evaluation of the developed education and further training program regarding assessments by teachers and their elementary education students.The following research questions are investigated: 1. Pre-service teachers: Do the pre-service teachers prefer specific teaching ideas over others with regard to overall pedagogical quality and gendersensitive instruction?To which degree do the preservice teachers evaluate the teaching ideas positively or negatively regarding quality and gender-sensitive instruction?Does the training intervention change participants' academic selfconcept in STEM? 2. Elementary education students: Do girls and boys differ in their assessments of learning with the teaching ideas?To which degree do elementary education students evaluate teaching ideas and learning with them positively or negatively? Sample of Pre-Service Teachers, Time Schedule, & Variables The training and further education program "Girls, go for MINT!" was implemented at a university college of teacher education in Austria within a larger preparatory course for 34 pre-service teachers who had registered voluntarily for teaching in the "summer school 2022" program.Summer school in Austria works with students of elementary as well as secondary education who require targeted support.Summer school was introduced as a nationwide educational program by the Austrian Ministry of Education in 2021.It is always held in the last two weeks of summer vacation and aims to reduce students' deficits from the previous school year and prepare for coming one (BMBWF, 2023b). 
Only the sample of female teachers (n=32) was investigated.At the start of the training program, 31 of them were bachelor students, with one pursuing a master's degree.Their degrees required them to select a specialized subject at the start of their studies even though they later can teach all main subjects in elementary education.Ten women (30.30%) had chosen language didactics; eight (24.24%) a subject focusing on fostering specific needs; five (15.15%) sports and health; five (15.15%) children's needs in the school entrance phase; four (12.12%)STEM; and two (6.06%) media didactics.Six (18.75%) teachers had taught at a school part-time, while 26 (81.25%) had gathered didactic experiences as part of school internships (one missing).None of the volunteering teachers dropped out of the training or summer school. Each pre-service teacher was assigned to a specific school by the school authority of that respective federal state in Austria.Altogether, the pre-service teachers taught summer school at 27 different schools (some participants taught at the same school). The training intervention "Girls, go for MINT!" was implemented as blended learning, including a Moodle learning environment, Webex videoconferencing, and in-person learning. Sample of Students in Elementary Education & Variables The pre-service teachers conducted the teaching ideas with 330 summer school children (164 girls and 166 boys) in grade 1 to grade 4 (the children's grade in the previous school year).Each child experienced one or two of the teaching ideas. After learning with an idea, children used three items to assess the degree to which they experienced joy in learning and found learning exciting, plus the degree to which their summer school class had enjoyed learning.Items were formulated in such a way that they always referred to the specific content, e.g., "I had fun measuring my own and my classmates' bodies" or "my class liked the chain reactions". All items were answered on a five-point Likert scale ranging from one (negative assessment) to five (positive assessment).To support the children, scale values were also expressed using smiley symbols and, if necessary, items were read to them. 
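Several of the analyses reported below test whether mean ratings deviate from the midpoint of the response scale using one-sample t-tests with Bonferroni-adjusted alpha levels. As a purely illustrative sketch of such a test (with made-up ratings on the teachers' six-point scale, not the study data), this could look as follows in Python:

from scipy import stats

ratings = [5, 6, 4, 5, 6, 5, 4, 6]   # hypothetical item ratings on a six-point scale
scale_midpoint = 3.5                  # midpoint of a 1-6 Likert scale
n_items = 4                           # number of items tested per teaching idea
alpha = 0.05 / n_items                # Bonferroni-adjusted significance level (.0125)

t, p = stats.ttest_1samp(ratings, popmean=scale_midpoint)
print(round(t, 2), round(p, 4), "significant" if p < alpha else "not significant")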
Evaluation of teaching ideas
Two research questions were investigated: whether the pre-service teachers preferred specific ideas over others, and the degree to which they positively (or negatively) evaluated an idea. Table 2 shows the descriptive statistics for the assessments of the teaching ideas. Differences between assessments of the teaching ideas were evaluated with a repeated measures multivariate analysis of variance (MANOVA, Wilks' Lambda F[12, 20] = 2.290; p ≤ .05, η² = .579). Evaluations differed significantly for the first three items (suitability for pedagogical practice: Greenhouse-Geisser F[2.301, 93] = 5.324, p ≤ .01, η² = .147; role models: F[2.668, 93] = 6.513, p ≤ .001, η² = .174; girls' interest: F[2.705, 93] = 3.309, p ≤ .05, η² = .096). Pair-wise comparisons and the direction of differences between the teaching ideas are shown in Table 3. Only a statistical tendency was found for the assessment of children's overall interest; the Greenhouse-Geisser F-value was employed due to lack of sphericity, F[2.334, 93] = 2.747, p = .063, η² = .081. Furthermore, the degree to which pre-service teachers positively or negatively assessed the teaching ideas was investigated. For each item assessing a specific teaching idea, a one-sample t-test with Bonferroni adjustment was carried out to investigate whether the mean value of teachers' evaluations deviates significantly from the mean of the scale (α-level of .0125 in the case of four items; examples for this methodology in, e.g., Paechter & Maier, 2010; Taleb et al., 2017). The respective t-values are shown in Table 3. For the items on suitability of the teaching idea for pedagogical practice and on promotion of children's interest, the pre-service teachers' assessments deviated significantly from the scale mean, speaking in favor of positive evaluations. Concerning the assessment of whether a teaching idea is suitable for presenting female role models for STEM, a significant value was found only for the teaching ideas on science and technology; they were favorably evaluated. Concerning the assessment of whether a teaching idea is especially suitable for promoting girls' interests, no significant deviations were found, i.e., all teaching ideas were evaluated with values lying in the middle of the scale (neither favorable nor unfavorable evaluations).

Pre-service teachers' changes in self-concept
It was investigated whether participants' academic self-concept in STEM would change over the course of the training. Table 4 shows the descriptive statistics for pre-service teachers' assessments of their social academic self-concept at the start and end of the training intervention. MANOVA suggests a positive change in the self-concept (Wilks' Lambda F[2, 30] = 3.410; p ≤ .05, η² = .185) between the two points in time. However, only the difference for the first item was significant (first item: F[1, 31] = 5.957, p ≤ .05, η² = .161; second item: F[1, 31] = 2.417, p = .130, η² = .072).

Students in Elementary Education
It was investigated whether girls and boys differ in their evaluations of the teaching ideas. 
In a first step, ANOVAs considering the hierarchical structure of the sample (i.e., that individual children are nested within summer school groups) were conducted to test for differences between female and male students (SPSS MIXED procedure with the fixed factor gender; Field, 2018). Random intercepts and random slopes were added to the ANOVAs. Random intercept models allow the mean values between groups to differ; they consider that intercepts may vary across groups. Random slope models allow each group to have a different relation between independent and dependent variables; they consider that slopes may vary across groups. Both the random intercept and the random slope model were tested against the baseline model that did not account for the multi-level structure. The random intercept model performed significantly better than the baseline model in seven of 12 cases (4.80 ≤ χ²[1] ≤ 18.18) and the random slope model performed significantly better than the baseline model in three of 12 cases (13.45 ≤ χ²[1] ≤ 14.36). However, neither the variance of the intercepts nor the variance of the slopes was significantly different from zero in any of the 12 cases. With no significant differences between intercepts and between slopes, a MANOVA that does not account for the multi-level structure can be conducted and used for interpretation. This also has the benefit that alpha inflation is not an issue. Hence, a MANOVA was conducted for the three assessments for each teaching idea to test for differences between girls and boys. MANOVA results reject the assumption of gender differences for all four teaching ideas (science: F[3, 162] = 0.395, p = .813, η² = .023; technology: F[3, 40] = 0.317, p = .813, η² = .023; computer science: F[3, 76] = 1.269, p = .291, η² = .048; mathematics: F[3, 86] = 0.389, p = .767, η² = .013). Table 5 shows the students' evaluations of the teaching ideas with values for the whole sample as well as for girls and boys separately. Furthermore, the degree to which children positively or negatively assess the teaching ideas was investigated. One-sample t-tests with Bonferroni adjustments were carried out for each idea and item (α-level of .0167 for three items). These investigated the assumption that the mean value of students' evaluations deviates significantly from the mean of the scale. The respective t-values are shown in Table 5. As all p-values were below p ≤ .001, all tests were significant.

Evaluation of teaching ideas regarding overall pedagogical quality
The pre-service teachers assessed the suitability of the teaching ideas for the teachers' pedagogical practice and their potential to promote children's interest. It was investigated whether specific teaching ideas were assessed more positively or negatively than others. Hardly any differences were found in the assessments. Only regarding the suitability of the teaching ideas for teachers' pedagogical practice did the teaching idea for coding (computer science) receive a more critical assessment. However, despite these (smaller) differences, all teaching ideas received very positive evaluations (significantly higher than the scale mean of 3.50), with means ranging between 4.50 and 5.50 on a six-point Likert scale. 
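To make the test against the scale midpoint concrete, the following is a minimal sketch of a one-sample t-test per item with a Bonferroni-adjusted alpha; it is not the original SPSS analysis, and the item labels and rating vectors are hypothetical placeholders.

```python
# Minimal sketch of a one-sample t-test against the scale midpoint with a
# Bonferroni-adjusted alpha (not the original SPSS analysis); all ratings
# below are hypothetical placeholders.
from scipy import stats

SCALE_MIDPOINT = 3.5   # midpoint of the six-point teacher scale
ALPHA = 0.05

items = {
    "suitability for practice":     [5, 6, 5, 4, 6, 5, 5, 6],
    "promotes children's interest": [5, 5, 6, 4, 5, 6, 5, 5],
    "presents female role models":  [4, 3, 4, 5, 3, 4, 4, 3],
    "promotes girls' interest":     [3, 4, 3, 4, 4, 3, 3, 4],
}
alpha_adj = ALPHA / len(items)   # .0125 for four items, as in the text

for name, ratings in items.items():
    t_stat, p_val = stats.ttest_1samp(ratings, popmean=SCALE_MIDPOINT)
    verdict = "significant" if p_val < alpha_adj else "not significant"
    print(f"{name}: t = {t_stat:.2f}, p = {p_val:.4f} -> {verdict} at alpha = {alpha_adj:.4f}")
```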
Evaluation of teaching ideas regarding gender-sensitive instruction
The pre-service teachers assessed the capacity of the teaching ideas (and thus of the didactic concept) to raise girls' interest, as well as their suitability to present female role models in STEM. The teaching idea for science was regarded as the most suitable to boost girls' interest and present female role models; it received better evaluations than the ideas for mathematics (measuring) and coding. Although both the idea for science and the one for mathematics tie into the "human body" topic, the former was rated more positively. This result is all the more interesting, as both ideas include features of gender-sensitive didactics. A possible explanation for this could be that for the teachers, the science teaching idea was strongly associated with the domains of biology and medicine. These belong to natural science domains that are less frequently seen as "typically male" and have a high proportion of women in the respective professional domains and vocational programs (Ertl et al., 2017). Studies furthermore point out that teachers of young children (in many countries almost exclusively women) are not equally receptive to all STEM fields. This was also evident in a study by Pendergast et al. (2017) in which the female teachers preferred topics in life science over earth science or physical science. Pendergast et al. (2017), however, did not specifically investigate the gender-sensitivity of instruction.

The teaching ideas were evaluated more critically regarding the features of gender-sensitive instruction. For all teaching ideas, the assessments of their suitability to especially promote girls' interests were only in the middle of the scale. Concerning their suitability to present female role models in STEM, only the ideas for science and technology received evaluations above the scale mean. This result is even more interesting, as all teaching ideas include characteristics that, according to research, are important and suitable for appealing to girls: linking STEM content to application and everyday-life topics, immediate sense of accomplishment through hands-on experiences, opportunities for social interactions and collective engagement, gender-neutral language, incorporation of female role models, etc. (Häussler & Hoffmann, 1998; Stephenson et al., 2022; Wodzinski, 2009). Furthermore, the teachers' assessments did not mirror the girls' assessments. The girls evaluated all ideas very favorably, including those which, according to the teachers, were less suitable for gender-sensitive instruction.

These results raise the question of whether teachers recognized that the design features used are important for promoting girls in STEM, and which features they perceive as important. They also emphasize the need for addressing issues of gender-sensitive STEM education in teacher education, as well as the requirement for more research on pre-service teachers' conceptions of gender-sensitive instruction. 
Pre-service teachers' changes in academic self-concept
In the study, the pre-service teachers' social self-concept concerning teaching in STEM was measured at the beginning and at the end of the training. Studies of teacher training in STEM emphasize that it is important to address not only subject matter and pedagogical knowledge, but also subjective job-related attitudes such as self-concept in training. These are factors that are also important for enjoyment and satisfaction in the teaching profession (Brenneman et al., 2019; Lange et al., 2022). Bagiati and Evangelou (2015) emphasize the importance of confidence in professional skills, built through experience, as a facilitator for teachers introducing technical learning contents to young children in the classroom.

STEM research has shown that elementary education teachers in particular (predominantly women) tend to have a more critical self-concept in STEM and often shy away from STEM (Kollmayer et al., 2018). These results fit the present study, in which only a few of the pre-service teachers had chosen STEM as a special subject. Given this, it is of importance that the female teachers were able to develop a more positive self-concept in STEM from the beginning to the end of the nearly three-month training program. During their training, they received ample opportunities to learn more about teaching in STEM, both in general and with a specific focus on girls. They also had the opportunity to apply their new knowledge and skills directly in the summer school classroom. In this regard, the training displayed success that went beyond cognitive learning goals, while also affecting attitudes.

Elementary Education Students' Evaluations
Two questions were investigated: whether the teaching ideas were attractive for the children, and whether they were appealing to both girls and boys. Possible reservations by the pre-service teachers concerning the teaching ideas' potential to appeal to girls could not be confirmed. No differences between girls' and boys' evaluations could be found. This aspect is even more interesting and important in light of how the ideas specifically incorporated female role models. Altogether, the results speak for the didactic measures taken to increase the appeal of STEM education for both girls and boys (didactic measures that are recommended for gender-sensitive instruction, see Dierickx et al., 2022; Stephenson et al., 2022). As learning with the teaching ideas was positively evaluated by both boys and girls, the measures seem to have been appealing to both genders. The inclusion of the above didactic measures would also be supported by research on gender-sensitive STEM instruction, e.g., Wodzinski (2009, p. 
583 on gender-sensitive didactics in physics) has pointed out that "orientation of teaching towards girls also benefits boys and improves the quality of … teaching" and "if the lessons are directed towards the girls, it is also right for boys …"

Limitations
This study is not without limitations. For future research it would be desirable to include a control group for measuring the temporal development of the teachers' academic self-concept. Also, teachers' and children's assessments could have been influenced by the novelty of the teaching approach or by social desirability. The results do not allow the identification of specific didactic characteristics appealing to girls. It can merely be concluded that the array of different didactic characteristics in each teaching idea was important for the promotion of girls' (and boys') interests and learning.

CONCLUSIONS & PRACTICAL IMPLICATIONS
The study was targeted at an important but still under-researched group in STEM, which according to Phuong et al. (2023) is in need of support and more research: elementary education students and their teachers. International achievement tests like TIMSS identify challenges for the development of STEM competencies in this cohort of students, with girls being particularly impacted (Itzlinger-Bruneforth, 2020). In response, an education and further training concept was designed, implemented, and evaluated. Teachers gained not only knowledge and professional skills over the course of the training, but also developed a higher, more confident self-concept (Lange et al., 2022). Although the training content received overall positive evaluations, instructors exhibited preferences for certain topics, assuming similar preferences among their students. However, these presumptions were not consistently accurate. Particularly concerning gender disparities, educators tended to underestimate girls' levels of motivation and interest.

The following implications and recommendations for teacher training and design of teaching ideas emerge from this research. 1. Addressing professional skills and attitudes in STEM teacher training: While educators are faced with the crucial yet challenging task of supporting children in STEM, they themselves encounter obstacles within the field (Foley et al., 2017; Kollmayer et al., 2018). However, children's advancement in STEM is not solely influenced by teachers' expertise; their attitudes also play a pivotal role (Feierabend et al., 2024; Lange et al., 2022). Hence, our training intervention effectively targeted both professional skills and the STEM self-concept of pre-service teachers. 2. Explicitly addressing equity issues in STEM teacher education: Since pre-service teachers are susceptible to gender-stereotyped attitudes, it is advisable to openly discuss and reflect on equity matters during training sessions (Archer et al., 2022; Chowdhuri et al., 2023). Our intervention incorporated this approach in different ways, particularly in theory-based modules, reflection sessions, and through the integration of good-practice examples. 3. Implementing good-practice teaching ideas in the training materials: The above arguments also speak in favor of incorporating good-practice examples that provide educators with practical guidance. 
4. Application of gender-sensitive didactic elements in STEM: The teaching ideas developed in our intervention embraced gender-sensitive practices, including featuring female role models, facilitating hands-on experiences (Stephenson et al., 2022), and referencing the everyday lives of both girls and boys (Dierickx et al., 2022). The results indicate that the teaching ideas could engage the interests of both genders. 5. Need for evaluation by students: Teachers cannot always accurately assess the abilities and preferences of their students (Rebmann et al., 2015; Seidel et al., 2021). This is also evident in our study in the comparison of the pre-service teachers' assessments of the teaching ideas' gender fairness and the actual assessments of the female and male students. 6. Implementation of STEM already in elementary education: Internationally, education systems vary in when STEM subjects are introduced. As described, science, technology and engineering are not systematically introduced in elementary education in Austria. The present study shows that this is indeed possible and that these subjects arouse pupils' interest.

Altogether, the results speak in favor of the training intervention, especially concerning the teaching ideas. The teachers learned about good-practice examples for STEM instruction, which they mainly evaluated positively. The intervention was accompanied by changes in the teachers' academic self-concept. However, the results point out gaps in knowledge about teachers' attitudes, their professional knowledge about gender-sensitive didactics, and further needs for teacher training. Another important result concerns the instructional design of the STEM teaching ideas; the findings speak for didactic characteristics like the ones implemented in the teaching ideas and offer ideas for designing instruction in the STEM classroom. Table 1 shows the time schedule, contents, aims, and set-up of the training.

Table 1. Time schedule, contents, aim, & set-up of the training
Table 2. Descriptive statistics for pre-service teachers' evaluations of teaching ideas after introduction at in-person workshops (for all assessments n=32)
Table 3. Results of one-sample t-tests for each item & significance levels for MANOVA post-hoc comparisons
Table 4. Pre-service teachers' academic self-concept at start & end of training intervention
Table 5. Elementary education students' assessments of teaching ideas
7,694.4
2024-06-01T00:00:00.000
[ "Education", "Engineering" ]
ATP7B knockout disturbs copper and lipid metabolism in Caco-2 cells

Intestinal cells control delivery of lipids to the body by absorption, storage and secretion. Copper (Cu) is an important trace element and has been shown to modulate lipid metabolism. Mutation of the liver Cu exporter ATP7B is the cause of Wilson disease and is associated with Cu accumulation in different tissues. To determine the relationship of Cu and lipid homeostasis in intestinal cells, a CRISPR/Cas9 knockout of ATP7B (KO) was introduced in Caco-2 cells. KO cells showed increased sensitivity to Cu, elevated intracellular Cu storage, and induction of genes regulating oxidative stress. The chylomicron structural protein ApoB48 was significantly downregulated in KO cells by Cu. Apolipoproteins ApoA1, ApoC3 and ApoE were constitutively induced by loss of ATP7B. Formation of small-sized lipid droplets (LDs) was enhanced by Cu, whereas large-sized LDs were reduced. Cu reduced triglyceride (TG) storage and secretion. Exposure of KO cells to oleic acid (OA) resulted in enhanced TG storage. The findings suggest that Cu represses intestinal TG lipogenesis, while loss of ATP7B results in OA-induced TG storage.

Introduction
The absorption of lipids and essential trace elements, including copper (Cu), is predominantly mediated by specific cells of the small intestine. Dietary intake and processing of lipids have to be considered in metabolic diseases of Cu homeostasis, like Wilson disease (WD) and Menkes disease (MD) [1,2]. Excess Cu is toxic and usually manifests with increased liver Cu load and Cu excretion. Low Cu is frequently associated with impairment of various biochemical processes and growth inhibition. The molecular mechanism that governs uptake and intracellular metabolism of Cu and lipids by intestinal cells is not fully understood. Infant rhesus monkeys revealed decreased Cu retention, suggesting a reduced intestinal Cu absorption following Cu exposure [3]. MD patients suffer from Cu deficiency, caused by mutation of the Cu transporter ATP7A, a ubiquitously expressed Cu exporter, resulting in deficiency of Cu in the body due to Cu accumulation in the enterocyte. In contrast, WD patients are characterized by Cu overload of various tissues, prominently liver and brain, due to mutation of the Cu transporter ATP7B [4]. High accumulation of Cu in the liver is followed by increased oxidative stress (e.g. induction of HMOX1, MT1, and SOD1), mitochondrial damage and reduced ceruloplasmin (CP) release [5,6]. In liver, Cu is mainly imported by Cu transporter 1 (CTR1) and divalent metal transporter 1 (DMT1), although a minor role of multidrug resistance protein (MDR1) was reported [7]. A CTR1-mediated uptake of intestinal Cu was shown in mice [8]. Cu inside the cell is distributed to other cell compartments, like mitochondria, or via ATOX1 to the trans-Golgi network (TGN). At the TGN, ATP7B provides Cu for incorporation into enzymes, e.g. CP and hephaestin (HEPH) in the intestine. Metallothionein (MT) is a low-molecular-weight metal-binding protein that is essential for the intracellular homeostasis of various metals, resulting in cytoprotection and detoxification [9]. Excess Cu affects the intracellular localization of ATP7B and leads to vesicular storage or ultimately to excretion via bile [10]. ATP7A delivers Cu to lysyl oxidase, tyrosinase and peptidyl alpha-monooxygenase. In the enterocyte, ATP7A is commonly believed to represent the main Cu exporter, leading to the release of Cu into the blood for whole-body distribution. 
Absence of ATP7A was shown to increase the intracellular accumulation of Cu in intestinal cells [11]. ATP7B is also expressed in enterocytes [12]; however, its functional role in human intestinal cells is largely unexplored and most evidence was previously derived from WD animal models. Lower Cu concentrations were observed in duodenal tissue of Atp7b-/- mice as compared to wildtype, suggesting that functional loss of ATP7B results in decreased uptake/storage [13,14]. Pierson et al. suggested that Atp7b mostly resides in the vesicular compartment, suggesting a role in the cytosolic storage of Cu in intestinal cells rather than in the export of Cu as observed in the liver. Furthermore, a crosslink of iron and Cu homeostasis was previously suggested [15]. Cu loading of cells was implicated to affect lipid metabolism. Low hepatic Cu levels were correlated with liver steatosis in nonalcoholic fatty liver disease (NAFLD) [16]. In contrast, WD patients revealed normal serum triglyceride (TG) and cholesterol levels, but a reduction of cholesterol after hepatic manifestation [17]. Of note, studies of WD animal models revealed changes of TG and cholesterol levels in liver and intestine [14,18-20]. For enterocytes of Atp7b-/- mice, an impact of ATP7B on chylomicron production was recently suggested [14]. High dietary fat increases the chylomicron production of enterocytes, which transport TGs into lymph and blood [21]. The synthesis of lipoproteins in the intestine, e.g. chylomicrons, VLDL, and HDL, depends on the availability of specific lipids, structural apolipoproteins (e.g. ApoB48 and ApoE), and export-supporting proteins, like ABCA1. Cu was proposed to interfere with several processes of lipid metabolism; however, determining the impact of Cu requires further work. The purpose of our study was the generation of a human intestinal ATP7B KO cell line to study the interrelation of Cu and lipid metabolism at the level of the enterocyte.

Cell culture
The human epithelial colorectal adenocarcinoma cell line Caco-2 was received from the American Type Culture Collection (ATCC, Manassas, VA, USA). Caco-2 cell lines were grown in DMEM High Glucose (GE Healthcare, Chicago, IL, USA) supplemented with 10% fetal bovine serum (Gibco, Carlsbad, CA, USA) and 100 U/mL penicillin/streptomycin (Hyclone, Logan, UT, USA). For differentiation, 10^5 cells were seeded on 24 mm diameter wells and grown to confluence for 14 days to allow cell differentiation [22]. Media change was performed 2-3 times a week. Cells were maintained in 5% CO2 at 37 °C in a humidified chamber. For knockout generation, 10^6 Caco-2 cells were seeded in standard cell culture medium. The next day, cells were transfected with 2 μg of plasmid pSpCas9(BB)-2A-Puro (PX459) V2.0 (Addgene plasmid no. 62988, a gift from Feng Zhang [23]) containing the Cas9 endonuclease and a gRNA scaffold with an ATP7B-specific sgRNA sequence (5'-ATATCGGTGTCTTTGGCCGA-3'). After 24 h of transfection, a single cell dilution was performed. Standard medium containing 1.5 μg/ml puromycin was added for 72 h for selection.

Sequence analysis
DNA was purified using the QIAamp DNA mini kit (Qiagen, Hilden, Germany). DNA sequencing was performed using BigDye Version 3.1 (Life Technologies). For gross sequence analysis of the Caco-2 cell clones, the primers 5'-AGAGGGCTATCGAGGCAC-3' / 5'-GGGCTCACCTATACCACCATC-3' were used. The respective PCR product derived from one clone (clone #1) was ligated into plasmid pCR2.1-TOPO (TOPO TA Cloning kit; Invitrogen). 
After transformation of the ligation mixture into E. coli, DNA from bacterial clones was isolated and forwarded to sequence analysis. The complete ATP7B coding region was also analyzed for Caco-2 cell clone #1 [24].

siRNA knockdown
50 nM small interfering RNA (siRNA) directed against ATP7A (AM16708; Ambion, Foster City, CA, USA) and 4 μl RNAiMax (Gibco) were incubated with cells for 24 h. A scrambled oligonucleotide with an unrelated sequence was used as control.

Retroviral transduction
Retroviral transduction was performed with the vector pGCsamEN.ATP7B, expressing the coding region of human ATP7B, using established protocols [24]. Selection of cells was performed with 1 μg/ml of blasticidin (Invitrogen) starting at day 1 after transduction.

Copper accumulation
0.1 mM CuCl2 was added for 24 h in standard cell culture medium. Cells were washed five times with PBS and the cell pellet was collected. For measurement of subcellular Cu fractions, differential centrifugation was used as described before [24]. Cu concentrations were determined by atomic absorption spectroscopy (Shimadzu AA-6300, Kyoto, Japan). The Bradford protein assay (BioRad, Hercules, CA, USA) was used to normalize Cu measurements.

Triglyceride determination
0.1 mM CuCl2 and oleic acid (125 μM; O3008, Sigma-Aldrich) were added in standard cell culture medium for 24 h. Cells were subjected to TG quantification (Triglyceride Quantification Assay, ab65336, Abcam) or AdipoRed Assay Reagent (PT-7009, Lonza) according to the protocol of the manufacturer. Total protein concentration was used for normalization. Cell culture supernatant was collected and triglycerides were determined on an automated cobas 8000 analyzer system (Roche Diagnostics GmbH, Mannheim, Germany).

Electron microscopy
Cells grown in standard cell culture medium were treated with 0.1 mM CuCl2 for 24 h. Cell pellets were resuspended in 2.5% glutaraldehyde in Sorensen's phosphate buffer (0.133 M Na2HPO4, 0.133 M KH2PO4, pH 7.2) and fixed with 1% osmium tetroxide. After dehydration, specimens were embedded and sixty-nanometer ultrathin sections were prepared (Leica Ultra Cut E ultramicrotome, Vienna, Austria). Counterstaining was performed with uranyl acetate and lead. Samples were inspected on an EM 208S transmission electron microscope (Philips, Hamburg, Germany). Measurement of lipid droplet sizes was performed using the software ImageJ according to the reference scale of each image. At least 20 cells were analyzed per condition from different experiments.

Real-time quantitative PCR
Total RNA was isolated with the RNeasy kit (Qiagen, Hilden, Germany). Reverse transcription was performed using 1 μg of RNA and SuperScript III (Invitrogen, Carlsbad, CA, USA). For quantitative real-time PCR (qPCR), SYBR Green PCR Core Plus (Eurogentec, Liège, Belgium) and primers (S1 Table) were added. PCR was analyzed on the ABI Prism 7900 HT Sequence Detection System (Applied Biosystems, Carlsbad, CA, USA). Ct values were normalized to the expression of the house-keeping gene Actin (ΔΔCt method) and log2 expression was calculated.

ApoE ELISA
ApoE protein was determined using the Human ApoE ELISA Kit (EHAPOE, Thermo Scientific). Cell culture supernatants were used at 1:10 dilution and absorbance was assessed at 450 nm. Normalization was performed by total protein concentration.

Statistical analysis
Statistical analysis was performed by Kruskal-Wallis one-way ANOVA and the Wilcoxon-Mann-Whitney test using SPSS 22.0 software and GraphPad Prism 8. 
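For illustration only (the published analysis was run in SPSS and GraphPad Prism), the same nonparametric comparisons can be sketched in Python with SciPy; the group labels and values below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the nonparametric tests named above: Kruskal-Wallis across
# all treatment groups, followed by pairwise Wilcoxon-Mann-Whitney tests.
# All values are hypothetical placeholders.
from itertools import combinations
from scipy import stats

groups = {
    "WT":      [1.02, 0.95, 1.10, 0.98],
    "WT + Cu": [1.20, 1.15, 1.30, 1.12],
    "KO":      [1.85, 2.10, 1.95, 2.05],
    "KO + Cu": [2.60, 2.90, 2.75, 2.80],
}

# Global comparison across all treatment groups
h_stat, p_global = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_global:.4f}")

# Pairwise comparisons between groups
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    u_stat, p_pair = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name_a} vs {name_b}: U = {u_stat:.1f}, p = {p_pair:.4f}")
```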
A p<0.05 value was used to indicate significance.

Generation of a human ATP7B CRISPR/Cas9 knockout intestinal cell line
An ATP7B knockout cell line was generated in human intestinal Caco-2 cells to study the impact of ATP7B on Cu and lipid metabolism. Caco-2 cells were transfected with a plasmid containing the endonuclease Cas9 and a guide RNA targeting exon 2 of human ATP7B (Fig 1). Cell growth was observed in 13 clones after puromycin selection for the CRISPR/Cas9 plasmid. Clones were characterized by gross sequence analysis of ATP7B exon 2 and subjected to MTT assay (S2 Table). Seven clones showed wildtype ATP7B and an almost identical viability as compared to parental cells when exposed to a toxic concentration of Cu (0.25 mM). Six clones showed significantly reduced viability (<30%) and an ambiguous nucleotide sequence close to the PAM region after gross sequence analysis, suggesting multiple compound deletions (S1A and S1B Fig, respectively). The cDNA of one Caco-2 cell clone (clone #1) was therefore further analyzed via bacterial cloning of the ATP7B exon 2 PCR product. Sequence analysis of the bacterial clones (n = 19) suggested a compound mutation consisting of altogether three deletions (p.L394F_A395del, p.L394FfsX9 and p.E396KfsX11), corroborating the reported polyploidy of the Caco-2 cell line (S3 Table) [25]. Importantly, a wildtype sequence could not be observed in the bacterial clones. The respective Caco-2 cell line was termed ATP7B KO (KO cells). The remaining coding sequence of ATP7B was unchanged, suggesting that CRISPR/Cas9 mutagenesis induced a highly specific knockout.

ATP7B knockout cells display disturbed copper homeostasis
The expression of ATP7B protein was determined in KO cells to evaluate the effect of the deletions in exon 2. A knockin cell line, termed KI cells, expressing the wildtype ATP7B ORF was established from KO cells by retroviral vector. An ATP7B-specific protein band could not be detected by Western blot analysis of KO cells (S2 Fig). KI and parental cells showed similarly high levels of ATP7B expression as determined by densitometric quantification of ATP7B-specific protein bands (Fig 2A). In order to characterize the sensitivity of KO cells to high Cu, cells were exposed to different Cu concentrations. A significant drop of viable cells was observed at Cu concentrations ranging from 0.25 mM to 1.0 mM in KO cells as compared to parental and KI cells (Fig 2B). Exposure to Cu concentrations above 0.5 mM resulted in an almost complete loss of the ability to evade Cu toxicity. In contrast, ATP7B knockin could fully restore survival, suggesting that the CRISPR/Cas9-induced deletions are causative of the functional loss observed in KO cells. Since Caco-2 cells also express ATP7A, which was proposed to be a Cu exporter [26,27], we also assessed whether knockdown of ATP7A in KO cells further impairs the ability to evade Cu toxicity. ATP7A knockdown did not modulate the viability of KO cells (Fig 2C). However, the viability of parental cells was significantly affected by ATP7A knockdown, down to the same level as KO cells (48% ± 3 and 48% ± 2, respectively). The intracellular Cu storage of KO cells was determined (Fig 2D). When cells were propagated in standard cell culture medium (basal medium contained <2.5 μM Cu), an almost identical level of intracellular Cu was observed in KO and parental cells. However, KO cells revealed a significantly increased level of intracellular Cu when incubated with 0.1 mM Cu for 24 h. 
Note that cells cultivated at this Cu concentration showed no signs of apoptosis or proliferation arrest. After ultracentrifugation, most Cu (~80%) was observed in the supernatant of the 15,000 × g fraction of the cell lysates, suggesting low-molecular-weight, cytosolic storage (S4 Fig). As Cu and iron homeostasis seem to be related [15], the sensitivity to toxic iron was determined in KO cells. In contrast to the increased Cu sensitivity of KO cells, exposure to iron (10 mM to 30 mM) did not alter the cellular viability as compared to parental cells (S5 Fig).

Modulation of gene expression after intestinal ATP7B knockout
We addressed whether the expression of genes related to Cu and lipid homeostasis was affected in the KO cell line. A set of 27 genes was selected from the literature (PubMed). Gene expression was analyzed by RT-qPCR. Cells were grown in standard cell culture medium (basal medium) or incubated with 0.1 mM Cu. Expression was compared to parental Caco-2 cells grown in basal medium (control). 19/27 genes did not show significant changes of gene expression (≤ 1 log2 change in expression relative to control) regardless of whether they were treated with Cu (S4 Table). Genes not modulated were related to Cu metabolism (ATP7A, ATOX1, CTR1, MTF1 and SOD1), iron metabolism (DMT1, EPAS1, FPN1, HEPH, MRP1 and STEAP3), lipid metabolism (HMG-CoA, LDLR, PLN2, PPARα, PPARγ, and VLDLR), and apolipoprotein synthesis (ApoA4 and ApoB100). In contrast, the expression of three genes involved in protection from oxidative stress, metallothionein 1 (MT1), duodenal cytochrome B metal reductase (DCYTB), and heme oxygenase 1 (HMOX1), was affected in KO cells (Fig 3). After addition of Cu, MT1 was significantly induced, although to similarly high levels as in parental cells. DCYTB was significantly downregulated in KO cells regardless of Cu addition. HMOX1 expression was induced in KO cells, with highest levels after addition of Cu, suggesting that a strong anti-oxidative response is induced after knockout of ATP7B in the intestinal cell line. Of note, five genes related to lipid and apolipoprotein synthesis were modulated in KO cells (Fig 3). ABCA1, a cholesterol efflux pump belonging to the ATP-binding cassette (ABC) transporters [28], was significantly downregulated in KO and parental cells after Cu treatment. APOA1, APOC3 and APOE were upregulated in KO cells regardless of whether Cu was added. APOB48 was downregulated in KO cells after Cu treatment, whereas parental cells receiving the same treatment did not show modulation of the mRNA level as compared to control.

ATP7B knockout affects lipid metabolism in intestinal cells
Cu transport was proposed to affect the lipid metabolism of intestinal cells. The number of intracellular lipid droplets (LDs) was investigated in KO cells by electron microscopy (Fig 4). According to their size, two classes of LDs were defined, having large (>600 nm) and small (<600 nm) diameters. The numbers of LDs were similar in KO and parental cells when cultivated in standard cell culture medium (basal medium contains ~10 mg TG/dL) (Table 1). However, addition of Cu (0.1 mM) increased (~3-fold) the number of smaller-sized LDs in KO cells and to a somewhat lesser extent (~2-fold) in parental cells. In parallel, the number of larger-sized LDs decreased (~3-fold) in both cell lines after addition of Cu, suggesting that maturation of large-sized LDs is impaired by Cu. 
In order to assess the storage and secretion of lipids in the cells, the concentrations of intracellular and secreted TGs were determined (Fig 5). Levels of intracellular and secreted TGs were similar in KO cells and parental cells when grown in basal medium (Fig 5A and 5B). Addition of Cu repressed intracellular TG storage and secretion. ApoE, a determinant of various lipoproteins, was secreted at significantly higher levels by KO cells as compared to parental cells (Fig 5C). Oleic acid (OA), a common long-chain unsaturated fatty acid, was used to further investigate the lipid metabolism of KO cells. After OA exposure, KO cells accumulated significantly higher amounts (~2-fold) of intracellular TGs as compared to parental cells (Fig 5D). However, total secretion of TGs was similar in KO cells and parental cells after OA addition, suggesting that high intracellular lipid storage does not translate into a correspondingly increased TG export when ATP7B is lacking (Fig 5E). Addition of Cu did not affect lipid storage and secretion of parental cells following OA treatment.

Discussion
Cu is an important element in many basic metabolic processes, including the inflammatory response, antioxidant defense, and lipid peroxidation. The availability of Cu was recently linked to lipid metabolism and implicated in the pathogenesis of epidemic nutritional disorders, e.g. nonalcoholic fatty liver disease (NAFLD) and obesity [16,29,30]. In this scenario, the enterocyte of the proximal intestine has a central role in accommodating the body's specific physiologic needs, including the uptake and processing of Cu and lipids. We explored the impact of the copper transporter ATP7B on Cu and lipid metabolism in the enterocyte, a role that was previously characterized for the hepatocyte [18,19,31]. Using CRISPR/Cas9 technology, a novel KO cell line was established from Caco-2 cells, one of the most studied human models with many biochemical and morphological characteristics of enterocytes [22,32]. The CRISPR/Cas9 mutational approach targeting exon 2 of ATP7B resulted in the generation of several cell clones carrying compound heterozygous mutations around amino acid position 395, the targeted putative Cas9 cleavage site. In contrast to a previously reported mutational approach targeting ATP7B, a homozygous mutation was not observed here [33]. However, only a small number of cell clones was analyzed following an undirected mutational approach. While the deleterious variants observed in the KO cell line were not previously reported from patients, exon 2 is known to harbor numerous disease-causing mutations. Nevertheless, all variants of the KO cell line caused deletion/frame-shift mutations and resulted in absence of ATP7B protein expression and impaired ATP7B function. Functional loss was indicated by the Cu sensitivity of the KO cells; function could be specifically restored by re-expression of the wildtype sequence. A significant elevation of intracellular Cu was observed in KO cells after exposure to 0.1 mM Cu, suggesting that intracellular homeostasis is impaired in the situation of elevated Cu. The daily Cu intake from food can vary significantly and is estimated to be around 2 mg for a typical adult, suggesting that the Cu exposure of cells used in this study is within physiological ranges. Most Cu was found in the lower-molecular-weight fraction of the enterocyte cell lysates, indicating that intracellular Cu storage is likely associated with the cytosol. 
Of note, at basal Cu concentrations we did not observe a higher Cu accumulation in KO cells as compared to parental cells. The increased Cu accumulation observed after Cu exposure, however, parallels previous observations of high Cu accumulation in cell models and WD liver, suggesting that the export of Cu is impaired after loss of ATP7B [34-36]. For hepatocytes, it is commonly viewed that the main function of ATP7B is the transport of excess Cu into the bile [1]. The role of ATP7B for Cu homeostasis in the enterocyte is, however, far less understood as compared to the liver. Our determination of Cu resistance in the KO cell line is suggestive of an ATP7B exporter function in intestinal cells; however, the Cu efflux was not directly assessed. In addition, while reversal of the high Cu sensitivity was observed after ATP7B knockin, other mechanisms, e.g. increased vesicular Cu sequestration, could be envisaged to cause evasion from toxicity. In LEC rats, where ATP7B is nonfunctional due to a deletion, high Cu levels were found in the intestine following a Cu diet, corroborating our findings [37]. In contrast, AAS analysis and X-ray fluorescence microscopy of intestinal cells suggested reduced Cu levels in the absence of ATP7B [13,14,38]. Differences in the Cu content of diets and cell culture media, the methodology of Cu determination, and the differentiation status of the intestinal cells may account for some of the controversial findings. Further work is however required to address the exact role of ATP7B in the intestine, e.g. during polarized Cu trafficking. MT1 belongs to the first-line antioxidant defense and HMOX1 is an important defender against oxidative stress [39-41]. Downregulation of DCYTB, a metal reductase, is considered to prevent oxidative stress [42]. The expression values of these genes indicate that KO cells do not show elevated oxidative stress at low (basal) Cu concentrations, but respond to Cu treatment by induction of an oxidative stress defense mechanism that is not mounted in parental cells. Various other genes implicated in maintaining intracellular Cu homeostasis were not modulated by the low/high Cu exposure of KO cells, suggesting that the regulation of basic Cu transporters, importantly ATP7A, CTR1, DMT1 and SOD1, is preserved under the experimental conditions used in the study. In duodenal samples of WD patients, ATP7A was increased and CTR1 was reduced, whereas expression of DMT1 and ATP7B was unaffected, suggesting that some genes may be regulated by additional factors in patients [43]. ATP7A was suggested as a major Cu exporter of enterocytes [2]. High intestinal Cu concentrations were observed in patients with Menkes disease and in IEC-6 intestinal cells [11,44]. In this regard, it is of interest that our simultaneous impairment of ATP7A and ATP7B did not synergistically increase the Cu sensitivity of Caco-2 cells, suggesting that both transporters are involved in the escape from toxic Cu. This result is surprising, since the double knockout cells, which have no option to export Cu either by ATP7A or ATP7B, show the same resistance as cells with only one downregulated transporter. We did not assess gene expression and Cu accumulation in double knockout cells, and it could be argued that other genes might be modulated, e.g. CTR1 could be downregulated/endocytosed, resulting in an overall reduced Cu loading. However, CTR1 mRNA was not downregulated in the KO cell line. Further experiments, also addressing protein expression levels, are needed. 
Taken together, our data suggest that, similar to the liver, where ATP7B has an export function, such a function is also operative in the enterocyte, extending previous findings in knockout mice [14]. Cu metabolism was suggested to affect iron homeostasis [11,42,45]. A modulation of genes related to iron homeostasis was observed for intestinal cells [11,42]. Our results on gene expression related to iron homeostasis, as well as the toxicity of iron exposure in KO cells, did not support a role of ATP7B in iron homeostasis. However, additional studies, including intestinal double knockouts, are needed to determine the role of the two Cu transporters for the regulation of iron homeostasis in the enterocyte. In a portion of WD patients, alterations in serum TG/cholesterol and the appearance of liver steatosis suggested an interrelation of Cu homeostasis and lipid metabolism [1,17,18,31]. To gain more insight into the molecular mechanism, different aspects of lipid metabolism were investigated in KO cells. One important finding of our study was the increased level of intracellular TGs in KO cells after exposure to OA, one of the most commonly found long-chain monounsaturated omega-9 fatty acids in nature. TG storage is proposed as a protective mechanism against fatty acid-induced lipotoxicity [46]. OA was previously shown to be readily taken up by Caco-2 cells, which resulted in highly efficient secretion of TG-rich particles, prominently chylomicrons and VLDLs [47-50]. In hepatic cells, OA induces synthesis of hepassocin (HPS), a cytokine implicated to be reduced in NAFLD [51]. It is therefore tempting to speculate that such pathways are involved in the establishment of NAFLD in WD patients. The overall level of TG secretion by KO cells was unchanged as compared to parental cells, suggesting that the release of excess intracellular TGs is hampered in the absence of ATP7B. In skeletal muscle cells, OA was found to accelerate lipid oxidation in a SIRT1-PGC1alpha-dependent mechanism [52]. Although we did not address lipid oxidation in our study, our finding of increased TG accumulation in KO cells suggests that ATP7B is involved in OA-induced lipid metabolism, at least in the enterocyte. Our findings of increased levels of lipoprotein gene expression, namely ApoA1, ApoC3, and ApoE, are suggestive of an increased lipoprotein secretion in the absence of ATP7B. The level of ApoB100 expression, one prominent protein of various lipoprotein classes [53], was however not affected in KO cells. Of note, ApoE, found in highest quantities in VLDL, was induced in KO cells. Several studies suggested a direct link between Cu deficiency and dyslipidemia with increased concentrations of ApoE [54,55]. Similarly, overexpression of ApoE resulted in a combined hyperlipidemia phenotype [56]. However, ApoE expression of Caco-2 cells might be higher as compared to the intestine, and the relative role of intestine-secreted ApoE is not known [57,58]. It is however conceivable that the secretion of lipoproteins that prominently carry TGs rather than cholesterol, like VLDL, is increased in enterocytes in the absence of ATP7B. This is in line with findings of reduced cholesterol levels in animal models of WD [18,20]; however, contradicting results were reported by others [19]. Cu displayed significant effects on the processing of lipids in KO and parental Caco-2 cells. 
Whereas the number of large-sized LDs was reduced, the number of smaller-sized LDs increased, suggesting that maturation of small-sized LDs to large-sized LDs, including chylomicrons [59], is hampered by Cu regardless of whether ATP7B was expressed. The Cu-induced effect on LD generation may not be directly related to ApoB48 mRNA expression, since parental Caco-2 cells expressed normal levels. The lipid transporter ABCA1, reported to contribute to the production of HDL particles via release of phospholipid and free, unesterified cholesterol from the plasma membrane [28], was repressed by Cu in intestinal cells regardless of ATP7B expression. Downregulation of ABCA1 might therefore represent an intestinal response to Cu that may manifest in WD patients, resulting in low cholesterol [17,20,60]. We also found a reduced intracellular TG storage and secretion that was independent of ATP7B expression when Cu treatment of the cells was performed in basal cell culture medium. Exposure of intestinal cells to Cu may thus prevent overall cellular lipid uptake, resulting in decreased TG storage and secretion. Interference between Cu absorption and long-chain free fatty acids has been observed in rats. It was speculated that Cu may directly bind to free fatty acids and/or interact with the glycocalyx of the intestine [61,62]. In a situation of Cu deficiency, rats showed increased intestinal levels of TGs, suggesting that low intestinal Cu can stimulate lipogenesis [63,64]. In contrast, Cu treatment resulted in decreased intestinal TG storage of enterocytes, corroborating the findings of this study [51]. Conversely, when a high-fat diet was fed to animals lacking ATP7B, increased hepatic enzymes for cholesterol synthesis but normal values of serum TGs and cholesterol were found [60]. Taken together, our findings reveal a crucial role of Cu and ATP7B in the storage, processing, and secretion of lipids in a human enterocyte model.

Supporting information
S1 Fig. ATP7B CRISPR/Cas9 treatment of Caco-2 cells. (A) Cu resistance of cell clones (green) was examined by MTT assay. Clone #1 revealed a compound deletion and clone #2 harboured wildtype ATP7B. WT cells (black) were used as control. Viability of cells was determined relative to untreated cells (100%). Mean ± SD are given (n = 3). *P < 0.05. ns, not significant. (B) Gross sequence analysis of clone #1 before bacterial cloning showed ambiguous nucleotide sequences between positions 1181 and 1185. Note that the nucleotide sequence could be analyzed up to a certain position, whereas thereafter the sequence was unreadable (N) due to deletions. The PAM motif is marked in yellow. Forward (top) and reverse (bottom) sequence analysis is depicted.
S4 Table. Gene expression analysis of KO cells before and after copper load. Genes related to Cu, iron (Fe) or lipid metabolism were examined. Cells were analyzed before and after Cu exposure. Log2 gene expression is given relative to parental (WT) cells prior to Cu treatment. Mean ± SE is given (n = 3). (DOCX)
6,645.8
2020-03-10T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Post-synthesis nanostructuration of BSA-Capsaicin nanoparticles generated by sucrose excipient

In the pharmaceutical industry, nano-hydrocolloid systems frequently coalesce or present nanoparticle aggregation after long storage periods. In addition, the lyophilization process used to dry nanoparticles (NPs) produces a loss of their original properties after dispersion. In this work we evaluated the effect of different protective excipients on the morphology and physicochemical properties of bovine serum albumin (BSA) NPs loaded with different concentrations of capsaicin during drying. Capsaicin concentrations of 0, 812, 1625, 2437, and 3250 µg mL−1 were used; subsequently, NPs were dried with deionized water (DW), NaCl (DN), or sucrose (DS), or left not dried (ND). We found that the ND, DW, and DN treatments showed a negative effect on the NPs' properties, while DS reduced the aggregation and produced the formation of isolated nanoparticles at higher concentrations of capsaicin (3250 µg mL−1), improving their circular shape, morphometrical parameters, and ζ-potential. The stability of the BSA-capsaicin NPs was associated with the capsaicin/amino acid/water complex, in which the GLY/GLN, ALA/HIS, ARG, THR, TYR, and Iso/CYS amino acids are involved in the restructuration of capsaicin molecules onto the surface of nanoparticles during the drying process. The secondary nanostructuration in the post-synthesis stage can improve the molecular stability of the particles and the capacity for entrapping hydrophobic drugs, like capsaicin.

Results and discussion
Effect of drying on yield and efficiency of BSA-capsaicin nanoparticles. The ND treatment showed a positive correlation (R = 0.8376) between the transformation of native BSA into nanoparticles and drug concentration during the coacervation process (Fig. 1, full circles); a similar linear tendency was reported by Sánchez-Segura et al. 14 and Sánchez-Arreguin et al. 10 . While the nanoparticle yield showed the highest values at 1625 and 2438 µg mL−1 of capsaicin (91.8% and 91.7%, respectively), the yield decreased for 3250 µg mL−1 (Table 1 and red dashed ellipse in Fig. 1). This reduction is due to saturation of the amino acids responsible for entrapping the capsaicin. This result confirms previous observations that the increment of capsaicin concentration affects the transformation of native BSA into nanoparticles 10 . The interaction of BSA molecules with a hydrophobic drug improved the yield of nanoparticles with respect to the simple nanostructuration of BSA, for which the yield reached between 68 and 70% 7 . On the other hand, drying treatments with the different excipients showed a slightly higher correlation between BSA transformed into NPs and the increment in the concentration of capsaicin. No additional BSA or capsaicin was supplemented during the drying process. The correlation factors for the different treatments were R = 0.8786 for DW (Fig. 1, empty squares), R = 0.8675 for DN (Fig. 1, empty circles) and R = 0.8858 for DS (Fig. 1, full triangles). On the other hand, the DS treatment showed a higher affinity of BSA for capsaicin molecules at low concentrations of capsaicin (812 and 1625 µg mL−1, see Table 1). The maximal values of the BSA/capsaicin ratio at higher concentrations (2437 and 3250 µg mL−1, see Table 1) were found for ND and DW, respectively. However, the DS treatment maintained a high molecular affinity (see Table 1). 
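For orientation, the nanoparticle yield and encapsulation efficiency (EE%) discussed in this section are assumed to follow the conventional definitions for coacervation-prepared protein nanoparticles; the exact computational form used by the authors is not reproduced in this excerpt, so the following is an assumed, standard formulation.

```latex
% Assumed, conventional definitions (a sketch, not quoted from the Methods):
\[
\mathrm{Yield}\;(\%) = \frac{m_{\mathrm{BSA\ in\ NPs}}}{m_{\mathrm{BSA,\ initial}}}\times 100,
\qquad
\mathrm{EE}\;(\%) = \frac{m_{\mathrm{capsaicin,\ encapsulated}}}{m_{\mathrm{capsaicin,\ added}}}\times 100 .
\]
```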
The yield values for all drying treatments showed a slight decrement as compared to the ND treatment; however, the DS treatment showed a lower loss of BSA during the drying process. The nanoparticle yield of the DS treatment showed its highest values, 89.9% ± 0.1 and 90.2% ± 0.4, at 1625 and 2438 µg mL−1 of capsaicin, respectively, but at 3250 µg mL−1 the yield decreased (Table 1). The post-nanostructuration effect triggered by the excipients and the drying process has not yet been described for BSA nanoparticles; however, biopolymer reorganization by temperature and chemical stimuli was previously studied in nanoparticles. Yang et al. 18 found that nanoparticles formed by blocks of bis(pyrene)-Lys-Leu-Val-Phe-Phe-Gly-polyethylene glycol (BP-KLVFFG-PEG, BKP) and hydrophilic polyethylene glycol (PEG) showed a spontaneous reorganization of structure. Probably this change was due to the strong hydrophobic interactions when the BKP self-assembles in water. A similar effect was observed by Costa et al. 19 ; they found that chitosan/ELR (biomimetic elastin-like recombinamer) microcapsules showed a stimuli-responsive effect caused by different solvent temperatures. The synergy of these stimuli produces significant changes, probably as a result of a layer rearrangement of chitosan that allowed improved control of the permeability of these multilayer systems. On the other hand, the encapsulated capsaicin recovered from ND-treated samples showed a linear correlation (R = 0.9404) with respect to capsaicin concentration, 0, 812, 1625, 2437, and 3250 μg mL−1 (Fig. 2, full circles). During the coacervation process, it was also possible to observe the effect of BSA saturation (Fig. 2, red ellipse). Drying treatments with water, sucrose, and NaCl excipients showed similar linear trends with R = 0.9547, R = 0.9523, and R = 0.9457, respectively (Fig. 2). The correlation showed slight changes, probably because the amino acid-capsaicin interaction in the nanoparticles was affected by the drying process; it is possible that low quantities of capsaicin were released from the NP surface to the supernatant during reconstitution of the nanoparticles. De Freitas et al. 20 reported that BSA-capsaicin nanoparticles stored under freezing conditions (−4 °C) showed the lowest drug release, reaching 7% of the entrapped capsaicin after 3 months, compared to 31% for NPs stored at room temperature and 18% for NPs stored under refrigeration (4 °C). Regarding the encapsulation efficiency percentage (EE%), it showed large changes between the different drying treatments (Table 1); in fact, the EE% was more affected by the drying process than any other variable reported in this work. A capsaicin concentration of 2437 μg mL−1 produced the highest EE% for all four drying treatments: 75.6 ± 3.3% for ND, 60.0 ± 1.6% for DW, 64.4 ± 4.7% for DN, and 72.7 ± 2.2% for DS (Table 1). Therefore, the highest capsaicin loss occurred when deionized water or NaCl was used as protective excipient in the drying process. It has been reported that albumin NPs suffer loss of loaded drug, negatively impacting pharmacokinetics, drug delivery, and therapeutic efficacy 21 . This has been partially resolved by using mannitol, sucrose, and trehalose excipients during the freeze-drying of BSA nanoparticles, limiting the loss of drug load to around 1% 1 .

Structural changes of BSA-capsaicin nanoparticles after drying treatments. 
Previously reported Fourier-transform infrared (FTIR) spectra of pure capsaicin and native BSA 10,22,23 were compared against the spectra of our samples. The solid line in Fig. 3a shows the FTIR spectrum of our native capsaicin; the phenolic 4-OH group was assigned to the peak at 3506 cm−1, and this resonance showed slight changes with respect to previous reports 10,24 . In this work we found a peak at 3443 cm−1, probably due to photochemical oxidation of capsaicin during analysis (Fig. 3a, red ellipse). The 4-OH group of capsaicin shows high reactivity and decomposition during analysis of the chemical structure by NMR, HPLC, and FTIR 25 . A third peak was found at 3283 cm−1 and corresponds to the amide N-H stretching bond of the capsaicin molecule. The peak at 2922 cm−1 was associated with the aliphatic C-H stretching vibration. Finally, the resonances at 1637 and 1516 cm−1 were assigned to the C-C and C-O stretching vibrations, according to Leela et al. 26 . The dashed line in Fig. 3a shows the spectrum of native BSA; similar profiles have been observed by Bronze-Uhle et al. 3 and Sánchez-Arreguin et al. 10 . The N-H functional group associated with amide A showed a broad stretching band with a maximum at 3288 cm−1. The second maximal peak, found at 1646 cm−1, corresponds to a stretching of the carbonyl group (C=O) of amide I. The subsequent regions showed some intense peaks associated with functional groups with dual molecular behavior. C-N stretching and N-H bending of amide II were identified at 1515 cm−1. The CH2 bending groups were associated with the peak at 1393 cm−1, while the vibration at 1250 cm−1 was correlated with C-N stretching and N-H bending in amide III. All treatments (ND, DW, DN, and DS) and formulations of capsaicin (0, 812, 1625, 2437, and 3250 μg mL−1) showed deformations of the peaks in the region from 3682 to 3104 cm−1; this spectral region corresponds to the N-H amide A of BSA and overlaps with the 4-OH group and the N-H amide bond of capsaicin (see blue rectangles in Fig. 3b-e). For all treatments at a capsaicin concentration of 3250 μg mL−1, the N-H amide bond signal of pure capsaicin (N-H stretching) in the region from 3404 to 3216 cm−1 increased, as shown by the black line in the green rectangles of Fig. 3b-e. The spectral analysis of each treatment serves to identify possible structural changes in the nanoparticles after the drying procedure. The DW and DN treatments at 3250 μg mL−1 showed an intense peak deformation in the 3404 to 3216 cm−1 region, as displayed by the black lines in the blue rectangles in Fig. 3c,e, respectively. These perturbations were probably due to the instability of the capsaicin during the drying process; the deionized water and NaCl excipients showed no protective capacity against changes generated in the albumin amino acid/capsaicin/water interaction. As Fig. 3d shows (see blue and green rectangles), for the DS treatment the overlapping N-H amide A (BSA) and N-H amide (capsaicin) resonances (3404-3216 cm−1) did not show perturbations in intensity or functional groups at any concentration. This observation suggests that the low peak deformation and the small distance between transmittance bands were probably due to the ordered rearrangement of the BSA amino acids in the microstructure and the subsequent migration of capsaicin from the core to the surface of the nanoparticles 10,14 . This leads to a homogeneous distribution of capsaicin patches on the surface of NPs dried with sucrose. 
The DS treatment showed the best protective effect on the functional groups of amino acids of BSA NPs; a similar effect was observed by Lee and Timasheff 27 they found that sucrose does not perturb the spectral fingerprint of several proteins during the drying process. The stabilization of proteins by sucrose excipient has been proposed by the formation of polyhydric alcohols that induce a conformational change in some functional groups of proteins, producing milder changes. In this study, during the drying process with sucrose, the capsaicin experienced an equilibrium of repulsive forces between capsaicin and hydrophobic amino acid of BSA, into the nanoparticles. During this process, the restructuration of capsaicin is carried out, and the capsaicin passed from the core to the surface of the nanoparticles; we found this restructuration is less violent when NPs are dried with the sucrose excipient. The efficiency of excipients in the drying process of NPs has been evaluated through the presence of free H 2 O or molecular water (O-H) content in the FTIR spectrum at 1644 cm −1 wavelength 28 . In this work, this region corresponds to the overlapping of amide I and II bands of BSA with the hydrophobic side chain of the capsaicin (1710-1480 cm −1 ), see the red rectangle in Fig. 3b-e. The presence of molecular water was observed in nanoparticles dried with and without water, while in the DS and DN treatments the presence of O-H was not observed. Moreover, a protein-sugar interaction was not observed for the DS treatment; this peak was reported at 1580 cm −1 , which is ascribed to the H-bond interaction with the carboxylate groups 29 . Morphology of nanoparticles. Regarding the NPs size, they increased as the concentration of capsaicin increased; this trend has been observed before 10,14,17 . Beside size changes, the TEM images showed morphological changes in the nanoparticles depending on the drying treatment. As shown in Fig. 4a (for more images see Fig. S2 of Supporting Information), the ND treatment showed a transitional change of shape in nanoparticles from circular shape, for 0 µg mL −1 of capsaicin concentration, to elliptical shape for 1625 µg mL −1 . It can also be observed that at higher capsaicin concentrations NP coalescence is more extensive and small aggregates are produced. The treatment DW produced large particles with an elliptical shape and secondary particles branching from the structure and forming columnar aggregates as seen in Fig. 4b (for more images see Fig. S2 of Supporting Information). It is also noticeable that the surface of the NPs was slightly rougher; a similar effect was observed 20 . It is probable that the dual effect of freeze drying and high capsaicin concentrations produced columnar aggregates with quasi-spherical shape. Among the four treatments, DN resulted in the most negative effects regarding NPs shape. In this drying procedure even the small nanoparticles with 0 µg mL −1 of capsaicin fused between them as can be appreciated in Fig. 4c (for more images see Fig. S2 of Supporting Information). Similar coalescence was observed for the other capsaicin concentrations; also it is worth noting that as the drug load increased, the structural complexity of the NPs decreased, probably due to amino acids disassembling and producing an amorphous coagulated protein. Interestingly, the DS treatment resulted in better morphology even than the ND case. 
We found that sucrose at 1 mmol protected the circular shape of the nanoparticles observed at 0 µg mL−1 of capsaicin, see Fig. 4d (for more images see Fig. S2 of Supporting Information), and improved the morphology of the NPs loaded with capsaicin; this was probably because the removal of water molecules increased the sucrose concentration, leading to the formation of a sol, and the residual water molecules produced nanofluidic drag (microcirculation phenomena) 30. This drag induced the separation between NPs due to shear forces or strong capillary forces 27. Physical separation of NPs produced an improvement in the morphology of the nanoparticles dried with sucrose.

Morphometric analysis of the nanoparticles. Several studies have reported that the morphometry of nanoparticles changes as a function of the increment of capsaicin during the synthesis of NPs 10,14,17. In this study, as mentioned in the morphology analysis section, we observed that the shape of the NPs was affected by the drying process. In order to evaluate this change, TEM images of isolated nanoparticles were analyzed by digital image analysis (DIA). As seen in Fig. 5a, all the treatments showed an increment of effective diameter (Ed) with increasing drug load. The Ed for the ND treatment changed by a factor of 5.0 from 0 to 3250 µg mL−1 of capsaicin concentration; for the other treatments the factors were 3.4, 2.9, and 3.3 for the DS, DW, and DN procedures, respectively. While three of the treatments show similar behavior, DS deviates at high concentrations, reaching a plateau (see dashed red rectangle in Fig. 5a). This means the sucrose excipient reduces the negative effect of drying on the NPs and their coalescence when the drug load is augmented. In the case of the DN treatment, we propose that the Na+ and Cl− ions altered the pH and the net charge on the protein surface through amino acid hydrogens interacting with different ions in solution, resulting in an acidic pH change. At pH 4.9 there is a lack of electrostatic repulsion and thus amorphous aggregates are readily formed through nonspecific interactions 3. Additionally, the increase in capsaicin concentration generates a more hydrophobic environment, which increases the formation of aggregates. Future work may be directed towards establishing possible relations between formulation and drying processes.

In contrast to the Ed parameter, the shape factor (Sf, related to circularity) and aspect ratio (Ar, related to ellipticity) showed only small changes with capsaicin concentration for the DW, DN, and DS treatments (see empty squares, empty circles, and full triangles in Fig. 5b,c). On the other hand, samples with no drying treatment, ND, were affected by the concentration, as can be seen in Fig. 5b,c, full circles; thus, the change of shape was related to the drying process. This means that the particles that were not dried show gradual changes of Sf and Ar associated with an increment of capsaicin concentration. Also, interestingly, the treatment with sucrose (DS) resulted in the shape factor closest to 1, i.e., rounder particles, and the lowest aspect ratio, i.e., less ellipticity, as can be seen in the full triangles of Fig. 5b,c. This is related to less aggregation and branching as a consequence of the reduction in coalescence.
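The morphometric descriptors discussed above (effective diameter, shape factor, and aspect ratio) are simple geometric ratios obtained from the segmented particle outlines. The sketch below is a minimal, hedged illustration of how such descriptors are commonly computed from area, perimeter, and fitted-ellipse axes; the exact equations of the cited morphometric references are not reproduced here, so these definitions and the example numbers are assumptions.

```python
import math

# Hedged sketch of common particle-shape descriptors from digital image analysis.
# The exact equations used in the paper's cited references may differ.

def effective_diameter(area_nm2: float) -> float:
    """Diameter of a circle with the same projected area as the particle."""
    return 2.0 * math.sqrt(area_nm2 / math.pi)

def shape_factor(area_nm2: float, perimeter_nm: float) -> float:
    """Circularity: 1.0 for a perfect circle, smaller for irregular outlines."""
    return 4.0 * math.pi * area_nm2 / perimeter_nm ** 2

def aspect_ratio(major_axis_nm: float, minor_axis_nm: float) -> float:
    """Ellipticity of the fitted ellipse: 1.0 for a circle, >1 for elongated particles."""
    return major_axis_nm / minor_axis_nm

if __name__ == "__main__":
    # Hypothetical particle measured from a TEM image
    area, perimeter, major, minor = 7850.0, 320.0, 110.0, 95.0
    print(f"Ed = {effective_diameter(area):.1f} nm, "
          f"Sf = {shape_factor(area, perimeter):.2f}, "
          f"Ar = {aspect_ratio(major, minor):.2f}")
```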
A possible explanation for our results with the sucrose excipient is that when it is incorporated into the colloidal system, it could exert pressure to reduce the surface of contact between the BSA molecules of near nanoparticles due to decrease radius of gyration (spatial expansion), thus inhibiting the unfolding of the BSA in the nanoparticles 26 , and consequently improving the hydrodynamic shape of the NPs. In contrast, the loss of circular shape for the ND treatment is attributed to the increased coalescence due to higher content of capsaicin crystals, affecting the coagulation of the BSA molecules 14 . For the DW and DN procedures, the aggregation was probably due to a change in the surface charges of amino acid, thus altering the electrostatic properties of BSA 31 . The use of sucrose to dry nanoparticles improves the pharmaceutical formulations, in which, the circular shape and elliptical shape of nanoparticles are important factors for the internalization into the cell 28,32 . ζ-Potential and hydrodynamic diameter of aggregates. In a previous work, we observed an increment of the ζ-potential as the drug load increased from low to medium capsaicin concentrations 10 . In this study, we found that the electric charge is affected by the drying process. In particular, while we observed somewhat similar values of the ζ-potential for the ND, DW, and DN treatments, a higher value of this parameter was found for the NPs subjected to the DS procedure for all drug concentration procedures (Fig. 6a, compare full circles, empty squares, and empty circles to full triangles). Also, we measured relatively large aggregates for the ND, DW, and DN treatments, while the DS samples presented smaller aggregates for all capsaicin concentrations (Fig. 6b, compare full circles, empty squares, and empty circles to full triangles). The increased electronegative values at 3250 µg mL −1 of capsaicin concentration for the ND, DW, and DN drying treatments (Fig. 6a, full circle, empty square, and empty circle) were not observed in the previous report. They are attributed to exposition of capsaicin molecules in the surface of NPs, and liberation of amino acids with negative charge on the Nernst layer of the particle. Eisele et al. 33 describe that the negative charges on the surface of BSA are generated from deprotonation of the carboxyl end of acidic amino acids (glutamate and aspartate). According to Yang et al. 18 , the reorganization in some biopolymer nanoparticles was affected by the surface properties and internal bonds (H-bonds). In this study probably the molecular interaction between some amino acids and capsaicin affects the stability and morphology of nanoparticles due to the changes in hydrophilic/lipophilic balance, thus affecting the self-assembly process and even structures and morphologies of self-assembled materials. Capsaicin has the capacity to form a stable protein−ligand complex with BSA that mostly involve hydrophobic and electrostatic interactions 34 . However, this mechanism is not capable of binding great quantities of capsaicin, so probably there exists an alternative mechanism related to drug sites I and II in the BSA molecule (commonly called hydrophobic cavities) that facilitates the incorporation of hydrophobic drugs into the structure of BSA 3 . 
The modification of drug sites I-II displays some effects such as coalescence between nanoparticles, loss of circularity, aggregation, aberrant morphology, and changes in the surface of the nanoparticles (ζ-potential). In the nanoencapsulation of hydrophilic drugs (5-fluorouracil, vinorelbine tartrate, and salicylic acid), coalescence and aggregation of particles were not observed 3,5,6. NPs dried with sucrose showed a low aggregate size (351.1 ± 30.9 nm; see Fig. 6b, full triangles); this could be attributed to sucrose inhibiting an irreversible formation of aggregates 26, because during the interaction of sucrose with the protein, the sucrose does not crystallize during vacuum drying 28. This effect probably produces minor inter-particle contact, and the insulation of surface electrostatic charges allowed a controlled reorganization of capsaicin molecules with the amino acids of the nanoparticles.

Stability of the nanoparticles after drying treatment. Electrophoresis was used to determine the molecular size and purity of proteins; moreover, it verified the homogeneity of the protein samples, as well as the number and molecular size of subunits 35. In SDS-PAGE the native BSA protein migrates in response to an electrical field through the pores of an acrylamide gel. In this experiment, a polyacrylamide gel (12% SDS-PAGE), at a constant voltage of 70 V for the stacking gel and 80 V for the resolving gel, allowed us to determine the purity of the BSA and its molecular weight (MW). The molecular weight of native BSA was found to be 66 kDa (Fig. 7a-d, lane 2) according to the MW marker (Fig. 7a-d, lane 1). The BSA-capsaicin nanoparticles did not show migration in the acrylamide gel for any of the drying treatments; therefore, the nanoparticles were found in the loading well (Fig. 7a-d). The ND treatment at 812 µg mL−1 capsaicin concentration (Fig. 7a, black arrow on lane 4) and DW at 1625 µg mL−1 (Fig. 7b, black arrow on lane 5) showed a trace of free albumin at 66 kDa, probably as a result of the disassembling of some albumin molecules caused by the drying process. A similar effect was observed by Wang et al. 36; they found that the delivery of BSA from biohybrid nanoparticles increased after the exposure of NPs to excipients with low pH. The SDS-PAGE showed an intensity dependent on incubation time.

Our results indicate that the BSA molecules showed high chemical stability within the structure of the NPs and maintained their assemblage during the drying process. In contrast, Tarhini et al. 7 found that BSA nanoparticles migrate in the acrylamide gel and showed an intense band at 66 kDa of MW; this implied that those BSA NPs showed instability and a high degree of disassembly. Since the BSA NPs are constituted by several BSA molecules, a possible symptom of disassembly of the NPs is the presence of free albumin at 66 kDa. The orientation of the capsaicin from the core towards the surface of the NPs implies the reorganization of the hydrophobic amino acids and probably their loss during this process; in order to investigate this phenomenon, the quantification of the free amino acids was carried out. Quantification of free amino acids from dried nanoparticles with different concentrations of capsaicin allowed us to establish two groups of amino acids.
Group 1 includes 10 amino acids with a relevant presence in the NPs after the ND, DW, DN, and DS treatments, while group 2 includes 8 amino acids with low presence; both groups are detailed in Table 2 (presence of amino acids and retention time). Notably, in group 1, serine (SER) was not observed in the ND treatment at any capsaicin concentration but was present for the other drying procedures, and phenylalanine (PHE) showed an irregular presence in the DW, DN, and DS treatments. Regarding group 2, the DS and DN treatments showed a higher percentage of the total amino acids present in group 2, 12.27% and 6.75%, respectively (see Fig. 8). These two treatments presented a larger number of amino acids of group 2, with 6 amino acids; in particular, DS had the lowest percentage of total amino acids of group 1 and the highest of group 2 (Fig. 8). Overall, sucrose was the excipient that showed the best post-drying properties; probably, this was due to the low loss of amino acids of group 1. The molecules of sucrose reduce the instability and dragging of those amino acids that had strong water-binding into the BSA protein 28. DS had the highest percentage of group 2 amino acids, GLY/GLN, ALA/HIS, ARG, THR, TYR, and Iso/CYS; this observation could be related to the stability of capsaicin during the reorganization into the surface of the NPs. Anand et al. 34 describe 14 possible interactions with amino acids of BSA, of which five are strong interactions (Tyr400-capsaicin, Asn401-capsaicin, Lys524-capsaicin, Phe506-capsaicin, and Phe550-capsaicin); it is possible that PHE, TYR, and LYS interact strongly with capsaicin, resulting in increased NP stability. Our results also suggest that the dynamic formation of capsaicin-amino acid (BSA)-water complexes affects the structural stability of the BSA nanoparticle and its capacity to reorganize hydrophobic drugs like capsaicin. To our knowledge, this is the first report that supports the secondary post-synthesis restructuration of nanoparticles and the interaction of amino acids with hydrophobic drugs.

Conclusions In this study, NPs of BSA were loaded with increasing concentrations of capsaicin to evaluate the effect and changes on the properties of the nanoparticles after drying with several excipients. We found that the nanoparticles dried with sucrose 1 mmol reduced their aggregation and favored the formation of isolated NPs at the highest concentration of capsaicin (3250 µg mL−1), leading to an improvement of their circular shape and reducing the elongation of the nanoparticles.

Table 2. Amino acids liberated during the drying process of BSA-capsaicin nanoparticles. a No peak was registered. b One peak was registered.

Protective excipients of BSA-capsaicin nanoparticles and vacuum drying. Nanoparticles of albumin have typically been dried without water or with deionized water. In order to compare the effect of excipients in the drying treatment versus traditional drying with deionized water and versus no drying treatment (dilution in deionized water) as a negative control, we used two protective excipients, NaCl (10 mmol, pH 9.4) and sucrose 1 mmol. The samples (1000 µL) were taken from the stock of NPs and washed by three cycles of centrifugation at 12,485×g, 10 min, at room temperature (MC-12V, DuPont, Newtown, CONN, USA), and the pellet was redispersed in deionized water during every cycle.
After the final cycle, the first sample, which was not dried, was maintained in deionized water (ND), and the second sample was resuspended in 1000 µL of deionized water.

Quantification of BSA transformed in nanoparticles and encapsulated capsaicin. The quantification of encapsulated capsaicin in NPs was done by extraction with acetonitrile as described by Sganzerla et al. 37 and modified by Sánchez-Segura et al. 14 and Sánchez-Arreguin et al. 10. The BSA nanoparticle yield was calculated from the BSA recovered after extraction of the capsaicin, while the encapsulation efficiency (EE%) was calculated from the capsaicin quantified by HPLC as described by Bhalekar et al. 38. The entire procedure is described in Supporting Information (see S1) 10,14,37,38.

Fourier transform infrared spectroscopy (FTIR). Fourier transform infrared spectra of pure capsaicin, pure BSA, and dried BSA-capsaicin NPs were determined by the method described by Sánchez-Arreguin et al. 10. These experiments are explained in Supporting Information (refer to S2) 10.

Determination of ζ-potential and hydrodynamic diameter of aggregates. The ζ-potential and hydrodynamic diameter of aggregates were determined by the method previously reported by Sánchez-Segura et al. 14 and modified by Sánchez-Arreguin et al. 10, as explained in Supporting Information (S3) 10,14,39.

Transmission electron microscopy (TEM) and morphometric analysis of nanoparticles. The morphology of the resuspended nanoparticles was examined by TEM. The experimental preparation of the samples and the operating conditions of the microscope were similar to those previously reported by Sánchez-Segura et al. 14 and Sánchez-Arreguin et al. 10 and modified by Castro-González et al. 40. The morphometric parameters were calculated with equations proposed by Syverud et al. 41, Parakhonskiy et al. 32, and Bouwman et al. 42, and modified for nanoparticle description by Sánchez-Segura et al. 14 and Sánchez-Arreguin et al. 10. The procedure is detailed in the Supporting Information (see S4) 10,14,32,40-42.

Quantification of disassembled BSA by polyacrylamide-gel electrophoresis. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) was performed using the Tris-glycine buffer system of Laemmli 43 with modifications. In addition, to estimate the protein molecular weight (MW) we used the EZ-Run pre-stained Rec protein ladder with bands in the range of 10-170 kDa (Fisher Scientific, Waltham, MA, USA). To reference the MW of native BSA, lyophilized BSA (66 kDa; Equitech-Bio, Kerrville, TX, USA) was dissolved in deionized water to a stock concentration of 500 µg mL−1. From this stock, the solution was adjusted to 20 µg per lane. Finally, the BSA-capsaicin nanoparticles after the drying procedures were dissolved and adjusted to a concentration of 20 µg per lane. The samples were resolved on 12% SDS-PAGE at a constant voltage of 70 V for stacking and 80 V for resolving. Then, gels were washed three times with deionized water for 5 min each and boiled for 1 min with CBB staining solution (0.025% Coomassie dye G-250 in 10% acetic acid). Subsequently, gels were washed again with deionized water for 5 min and clarified to visualize the polypeptide bands. Gels were captured (Documentation Systems, Bio-Rad) at 600 dpi resolution in tagged image file (.tif) format with 1580 × 1489 pixels in grey scale. In this format, 0 was assigned to black and 255 to white in the grey scale.
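Because the gels are stored as 8-bit grayscale images (0 = black, 255 = white), band intensities can be estimated by simple densitometry. The sketch below is a hedged, minimal illustration of that idea and is not the authors' analysis pipeline; the file name and lane coordinates are hypothetical.

```python
# Hedged densitometry sketch (not the authors' pipeline). Assumes an 8-bit grayscale
# .tif in which darker pixels (lower values) correspond to more Coomassie-stained protein.
import numpy as np
from PIL import Image  # pillow

def band_intensity(gel_path: str, row_slice: slice, col_slice: slice) -> float:
    """Integrated optical density of a rectangular band region (background = 255)."""
    gel = np.asarray(Image.open(gel_path).convert("L"), dtype=float)
    region = gel[row_slice, col_slice]
    return float(np.sum(255.0 - region))  # invert so darker bands give larger values

if __name__ == "__main__":
    # Hypothetical coordinates of the 66 kDa band in one lane of a hypothetical scan
    iod = band_intensity("gel_scan.tif", slice(400, 440), slice(120, 180))
    print(f"Integrated optical density: {iod:.0f}")
```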
Extraction and derivatization of free amino acids from BSA-capsaicin nanoparticles. The identification and quantification of amino acids were performed according to Bidlingmeyer et al. 44 with modifications. The NPs (powder) were washed with 1000 µL of deionized water, homogenized for 15 min, and sonicated for 10 min at 25 °C. The samples were centrifuged at 12,485×g for 6 min at room temperature, the supernatant was discarded, and the pellets were dried for 20 min. The derivatization of the samples was carried out by addition of 20 µL of a methanol/water/triethylamine (2:2:1) solution; then, the samples were dried for 30 min at 45 °C. The samples were resuspended in 20 µL of a methanol/water/triethylamine/phenyl isothiocyanate (7:1:1:1) solution and incubated for 30 min at 25 °C; subsequently, the samples were dried. Finally, they were dissolved in 200 µL of 0.1 M sodium acetate trihydrate, pH 6.5, and stored at −70 °C. The samples were resolved with a Shimadzu ultra-fast liquid chromatography (UFLC) Prominence series system (Shimadzu, Kyoto, Japan), equipped with an LC-20AD pump coupled to a DGU-20A degassing unit, an SPD-20A dual wavelength detector, a CMB-20A system controller, a SIL-20A HT auto-sampler, and a CTO-20A column oven; these modules were employed for the separation of the components. The system was controlled by LabSolution software ver. 5.87 SP1. Chromatographic separation was performed on a C18 column (Agilent Technologies Eclipse XDB-C18, 4.6 × 150 mm, 5 µm). Separation conditions were mobile phase A: sodium acetate trihydrate 0.1 M, pH 6.5; mobile phase B: acetonitrile/water (4:1); flow rate 0.9 mL min−1; column temperature 40 °C; and UV detection at 254 nm. Analytical standards were used to confirm the identity of the peaks. The calibration solution was based on a solution of the 21-amino-acid standard reference material LAA21 (Sigma-Aldrich, St Louis, MO, USA) containing l-alanine, l-arginine hydrochloride, l-asparagine, l-aspartic acid, l-cysteine hydrochloride, l-cystine, l-glutamic acid, l-glutamine, glycine, l-histidine hydrochloride, trans-4-hydroxy-l-isoleucine, l-leucine, l-lysine hydrochloride, l-methionine, l-phenylalanine, l-proline, l-serine, l-threonine, l-tryptophan, l-tyrosine, and l-valine. The standard samples were prepared by injecting 60 μL in true triplicates.
7,305.2
2021-04-06T00:00:00.000
[ "Chemistry", "Materials Science" ]
Ablation of the canonical testosterone production pathway via knockout of the steroidogenic enzyme HSD17B3 reveals a novel mechanism of testicular testosterone production

Male development, fertility, and lifelong health are all androgen-dependent. Approximately 95% of circulating testosterone is synthesized by the testis, and the final step in this canonical pathway is controlled by the activity of hydroxysteroid-dehydrogenase-17-beta-3 (HSD17B3). To determine the role of HSD17B3 in testosterone production and androgenization during male development and function we have characterized a mouse model lacking HSD17B3. The data reveal that developmental masculinization and fertility are normal in mutant males. Ablation of HSD17B3 inhibits hyperstimulation of testosterone production by hCG, although basal testosterone levels are maintained despite the absence of HSD17B3.
the adult phenotype, showing that, as in

| INTRODUCTION

Male development, fertility, and lifelong health and wellbeing are all androgen-dependent. Perturbed androgen action at any stage of life, but particularly during aging, 1 significantly impacts the quality of life, and low androgens are an independent risk factor for all-cause early death. 2 The long-established paradigm of androgen action identifies testosterone (T) as the key androgen driving male development and function. The final step in this canonical pathway is controlled by the essential activity of a single testis-specific 17-ketosteroid reductase enzyme, hydroxysteroid dehydrogenase 17 beta 3 (HSD17B3). During development in mice, HSD17B3 is expressed in testicular Sertoli cells, where it functions to produce testosterone from androstenedione derived from the fetal Leydig cells. Expression of HSD17B3 switches from Sertoli cells in fetal life to the adult Leydig cell population in prepubertal life, which then produces testosterone directly. 3,4 The importance of HSD17B3 for testicular testosterone production in humans is highlighted in differences/disorders of sex development involving mutations in HSD17B3 that reduce its function. Individuals with perturbed HSD17B3 function are under-masculinized at birth, with hypoplastic to normal internal genitalia (epididymis, vas deferens, seminal vesicles, and ejaculatory ducts), but with female external genitalia and the absence of a prostate. [5][6][7] Inadequate testosterone production by the testis reduces the availability of dihydrotestosterone in peripheral tissues, leading to under-masculinization of the external genitalia during development. During puberty, increased luteinizing hormone (LH) stimulation of testicular androgen production leads to an increase in production and secretion of androstenedione (the precursor of testosterone). A diagnostic marker of HSD17B3 deficiency is, therefore, a high androstenedione to testosterone ratio in the blood. 8 Conversion of androstenedione to testosterone/dihydrotestosterone in peripheral tissues promotes masculinization at puberty, failure of which is another diagnostic marker of HSD17B3 deficiency. The presence or absence of male internal reproductive organs and external genitalia is critically dependent upon signaling via the androgen receptor (AR) during the masculinization programming window in fetal life.
9 Stimulation of AR with androgens prior to this window does neither induce premature masculinization, 10 nor can stimulation with androgens after this window has closed, drive the establishment of male internal genitalia if AR is blocked during the window. 9 During the programming window, testicular testosterone is the androgen that drives internal masculinization and it is curious, therefore, that some individuals lacking HSD17B3 show signs of internal masculinization during fetal life. 11 This may reflect the nature of the specific mutations that reduce, but do not completely ablate, testosterone production in the testis. Alternatively, it may suggest the presence of another mechanism driving testosterone production that does not require the function of HSD17B3. To address this, we have characterized a mouse model in which HSD17B3 has been completely ablated (complete knockout model). Consistent with the range of diagnoses in humans lacking HSD17B3, knockout mice develop normal internal genitalia at birth. In adulthood, as with humans lacking HSD17B3, knockout mice also display a high androstenedione to testosterone ratio. Surprisingly, however, HSD17B3 knockout mice continue to produce testosterone, display normal spermatogenesis and are fertile, something that is widely accepted to be fundamentally dependent upon the 17-ketosteroid reductase activity of HSD17B3 and its ability to produce testosterone. Together, these data identify an as yet uncharacterized, alternative mechanism driving testosterone production in the testis that functions independently of HSD17B3, which has implications for our understanding of testosterone production in males. | Breeding of transgenic mice The knockout model Hsd17b3 tm(lacZ;neo)lex was provided by Lexicon, Taconic, and generated using classical gene targeting approaches. A selection cassette was inserted into exon 1 (chromosome 13) to generate a subsequent frameshift mutation and was undertaken on genetic background 129/ svEv-C57bl/6. Heterozygous Hsd17b3 tm(lacZ;neo)lex ; +/− (referred as Hsd17b3 +/− ) males and females were bred together to generate Hsd17b3 +/+ , Hsd17b3 +/− , and Hsd17b3 −/− animals. Animals were identified by genotyping from ear or tail DNA for the presence of Hsd17b3 and the presence of Measurement of other enzymes able to convert androstenedione to testosterone identifies HSD17B12 as a candidate enzyme capable of driving basal testosterone production in the testis. Together, these findings expand our understanding of testosterone production in males. | Trophic stimulation To examine the testicular response to acute trophic stimulation in vivo, adult animals were given a single IP injection of 20IU human chorionic gonadotrophin hCG (Pregnyl, Organon), and then, sacrificed 16 hours later. Serum was collected for measurement of testosterone levels. | Lentiviral production Lentiviral particles contained CMV-mouse Hsd17b3 and GFP (emGFP) transgenes separated with an IRES site or CMV-emGFP alone. Shuttle vectors were packaged with a third generation Lentiviral vector plasmid pseudotyped for VSV-G, concentrated to a viral titer of > 5 × 10 9 TU/mL in serum free media. 12 | Lentiviral injection Procedures were performed under anesthesia by inhalation of isoflurane and under aseptic conditions. In brief, the lower abdomen was shaved and sterilized before a small incision was made in the abdominal muscle wall. 
Testes were exposed through the incision and, under a dissection microscope, the efferent duct and areas around the rete testis were isolated from surrounding adipose tissue taking care not to rupture blood vessels and to keep the testes moist at all times with sterile saline. Up to 10 µL of the lentiviral particle suspension were introduced into the seminiferous tubules of adult (day 120) Hsd17b3 −/− males and control littermates via the rete testis (similar to techniques reported by Ref. [13]) using a glass micropipette (outer diameter: 80 µm, beveled) (Biomedical Instruments, Germany) and a microinjector (Eppendorf Femtojet; Eppendorf, Germany) at a pressure of 25 hPA. Delivery of particles was monitored by the addition of Trypan Blue dye to the viral particles (0.04%). Testes were then carefully replaced back into the abdominal cavity, taking care to ensure access to the scrotal compartment was present, and incisions were closed using sterile sutures. Mice were injected subcutaneously with 0.05 mg/kg buprenorphine (Vetergesic; Ceva Animal Health Ltd, UK) while anesthetized and allowed to recover on a heat pad while being monitored. Animals were culled (as described below) and testis recovered 7 weeks postsurgery and imaged in cold phosphatebuffered saline with a Leica MZFLIII microscope (Wetzlar, Germany) and an epifluorescent green fluorescent protein (GFP) filter. Testes were weighed and fixed separately in Bouins fixative (Clin-Tech Ltd, UK) for 6 hours before being transferred to 70% of ethanol prior to processing into paraffin wax and sectioned at 5 µm for histological analysis. | Tissue collection Animals were culled at different key points of testis development by CO 2 inhalation and subsequent cervical dislocation. Blood was obtained by cardiac puncture for hormonal profile analysis. Plasma was separated by centrifugation and stored at −80°C. Bodyweight and reproductive organ (testis and seminal vesicle) weights were recorded. Collected tissues were either fixed or frozen for RNA or protein analysis. | Quantitative RT-PCR Quantitative RT-PCR was performed as previously described 14 with minor modifications. RNA concentration was estimated using a NanoDrop One spectrophotometer (Thermo Fisher Scientific) and cDNA was prepared using the SuperScript VILO cDNA synthesis Kit (Invitrogen). cDNA quality was assessed using Universal ProbeLibrary Mouse ACTB Gene Assay (Sigma). Real-time PCR was carried out on the ABI Prism 7900HT Real-Time PCR System (Applied Biosystems) in 384-well format using TaqMan Universal PCR Master Mix (Applied Biosystems) and the Universal Probe Library (Roche). 14 Details of each assay are listed in Table 1. | Immunostaining Tissues were fixed in Bouins for 6 hours, stored in 70% of ethanol, and embedded in paraffin or in resin (see below). Sections were processed as previously described. 15 Succinctly slides were dewaxed and rehydrated prior antigen retrieval in 0.01 M citrate buffer (pH 6.0). Successive steps to reduce the endogenous peroxidases activity and to block the nonspecific activity were undertaken and followed by incubation overnight at 4°C with the primary antibodies Table 2. After washing, slides were incubated for 30 minutes at room temperature with the appropriate secondary antibody conjugated to peroxidase. Sections were incubated with fluorescein Tyramide Signal Amplification system ("TSA", Perkin Elmer) according to manufacturer's instructions. 
Sections were counterstained in Sytox Green (Molecular Probes, Life Technologies, Paisley, UK) and mounted in PermaFluor mounting medium (Thermo Scientific, UK). Slides were imaged using an LSM 710 confocal microscope and ZEN 2009 software (Carl Zeiss Ltd, Hertfordshire, UK). Control sections were incubated with no primary antibody. At least three different animals from each group were tested and processed simultaneously.

| Stereology and histology

For stereology, testes were embedded in Technovit 7100 resin, cut into 20 µm sections, and stained with Harris' hematoxylin, as previously described in Ref. [15]. The number of Leydig cells and Sertoli cells was determined by the optical disector method using an Olympus BX50 microscope fitted with a motorized stage (Prior Scientific Instruments, Cambridge, UK) and Stereologer software (Systems Planning Analysis, Alexandria, VA).

| Intratesticular steroid extraction

A fragment of frozen testis was weighed and homogenized in lysis buffer (50 mM Tris pH 7.4, 1% deoxycholate, 0.1% SDS). Steroids were extracted from the testis lysate or from plasma using diethyl ether, dried under a constant stream of nitrogen, and then resuspended in the appropriate buffer for analysis.

| Hormone analysis

Concentrations of the gonadotropins luteinizing hormone (LH) and follicle-stimulating hormone (FSH) were assessed using the Milliplex Map Pituitary Magnetic Bead Panel Kit (PPTMAG-86K; MilliporeSigma, Burlington, MA, USA; inter-assay CV <20%, intra-assay CV <15%, range 24.4-100 000 pg/mL for FSH and 4.9-20 000 pg/mL for LH) according to the manufacturer's instructions. A Bio-Plex 200 suspension array system was used to measure plasma concentrations, and data were analyzed with Bio-Plex Manager software (Bio-Rad Laboratories, Hercules, CA, USA). Our experimental intra-assay CV was <6.5% for the FSH assay and <3.4% for the LH assay. The steroids in mouse plasma and testis lysate were measured using an isotope-dilution TurboFlow liquid chromatography-tandem mass spectrometry method as previously described. 14,16 For progesterone, 17-OH-progesterone (17-OHP), androstenedione, and testosterone the limits of quantification were 0.036 nM, 0.1 pM, 0.012 nM, and 0.042 nM, respectively. The interday variation for testosterone, expressed as the relative standard deviation, was ≤5.3% and ≤4.2% for the low and high spike levels of the control materials, respectively, which were included three times each in every analytical batch. Chemical analyses were performed at the Dept. of Growth and Reproduction, Rigshospitalet, Copenhagen University Hospital.

| Image analysis

Histological slides were analyzed and photographed using a Provis microscope (Olympus Optical, London, UK) fitted with a DCS330 digital camera (Eastman Kodak, Rochester, NY). For immunofluorescence, slides were imaged using an LSM 710 confocal microscope and ZEN 2009 software (Carl Zeiss Ltd, Hertfordshire, UK). Images were compiled using Adobe Photoshop CS6 and Adobe Illustrator 2019 (Adobe Systems Inc, Mountain View, CA, USA).

| Statistical analysis

Data were analyzed using GraphPad Prism version 8 (GraphPad Software Inc, San Diego, CA, USA). Statistical analyses involved Student's t test or one- or two-way ANOVA with the appropriate post hoc tests (Tukey's multiple comparisons or Dunnett's tests). When required, data were normalized using log transformation. Values are expressed as means ± SEM; *P < .05, **P < .01, ***P < .001, ****P < .0001.
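The statistical workflow described above (one- or two-way ANOVA followed by Tukey's post hoc comparisons, with log transformation where needed) can be reproduced outside GraphPad Prism. The snippet below is a hedged Python sketch of that kind of analysis on hypothetical group data; it is not the authors' script, and the group sizes and values are invented.

```python
# Hedged sketch of the described analysis (one-way ANOVA + Tukey HSD) on hypothetical data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical log-transformed hormone measurements for three genotype groups
groups = {
    "wt":  rng.normal(1.0, 0.2, 8),
    "het": rng.normal(1.1, 0.2, 8),
    "ko":  rng.normal(1.6, 0.2, 8),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```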
| HSD17B3 is localized to Sertoli cells in fetal life and Leydig cells in adulthood Expression of HSD17B3 has been reported to occur in different cell-types depending on the stage of development of the testis. 3,17 Using immunofluorescence to localize HSD17B3 in fetal and adult testes, we confirmed that expression of this enzyme is restricted to Sertoli cells at e16.5, ( Figure 1A) (and this was further verified using a mouse model that lacks Sertoli cells, Figure 1A insert) 15 while, in adulthood, the expression is restricted to the Leydig cells ( Figure 1A). Taken together the data confirm the previously published data describing the spatiotemporal expression profile of HSD17B3. 3,17 | Disruption of exon 1 of Hsd17b3 produces a HSD17B3 loss of function model To understand the importance of HSD17B3 function in male development and fertility, we characterized a transgenic mouse model, in which exon 1 of the Hsd17b3 gene had been disrupted via insertion of a LacZ/Neo expression cassette ( Figure 1B,C). Interrogation of testicular cDNA by q-RTPCR confirmed the absence of Hsd17b3 transcripts in Hsd17b3 −/− mice ( Figure 1D) and lack of detectable HSD17B3 protein in the testis ( Figure 1E). As with cases of HSD17B3 deficiency in humans, there was a marked increase in the androstenedione to testosterone (D4/T) ratio in Hsd17b3 −/− mice under both basal and hyper-stimulation (+hCG) conditions ( Figure 1F). Together these data confirm the successful production of a mouse model lacking HSD17B3. | Blocking of the canonical testosterone production pathway via ablation of HSD17B3 function does not impact the development of the internal male genitalia To assess the impact on male development following the ablation of HSD17B3 function, we first analyzed the gross morphology of the genitalia and wider reproductive system in both neonatal (postnatal day 0: d0) and adult (d80) Hsd17b3 −/− animals. At birth, anogenital distance (AGD, a biomarker of androgen action during the masculinization programming window 9 was normal in Hsd17b3 −/− males (Figure 2A). Consistent with this, other androgen-dependent endpoints also developed normally: the initial segment of the epididymis was present 18 and Wolffian duct/epididymis coiled normally 19 in Hsd17b3 +/− and Hsd17b3 −/− animals ( Figure 2B). In adulthood, there was no difference in body weight between the genotypes at d80 ( Figure 2C) and the gross reproductive system of Hsd17b3 −/− males appeared unchanged with respect to control animals ( Figure 2D). Importantly, two key biomarkers of androgen action, testis weight (a biomarker of internal testicular androgens) ( Figure 2E) and seminal vesicle weight (a biomarker of circulating androgens) (Figure 2 F) did not differ from wild-type or heterozygous controls, suggesting that androgen signaling is not impacted at d80. However, a small reduction in AGD was observed in d80 Hsd17b3 −/− males ( Figure 2G), suggestive of a mild perturbation to androgen signaling in adulthood. 20 As no difference was observed in any endpoint between wild-type and heterozygous these groups were combined for downstream analyses and termed 'controls'. | Blocking of the canonical testosterone production pathway does not alter male fertility To determine the impact of the loss of HSD17B3 function on the adult testis, we first assessed the histology and cell composition of the testis. 
We undertook hematoxylin-eosin staining and immunofluorescence localization using specific Sertoli cell (SOX9), germ cells (DDX4), and Leydig cell (HSD3B and CYP17A1) markers at d0 and in adulthood. At both timepoints the testicular histology, cell localization, and composition of different cell types (including all stages of germ cell development) were normal in Hsd17b3 −/− animals ( Figure 3A-D and supplemental Figure S1). Similarly, numbers of Sertoli cells and Leydig cells in adulthood did not differ between control and Hsd17b3 −/− males ( Figure 3E-F). As the overall composition of the testis in Hsd17b3 −/− males was normal, we next assessed whether these males, lacking HSD17B3 function, were fertile. Upon analysis, the cauda epididymes in Hsd17b3 −/− animals were found to contain abundant spermatozoa ( Figure 3G) and when Hsd17b3 −/− males were bred, viable pregnancies resulted, with no significant difference in the number of pups born compared to matings using wild-type littermate controls as studs ( Figure 3H). These data show that overall testicular development and fertility are not altered following the loss of HSD17B3. | Blocking of the canonical testosterone production pathway promotes a functional response within Leydig cells to maintain normal basal testosterone concentrations To understand how male mice lacking HSD17B3 apparently exhibit a near-normal phenotype, we next analyzed the impact of ablation of HSD17B3 on Leydig cell function. Assessment of Leydig cell function in adult (d80) mice lacking HSD17B3 revealed a significant upregulation of Lhcgr transcripts in addition to transcripts encoding most of the steroidogenic enzymes that function in the canonical testosterone production pathway StAR, Cyp11a1, and Cyp17a1 ( Figure 4A). Lhcgr, StAR, and Cyp11a1 are known to increase expression in response to increased LH stimulation, 21 suggesting that there is significant hyperstimulation of the Leydig cell population in the Hsd17b3 −/− animals. Measurement of the circulating hormone profile in these animals confirmed this, with circulating LH significantly increased in Hsd17b3 −/− males compared to wildtype males ( Figure 4B). To investigate this further, we next determined the intratesticular concentrations of the same hormones. This revealed that, under basal conditions, progesterone, 17-OHP, and androstenedione are all significantly increased in the testis of Hsd17b3 −/− males ( Figure 4G-I), while testosterone is present in the same concentration as control littermates ( Figure 4J). Together these data suggest that HSD17B3 may act as a rate-limiting step on testosterone production in response to hyperstimulation and, in its absence, the hypothalamus-pituitary-gonadal (hpg) axis responds in a manner consistent with compensated Leydig cell failure to maintain basal testosterone production via another pathway resulting in increased circulating LH levels in the presence of normal testosterone. | HSD17B3 acts as a rate-limiting step in testicular testosterone production To address this hypothesis, we sought to rescue the steroidogenic phenotype of the Hsd17b3 −/− adult males by delivery of Hsd17b3 cDNA to the testis via lentiviral-mediated gene therapy. We first injected lentivirus into the interstitium resulting in the transduction of Leydig cells, but this also led to Leydig cell death within ten days ( Figure S2), so we chose to instead deliver the lentivirus to adult Sertoli cells (rather than Leydig cells) in wild-type and Hsd17b3 −/− males using intra-rete injection. 
12 In retrospect, this provided additional information because we were now able to determine (i) whether we could rescue the phenotype observed in Hsd17b3 −/− males and (ii) whether the testis could be induced to function as a single testosterone-producing unit in adulthood, analogous to the situation in the normal fetal testis. 3,4 To validate HSD17B3 transgene delivery and expression, cDNA constructs (GFP control or Hsd17b3) enclosed in lentiviral particles were delivered via the rete testis at d120 and the testes recovered 7 weeks later (to permit recovery from surgery and completion of an entire cycle of spermatogenesis). Transgenic expression of HSD17B3 within Sertoli cells was confirmed by immunofluorescence (Figure 5A). We then repeated the study and processed the tissue to measure intratesticular steroid hormone concentrations, to determine the impact of restoring HSD17B3 function. Delivery of GFP lentivirus (lv GFP) did not modify the intratesticular levels of steroids away from those previously observed in untreated animals (as in Figure 4H-K). However, in Hsd17b3 −/− males treated with Hsd17b3 cDNA (lv Hsd17b3), while testicular concentrations of progesterone remained unchanged, levels of 17-OHP and androstenedione were both reduced to the levels observed in control littermates (Figure 5A-B). Testosterone concentrations within the testis remained unchanged (two-way ANOVA P value = .5648) (Figure 5A,B); however, circulating LH concentrations also fell to levels consistent with control littermates (two-way ANOVA P value = .0010) (Figure 5B,C). Together these data show that restoration of HSD17B3 function is able to rescue the steroidogenic phenotype observed in Hsd17b3 −/− animals and, also, that this can be achieved via gene delivery to the Sertoli cells, indicating that the testis can be manipulated to function as a single steroidogenic unit in adulthood. This also confirms that the role of HSD17B3 is to act as a rate-limiting step in the testosterone production system in the testis and cannot be fully compensated for in this role by another enzyme. However, the mechanism underpinning basal testosterone production in the absence of HSD17B3 remains unknown.

FIGURE 4 HSD17B3 loss of function impacts Leydig cell function but not intratesticular basal testosterone concentrations. (A) Comparative testicular expression of Leydig cell steroidogenic transcripts in adults (d80) (n = 14 controls and n = 8 Hsd17b3 −/− , t test, ***P < .001, ****P < .0001). (B) Circulating LH was measured in controls vs Hsd17b3 −/− , and circulating (C) progesterone, (D) 17-OH-progesterone (17-OHP), (E) androstenedione, and (F) testosterone were measured in basal and hCG-stimulated conditions in adult (d80) Hsd17b3 −/− and control littermate animals (n = 24 (basal) and n = 11 (+hCG) controls, and n = 12 (basal) and n = 6 (+hCG) Hsd17b3 −/− ; two-way ANOVA, *P < .05, ***P < .001, ****P < .0001). Intratesticular hormones (G) progesterone, (H) 17-OH-progesterone, (I) androstenedione, and (J) testosterone in adult (d80) Hsd17b3 −/− and control littermate animals (n = 10 controls and n = 11 Hsd17b3 −/− ; t test and Mann-Whitney, ****P < .0001) under basal conditions. Controls correspond to Hsd17b3 +/+ males pooled with Hsd17b3 +/− males.

| Identification of AKR enzymes underpinning basal testosterone production

The production of testosterone in the absence of HSD17B3 suggests the presence of another aldo-keto reductase enzyme (AKR) that performs this role in the testis.
The other known HSD17Bs capable of converting androstenedione into testosterone are HSD17B1, HSD17B5, 4,17,23,24 and HSD17B12. 25 When measured by qPCR ( Figure S3, Hsd17b1 and Hsd17b5 transcript levels were below the detection threshold in the testis of control and Hsd17b3 −/− animals in adulthood. This is consistent with previous proteomic analysis of the adult mouse testis where both HSD17B1 and HSD17B5 are not detected. 26 In contrast, Hsd17b12 transcript levels were quantifiable and show a small but significant increase in expression following ablation of HSD17B3 ( Figure 6A); HSD17B12 has previously been identified as a testis-expressed protein in both mouse and human 26 and immunohistochemistry localizes HSD17B12 to Leydig cells and more weakly in the germ cells in both control and Hsd17b3 −/− testes Figure 6B. As the only known AKR enzyme expressed in the mouse testis that is able to produce testosterone, other than HSD17B3, the significance of HSD17B12 for basal testosterone production in the testis requires further investigation. | DISCUSSION To determine the role of the steroidogenic enzyme HSD17B3 in testosterone production and androgenization during male development and function, we characterized a mouse model lacking HSD17B3. These data reveal that developmental masculinization and fertility are normal in male mice lacking the enzyme widely accepted to be critical for the canonical testosterone production pathway. Ablation of Hsd17b3 induces compensation of steroidogenic function within the testis to maintain normal testosterone levels. Reintroduction of Hsd17b3 via gene-delivery to Sertoli cells in adulthood rescues this compensation, showing that, as in development, different cell-types in the testis can work together to produce testosterone. The data also confirm that HS17B3 acts as a rate-limiting step in testosterone production but does not control basal testosterone production. Interrogation of other known aldo-keto reductase (AKR) and short-chain dehydrogenase/reductase enzymes (SDR) enzymes able to produce testosterone as a product suggest HSD17B12 as a candidate enzyme driving basal testosterone production in the testis in the absence of HSD17B3. The data show that testicular androgen production is a well-protected process given its importance in masculinization and male fertility. Testosterone is essential for masculinization of the male fetus, 27,28 and HSD17B3 is well established as the critical enzyme controlling the conversion of androstenedione to testosterone in the canonical testosterone production pathway in adulthood. Previous studies in mice 3,4 have shown that, during fetal life, androstenedione produced by fetal Leydig cells is converted to testosterone by Sertoli cells. Consistent with this, HSD17B3 is expressed in Sertoli cells during fetal life and likely acts in this role. However, our data reveal that HSD17B3 is, in fact, dispensable for basal testosterone production in fetal life, as Hsd17b3 −/− mice are normally masculinized at birth. This is in contrast to humans lacking HSD17B3 function who are under-virilized at birth. 11 The explanation for this is unclear, but it does suggest the presence of another HSD17 enzyme able to support testosterone production in the mouse fetal testis. Recently, Hsd17b1 expression was localized to Sertoli cells in mice in fetal life. 29 Ablation of this enzyme leads to disruption of the seminiferous epithelium and abnormal spermatozoa in adulthood, linking it to a role in the establishment of normal male fertility. 
29 As Hsd17b1 is able to convert androstenedione to testosterone, this raises the possibility that HSD17B1 is responsible for basal androgen production in fetal life, though, as it is not expressed in the adult testis, it is unable to explain the normal testosterone concentrations observed in the Hsd17b3 −/− adult males. Phylogenetic analysis of the hydroxysteroid family in humans highlights the clustering of HSD17B3 and HSD17B12 as well as HSD17B1 and HSD17B7. 30 While the majority of the HSD17Bs share ~20% of amino acid sequence homology, HSD17B3 and HSD17B12 share 40% similarity, 31 supporting their overlapping activities.

FIGURE 5 HSD17B3 acts as a rate-limiting step in testicular testosterone production. (A) Immunolocalization of HSD17B3 following the intra-rete injection of control lentiviral particles (+lv GFP) and Hsd17b3 lentiviral particles (+lv Hsd17b3) (bar: 100 µm). Intratesticular hormones following intra-rete injection of control lentiviral particles (+lv GFP) and Hsd17b3 lentiviral particles (+lv Hsd17b3), which direct Hsd17b3 expression into Sertoli cells: (B) progesterone, 17-OH-progesterone, androstenedione, and testosterone in adult (d80) Hsd17b3 −/− and control littermates (n = 10 controls and n = 11 Hsd17b3 −/− ; two-way ANOVA, *P < .05, **P < .01, ***P < .001); (C) circulating LH levels (from tail vein) reduce following the reintroduction of Hsd17b3 (n = 4-7 controls and n = 6 Hsd17b3 −/− ; two-way ANOVA, *P < .05, **P < .01, ***P < .001). Controls correspond to Hsd17b3 +/+ males pooled with Hsd17b3 +/− males.

Further aligning with this, while ablation of Hsd17b12 function is embryonic-lethal in mice, heterozygous Hsd17b12 males display reduced levels of androgens. 32,33 Our data also highlight Hsd17b12 as a possible candidate for basal testosterone production in the fetal and/or adult testis, as it is expressed in the testis throughout life, and transcript levels are significantly increased in Hsd17b3 −/− adult males. HSD17B12 in humans is present in Leydig cells and Sertoli cells 34 and in the "Human Protein Atlas" (www.proteinatlas.org) 35 and has been identified in the proteome of the whole adult mouse testis 26,36; our own immunohistochemical analysis localizes HSD17B12 to Leydig cells and faintly in the germ cells in control and Hsd17b3 −/− testes. However, while HSD17B12 is able to convert androstenedione to testosterone in mice, 37 its steroidogenic activity is largely restricted to the reduction of estrone to estradiol in humans, with low levels of androstenedione reduction. 25,31 Overall, such species differences could explain why we observe normal masculinization of Hsd17b3 −/− mice, while reproductive tissues are impacted in humans lacking HSD17B3. Whether this is the case, or indeed whether this explains the apparent dispensability of Hsd17b3 for basal testosterone production, requires further investigation. A further explanation could be that, in humans, masculinization requires that both the canonical and alternative (backdoor) androgen pathways are intact, and the alternative pathway may also be dependent on a functional HSD17B3. 38,39 This alternative pathway does not appear to be of importance in fetal masculinization in the mouse. 40 HSD17B3 is a key enzyme in the biosynthesis of testosterone, and models where the androgen signaling pathway is impacted (eg, Tfm mice, which lack Leydig cell-specific androgen receptor) have highlighted the importance of androgens for Leydig cell development.
41,42 While the number of Leydig cells in adult is normal, Leydig cell function is perturbed. Androgen production from puberty is under the regulation of the hypothalamic-pituitary-gonad axis and stimulation of LH induces the upregulation of its receptor and other crucial steroidogenic markers. 21 This upregulation can be seen in Hsd17b3 −/− males with significantly higher transcript levels of Lhcgr, StAR, Cyp11a1, and Cyp17a1. Increased LH and steroid levels (circulating and intratesticular) and the unresponsiveness to further hCG stimulation in the Hsd17b3 −/− males is an indicator of possible compensated Leydig cell failure. 22 The maturity stage of the Leydig cells has been shown to impact the responsiveness of the cell to stimulation. However, the increase of both circulating LH and testosterone would suggest the absence of proper feedback regulation in the hypothalamo-pituitary-gonad axis. Literature shows that the programming actions of testosterone on the brain can be mediated by androgenic or estrogen action due to aromatization of testosterone. A sex-dependent regulation has also been suggested wherein androgens in males tend to inhibit the fetal neonatal gonadotrophin releasing hormone (GnRH) input compared to females and that daily treatment with GnRH agonist can impact the normal developmental changes in Leydig cell function. [43][44][45] Kisspeptin signaling is a component of the neuroendocrine regulation of the reproductive system and a role for kisspeptin in mediating the negative feedback effects of gonadal steroids on GnRH secretion in both the male and female via the estrogen and androgen receptor has been described. 46,47 We speculate that in our Hsd17b3 −/− model, the action of the high circulating androgens, regardless of aromatization, impact the kisspeptin neurons and GnRH secretion consequently altering the negative feedback, however, this requires further investigation. While this data show that HSD17B3 is not strictly necessary for testosterone production or fertility in the mouse, the enzyme is necessary for optimal testosterone production above basal levels. In fetal life, the conversion of androstenedione, produced by fetal Leydig cells, to testosterone is undertaken by Sertoli cells. 3,42 In adulthood, in contrast, Leydig cells carry out the full canonical testosterone biosynthetic pathway while Sertoli cells act to maintain Leydig cell viability and function. 3,4,15,48 The reintroduction of Hsd17b3 in Sertoli cells in Hsd17b3 −/− males was able to partially rescue the endocrine defect in Hsd17b3 −/− males, indicating that the somatic cells of the adult testis can act cooperatively to produce testosterone. In conclusion, this study expands our knowledge of testosterone production in the mouse which has implication for our wider understanding of masculinization and male fertility. In addition, the ability to re-express Hsd17b3 in Sertoli cells and manipulate this cell type to produce testosterone suggests that Sertoli cells could be engineered to produce androgens and could form the basis of future therapy to combat the natural decline in androgens during aging.
7,100.8
0001-01-01T00:00:00.000
[ "Biology" ]
A Predictive Approach towards Using PC-SAFT for Modeling the Properties of Shale Oil Equations of state are powerful tools for modeling thermophysical properties; however, so far, these have not been developed for shale oil due to a lack of experimental data. Recently, new experimental data were published on the properties of Kukersite shale oil, and here we present a method for modeling the properties of the gasoline fraction of shale oil using the PC-SAFT equation of state. First, using measured property data, correlations were developed to estimate the composition of narrow-boiling-range Kukersite shale gasoline samples based on the boiling point and density. These correlations, along with several PC-SAFT equations of state for various classes of compounds, were used to predict the PC-SAFT parameters of the aromatic compounds present in this oxygen-containing unconventional oil for fractions with average boiling points up to 180 °C. The developed PC-SAFT equations of state were applied to calculate the temperature-dependent properties (vapor pressure and density) of shale gasoline. The root mean square percentage error of the residuals was 13.2%. The average absolute relative deviation percentages for all vapor pressure and density data were 16.9 and 1.6%, respectively. The utility of this model was shown by predicting the vapor pressure of various portions of the shale gasoline. The validity of this model could be assessed for oil fractions from different deposits. However, the procedure used here to model shale oil gasoline could also be used as an example to derive and develop similar models for oil samples with different origins. Introduction Models to predict the thermodynamic and transport properties of compounds are of interest to many chemical, oil, and related industries. Models to estimate the phase behavior of fluids in the system are used to design chemical processes and equipment, improve separation processes and product quality, and assess the environmental risks that are inevitably associated with these processes. Therefore, the demand for the use of equations and models applicable to complex mixtures of hydrocarbons has increased considerably [1]. While several predictive correlations for thermodynamic properties of complex mixtures such as oils have been suggested, these models were mainly developed for oils containing small concentrations of heteroatoms with aliphatic and aromatic structures. Therefore, these correlations are not particularly applicable to oils with different structures and compositions. The composition of unconventional oils is different from that of petroleum and varies depending on the source of the oil. As a result, the properties are also different. Shale oil is one such unconventional oil. It is produced by thermally processing organic-rich rocks (oil shale). The conversion technique used for oil shale has been known for a century, and is viewed as the optimal and most efficient thermochemical process to convert shale rock into oil. The advantages of this process have also been extended to the conversion of biomass into biofuel, both economically and environmentally [2,3]. Although some basic property prediction correlations developed for petroleum could also be used for shale oil [4], in general, correlations and models proposed for petroleum do not necessarily lead to accurate results for shale oil [5][6][7][8]. 
Therefore, these models are usually not suitable for shale oil, and more specifically for Kukersite shale oil, due to differences in composition [9]. Information about the properties of shale oil is limited, and thus, from an engineering point of view, developing such prediction models would be beneficial. Oils are generally complex mixtures with a large number of different compounds. It is not currently feasible to identify all the compounds and their concentrations in the oil; therefore, simplification for modeling is required. This is generally accomplished by lumping compounds together into groups or classes, which are termed pseudocomponents. Once the pseudocomponent is defined, the properties of all the compounds in the pseudocomponent are described using the average properties of the whole group. The oil as a whole is then modeled as a mixture of these pseudocomponents [10]. One of the simplest methods to define pseudocomponents is to split oil into fractions with narrower boiling points, often through distillation, and measure or estimate the properties of those fractions. These methods are sometimes called bulk property methods, and they do not require any information about the composition of the oil. Because it is a labor-intensive process to measure a full set of properties for many fractions, correlations have been developed for petroleum to estimate a variety of properties from a smaller set of experimental data that are commonly measured for an oil. Generally, only the distillation curve and a second property, such as the density, viscosity, or refractive index, need to be measured to be able to model an oil using these methods [10,11]. A more complex method is to analyze the composition of the oil and then use the data to define the pseudocomponents based on the molecular structure. This is commonly performed by splitting the oil into classes of molecules, for instance, using PNA (paraffins, naphthenes, aromatics) or SARA (saturates, aromatics, resins, asphaltenes) analysis. These types of composition analysis refer to characterization methods used to quantitatively determine the amount of each class of compound in an oil. These classes are then further divided by the size of the molecules, i.e., average molar mass or the average number of carbon atoms [10]. The properties of these pseudocomponents can then be estimated based on existing data for pure compounds with the same type of structure. For instance, the properties of paraffin pseudocomponents can be calculated from the properties of pure n-alkanes. For petroleum, there are even correlations that allow the composition of oil to be calculated based simply on the measured properties of an oil and its fractions [10]. Modern analytical techniques, such as gas chromatography and mass spectrometry, can provide even more detailed data, allowing the pseudocomponents to be defined closer to the level of individual compounds [12]. However, these analytical techniques are expensive and time-consuming, so in industry, the simpler characterization schemes are usually used [11]. If possible, the goal is often to model the pseudocomponents and the oil as a whole using an equation of state. Equations of state allow many of the properties of a mixture to be modeled over a wide range of temperatures and pressures. Correlations for predicting the equation of state parameters for petroleum pseudocomponents have been developed with this goal in mind [10,13]. In the past, cubic equations of state have often been used. 
However, cubic equations of state require values for the critical properties of a pseudocomponent, and these properties can be difficult to measure or estimate accurately [10]. In the last two decades, it has become more common to use equations of state based on statistical associating fluid theory (SAFT), especially perturbed-chain statistical associating fluid theory (PC-SAFT), for modeling oils, including bio-oils [1,[13][14][15][16][17][18]. Moreover, several studies indicate the importance and applicability of these equations in the energy industry [19][20][21]. SAFT models do not require critical property values and they can more accurately calculate the liquid density of a system, which means that density data can also be used in fitting PC-SAFT parameters [1]. Indeed, some systems have been modeled using only density data for parameter fitting [22]. Modeling shale oil has received relatively little attention. As previously stated, because shale oil has a much different composition than petroleum, the validity of correlations and models developed for petroleum is predominantly questionable for shale oil. Based on our literature survey, the only correlations that attempt to provide some sort of systematic modeling framework for shale oil were published in 1930 by Kogerman and Kõll [23], in 1934 by Luts [24], and in 1951 by Kollerov [25]. All three of these publications rely in large part on the experimental data from Kogerman and Kõll [23], which was from only a single distillation of oil from an experimental generator retort. These studies focused mainly on shale oil from Estonian Kukersite oil shale, and shale oils from other deposits have received even less attention. One of the main obstacles to developing correlations for shale oil has been the lack of data [26]. The new data that have been measured now provide an opportunity to perform this modeling work [27]. Here, we provide correlations that allow gasoline fractions of Kukersite shale oil to be defined in terms of pseudocomponents, and then we present equations for calculating the PC-SAFT parameters of these pseudocomponents. Note that, based on the type of oil shale and the pyrolysis method used, the properties of shale oils can change greatly [28][29][30][31][32]. Therefore, the models developed for Kukersite gasoline shale oil probably cannot be directly used for modeling shale oils from other deposits. However, the approach developed for this purpose could be generally applied to model other shale oils. Industrial Samples Several wide Kukersite shale oil gasoline fractions with a boiling range from about 40 to about 200 °C were obtained from Eesti Energia's Oil Plant (Narva, Estonia). The technology developed for processing oil shale in this plant was described by Neshumayev et al. [33]. These wide fractions were then separated into narrow-boiling-range fractions using different distillation methods. The sample preparation and distillation methods were previously described by Järvik et al. [34]. These fractions were received from the plant at different times in the hopes of capturing the natural range of variation in the composition of shale gasoline. Over a two-year period, six different distillations were performed to fractionate the gasoline samples received. These differences in composition and distillation method should help ensure that the model works for a variety of Kukersite gasoline samples. 
Compositional Analysis Generally, the properties of oil fractions depend on the chemical composition of these fractions. Obtaining a relationship between chemical composition and the properties of gasoline fractions could allow the composition of a sample to be predicted from basic physical properties. This is important because PC-SAFT models generally require information about the composition. Several investigations were previously carried out on the composition and characteristics of Kukersite shale oil [35][36][37][38][39][40]. In these works, the chemical composition of Kukersite shale gasoline was provided for different retorts. Having considered these data, the compositions (mass%) of the gasoline fractions used in this work were estimated, mainly based on the detailed data that are partially available in the study published by Gubergrits et al. [34,40]. In that study, the same technology (the Galoter process) was used to obtain shale oil. In Estonia, this technology was developed later than other processes and it is currently the main technology used to produce shale oil. For the gasoline fractions studied in this work, different properties such as the hydrogen-carbon ratio, infrared spectra, and hydroxyl group content were measured. The behavior of these properties was previously discussed [34]. Almost all the measurements were repeated at least once, and if a large difference was observed, additional measurements were carried out for better reliability. Based on these properties, reasonable assumptions about the composition were made so that the changes in the main classes of compounds with average boiling points up to 180 °C were carefully defined. In addition to aromatics, olefins, and paraffins, the rest of the compounds were assumed to be oxygen-containing compounds. Based on the experimental data for gasoline fractions distilled below 180 °C, the concentration of phenolic compounds is quite low; therefore, phenolic compounds were disregarded for further analysis and modeling. Consequently, for developing the model, four different classes of compounds (olefins, paraffins, aromatics, and oxygen-containing compounds) were considered in total. The oxygen-containing compounds are mostly ketones, aldehydes, and ethers [41]. The change in composition of each class of compound was estimated for different average boiling temperatures. From 40 °C to 180 °C, the amount of each class of compound (i.e., olefins, paraffins, aromatics, and neutral oxygen compounds) was carefully estimated so that these changes were consistent with the FTIR and elemental composition analysis of the studied gasoline fractions. For estimation of the amount of each class of compound, the chemical group composition data provided by Luik [42] was also considered. It should be noted that the compositions provided in this work are just estimates and the exact composition is unknown, because such detailed data have not been measured before. Although the objective of the present study was to develop a model for Kukersite shale oil gasoline with average boiling points up to 180 °C, the composition was estimated up to 500 °C to facilitate future studies where the modeling of all Kukersite shale oil fractions is of interest. This helps to deliver a systematic and coherent path to extend the study. However, in this study, we focused solely on the shale gasoline fraction because additional variables and adjustments are required to model the phenols in the heavier portion of the oil. 
In other words, the analysis performed in this study is the first step in a multistep process. It was noticed that numerous fractions have similar boiling points but differ in other properties, such as density. Because oil samples with the same boiling range can still have different compositions, density was also included, along with the boiling point, as a second parameter to better model the variations of composition that occur in shale oil. Therefore, the composition of each class of compound was estimated such that these variations were also taken into account. For instance, considering fractions with similar boiling points, fractions with lower densities are expected to have more olefins and paraffins and fewer aromatic compounds in their composition. To incorporate density into the correlations for composition, first, the densities of all fractions were plotted versus their average boiling point, and this trend was used to calculate the average density at a given boiling point. Compositions were also estimated for fractions with similar boiling points and higher or lower densities. The result was a dataset of fractions with different boiling points and densities along with estimates of their compositions. The estimated mass percent of each class of compound with respect to average boiling point and density was then fitted using the differential evolution optimizer in the Scipy package for Python [43,44]. All experimental data are available in [27], and the basic correlation considered to fit the variables is given by Equation (1), in which X is the mass percent of the class of compound, T b is the average boiling point (°C), ρ is the density (kg m −3 ), and C 0 -C 5 are constants obtained from the fitted data. The concentration of neutral oxygen compounds was calculated by difference. Therefore, separate coefficients were not obtained for neutral oxygen compounds. Shale Oil Modeling The shale oil gasoline was modeled using the PC-SAFT equation of state. The PC-SAFT equation of state was thoroughly described by Gross and Sadowski [1]. This equation is used to predict the thermodynamic behavior of pure and multicomponent systems. For the PC-SAFT equation, the main parameters characterizing a fluid are the segment number (m), segment diameter (σ), and segment energy (ε/k). For neutral oxygen compounds, we also included a polar term, which depends on the dipole moment. The dipole moment for many ketones and aldehydes is 2.7 [45], so this value was used in the model. The parameters for the aromatic class of compounds were determined by fitting the PC-SAFT equation of state to the measured physical property data, i.e., the liquid density and the average normal boiling point of the fractions. For the analysis of experimental data and for model development, Python (version 3.9) was used. For the purpose of developing a model, it was shown by Gross and Sadowski [1] that Equations (2)-(4) are suitable for correlating parameters for pure compounds with varying molar masses: Equation (2) gives the segment diameter (σ i ) as a function of the molecular weight (M i ), Equation (3) gives the chain length to molecular weight ratio (m i /M i ), and Equation (4) gives the dispersion energy parameter (ε i /k). Here, M CH4 is the molecular weight of methane (M CH4 = 16.043 g mol −1 ) and q jk are constants to be fitted to pure component parameters (i refers to component i). 
For the n-alkane series, these constants were previously published by Gross and Sadowski [1], and these suggested relations were used as a model for paraffins in shale oil. Moreover, Ghosh et al. [46] proposed correlations obtained by fitting the homologous series of 1-alkenes; therefore, these relations were also used for olefin compounds in Kukersite shale gasoline fractions. Correlations for the neutral oxygen compounds (ketones) were obtained from linear regression between the PC-SAFT parameters of several pure compounds and their molecular weights, i.e., by finding the line of best fit. Data for neutral oxygen compounds were obtained from the work published by Kleiner and Sadowski [45]. The form of the equation for neutral oxygen compounds and aromatic compounds differed from that of the other compounds, for which Equations (2)-(4) were used. The suggested equation form for oxygen-containing compounds is exemplified by ε/k = C 10 MW + C 11 (5), where C 6 to C 11 are coefficients obtained from the fit and MW is the molecular weight of the pure compounds used (g mol −1 ). For aromatic compounds, there are numerous correlations suggested in the literature for pure compounds and petroleum cuts. However, existing correlations yielded poor results when tested for these shale gasoline fractions. This could be expected because aromatic compounds in Kukersite shale oil might not be similar to the pure compounds used for the literature correlations. Therefore, although an equation form from the literature was used, the coefficients were optimized to give better results for Kukersite shale oil. The following relations (Equations (6)-(8)) for pure-component aromatic compounds were taken from the work published by Gonzalez et al. [47]: m aromatic = q 01 MW + q 11 (6), σ aromatic = q 02 MW + q 12 m aromatic (7), ε aromatic = q 03 log(MW) + q 13 (8). The coefficients of the correlations for aromatic compounds were fit to experimental data for shale gasoline fractions. The scheme in Figure 1 summarizes the full process to model shale oil gasoline fractions, including the development of correlations for predicting the composition of shale gasoline samples, which were used in developing the PC-SAFT model. The estimated hydrogen-carbon ratio was compared with the actual hydrogen-carbon ratio, and the average absolute deviation was found to be 1.6%. Table 1 shows the coefficients for olefins, paraffins, aromatics, and neutral oxygen compounds. These coefficients were used to predict the composition of narrow-boiling-range fractions. Coefficients C 0 -C 5 for Equation (1) were regressed for olefin, paraffin, and aromatic compounds. Then, the content of neutral oxygen compounds was calculated by subtracting the content of the other compounds from the total. Using experimental vapor pressure and density data for gasoline fractions up to 180 °C, correlation constants were optimized for predicting the PC-SAFT parameters of aromatic compounds in Kukersite shale gasoline. These constants are shown in Table 2 along with the coefficients from the literature for paraffins and olefins. For most of the fractions, densities were measured at different temperatures, while vapor pressure was only used at the normal boiling point. Coefficients C 6 to C 11 for oxygen-containing compounds were found from Equation (5), in which PC-SAFT parameters were linearly fit to the molecular weight of several ketones and aldehydes. These coefficients are given in Table 3. 
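To make the fitting workflow more concrete, the sketch below illustrates, under stated assumptions, how a composition correlation of the kind described by Equation (1) could be fitted with SciPy's differential evolution optimizer, and how molecular-weight correlations of the kind of Equations (5)-(8) could be evaluated. The polynomial form assumed for Equation (1), the data points, and every coefficient value are hypothetical placeholders for illustration only; they are not the fitted constants of Tables 1-3.

# Illustrative sketch (not the authors' code): fitting an Equation (1)-style composition
# correlation and evaluating molecular-weight correlations for PC-SAFT parameters.
# The functional form, data, and all coefficients below are hypothetical placeholders.
import numpy as np
from scipy.optimize import differential_evolution

# Assumed form: X = C0 + C1*Tb + C2*Tb**2 + C3*rho + C4*rho**2 + C5*Tb*rho
def class_mass_percent(coeffs, tb, rho):
    c0, c1, c2, c3, c4, c5 = coeffs
    return c0 + c1 * tb + c2 * tb**2 + c3 * rho + c4 * rho**2 + c5 * tb * rho

# Hypothetical "estimated composition" data: average boiling point (degC), density
# (kg/m3), and estimated aromatics content (mass %) of narrow-boiling fractions.
tb_data = np.array([40.0, 60.0, 90.0, 120.0, 150.0, 180.0])
rho_data = np.array([700.0, 720.0, 750.0, 780.0, 800.0, 820.0])
x_data = np.array([3.0, 5.0, 9.0, 14.0, 20.0, 27.0])

def objective(coeffs):
    # Sum of squared residuals between correlated and estimated mass percentages.
    resid = class_mass_percent(coeffs, tb_data, rho_data) - x_data
    return float(np.sum(resid**2))

bounds = [(-100, 100), (-1, 1), (-1e-2, 1e-2), (-1, 1), (-1e-3, 1e-3), (-1e-3, 1e-3)]
result = differential_evolution(objective, bounds, seed=1, tol=1e-8)
print("fitted C0..C5:", result.x)

# Molecular-weight correlations for class PC-SAFT parameters, with placeholder
# coefficients only (Equation (5)-style linear form and Equations (6)-(8)-style forms).
def eps_k_oxygen(mw, c10=8.0, c11=150.0):
    return c10 * mw + c11                      # dispersion energy eps/k, in K

def m_aromatic(mw, q01=0.02, q11=1.0):
    return q01 * mw + q11                      # segment number (dimensionless)

def eps_aromatic(mw, q03=60.0, q13=10.0):
    return q03 * np.log10(mw) + q13            # dispersion energy, in K (log10 assumed)

for mw in (100.0, 120.0, 140.0):
    print(mw, m_aromatic(mw), eps_aromatic(mw), eps_k_oxygen(mw))

In practice, the same kind of objective would be evaluated simultaneously for the olefin, paraffin, and aromatic classes, with the neutral oxygen content obtained by difference, as described above.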
Results and Discussion
The root mean square percent error (RMSE) for all compounds was found to be 13.2%. This is a reasonable accuracy for a model for oil samples, especially since the wide fractions were taken at different times from the oil plant; therefore, the properties of these fractions varied. Additionally, different distillation types were used to obtain narrow-boiling samples from these wide fractions. These differences ensure that a wide variety of samples and properties were used, and thus that the model developed for shale gasoline fractions could be used for a broader range of samples. The error for the three-parameter equation of state can be considered reasonable for these types of oils, considering the complexity of these mixtures and the lack of data on the detailed composition of shale oils. If the results of the prediction model for different properties are considered separately, then the RMSE for density was much lower than that for vapor pressure. Figures 2 and 3 illustrate the error percent for the vapor pressure and density of all gasoline fractions calculated using the PC-SAFT equation of state. In these figures, the x-axis indicates the normal boiling points (nBP) of the gasoline fractions. The smallest errors are mostly for calculated density values. While many errors for individual data points are below 10%, there are several data points that show higher deviations. These outliers comprise 13% of the total data points used for modeling. All these data points with large errors are vapor pressures, and they correspond to samples with normal boiling points below about 100 °C. For these lower boiling points, one factor contributing to the larger relative errors is that the absolute value of the boiling point is smaller. Overall, larger deviation was seen for lower-boiling fractions below 100 °C (Figure 2). The average absolute relative deviation percentage (AARD%) between model values and experimental values was calculated using Equation (9): AARD% = (100/n) Σ |(x calc − x exp )/x exp |. In Equation (9), x calc is the property value (vapor pressure or liquid density) calculated using the model, x exp is the measured (experimental) value, and n is the total number of data points. The AARD% for all vapor pressure data was 16.9%, and this deviation reduced to 11.6% for fractions with average boiling points above 100 °C. Additionally, for density prediction, the average absolute deviation was 1.6%, which indicated considerable reliability of the model. Furthermore, for comparison with the model, several gasoline fractions were analyzed, and the vapor pressure curves of these fractions were plotted and compared with the calculated curves in Figure 5. Some characteristic properties of these fractions were as follows: fraction 1 (T b = 395 K, ρ = 792.2 kg m −3 , MW = 112.4 g mol −1 ), fraction 2 (T b = 420 K, ρ = 809.6 kg m −3 , MW = 122 g mol −1 ), fraction 3 (T b = 425 K, ρ = 818.0 kg m −3 , MW = 126 g mol −1 ). The vapor pressures of these fractions were obtained and compared from about 343 K to 383 K. The expanded uncertainty of the vapor pressure measurements at the 95% confidence level (k = 2) was found to be 1.5 kPa. Within the experimental temperature range, the largest AARD% for fraction 1 was 6.8%; however, the corresponding absolute deviation was 3.4 kPa. This could be expected due to the low vapor pressure of this fraction. Of all the experimental values for fraction 1, the largest absolute deviation was 3.7 kPa. For fractions 2 and 3, the AARD% values were 4.8 and 15.8%, respectively. 
However, despite the larger AARD% for fraction 3, the average absolute deviation was 3.1 kPa. In general, the developed model showed favorable results when modeling shale oil gasoline fractions. With this analysis as a basis, modeling could be further extended in the future to shale oil fractions with normal boiling points above 180 °C [29,34]. Conclusions In this work, we presented a PC-SAFT equation of state model developed for predicting the properties of the 35 gasoline fractions. The model uses the normal boiling point and the density at 20 °C of a fraction as input parameters. These input parameters were used to estimate the composition of the fraction and consequently to calculate the temperature dependence of the vapor pressure and density of shale oil samples. Based on literature data, shale gasoline fractions were assumed to contain four main classes of compounds: olefins, paraffins, aromatics, and oxygen-containing compounds. For estimating the composition, simple polynomial equations were developed using available literature data for the composition of Kukersite shale oil gasoline. Ready-to-use correlations from the available literature for olefins and paraffins, along with linear equations obtained for oxygen-containing compounds, were used in developing the corresponding correlations for aromatic compounds. The resulting model can be used as a property prediction model for shale gasoline samples with normal boiling points below 180 °C. The suitability of these prediction models for Kukersite shale gasoline was evaluated, and the root mean square percent error was 13.2%. Although good results were obtained for Kukersite shale oil, due to the difference in composition between different shale oils, the applicability of the model could be further assessed once the composition of the main classes of compounds of other shale oils is analyzed. Author Contributions: All authors contributed equally to this article. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Conflicts of Interest: There are no financial competing interests to report.
5,793.2
2022-06-01T00:00:00.000
[ "Chemistry" ]
Multifunctional auxetic and honeycomb composites made of 3D woven carbon fibre preforms Three-dimensional (3D) woven composites have started to find applications in various industrial sectors, mainly in aerospace and with potential in automotive. 3D-woven fabrics can be architected to form complex and near-net-shape preforms ready for automated composites manufacturing. The 3D-woven honeycomb fabric is designed to include additional functionality in finished composites, such as positive and negative Poisson's ratios. In this study, complex honeycomb architectures were created using various weave designs to demonstrate the effects of auxetic behaviours when manufactured into a composite structure. A Staubli 3D-weaving system equipped with a Jacquard UNIVAL 100 and a creel of 3072 6 k carbon fibre tows was used to weave the designed honeycomb architecture. With the aid of hard polyester foam inserts, the 3D-woven fabrics were converted to honeycomb and auxetic preforms. These preforms were infused using epoxy resin to manufacture a set of honeycomb and auxetic composite structures. In comparison with the baseline honeycomb structure, it is proven that the developed auxetic composites exhibited negative Poisson's ratios of −2.86 and −0.12 in the tensile and compression tests, respectively. Multifunctional 3D woven composites have the ability to absorb energy through progressive failure, whilst maintaining gradual load profile decay beyond the failure onset 1,2 . Consequently, they are of great interest for situations where the ability to withstand crash or impact loading is a design requirement. 3D woven composites are starting to find applications in various sectors, particularly aerospace and automotive. Several OEMs and Tier 1 manufacturers are actively investigating these structures. In aerospace, 3D woven structures are already used in fan blades and fan casings. Development is at an early stage and there are many opportunities for improving impact performance and optimising the weight of the structure. It is important that crash structures used in vehicles like cars, buses and trains are accurately predictable and that the manufacturing is repeatable. There is also an opportunity to use 3D weaving to add additional functionality to composites. 3D weaving is a specialist activity and there are very few centres capable of conducting the research needed. Textile manufacturers such as DORNIER and STAUBLI manufacture 3D weaving machines, but 3D woven fabrics for composites applications are currently in their infancy. In the UK, companies such as Sigmatex UK Ltd, M Wrights & Sons and Antich & Sons have developed internal capabilities to utilise 3D weaving, but more R&D is required to deploy such technology throughout the supply chain. Recently the University of Sheffield AMRC established 3D weaving capabilities which will be used to bridge the gap and support industry. 3D woven preforms have the capability to demonstrate multifunctionality in the manufacture of advanced composites. One such multifunctional capability is auxetic behaviour, which needs to be investigated and demonstrated to industry. This could be in the form of expandable honeycomb-type structures 3 , which could be woven and tested to demonstrate capability and potentially improved mechanical performance with high damage tolerance under crash, compression and impact loading. 
Figure 1 illustrates how the auxetic structure differs from the conventional honeycomb structure in terms of its geometry; i.e., an auxetic material exposed to tension would increase in dimension in the direction lateral to the applied tensile force. An auxetic structure has several advantages in a crash situation, for example good energy absorption; however, the repeatable manufacture of an auxetic structure with predictable behaviour needs further work 4 . Poisson's ratio is the negative of the ratio of the strain normal to the applied load to the extension strain (or axial strain) in the direction of the applied load. The Poisson's ratio (ν) of a standard material can be expressed as ν = −ε t /ε l = −(∆T/T o )/(∆L/L o ), where ε t = transverse strain, ε l = longitudinal or axial strain, ∆L = change in length, L o = initial length, ∆T = change in width and T o = initial width. Most conventional materials show a positive Poisson's ratio (PPR) under tensile loads because they exhibit positive longitudinal and negative transverse strains, but smart materials like auxetics behave oppositely and show a negative Poisson's ratio (NPR). It is known that conventional materials such as rubber and metals laterally contract when stretched and laterally expand when compressed in the longitudinal direction; such materials have a PPR. In contrast, there are some special materials which possess an NPR and which laterally expand when stretched or laterally shrink when compressed in the longitudinal direction. The materials with NPR are also called 'auxetics', which originated from the Greek word 'auxetos' meaning 'that which may be increased' 5 . Auxetics can be materials and/or structures; they have been investigated in the literature from different perspectives, such as developing materials and structures, comparing behaviours and testing performance. In comparison with conventional materials, auxetic structures have many improved properties. They have a higher shear modulus, hence better shear resistance. Auxetic materials have enhanced indentation/impact resistance and energy absorbance properties. When a conventional material is subjected to an impact force, the material moves away from the impact point; exhibiting the opposite behaviour, an auxetic material flows towards the impact point, which makes auxetic materials harder to indent. They also have other advantages, such as enhanced fracture toughness, improved crack growth resistance and higher damping resistance. Due to these advantages, auxetic composite structures could find suitable applications in high value manufacturing, such as the aerospace and automotive sectors. The disadvantage of auxetic composites is that they may be difficult to manufacture on a large scale 5 , but such difficulty has been challenged in this work. Many studies have been conducted to develop and investigate new auxetic structures and materials based on different material scales. The examples include auxetic fibers 6,7 , auxetic fabrics 8,9 , auxetic foams 10,11 , and auxetic composites 12,13 . Auxetic woven-composite structures are investigated in this project. Zhou et al. 14 developed auxetic composites made of 3D orthogonal woven textile and polyurethane foam. They showed that the auxetic composites exhibited NPR and behaved more like a damping material with lower compression stress, while the non-auxetic composites behaved more like a stiffer material with higher compression stress. 
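As a small illustration of the Poisson's ratio definition given above, the following Python sketch (ours, not part of the original study) computes ν from the initial dimensions and the measured changes in length and width. All numerical inputs are hypothetical placeholders rather than measured values from this work.

# Minimal sketch of the Poisson's ratio calculation used for tensile and compression
# testing. Input values are illustrative only, not measured data from the study.
def poissons_ratio(l0, dl, t0, dt):
    """nu = -(transverse strain) / (longitudinal strain)."""
    eps_l = dl / l0   # longitudinal (axial) strain
    eps_t = dt / t0   # transverse strain
    return -eps_t / eps_l

# Hypothetical tensile test on an auxetic sample: it lengthens and its width grows.
nu_tensile = poissons_ratio(l0=170.0, dl=5.0, t0=710.0, dt=2.0)

# Hypothetical compression test: the length shortens and the width also shrinks.
nu_compression = poissons_ratio(l0=170.0, dl=-5.0, t0=710.0, dt=-0.5)

print(f"tensile nu = {nu_tensile:.2f}, compression nu = {nu_compression:.2f}")

Both illustrative cases give a negative value, mirroring the auxetic (NPR) behaviour discussed in this study; a conventional material would give a positive value under the same convention.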
In another study 15 , 3D-woven structures were produced and the effect of the float length of the ground weave and binding yarn on the auxeticity of the fabric was investigated. A set of different 3D orthogonal woven structures were produced on a rapier dobby loom by changing the float length in the ground weave and binding yarns. The results showed that the 3D-woven materials with equal and maximum float lengths of ground weave and binding yarn showed greater auxetic behaviour. Also, the impact energy absorption of the developed composites was found to increase with the increase in float length, confirming that the structures are auxetic and possess NPR. Zulifqar and Hu 16 reported that woven fabric could be made auxetic through a combination of a loose weave and a tight weave in the same structure. They showed that the developed fabrics exhibit an NPR effect in both weft and warp directions over a large range of tensile strain. In this work, a Staubli 3D Weaving System was utilised, including a UNIVAL jacquard, to weave 3D honeycomb fabrics using Toray T300 6 k carbon fibres fed in the warp and weft directions. With the aid of polyester foam, the developed 3D-woven fabrics were converted to two different preforms: conventional honeycomb and new auxetic structures. The preforms were infused using epoxy resin to manufacture the large composite structures investigated in this study. Tensile and compression tests were carried out to assess the functionality of the honeycomb and auxetic composite structures through their Poisson's ratio measurements. Materials Carbon fibre (CF), a thermoset resin system and hard PET foam were used to preform and manufacture the composite structures. Their grades and properties are given in Table 1. According to the resin manufacturer, the T-Prime 130-1 was mixed at 100/27 by wt% of resin/hardener to infuse the dry preforms. Weave design and fabric manufacturing. EAT weave-design software was used to design a complex honeycomb structure. A schematic diagram of the honeycomb design suggested in this study is shown in Fig. 2. The unit cell of this honeycomb structure (Fig. 2) consists of a number of plain weaves with different numbers of layers: single-layer weave (A), two-layer weave (B, C), four-layer weave (D), and three-layer weave (E). Using the EAT software, a colour coding system was first allocated and then assigned to the different weave designs selected to form the suggested honeycomb structure. Table 2 below presents the number of picks and the pick density of the different zones defined within the designed structure. Figure 3 shows the EAT-assigned weaves (red, yellow and green zones), including the JC5 file of the honeycomb design installed on the Jacquard UNIVAL 100. The main goal of this research is to demonstrate a 3D woven composite structure that can exhibit a smart functionality such as an auxetic structure with NPR. The honeycomb structure designed in this study (Fig. 2) is converted to the auxetic one as shown in Fig. 4. The 3D weaving system (creel, Jacquard, loom and horizontal take-off table) was used to produce the honeycomb fabric. The 3D weaving system (Fig. 5) was threaded with 3072 carbon fibre tows through the warp direction, and the same fibre was also used through the weft direction. 16 ends per dent were drawn in through the reed. 128 ends (64 from each side) of the 3072 were loaded with polyester (PET) yarn and used as selvage catch cords to lock both edges of the woven fabric (Fig. 6). Dry fibre preforming and testing auxetic functionality. 
In order to minimise errors and trials and to save materials, a soft foam insert/core was used to preform the woven fabric into the honeycomb and auxetic structures prior to resin infusion. The dry fibre preform structure was made to test its formability and functionality, in particular for the auxetic configuration. The foam was cut to the relevant shapes and was inserted into the fabric pockets. The honeycomb structure (Fig. 8a) achieved its targeted preform shape relatively easily, but the auxetic preform (Fig. 8b) needed some additional supports (in the form of G-clamps) in order to hold the shape. As a dry preform, the functionality of the auxetic structure was tested to confirm its negative Poisson's ratio. Snapshots (Fig. 8c,d) were captured during the manual tensile testing. The longitudinal and transverse displacements were recorded and are given in Table 3. The Poisson's ratio is found to be negative (−0.78), which confirms that the preform shows auxetic behaviour. Composite manufacturing. Due to the high complexity of the woven structures, the resin infusion and vacuum bagging method was employed in this study. To avoid squashing and compressing the soft foam used above in Fig. 8 during the vacuuming process, an alternative hard, high-density PET foam (Divinycell P150) was used to preform the honeycomb structure prior to infusion. The foam inserts were wrapped with release film to ease demoulding after curing. Figure 9a-c shows an example of the bagging process of the honeycomb preforms, including the resin infusion of both honeycomb and auxetic structures. An infusion mesh or resin flow assist material (blue) was used to encourage flow, in particular across the preform, as shown in Fig. 9. Gurit T-Prime 130-1 resin and hardener were used to infuse the woven preforms produced in this research. The mixing ratio of resin to hardener used in these infusions was 100:27 by weight, as prescribed by the manufacturer's TDS. Table 4 gives the mixing ratio used in grams. After degassing the mixture for 10 min, the infusion took place and was completed in around 30 min. Subsequently, the assembly was moved to a preheated oven and cured at 60 °C for 3 h. Figure 10 demonstrates a selection of the manufactured honeycomb and auxetic composite structures. Results and discussion Mechanical tests were carried out to determine the Poisson's ratio of the honeycomb and auxetic composite structures manufactured in this study. Unlike for conventional samples such as flat and cylindrical coupons, there are no standard methods available to determine the Poisson's ratio of the complicated structures developed in this research. Instron testing machines were used to provide good control and to determine the force-displacement curves precisely. The two composite structures were subjected to tensile and compression tests, the results of which are detailed in the following sections. Tensile test. Prior to testing, the length and height of the auxetic and honeycomb samples were measured, as shown and given in Fig. 11. A transducer was used to ensure precise online measurement of the transverse displacement during the test. The test repeats were recorded and two screenshots were captured to determine the initial and final transverse displacements. In the case of the auxetic structure, Fig. 12 shows the start (left) and end (right) positions of the tensile test. Figure 13a,b shows the maximum longitudinal (L) and transverse (T) displacements of the auxetic structure recorded during the test. 
Due to the complexity and rigidity of the tested structure, it was noticed that the transducer deviated slightly from its original position at the start of the test (Fig. 13). To overcome this misalignment, the measurements of the transverse displacement were also recorded from the grid background (graph paper). From the measurements made and the figures above, Table 5 lists the measurements obtained. It is found that the Poisson's ratio of the tested structure is −2.86, i.e. the auxetic composite exhibited NPR in the case of the tensile testing. In the case of the honeycomb structure, Fig. 14 shows the start and end positions of the tensile test. The longitudinal and transverse displacements of the honeycomb structure recorded during the tensile test are shown in Fig. 15a,b, and the Poisson's ratio results are given in Table 6. It is found that the Poisson's ratio of the tested structure is 8.10, i.e. as expected, the honeycomb composite exhibited PPR in the case of the tensile testing. Compression test. In the compression test setup, a ruler was used as an indicator of the transverse displacement instead of the transducer used in the tensile test. Different symbols were used in this test due to the change of the load direction. For the auxetic sample, the original length, denoted d o , measured 710 mm, whilst the original height, h o , measured 170 mm. The dimensions d o and h o for the honeycomb were 787 mm and 160 mm, respectively. As shown in Fig. 16, a small section was highlighted on the ruler to measure the displacement in the longitudinal direction. Figure 16 shows the start and end positions of the compression test for the auxetic structure. Table 7 gives the measured displacements, strains and Poisson's ratio of the auxetic composite. It is found that the Poisson's ratio of the tested auxetic structure is −0.12, i.e. the auxetic composite also exhibited NPR in the case of the compression testing. It is proven that the auxetic composite structure revealed NPR under both tensile and compression loads. In terms of the honeycomb structure, Fig. 17 shows the start and end positions of the compression test. Table 8 gives the measured displacements, strains and Poisson's ratio of the honeycomb composite. It is found that the Poisson's ratio of the tested honeycomb structure is 0.11, i.e. the honeycomb composite also exhibited PPR in the case of the compression testing. In summary, it is found that the auxetic composite structure exhibited NPR (−2.86 and −0.12), whereas the honeycomb structure showed PPR (8.10 and 0.11) under both testing mechanisms (tensile and compression). However, in the case of the tensile test, the Poisson's ratios obtained for both structures are found to be outside the normal range of standard materials (−1 to 1), which may be due to the specific structures developed in this work. Conclusion 3D honeycomb structures were successfully woven, and the use of a supportive core material (in this case foam) was needed to allow the dry 3D woven fabrics to be preformed and resin infused. As a dry fibre preform, the auxetic structure was manually tested and its auxetic functionality was successfully proven. Dry preforms (honeycomb and auxetic) were infused using epoxy resin, and then the cured honeycomb and auxetic composites were successfully tested using tensile and compression tests. 
The honeycomb structure exhibited positive Poisson's ratios (PPR) in both testing directions (tensile and compression), but the auxetic structure demonstrated a negative Poisson's ratio (NPR) and thereby exhibited smart functionality. The concept of 3D woven smart functionality composites is proven, and multifunctional 3D woven composites are demonstrated. The values of Poisson's ratio obtained for both structures are found to be outside the range of conventional materials in the case of the tensile test. Future work is recommended to manufacture generic panels or demonstrators made of honeycomb/auxetic composites and investigate their mechanical performance through different responses such as impact and crash tests. Furthermore, the very high Poisson's ratio for the honeycomb structure will be explored further to see if this can be exploited in new applications.
3,977.4
2022-12-01T00:00:00.000
[ "Materials Science", "Engineering" ]
A decision-theoretic approach to Bayesian clinical trial design and evaluation of robustness to prior-data conflict Summary Bayesian clinical trials allow taking advantage of relevant external information through the elicitation of prior distributions, which influence Bayesian posterior parameter estimates and test decisions. However, incorporation of historical information can have harmful consequences on the trial's frequentist (conditional) operating characteristics in case of inconsistency between prior information and the newly collected data. A compromise between meaningful incorporation of historical information and strict control of frequentist error rates is therefore often sought. Our aim is thus to review and investigate the rationale and consequences of different approaches to relaxing strict frequentist control of error rates from a Bayesian decision-theoretic viewpoint. In particular, we define an integrated risk which incorporates losses arising from testing, estimation, and sampling. A weighted combination of the integrated risk addends arising from testing and estimation allows moving smoothly between these two targets. Furthermore, we explore different possible elicitations of the test error costs, leading to test decisions based either on posterior probabilities, or solely on Bayes factors. Sensitivity analyses are performed following the convention which makes a distinction between the prior of the data-generating process and the analysis prior adopted to fit the data. Simulations in the case of normal and binomial outcomes, and an application to a one-arm proof-of-concept trial, exemplify how such an analysis can be conducted to explore sensitivity of the integrated risk, the operating characteristics, and the optimal sample size to prior-data conflict. Robust analysis prior specifications, which gradually discount potentially conflicting prior information, are also included for comparison. Guidance with respect to cost elicitation, particularly in the context of a Phase II proof-of-concept trial, is provided. INTRODUCTION Bayesian clinical trial designs are often evaluated in terms of frequentist (i.e., conditional on a given parameter value) operating characteristics such as type I error rate, power, and mean squared error (MSE). The work of Viele and others (2014) provides an overview of the effects of different borrowing mechanisms on clinical trial frequentist operating characteristics. In general, improvements in terms of frequentist type I error rate, power, and MSE can be achieved if prior (historical) information is indeed consistent with the true parameter value generating the current data, at the cost of type I error rate and MSE inflation, and power loss, otherwise. Indeed, if strict control of the frequentist type I error rate is required and a uniformly most powerful test is available, the maximum attainable power ultimately depends only on the final sample size, irrespective of whether a static or dynamic borrowing mechanism is adopted, i.e., it does not depend on whether the prior is fixed or adapts to the observed degree of conflict between historical and current information (see Kopp-Schneider and others, 2020). In practice, however, a carefully elicited prior should contain trustworthy information, and its incorporation is therefore likely to improve the trial's test decision and effect estimate. 
It is therefore of interest to investigate the rationale and consequences of different approaches to relaxing strict control of frequentist operating characteristics in a quantitative, rational, and unified framework. Moreover, as sampling generally comes at a non-negligible monetary and/or ethical cost, such a framework should allow taking sampling costs into account when eliciting the trial's sample size. The task just described is essentially a Bayesian decision-theoretic problem. In this context, we define a loss function to explicitly account for costs in terms of all operating characteristics, including the estimation error, and in terms of sample size. Estimation error can be of particular relevance in, e.g., a Phase II proof-of-concept trial. Indeed, such a trial aims at assessing both significance and relevance of a treatment, and can more likely result in indeterminate outcomes (i.e., when only significance or relevance is declared; see e.g., Fisch and others, 2014), even under a large true effect, if a large uncertainty is associated with the posterior estimate. In addition, as the posterior estimate of a Phase II trial is often used to inform the sample size calculations of the subsequent Phase III trial, a large estimation error can induce a high risk of an under- or over-powered Phase III trial (Wang and others, 2006). We show how the decision-theoretic framework can unify different approaches to sample size selection, ultimately connected to cost elicitation. Moreover, following Sahu and Smith (2006), we illustrate how sensitivity analyses can be performed through the elicitation of two priors: a sampling prior, which is used to reproduce different potential data-generating mechanisms, and an analysis prior, which is used to fit the data and perform the trial. The sampling and the analysis priors are also referred to in the literature as the design and the fitting prior, respectively. In addition to the performance of sensitivity analysis, their distinction has been adopted to reflect a different view on the use of prior information. For example, a pharmaceutical company may be willing to incorporate available prior information to estimate the probability of success of a trial (also known as assurance or average power), while regulatory agencies may require the analysis to be performed under a pessimistic prior or the frequentist paradigm (O'Hagan and Stevens, 2001; Crisp and others, 2018). Indeed, the sampling prior allows formally taking into account uncertainty about the parameter of the data-generating process when computing the operating characteristics of the design (O'Hagan and Stevens, 2001). On the other hand, frequentist operating characteristics can still be retrieved simply by assuming a point-mass sampling prior. In contrast to the commonly adopted frequentist approach to type I error rate control, however, here we place particular focus on the idea of controlling a cost-weighted sum of both error rates (see e.g., Grieve, 2015; Pericchi and Pereira, 2016). 
Such a choice has decision-theoretic foundations, and has been shown to unify Bayesian and frequentist approaches to hypothesis testing by effectively solving Lindley's paradox (i.e., the fact that different test decisions are provided by frequentist and Bayesian approaches as the sample size increases) (Pericchi and Pereira, 2016); moreover, it induces a test decision strictly related to the value of the Bayes factor (BF), which has been advocated as a sensible alternative to p-values as a measure of statistical evidence (Bayarri and others, 2016). Section 2 introduces the decision-theoretic framework adopted in this work, outlines the rationale for the control of a weighted sum of error rates, and provides further details on the choice and consequences of the sampling-analysis prior dichotomy. Section 3 describes the proposed approach by introducing the target loss function and provides guidance on cost elicitation. The simulations of Section 4 exemplify its behavior in terms of operating characteristics and elicited sample sizes for different analysis prior specifications, including robust prior specifications. Section 5 provides a sample application to a proof-of-concept trial (Roychoudhury and others, 2018). In the final section, we provide a summary of this work, an outlook on possible developments, and a further endorsement of the decision-theoretic approach in the context of the recent debate on p-values and significance testing. Hypothesis testing Assume that a clinical trial is performed to test the null hypothesis H 0 that a given effect θ ≤ θ 0 versus the alternative hypothesis H 1 that θ > θ 0 . Let y denote the observed data having probability density function f(y|n, θ) indexed by θ and sample size n, and let π(θ) be the prior distribution for θ. Let also m(y) denote the marginal data distribution, i.e., m(y) = ∫ f(y|n, θ) π(θ) dθ. The decision d_{c0,c1} consists in either rejecting (d_{c0,c1} = 1) or keeping (d_{c0,c1} = 0) the null hypothesis. A c0-c1 loss function can be defined as (see e.g., Robert, 2007) L(θ, d_{c0,c1}) = c0 I{θ > θ0, d_{c0,c1} = 0} + c1 I{θ ≤ θ0, d_{c0,c1} = 1} (2.1), where I denotes the indicator function, and c0 and c1 are the losses assigned to keeping H 0 when it is false and rejecting H 0 when it is true, respectively. Taking into account uncertainty about θ, the optimal test decision after an outcome y has been observed is obtained by minimization of the posterior expected loss ρ(π, d_{c0,c1}(y)|y) = E_π[L(θ, d_{c0,c1}(y))|y], i.e., ρ(π, d_{c0,c1}(y)|y) = c1 P_π(θ ≤ θ0|y) I{d_{c0,c1}(y) = 1} + c0 [1 − P_π(θ ≤ θ0|y)] I{d_{c0,c1}(y) = 0} (2.2), where P_π(θ ≤ θ0|y) is the posterior probability of the null hypothesis under data y and prior π. The posterior expected loss (2.2) is minimized by choosing d^π_{c0,c1}(y) = I{P_π(θ ≤ θ0|y) < γ_π}, where γ_π = c0/(c0 + c1) (note that only the ratio between c0 and c1 is relevant). In contrast, uncertainty about y can be taken into account by averaging the loss function (2.1) across data realizations for a fixed θ, leading to the frequentist risk R(θ, d_{c0,c1}) = c1 P_f(d_{c0,c1}(y) = 1|θ) I{θ ≤ θ0} + c0 [1 − P_f(d_{c0,c1}(y) = 1|θ)] I{θ > θ0} (2.3), where P_f(d_{c0,c1}(y) = 1|θ) is the probability of rejecting H 0 under the data density f(y|n, θ). Note that, for c0 = c1 = 1, the risk function (2.3) represents the conditional type I error rate α(θ) for θ ≤ θ0, or the type II error rate β(θ) = 1 − α(θ) for θ > θ0. Uncertainty about both the data outcome and the parameter value can be accounted for by integration of the frequentist risk (2.3) over π(θ) or, equivalently, of the posterior expected loss (2.2) over the marginal data distribution m(y). 
This leads to the integrated risk measure r(π, d_{c0,c1}) = ∫ R(θ, d_{c0,c1}) π(θ) dθ = ∫ ρ(π, d_{c0,c1}(y)|y) m(y) dy (2.4). Therefore, the decision d^π_{c0,c1}(y), which minimizes the posterior expected loss for each y ∈ Y, also minimizes the integrated risk (see, e.g., Robert, 2007). Note that if π(θ) is chosen as a two-point mass prior, with one point belonging to the support of the null, and one point belonging to the support of the alternative hypothesis, the integrated risk (2.4) is equivalent to the frequentist risk (2.3) with "reweighted" c0 and c1. The integrated risk (2.4) can also be seen as the sum of the average type I and type II error rates, each weighted by the corresponding cost and hypothesis prior probability. Minimizing the sum of type I and type II error rate The optimal decision d^π_{c0,c1} is intrinsically connected to the value of the BF. Indeed, H 0 is kept if P_π(θ ≤ θ0|y) ≥ γ_π, i.e., if (see e.g., Parmigiani and Inoue, 2009; Pericchi and Pereira, 2016) c0/c1 ≤ [P_π(θ ≤ θ0)/P_π(θ > θ0)] × BF_01(y) (2.5). The first factor on the right-hand side of Equation (2.5) is given by the prior odds of the null versus the alternative hypothesis, while the second factor is the BF of the null versus the alternative hypothesis. By exploiting the duality between prior probabilities and test error costs (Robert, 2007), if we assume the costs to be rescaled by the corresponding prior probabilities, i.e., c0 replaced by c0 {P_π(θ > θ0)}^{-1} and c1 by c1 {P_π(θ ≤ θ0)}^{-1}, then H 0 is kept if c0/c1 ≤ BF_01(y) (2.6), i.e., if the ratio of the costs is lower than the BF. The advantage of the latter cost elicitation is that the integrated risk can be seen as the (weighted) sum of the average type I and type II error rates, which should aid interpretability and cost elicitation. Moreover, the test decision no longer depends on the prior odds. In this sense, it can be thought to be induced only by the available data, although the influence of the prior shape distinguishes it from a purely frequentist concept unless point-mass priors are adopted (Berger and Sellke, 1987). Performing a test decision based only on the principle of minimizing the sum of test error rates, thus without enforcement of (either conditional or average) type I error rate control, is the natural and optimal choice in a decision-theoretic set-up, and has been advocated in various works (see, e.g., Grieve, 2015; Pericchi and Pereira, 2016; Isakov and others, 2019, and references therein). As expressed very clearly in Pericchi and Pereira (2016) and references therein, an error rates-weighted approach defines the rejection region by fixing the required amount of evidence in favor of the alternative hypothesis relative to the null, rather than by fixing the type I error rate, and this has desirable consequences in terms of asymptotic behavior and independence of the result from the mechanism by which data were collected. Crucially, the latter also implies that no adjustment is required in sequential testing procedures (Grieve, 2015; Bayarri and others, 2016). Additionally, Pericchi and Pereira (2016) show that Lindley's paradox is not a discrepancy between frequentist and Bayesian test decisions, but rather a discrepancy between decisions based on fixing the type I error rate, and decisions which allow a decrease in both test error rates, according to their relative costs, as the sample size increases. It is also of interest to note that (see Bayarri and others, 2016; Pericchi and Pereira, 2016) ∫_Y BF(y) f(y|θ ∈ (θ0, ∞), R) dy = α/(1 − β), where R denotes the rejection region, i.e., the set of y outcomes such that d^π_{c0,c1}(y) = 1, and α and β are the average type I and type II error rate, respectively. This is directly related to Equation (2.6), and in particular to the fact that the decision based on the BF is the minimizer of the sum of average test error rates (Pericchi and Pereira, 2016). 
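As a concrete illustration of the two decision rules just described, the following sketch (our own, not taken from the paper) computes, for a one-sided test on a normal mean with known variance and a conjugate normal analysis prior, the posterior-probability-based decision with threshold γ_π = c0/(c0 + c1) and the equivalent BF-based decision that arises when the costs are rescaled by the prior probabilities of the hypotheses. All numerical values are placeholders.

# Illustrative sketch: c0-c1 optimal test decision for H0: theta <= theta0 under a
# conjugate normal model, via posterior probability and via the Bayes factor.
# Assumptions: known sigma, normal analysis prior; all numbers are placeholders.
import numpy as np
from scipy import stats

theta0 = 0.0             # boundary of the null hypothesis
sigma = 1.0              # known sampling standard deviation
mu_a, tau_a = 0.0, 2.0   # analysis prior: theta ~ N(mu_a, tau_a^2)
c0, c1 = 0.6, 0.4        # costs of type II and type I errors (only the ratio matters)

def posterior(ybar, n):
    """Posterior mean and sd of theta given the sample mean ybar of n observations."""
    prec = 1.0 / tau_a**2 + n / sigma**2
    mean = (mu_a / tau_a**2 + n * ybar / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

def decide_posterior(ybar, n):
    """Reject H0 iff P(theta <= theta0 | y) < gamma = c0/(c0 + c1)."""
    m, s = posterior(ybar, n)
    p_h0 = stats.norm.cdf(theta0, loc=m, scale=s)
    return int(p_h0 < c0 / (c0 + c1)), p_h0

def decide_bayes_factor(ybar, n):
    """Rule with prior-probability-rescaled costs: keep H0 iff BF_01 >= c0/c1."""
    m, s = posterior(ybar, n)
    p_h0_post = stats.norm.cdf(theta0, loc=m, scale=s)
    p_h0_prior = stats.norm.cdf(theta0, loc=mu_a, scale=tau_a)
    bf01 = (p_h0_post / (1 - p_h0_post)) / (p_h0_prior / (1 - p_h0_prior))
    return int(bf01 < c0 / c1), bf01

print(decide_posterior(ybar=0.6, n=20))
print(decide_bayes_factor(ybar=0.6, n=20))

The first rule corresponds to the threshold γ_π of Section 2.1; the second corresponds to the BF criterion of Equation (2.6), in which the prior odds cancel out of the decision.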
Sampling and analysis prior As outlined in the Section 1, here we assume that two priors are elicited, a sampling prior π s , which is assumed to generate the data, and an analysis prior π a , which is used to fit the data and which induces the actual trial decisions (see, e.g., O'Hagan and Stevens, 2001;De Santis, 2006;Sahu and Smith, 2006;Psioda and Ibrahim, 2018). Assuming the integrated risk (2.4) to be computed with respect to the sampling prior π s , it would be minimized by a choice taken with respect to the sampling prior itself, i.e., d πs c 0 ,c 1 (y). In practice, however, the true sampling prior is unknown and thus decisions have to be undertaken based on optimality with respect to the analysis prior. Sensitivity analyses can be performed to explore the integrated risks induced by different data-generating mechanisms. Following Sahu and Smith (2006), the integrated risk is then computed as r(π s , d πa In the formalization of Berger (1990), our and Sahu and Smith's (2006) approach performs sensitivity analyses with respect to the integrated risk as functional of interest. Note that we obtain the integrated risk by averaging the frequentist risk over the sampling prior distribution (rather than averaging the posterior expected loss induced by the analysis prior over the marginal data distribution). When π s = π a , the two approaches would not lead to the same result. Our choice has the desirable feature of providing a straightforward connection with the frequentist approach simply by replacing the sampling prior with a point-mass prior. Furthermore, in a context where no dichotomy between sampling and analysis prior is assumed, Berger (1985) notes that this approach involves "uncertainty about π only at the stage of averaging R(θ, d)." If sensitivity analyses are not of primary interest, a single sampling prior can be elicited (see, e.g., Psioda and Ibrahim, 2018). In the default sampling prior approach of Psioda and Ibrahim (2018), the sampling priors under the null and alternative hypothesis arise by truncation of the prior elicited from the historical information at θ 0 , and the subsequent normalization. Average type I error rate and power are then computed with respect to the elicited sampling priors, and their control is achieved by adopting a fixed threshold for rejection, equal to the desired average type I error rate, and adapting the analysis prior informativeness and the sample size until the desired characteristics are obtained. Here, we follow a related approach which, however, includes the costs of average rates of type I and type II errors in the integrated risk function and targets the weighted sum of such average test error rates. Moreover, we incorporate estimation error and sampling costs as criteria to derive an optimal sample size. Finally, the distinction between π s and π a is exploited to perform sensitivity analyses of the optimal sample size and operating characteristics with respect to different data-generating processes conveyed by the sampling prior π s . Extended integrated risk definition Early phase clinical trials, e.g., proof of concept studies, often focus not only on detecting efficacy, i.e., whether the null hypothesis can be rejected, but also on determining relevance, i.e., whether the observed effect is large enough to motivate proceeding to a Phase III study. 
Here, we follow Fisch and others (2014) and denote a trial outcome "relevant" if P_{π_a}(θ ≤ θ_R | y) < 0.5, where θ_R is the relevance threshold, and "indeterminate" if only one of significance and relevance is observed. When θ > θ_R, we expect the probability of an indeterminate outcome to decrease, together with the estimation error, as the sample size increases (we will show in Section 4 that such probability may, however, be non-monotone). Estimation error might be considered even more harmful if, as is often the case, the effect detected in the Phase II trial is exploited to perform sample size calculations for the subsequent Phase III trial, as a large error can induce a significantly higher risk of Phase III trial under- or over-powering (see Wang and others, 2006, for a discussion and a related approach addressing this point). These observations motivate the introduction of estimation error in the loss function (2.1). Thereby, we add to each subcase of (2.1) the commonly adopted quadratic loss function with decision d_q, L(θ, d_q(y)) = {θ − d_q(y)}^2 (see, e.g., Parmigiani and Inoue, 2009). Moreover, as both the test error rates and the estimation error would decrease for increasing sample size, a desirable principled approach for the choice of the sample size needs to explicitly incorporate the cost of each additional observation into the loss function (see, e.g., Lindley, 1997). The cost of sampling can be assumed to be, e.g., additive and linear in the number of observations (Lindley, 1997). The integrated risk (2.4) then becomes

r(n) = ∫∫ [c_1 I{θ ≤ θ_0} I{d^{π_a}_{c_0,c_1}(y) = 1} + c_0 I{θ > θ_0} I{d^{π_a}_{c_0,c_1}(y) = 0} + c_q {θ − d^{π_a}_q(y)}^2] f(y | n, θ) π_s(θ) dy dθ + c_n n,   (3.7)

where c_q is a constant introduced to reflect the relative importance of estimation versus test error, and c_n is the cost per sample. Note that we have dropped the dependence of r on the sampling prior π_s and the vector of decisions with respect to the analysis prior d^{π_a}(y) = {d^{π_a}_{c_0,c_1}(y), d^{π_a}_q(y)} for ease of notation. When π_a = π_s, the risk is minimized by the optimal choices d^{π_a}_{c_0,c_1}(y) and d^{π_a}_q(y) = E_{π_a}[θ | y], while the optimal sample size needs to be identified numerically.

Cost elicitation

Cost elicitation can proceed as follows. We can first select c_0 and c_1 according to the relative importance assigned to each average error rate, keeping in mind that the BF should be larger than c_0/c_1 to keep H_0 (see, e.g., Kass and Raftery, 1995; Pericchi and Pereira, 2016, for guidance on BF thresholding); to facilitate interpretation, we can additionally require c_1 + c_0 = 1. Note that, if we aim at minimizing the weighted sum of average test error rates, under the sampling prior we have that c̃_0 = c_0 {P_{π_s}(θ > θ_0)}^{-1} and c̃_1 = c_1 {P_{π_s}(θ ≤ θ_0)}^{-1}. However, as the sampling prior is generally not available, decisions must be undertaken with respect to the analysis prior, leading to a threshold for rejection γ_{π_a} = c_0 P_{π_a}(θ ≤ θ_0) / [c_0 P_{π_a}(θ ≤ θ_0) + c_1 {1 − P_{π_a}(θ ≤ θ_0)}]. As π_a approaches π_s, γ_{π_a} also approaches the optimal decision threshold γ_{π_s}, and the integrated risk is minimized. However, for the purpose of evaluating the integrated risk (3.7), c_0 and c_1 are computed with respect to the sampling prior, and only the decision is affected by π_a. To elicit c_q and c_n, we take advantage of the connection between the "goal sampling" approach and the integrated risk minimization approach highlighted by Lindley (1997).
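Before detailing the elicitation of c_q and c_n, a rough Monte Carlo sketch of the extended risk may help fix ideas; it also illustrates the sampling-versus-analysis prior dichotomy of the previous section. This is not the paper's implementation (the authors' R code is referenced in the Software section): the risk composition below is a simplified reading of (3.7), the BF-based decision is used under the analysis prior, and all constants are placeholders.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def extended_risk(n, mu_s=0.25, tau2_s=1/50, mu_a=0.0, tau2_a=100.0,
                  theta0=0.0, sigma=1.0, c0=0.05, c1=0.95,
                  c_q=1.0, c_n=1e-4, n_sim=50_000):
    """Monte Carlo sketch of an extended risk: weighted average test error
    rates + c_q * average squared error of the posterior mean + c_n * n."""
    theta = rng.normal(mu_s, np.sqrt(tau2_s), n_sim)      # draws from the sampling prior
    ybar = rng.normal(theta, sigma / np.sqrt(n))          # sample means given theta
    post_var = 1.0 / (1.0 / tau2_a + n / sigma**2)        # analysis-prior posterior
    post_mean = post_var * (mu_a / tau2_a + n * ybar / sigma**2)
    p0 = norm.cdf(theta0, post_mean, np.sqrt(post_var))
    prior_odds = norm.cdf(theta0, mu_a, np.sqrt(tau2_a))
    prior_odds /= 1.0 - prior_odds
    reject = (p0 / (1.0 - p0)) / prior_odds < c0 / c1     # BF-based rule (2.6)
    null = theta <= theta0
    test_loss = c1 * reject[null].mean() + c0 * (~reject[~null]).mean()
    amse = np.mean((theta - post_mean) ** 2)              # posterior mean as d_q(y)
    return test_loss + c_q * amse + c_n * n

grid = np.arange(20, 401, 20)
print("risk-minimizing n on grid:", grid[np.argmin([extended_risk(n) for n in grid])])
```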
In particular, for a specific sample size choice induced by the goal sampling approach, i.e., the minimum sample size required to achieve specific testing or estimation targets, a cost per observation can be inferred through a local approximation of the integrated risk. This can be achieved by first noting that both operating characteristics, i.e., weighted sum of average test error rates and average MSE, are a function of the sample size n. Let the integrated risk for, e.g., the weighted sum of average test error rates (SATE) be denoted by r SATE (n) = SATE(n) + c SATE n n. In the usual risk optimization procedure, c SATE n is fixed, while the sample size and, therefore, SATE(n) are varied so that the integrated risk is minimized. However, if a target SATE is specified and a given sample size n SATE is elicited so that such SATE target is achieved, the cost per observation corresponding to this combination can be inferred. In particular The same procedure can be applied to infer a cost for the average MSE (AMSE), c AMSE n . An example is provided in Section S1 and Figure S1 of the supplementary material available at Biostatistics online. This procedure also highlights why goal sampling approaches cannot be considered "cost free," and are rather based on an implicit elicitation of the costs (Lindley, 1997). In order to obtain c q and c n in Equation (3.7), note that the minimum of the integrated risk is invariant to multiplication by a constant; therefore, if we divide r SATE (n) and r AMSE (n) by c SATE n and c AMSE n , respectively, the combined integrated risk (3.7) can be written as The weight w ∈ [0, 1] has the role of defining the importance of the testing versus the estimation target. We propose taking w = n SATE /(n SATE + n AMSE ), under the implicit assumption that the importance of each characteristic is reflected in the number of patients one would be willing to pay for the corresponding target. Note that the sample size minimizing the integrated risk (3.8) will lie between n SATE and n AMSE : If the targets specified are to be strictly satisfied, then one should adopt a pure goal sampling approach and select n as the maximum between n SATE and n AMSE . The derivatives of SATE(n) and AMSE(n) may not be analytically available. A numerical approximation is however easily available once values for both operating characteristics are computed in a neighborhood of their respective target sample sizes. We therefore suggest to compute SATE(n) and AMSE(n) for increasing sample sizes under a chosen sampling prior and analysis prior; these can be reasonably taken to be, e.g., the available informative prior and the intended analysis prior, respectively. At this stage, the average type I and type II error rates can also be stored to simplify elicitation of a target weighted sum of average test error rates. To simplify elicitation of a target AMSE, the average probability of an indeterminate outcome, the average "extreme" power loss and the average "extreme" sample size gain when θ > θ R , can also be computed. Extreme power loss and sample size gain could be expressed, for example, in terms of the (1 − ζ )% quantiles of the respective conditional distributions induced by the data outcomes. We define the power loss and sample size gain as the difference between a target Phase III power (0.9, for example) and corresponding sample size, and the "realized" power and sample size, respectively. 
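The cost-per-observation elicitation just described can be sketched numerically: if a target sample size n_SATE is chosen so that an operating-characteristic target is met, the implied cost per observation is approximately the negative slope of that characteristic at n_SATE. The sketch below uses made-up placeholder curves, not the values reported in the paper.

```python
import numpy as np

# Operating characteristics on a grid of sample sizes (placeholder values only)
n_grid = np.array([120.0, 140.0, 160.0, 180.0, 200.0, 220.0])
sate   = np.array([0.041, 0.033, 0.027, 0.023, 0.020, 0.018])     # weighted sum of avg error rates
amse   = np.array([0.0090, 0.0076, 0.0066, 0.0058, 0.0052, 0.0047])

def implied_cost_per_obs(n_target, ns, oc):
    """If n_target minimizes OC(n) + c_n * n, then c_n = -dOC/dn at n_target;
    the derivative is approximated numerically on the grid."""
    return -np.interp(n_target, ns, np.gradient(oc, ns))

n_sate, n_amse = 160, 180
c_n_sate = implied_cost_per_obs(n_sate, n_grid, sate)
c_n_amse = implied_cost_per_obs(n_amse, n_grid, amse)
w = n_sate / (n_sate + n_amse)        # relative weight of the testing target
print(c_n_sate, c_n_amse, w)
```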
That is, they are defined based on standard frequentist sample size calculations if the posterior mean of the current trial is taken as the "true" effect. When the trial result is not significant and/or not relevant, the assumption is that the realized power and sample size are equal to 0, as the Phase III trial is not taking place. The conditional probability of an indeterminate outcome, and the (1 − ζ )% quantiles of power loss and sample size gain, are then averaged with respect to the sampling prior distribution truncated from below at θ R , i.e., we only concentrate on losses for θ > θ R . Once the costs have been calculated, the integrated risk is readily available for a grid of sample sizes, and the final optimal sample size can be numerically identified as the one minimizing the overall risk. Alternatively, if available, actual monetary values can be assigned to all costs. EVALUATION: DESIGNING A TRIAL AND CHECKING SENSITIVITY TO PRIOR-DATA CONFLICT To illustrate the concepts and the proposed approaches, we first focus on a normal outcome y, y i ∼ N (θ, σ 2 = 1), i = 1, . . . , n. Interest is placed on the mean parameter θ , which is assumed to follow a N (μ s , σ 2 s ) sampling prior. We test the set of hypotheses H 0 : θ ≤ 0 versus H 1 : θ > 0, and assume a relevance threshold θ R = 0.15. The following analysis priors are considered: (i) A vague prior π vague , which follows a N (0, 10 2 ) distribution; (ii) an informative prior, which follows a N (0.25, 1/50) distribution; (iii) a robust mixture prior (see, e.g., Berger and Berliner, 1986;Schmidli and others, 2014), which is a mixture of the informative prior and a vague N (0.25, 10 2 ) prior distribution, with mixture weight equal to 0.5; and (iv) an empirical Bayes (EB) power prior π pow . The EB power prior can be interpreted as a posterior arising from a historical study which collected n 0 observations, of which a 0 n 0 are accounted for in the analysis, where the parameter a 0 ∈ [0, 1] is selected by maximization of the marginal likelihood m(y) = f (y|n, θ)π(θ|a 0 , y 0 , n 0 )dθ, where π(θ|a 0 , y 0 , n 0 ) denotes the power prior with parameter a 0 , and y 0 and n 0 are the historical outcome and sample size, respectively (Gravestock and Held, 2017). Here, the historical outcome is summarized by the meanȳ 0 = 0.25, and the historical sample size is n 0 = 50. When a 0 = 1, the EB power prior coincides with the informative prior. Note that the EB commensurate prior (Hobbs and others, 2012) for normal outcomes coincides with the EB power prior (see Appendix C.1 of Wiesenfarth and Calderazzo, 2020), and therefore all results for the EB power prior directly apply also to the EB commensurate prior in this set-up. We assume that type I error is 19 times more costly than type II error, thus we take c 0 = 0.05 and c 1 = 0.95 . To gain further insight into the analysis prior decision thresholds and test decisions, Figure 1 provides an illustration of each decision threshold γ π prior for varying sampling prior mean μ s (σ 2 s = 1/50), and of the BF for varying data mean outcomesȳ, assuming n = 100. 
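The empirical Bayes weight a_0 of the power prior introduced above can be obtained by a one-dimensional maximization of the marginal likelihood. The following sketch assumes a flat initial prior, so that the power prior for the normal outcome is N(ȳ_0, σ²/(a_0 n_0)); the historical summaries follow the text, while the current-trial values are placeholders, and this is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def eb_power_prior_weight(ybar, n, ybar0, n0, sigma=1.0):
    """Empirical Bayes a0 for a normal power prior: with a flat initial prior,
    the power prior is N(ybar0, sigma^2/(a0*n0)) and the marginal density of
    the current sample mean is N(ybar0, sigma^2/(a0*n0) + sigma^2/n)."""
    def neg_log_ml(a0):
        scale = sigma * np.sqrt(1.0 / (a0 * n0) + 1.0 / n)
        return -norm.logpdf(ybar, loc=ybar0, scale=scale)
    return minimize_scalar(neg_log_ml, bounds=(1e-6, 1.0), method="bounded").x

print(eb_power_prior_weight(ybar=0.26, n=100, ybar0=0.25, n0=50))   # ~1: data agree with history
print(eb_power_prior_weight(ybar=-0.20, n=100, ybar0=0.25, n0=50))  # <<1: prior-data conflict
```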
The decision thresholds of all analysis priors tend to differ more from the sampling prior optimal threshold for negative values of μ s (values are plotted on the logarithmic scale for ease of representation), which is indeed the region where the historical information differs most from the sampling prior; the EB power prior threshold depends also on the datā y and varies, as expected, between the vague and the informative prior ones, depending on the degree of conflict. Note that for the vague prior P πvague (θ ≤ θ 0 ) ≈ 0.5, thus γ πvague ≈ c 0 /(c 0 + c 1 ). Moreover, as σ vague → ∞, d πvague c 0 ,c 1 approaches the usual frequentist test decision at level 5%. As for the BF, it is of interest to note that, in the region close to the threshold c 0 /c 1 , the EB power prior BF behaves very similarly to the informative prior one, while the mixture prior BF is closer to the vague prior BF. This has to be partly attributed to the shape of the corresponding null and alternative hypothesis truncated densities. Sensitivity of the BF to the location of historical information and weight and vague component of the robust mixture prior is investigated in Section S2 and Figure S2 of the supplementary material available at Biostatistics online. Operating characteristics related to testing and estimation for the normal outcome simulation example. The extreme power loss and extreme sample size gain in a subsequent Phase III trial, are defined as the 80% quantile of the respective distributions induced by the data outcomes. The probability of an indeterminate outcome, the extreme power loss and extreme sample size gain are averaged with respect to the sampling prior N (0.25, 1/50) truncated from below at θ R = 0.15. The remaining operating characteristics are averaged with respect to the whole sampling prior distribution. The dashed vertical lines represent the sample sizes identified for cost elicitation with respect to the vague analysis prior, and, independently, for testing (upper panels, n SATE = 160) and estimation (lower panels, n AMSE = 180). Such sample sizes allow to maintain an average type I and type II error rate below 0.05 and 0.2, respectively, as well as an average probability of an indeterminate outcome, average extreme power loss and average extreme sample size gain below 0.1, 0.3, and approximately 50, respectively. Note that the informative prior results exactly overlap with the sampling prior ones. The costs c SATE n and c AMSE n are derived as described in Section S1 of the supplementary material available at Biostatistics online, where n SATE = 160 and n AMSE = 180 are elicited through inspection of Figure 2 as follows. The sampling prior is assumed to coincide with the informative prior specification. Further we assume, e.g., that the vague prior was selected as analysis prior. Assume that, to reach an average type II error rate below 0.2 under the vague analysis prior specification, one is willing to "pay" the required n SATE = 160 samples. The average type I error rate is in this case always below 0.05, so we assume it does not contribute to the sample size requirement. This sample size corresponds to a weighted sum of average test error rates of 0.027, therefore we have c SATE n = 8.511 · 10 −5 . 
Analogously, suppose the required n AMSE = 180 samples are deemed acceptable to reach an average probability of an indeterminate outcome under θ > θ R below 0.1 and an average extreme power loss (we take the 1 − ζ = 0.8 quantile of the data induced distribution) and average extreme sample size gain (again for 1 − ζ = 0.8) below 0.3 and approximately 50, respectively. This would correspond to an AMSE equal or lower than 0.006, and cost c AMSE n = 3.098 · 10 −5 . Finally, the weight w = n SATE /(n SATE + n AMSE ) results in 0.471. In principle, a different choice of costs can be associated with each analysis prior. This would however result in integrated risks which are no longer directly comparable across different analysis priors. Therefore, to simplify our exposition, we retain the costs elicited under the vague analysis prior for all comparisons. Note, also, that the probability of an indeterminate outcome is not monotone for the vague and robust mixture analysis priors: this is due to the fact that, for low sample sizes, relevance can be achieved without significance; as the sample size increases, relevance tends to be achieved whenever also significance is observed; the probability of an indeterminate outcome slightly rises again as the sample size becomes large enough to allow significance without relevance, to then decrease again for larger sample sizes as a relevant outcome becomes more likely. Such potentially non-monotonic behavior makes the probability of an indeterminate outcome not ideal for the purposes of defining our costs, but we still regard it as of general interest for the design of the trial, and thus we believe it should be additionally monitored at the cost elicitation stage. Once costs have been selected, the sample size is selected through minimization of the integrated risk, and sensitivity analyses can be carried out by varying the sampling prior specification. Figure 3 shows how each of the considered operating characteristics, the costs, and the integrated risk behave when σ 2 s = 1/50, while varying μ s . As a consequence of the BF behavior illustrated in Figure 1, the EB power and the informative prior tend to attain very close average test error rates; the same can be observed to a lesser extent for the vague and robust mixture prior. The AMSE tends to be lower when the historical information and the sampling prior agree, for all priors incorporating historical information. The vague prior shows a decreased AMSE, coupled with an increased sample size, in a nearby region close to 0. This is induced by the fact that in this region it is harder to discriminate between the two hypotheses. Focusing on the costs, when μ s << 0, the cost of a type II error is higher than the cost of a type I error, as the sampling prior is placing a low probability on the alternative hypothesis. We stress again that this slightly counter-intuitive cost definition is effectively aimed at basing the test choice only on the comparison between c 0 /c 1 and the BF, as shown in Equation (2.6). As μ s increases, the cost of a type II error decreases, while the cost of a type I error increases. The imbalance in the maximum cost of each error is due to the fact that we have specified c 1 to be much larger than c 0 . The overall integrated risk shows non-robustness of the informative prior, reflecting the high sample size requirement and a steep increase in AMSE. 
The EB power prior reaches a remarkably small integrated risk when μ s >> 0, due to the fact that it achieves average test error rates close to the informative prior, while constraining losses in term of AMSE. On the negative support of μ s this behavior is reversed, i.e., the EB power prior shows the highest integrated risk after the informative prior one, although less strikingly so, due to the imbalance in the test error costs. In terms of overall integrated risk, the robust mixture prior appears less robust than the EB power prior; this is also reflected in the sample size requirement and is mostly induced by the behavior of the average test error rates. Note that the result is influenced by the choice of the mixture weight, the vague component, and the location of historical information. We refer again to Section S2 and Figure S2 of the supplementary material available at Biostatistics online for sensitivity of the BF to such factors. For comparison, Section S3 of the supplementary material available at Biostatistics online focuses on an analogous design, but one in which the sample size is fixed at 100 observations. In Figure S3 of the supplementary material available at Biostatistics online, the integrated risk and the operating characteristics are shown, providing overall a very similar picture to the one just described. In the same context, further insight into the behavior of the sampling prior average test error rates is obtained through visualization of the underlying conditional test error rates and truncated sampling prior distributions, shown in Figure S4 of the supplementary material available at Biostatistics online. Additionally, if it is required that specific targets for both testing and estimation are strictly controlled under the elicited sample size, then the goal sampling approach would be the method of choice. In Section S4 and Figure S5 material available at Biostatistics online, we thus illustrate the results of the sensitivity analyses for a goal sampling approach with target weighted sum of average error rates and AMSE equal to 0.027 and 0.006, respectively, i.e., the targets specified above to elicit the costs. Note that explicitly targeting alternative operating characteristics not included in the integrated risk definition may lead to sub-optimality of the sampling prior. In Section S5 of the supplementary material available at Biostatistics online, we compare our results to the ones induced by the approach in which the test error costs do not include the prior normalizing constants, and thus the rejection threshold is defined as γ πa = c 0 /(c 0 + c 1 ), i.e., it is the same irrespective of the analysis prior specification. For the vague prior, the test decisions, and therefore the average test error rates of Figure S6 of the supplementary material available at Biostatistics online, are approximately equivalent to those in Figure 3, although weighted differently in the integrated risk. The remaining analysis priors tend to be less conservative than observed in Figure 3, although it is important to stress that this is induced by the fact that the trade-off between average type I and type II error rates is not optimized. The EB power prior and the mixture prior provide a compromise on the integrated risk, which is bounded and close to the one of the best performing analysis prior under each scenario. Finally, Section S6 and Figures S7-S10 of the supplementary material available at Biostatistics online provide analogous comparisons for binomial outcomes. 
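For binomial outcomes, the posterior quantities driving the significance and relevance decisions follow from conjugate Beta updating. The sketch below is illustrative only: the prior, thresholds, significance cut-off, and data are placeholders, not those of the supplementary comparisons or of the application trial considered later.

```python
from scipy.stats import beta

def binomial_readout(successes, n, a, b, theta0, theta_r, gamma):
    """Posterior probabilities behind the significance and relevance decisions
    for a Beta(a, b) analysis prior and a binomial outcome."""
    a_post, b_post = a + successes, b + n - successes
    p_null = beta.cdf(theta0, a_post, b_post)        # P(theta <= theta_0 | y)
    p_not_rel = beta.cdf(theta_r, a_post, b_post)    # P(theta <= theta_R | y)
    return {"P(H0|y)": p_null,
            "significant": p_null < gamma,
            "relevant": p_not_rel < 0.5}

# Placeholder prior, thresholds, and outcome
print(binomial_readout(successes=7, n=25, a=1.0, b=1.0,
                       theta0=0.10, theta_r=0.20, gamma=0.05))
```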
Key differences are introduced by the discreteness of the binomial distribution, which induces unsmooth declines of the operating characteristics for increasing sample sizes, and by the bounded parameter space, which leads, e.g., to a lower AMSE close to the boundaries. DATA APPLICATION We focus on the single-arm proof-of-concept trial with binomial outcome reported in Roychoudhury and others (2018). The trial was aimed at testing statistical significance and clinical relevance of an experimental drug for non-small-cell lung cancer and focused on objective response rate as a primary outcome. The threshold for significance θ 0 was set at 0.075, while the threshold for relevance θ R was identified as 0.175. The chosen analysis prior was a Beta(0.0811, 1) (hereafter denoted RSN prior), which has mean equal to 0.075. The sample size of the trial was set at n = 25, implying a conditional power at 0.275 equal to 0.858, which was regarded as acceptable. Following common practice, we assume that a relatively high confidence was therefore set on 0.275, and we assume a Beta(11, 29) informative prior. We also assume that such prior is motivated by a historical study which observed 10 successes and 28 failures, and adopted a uniform prior specification. The EB power and the robust mixture priors are taken to be Beta(a 0 10 + 1, a 0 28 + 1), and 0.5 · Beta(11, 29) + 0.5 · Beta(1, 1), respectively. Figure 4 shows the operating characteristics related to testing and estimation. Focusing on the RSN prior, inspection of the weighted sum of average test error rates indicates that, for sample sizes larger than n SATE = 21, the average type I error rate remains below 0.15 and the average type II error rate below 0.2. The weighted sum of average test error rates at n SATE = 21 equals 0.058, and the derivative at this point (after local polynomial regression smoothing) gives approximately c SATE n = 2.04 · 10 −4 . Focusing on the AMSE target, we observed that, for sample sizes larger than n AMSE = 74, the average probability of an indeterminate outcome remains below 0.1, the average extreme power loss (1 − ζ = 0.8) below 0.3 and the average extreme sample size gain (1 − ζ = 0.8) below 10. If this is deemed acceptable for n AMSE = 74, then the target AMSE is 0.0026, inducing c AMSE n = 3.421 · 10 −5 . The weight w is then taken to be n SATE /(n SATE + n AMSE ) = 0.221. The EB power and robust mixture prior are in this case much closer to the assumed sampling prior than the RSN prior, leading to an overall smaller weighted sum of average test error rates and AMSE. Optimization of the integrated risk is then carried out with the selected costs, and sensitivity analyses are shown in Figure 5 for varying sampling prior Beta(a s , b s ) means, under the constraint that a s + b s = 40. Fig. 4. Operating characteristics related to testing and estimation for the single-arm proof-of-concept trial with binomial outcome reported in Roychoudhury and others (2018). The extreme power loss and extreme sample size gain in a subsequent Phase III trial, are defined as the 80% quantile of the respective distributions induced by the data outcomes. The probability of an indeterminate outcome, the extreme power loss and extreme sample size gain are averaged with respect to the sampling prior Beta(11, 29) truncated from below at θ R = 0.175. The remaining operating characteristics are averaged with respect to the whole sampling prior distribution. 
The dashed vertical lines represent the sample sizes identified for cost elicitation with respect to the RSN analysis prior, and, independently, for testing (upper panels, n SATE = 21) and estimation (lower panels, n AMSE = 74). Such sample sizes allow to maintain an average type I error rate and type II error rate below 0.15 and 0.2, respectively, as well as an average probability of an indeterminate outcome, average extreme power loss, and average extreme sample size gain below 0.1, 0.3, and 10, respectively. Note that the informative prior results exactly overlap with the sampling prior ones. The RSN analysis prior is, as expected, robust with respect to different sampling prior specifications, leading to sample sizes in the range [22,64]. The decrease near the boundaries is mainly induced by a lower sample size requirement for the MSE in these regions of the parameter space. Note that the sample size selected under the sampling prior with mean 0.275, 64, is relatively high if compared to the sample size originally elicited, 25. This increase is mostly induced by the estimation error component and, in turn, by the desire to limit the chance of incurring an extreme power loss in a Phase III study. The EB power and robust mixture prior show a better behavior, which is reflected in a globally lower integrated risk at the cost of a moderate increase in sample size; the only exception is for the region close to 0, where most of the RSN prior mass is concentrated. CONCLUSIONS In this work, we have explored different Bayesian procedures of sample size selection based on decisiontheoretic considerations. In particular, we have defined an integrated risk which incorporates losses arising from testing, estimation, and sampling. We have related the loss arising from estimation to the probability of an indeterminate trial outcome as well as the potential power loss in a consequent Phase III trial, both aspects being of relevance in particular in the context of Phase II proof-of-concept trials. Furthermore, we have explored two possible elicitations of the test error costs, leading to test decisions based either on posterior probabilities or solely on BFs. The latter specification has the advantage of minimizing the weighted sum of the average type I and type II error rates, which may be appealing both in terms of optimization target and cost interpretability. Skepticism about the historical information can be formally taken into account by performing sensitivity analyses based on the distinction between sampling and analysis prior. In particular, in the approach adopted here, decisions are based on the analysis prior, while uncertainty about θ is summarized by the sampling prior distribution. Frequentist operating characteristics are thus easily obtained by assuming a point-mass sampling prior. We have illustrated the approach and compared different analysis prior specifications in a simulation example. Interestingly, in the set-up considered, when test decisions are based solely on the BF, the EB power prior induces test decisions which closely resemble the informative prior ones while retaining advantages in terms of AMSE robustness. The application of the methodology to a real proof-of-concept trial shows additionally that, if not only testing but also estimation is taken into account, particularly in view of a reduced potential for power loss in a subsequent Phase III trial, a higher Phase II sample size may be required. 
We have focused on the commonly relevant targets in clinical trial design, i.e., test error rates and MSE, but of course alternative specifications of the loss function can be investigated (e.g., taking into account the distance of the true parameter value from θ 0 , see e.g., Lewis and others, 2007;Parmigiani and Inoue, 2009). Alternative specifications of the set of hypotheses, e.g., a simple null versus a composite alternative (see, e.g., Berger and Sellke, 1987, and references therein), as well as different, more complex, designs, can definitely be of interest for further research. A caveat in this direction is related to the dimensionality of the parameter space, due to the need to compute multiple integrals in the integrated risk, and to the availability of analytical forms for the posterior, due to the need to otherwise resort to, e.g., Markov Chain Monte Carlo. One general criticism of decision-theoretic approaches in the clinical trial context is that different interests may be at stake, inducing a lack of consensus about the costs and losses (see Spiegelhalter and others, 2004, and references therein). We have outlined a "simplified" approach to cost elicitation, linked to the implicit cost elicitation of "goal sampling" approaches. However, we stress that actual costs of test and estimation errors should be adopted whenever available. An interesting proposal moving in the direction of a more objective cost elicitation is provided in Isakov and others (2019), where the definition of type I and type II error costs is based on disease prevalence, severity, and cost of adverse events. The authors also follow a decision-theoretic approach for optimal sample size elicitation; one key difference from our work is the focus on a two-point mass prior and on conditional error rates at each of these two points. We believe that our study can be of particular interest in view of an increased need for rationales of type I error rate inflation when incorporation of historical information is desired. Moreover, both a test decision based on the principle of comparing evidence in favor of the alternative versus the null hypothesis, as a BF thresholding implies, and the monitoring of a relevance threshold, move in the direction of abandoning a purely significance target. More broadly, interest in alternatives to p-values and significance testing is now on the rise, as shown by the ASA statement on p-values and Statistical Significance (Wasserstein and Lazar, 2016), and the recent issue of the American Statistician "Statistical Inference in the 21st Century: A World Beyond p < 0.05." The article by Ruberg and others (2019) contained in this issue indeed supports a broader use of the Bayesian paradigm and of relevant external information in drug development and approval. A formal decision-theoretic approach additionally allows adopting test and estimation decisions which reflect an explicit elicitation of the targets to be optimized, and depend on the "actual costs, benefits, and probabilities" (Gelman and Robert, 2014). SOFTWARE R code for reproduction of all simulations and figures is available online at https://github.com/BDTTrialDesigns/BDT_CombIRSens. SUPPLEMENTARY MATERIAL Supplementary material is available online at http://biostatistics.oxfordjournals.org.
Super-Earths, M Dwarfs, and Photosynthetic Organisms: Habitability in the Lab In a few years, space telescopes will investigate our Galaxy to detect evidence of life, mainly by observing rocky planets. In the last decade, the observation of exoplanet atmospheres and the theoretical works on biosignature gasses have experienced a considerable acceleration. The most attractive feature of the realm of exoplanets is that 40% of M dwarfs host super-Earths with a minimum mass between 1 and 30 Earth masses, orbital periods shorter than 50 days, and radii between those of the Earth and Neptune (1–3.8 R⊕). Moreover, the recent finding of cyanobacteria able to use far-red (FR) light for oxygenic photosynthesis due to the synthesis of chlorophylls d and f, extending in vivo light absorption up to 750 nm, suggests the possibility of exotic photosynthesis in planets around M dwarfs. Using innovative laboratory instrumentation, we exposed different cyanobacteria to an M dwarf star simulated irradiation, comparing their responses to those under solar and FR simulated lights. As expected, in FR light, only the cyanobacteria able to synthesize chlorophyll d and f could grow. Surprisingly, all strains, both able or unable to use FR light, grew and photosynthesized under the M dwarf generated spectrum in a similar way to the solar light and much more efficiently than under the FR one. Our findings highlight the importance of simulating both the visible and FR light components of an M dwarf spectrum to correctly evaluate the photosynthetic performances of oxygenic organisms exposed under such an exotic light condition. Introduction The considerable number of new worlds discovered so far is pushing scientists to provide evidence of life on other planets. The diversity in kind, composition, masses, and radii of these new worlds is so vast that almost all possible mass values are covered in continuity from Mars (0.11 M ⊕ ) up to super-Jupiters (>10 M J ). Among all the planetary hosts, low mass stars, mainly M spectral type stars, are the main targets of the extrasolar planet surveys due to both their high density in the Galaxy and their small radii that provide higher amplitude transit signals than solar-like stars [1,2]. Indeed, the most attractive characteristic of these systems is that 40% of M stars host super-Earths with a minimum mass between about 1 and 30 Earth masses, orbital periods shorter than 50 days, and radii between those of the Earth and Neptune (1-3.8 R ⊕ ). Due to these high occurrence rates, super-Earths (1-10 M⊕) represent the most common type of components of planetary Experimental Aims Our research plan aim was analyzing the changes caused by the presence of photosynthetic bacteria to the chemical composition of a simulated, secondary atmosphere of a super-Earth inside the HZ of an M star. This expression of intents delimits the environmental parameters (pressure, temperature, irradiation pattern, initial atmospheric chemical composition) we want to simulate in the laboratory. The description of another version of this experiment is detailed in Reference [24]. Here, we describe an entirely new evolution of the experiment, putting it into context. The Astrophysical Context M dwarfs are faint hydrogen-burning stars with masses ranging between 0.6-0.075 M and very low photospheric temperatures (3870 K for an M0 V and 2320 K for an M9 V star [25]). 
Their spectra are characterized by maximum emission at longer wavelengths than those of the Sun and molecular band absorptions that deplete the emitted visible flux (see Figure 1). In the visible, the Sun (G2 V) has an absolute magnitude of about 4.8, while the M stars are about 4 orders of magnitude fainter than the sun (the absolute visual magnitude of an M star ranges between 9 for M0 V to 20 mag for M9 V [25]). [26], from higher to lower photospheric tempretures. The main molecular bands depleting the visible and near infra-red (NIR) flux are also indicated. The spectral features of M-stars are indicated in Reference [27]. The upper part of the figure shows the spectrum of the Sun for comparison. The Sun and M star spectra have been normalized to the flux they emit at 1.2 µm and offset. In other words, M dwarfs are very different from the Sun in both luminosity and spectral distribution. Luminosity affects the position of the HZ around the star, whereas spectral distribution influences the number of photons available in the PAR (Photochemically Active Radiation) wavelength range (400 ≤ λ ≤ 700 nm), challenging the possibility of photosynthesis on the surface of a rocky planet orbiting in the HZ of this kind of star. Not only the wavelength range of the spectral peak, but also the qualitative shape of the spectrum and how it interacts with an HZ planetary atmosphere, vary significantly across the M star mass range. In fact, due to the faintness of M stars, the HZ is as close as ∼0.30 au or even closer to the star [5,28,29]. Hence, the planet orbiting inside the HZ results as tidally locked and becomes a synchronous rotator with 1:1 spin-orbit resonance (e.g., Reference [28]) or higher (3:2) (e.g., Reference [30]). Another possible consequence to consider is whether there is sufficient flux in the wavelength range used for photosynthesis [31]. Heath et al. (1999) [32] determined that the PAR from M stars would be lower than the average terrestrial value by about one order of magnitude. In past years, several authors have also argued that terrestrial planets within the HZs of M dwarfs may not be habitable. The main reasons range from the possible deficiency of volatiles in the planetary atmosphere [33], to the scarce water delivery during the planet evolution [34], or the loss of planetary water during the pre-main sequence due to the higher luminosity of the protostar in that evolutionary phase [35]. The latter seems a showstopper indeed. In any case, Tian and Ida (2015) [36] showed that the content of water in Earth-like planets orbiting low mass stars could be rare, but dune and deep ocean worlds may be common. Nevertheless, many authors are optimistic asserters that the oxygenic photosynthesis can take place on a super-Earth surface also under these conditions. Recent studies on possible water loss in the atmosphere of planets orbiting very cool stars, like Trappist-1 d, show that these planets may still have retained enough water to support surface habitability [37,38]. Furthermore, previous work on the photosynthetic mechanisms and spectral energy requirements elucidated that photosynthesis can still occur in harsh and photon-limited environments (e.g., Reference [21][22][23]32,39]). Another possible showstopper discussed in several works is the obstacle caused by the stellar activity. M dwarfs, by nature, are characterized by their high stellar activity. These stars can significantly change their activity depending on their evolutionary stage. 
During a quarter of their early life, M dwarfs release high amounts of XUV through flares and chromospheric activity [23], while quiescent stars emit little UV radiation and have no flares [40]. Planets orbiting around M dwarfs often receive high doses of XUV radiation during stellar flares, with fluxes that can increase ten to a hundred times and occur 10-15 times per day. These events rapidly change the radiation environment on the surface [41][42][43] and possibly erode the ozone shield, as well as parts of the atmosphere. However, some researchers point out that these planets could remain habitable despite these issues [41,44,45]. The presence of a strong magnetic field or a thick atmosphere [46,47] could avoid planetary atmosphere erosion. On these planets, possible organisms could develop UV protecting pigments and DNA repair mechanisms, similar to Earth, or thrive in subsurface niches [48] and underwater, where radiation is less intense. This kind of life, of course, would not be detectable remotely. A UV-protective mechanism that could be detected remotely is the photo-protective bioluminescence, where proteins shield the organisms from dangerous UV radiations, emitting the energy at longer, detectable wavelengths. Moreover, it would allow organisms to live on the surface of planets orbiting active stars and is a mechanism already at play on Earth for some species of corals [45]. The flaring activity of M dwarfs, in particular, could even be beneficial for oxygenic photosynthesis, if the XUV portion is adequately shielded by the atmosphere. They could indeed enhance the effectiveness of oxygenic photosynthesis due to the additional flux in the PAR that they can generate [49], even if they are not thought to allow an Earth-like biosphere in planets orbiting the HZ of the majority of the M dwarfs known [13]. From an observational point of view, Kepler found that Earth-sized planets (1.0-1.5 R ⊕ ) are common around M stars with an occurrence rate of 56% with periods shorter than 50 days. Super-Earths with radii between 1.5-2.0 R ⊕ and periods shorter than 50 days orbit M dwarfs with an occurrence rate of 46% [50]. Similar high occurrence rates are reported by the radial velocity surveys [3]. Notable examples are the Trappist-1 system with seven super-Earths [51] and Proxima Cen b [52] orbiting two M stars. These planets are predicted to have large surface gravities (25 ms −2 for 5 M ⊕ (Reference [53] and References therein)) and are likely to exist within a wide range of atmospheres: some of them could be able to retain a thick H-rich atmosphere, whereas others could have a stronger resemblance to Earth with heavier molecules in their atmospheres. A third possibility could be an atmosphere with a moderate abundance of hydrogen due to its escape and/or molecular hydrogen outgassing [53]. The Experiment Plan The background scenario is crowded with theoretical hypotheses on the photosynthesis at work on planets orbiting M stars; however, to the knowledge of authors, no experimental work has been performed directly exposing photosynthetic organisms to a simulated M-dwarf spectum. Here, the (unavoidable) working hypothesis is that the evolution of extra-terrestrial life converged to pigment production and photosynthetic mechanisms similar to that of terrestrial extremophiles under non-Earth conditions. Our simple approach consists of the following steps: i We set up the laboratory instrumentation and selected the organisms for the tests (see Section 4.1). 
We built some of the laboratory tools ex-novo. We first built a light source suitable for the purpose of the experiment, simulating the star irradiation (see Section 3.1). Secondly, we built the reaction cell (see Section 3.2). ii Before conducting the main experiment, we performed a fiducial one considering the terrestrial environment. We irradiated the selected organisms with solar light and within a terrestrial atmosphere environment. iii Once we checked that the experimental set up functions well in terrestrial condition, we switched to the M star irradiation of organisms, considering a terrestrial atmosphere. iv Lastly, we are planning to conduct experiments using the M star light to irradiate the cyanobacteria that will be put in a modified atmosphere. The composition is defined using the 1-D model of the atmosphere of super-Earths described by Petralia et al. 2020 [54] and Alei et al. 2020 (submitted). In this paper, we address the first three points. The fourth one will be part of future work. Laboratory Set Up The experimental set up is sketched in Figure 2. It can be conceptually split into two parts: the stellar simulator on top, composed of the illuminator and a spectrometer, with the reaction cell on the bottom, where the microorganisms used for the experiments are hosted. The entire system is isolated in a dark container and is cooled by two fans, it is equipped with an anti-condensation system, and it is monitored through a webcam. A control PC allows to operate remotely the system without interfering with the experiment. Star Irradiation Simulator To achieve the described experimental aim, it was necessary to have an unconventional light source. In particular, the light sources used in photosynthetic study facilities are mainly metal halide, high-pressure sodium, fluorescent, and incandescent lamps. These lamps are commercially available and mostly emit solar or close to solar radiation, with a limited capability in adjusting the color temperature and the intensity of the output radiation. In our experiment, we need a light source able to reproduce the irradiation of stars other than the Sun in a quite simple and direct way. Furthermore, it should be operable without interfering with the experiment. To achieve our goal, we designed a completely different light source using light-emitting diodes (LEDs) controlled remotely by a computer. LEDs have also been used in the laboratory as light sources for their efficiency in plants growth [55]. For that purpose, the LED-based devices are built to illuminate the material in the red part (600-700 nm) of the PAR, while, typically, the blue part of PAR is covered by blue fluorescent lamps. For our purposes, we need a completely different device. In fact, for Wien's law (λ Max T = 290 × 10 4 nm K), different stars of different spectral types have the maximum of their emission at different wavelengths. In particular, while the Sun has a peak of emission at about 550 nm, an M star, which is about a factor of 2 cooler than the Sun, has its emission peak in the NIR range (at about 1000 nm). In order to appreciate the differences in the slope of the spectra of the stars of different spectral types, we need a collection of LEDs able to cover a slightly longer wavelength range than the PAR, between about 350-1000 nm. Moreover, this device shall be able to modulate the LEDs intensity in order to mimic, as close as possible, the flux variation of stars of different spectral types. 
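The quoted emission peaks follow directly from Wien's law; the short snippet below (with round, illustrative photospheric temperatures) reproduces the order-of-magnitude statement above.

```python
def wien_peak_nm(t_kelvin):
    """Wien's displacement law: lambda_max * T ~ 2.9e6 nm K."""
    return 2.9e6 / t_kelvin

print(wien_peak_nm(5800))   # Sun-like photosphere: ~500 nm
print(wien_peak_nm(2900))   # M dwarf roughly a factor of 2 cooler: ~1000 nm
```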
The available LEDs allow us to consider the wavelength range between 365 nm and 940 nm covered by 25 dimmable channels (see Table 1). Because LEDs covering such wavelength range are manufactured with different technologies (from AlGaN/InGaN to GaAs/InGaP), their emitted luminosities are also different from each other, and each channel has a different number of LEDs to achieve the required optical power at a specific wavelength. Furthermore, we added a white LED with a correlated color temperature (CCT) of 2200-2780 K to fill the spectrum in the 630 nm region. We used 312 LEDs in total, arranged in five concentric rings on which the mosaic of circuit boards is arranged in a pie-chart shape, on the surface of which the diodes have been welded [56]. Each channel is tunable enough to allow us to reproduce the radiation of stars of F, G, K, and M spectral types. The modularity design of the board permits easy maintenance in case of damage, allowing us to remove only the problematic piece. The disposition of diodes on the board was designed to reduce the non-uniformity of the flux, due to the intrinsic light exit angle of each led. Moreover, a reflective cylinder and an optical diffusive foil were mounted to increase the uniformity. Since the thermal power of the system dominates its radiation power, the diodes are cooled by a fan set on the back of the board. A spectrometer collects the light through a slit head placed at a manually adjustable distance from the diffusive foil. The adopted spectrometer is a Component Off The Shelf (COTS) component. We selected the FLAME VIS-NIR by Ocean Optics, in which its 2040 × 2040 pixel detector covers the wavelength range 190-1100 nm. The illuminator is controlled by a custom control software [57] that, by means of a graphical user interface (GUI), allows the user to select an appropriate spectrum chosen from a spectral library. For the input spectrum, the control software calculates the intensities of the 25 channels to best fit the spectrum. In any case, through the GUI, the user has access to each channel of the illuminator setting the output flux of the channel. The set spectrum is shown in a window of the GUI. The emitted spectrum is registered by the spectrograph and is superimposed on the input spectrum. Slight differences between the two can be fixed by adjusting the luminosity of each channel. The left panel of Figure 3 shows the simulated spectrum of a solar star (light SOL; see Section 4.2), while the right panel of the same figure shows the simulated M star spectrum (light M7; see Section 4.2). In both panels the input spectrum is represented in red color, and the emitted spectrum in blue. The input spectra are smoothed (e.g., see Figure S1 in the Supplementary Materials for an M7 V star) due to the difficulties in reproducing the high resolution stellar spectra by the spectrum simulator. The Reaction Cell The incubator cell (see Figure 4) is a steel cylinder of 0.5 l of volume in which the light enters through a thermally resistant Borofloat glass, with over 90% transmission in 365-940 nm wavelength range. The atmosphere in the cell can be flushed to change the initial O 2 , CO 2 , and N 2 levels. The cell is provided with pipe fittings and connected to an array of flow meters and needle valves (each for a different input gas: N 2 , O 2 , CO 2 ) to inject atmospheres of controlled and arbitrary compositions. 
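The channel-fitting step performed by the control software can be illustrated, in simplified form, as a non-negative least-squares problem. In the sketch below each channel is modeled as a Gaussian emission profile and the target as a blackbody-like M-dwarf spectrum; both are assumptions made for illustration, and this is a conceptual analogue rather than the authors' control software.

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(365.0, 940.0, 400)          # wavelength grid in nm

def planck(wl_nm, t_kelvin):
    """Blackbody spectral radiance per unit wavelength (arbitrary units)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    lam = wl_nm * 1e-9
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * k * t_kelvin)) - 1.0))

# Assumed channel emission profiles: 25 Gaussians with ~20 nm FWHM
centers = np.linspace(365.0, 940.0, 25)
sigma_nm = 20.0 / 2.355
channels = np.exp(-0.5 * ((wl[:, None] - centers[None, :]) / sigma_nm) ** 2)

target = planck(wl, 3000.0)
target = target / target.max()               # normalized M-dwarf-like target

drive, resid = nnls(channels, target)        # non-negative channel intensities
print(np.round(drive, 3), resid)
```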
Once the desired mixture is flushed through the cell, the input and output valves are closed to seal the inside environment and leave it to its evolution. Water vapor will quickly reach saturation value, due to the water-based medium in the sample Petri dish. When using high carbon dioxide levels, caution should be exercised to ensure a long enough flushing time to achieve equilibrium between the gas phase and dissolved CO 2 due to its high solubility. The base of the cell and the sample are kept at a constant temperature by controlling a Peltier cell on which the cylinder is leaned. The Peltier temperature set point is always kept 2 • C lower than the surrounding environment (which its temperature is also controlled) to avoid any condensation on the upper glass window. In the context of this work, the cell was always operating at ambient pressure and 30 • C temperature with an initial composition of 75% vol. N 2 , 20% vol. O 2 , and 5% vol. CO 2 ; this provides a high enough amount of carbon dioxide to be fixed into biomass throughout the experiment without excessively stressing the sample. Vital photosynthetic microorganisms in a liquid medium inside the cell are expected to produce oxygen. Hence, the cell is provided with a commercial fluorescence quenching oxygen sensor (Nomasense O 2 P300), while the CO 2 concentration is monitored via a custom Wavelength Modulation Spectroscopy (WMS) Tunable Diode Laser Absorption Spectroscopy (TDLAS, [58]) set up. To monitor the gas, four wedged windows of 2.5 cm are pierced and paired two by two in opposite positions on the wall of the cell. Two of the windows are used by the CO 2 sensor for TDLAS, whereas one is used for the fluorescence quenching tablet, which is remotely sensed through an optical fiber. The reaction cell underwent several modifications for reducing the systematic errors and the human interferences during the experiments. In Battistuzzi et al. (2020) [59], a complete description of the very last version of the reaction cell and the results obtained from a biological point of view is presented. The Control Software The starlight simulator (illuminator and spectrometer) and the incubator cell environment (gas sensors and Peltier cell) are controlled by two separate processes, running on the same computer. We wanted a stand-alone software to control the simulator because it could also be used also for other laboratory applications (e.g., photo-bioreactors, microscopy, yeast growth [60]). Validation of the Experimental Set up To validate the experimental set up, we positioned the cyanobacterium Synechocystis sp. Pasteur Culture Collection (PCC) 6803 liquid cultures into the simulator chamber with an atmosphere consisting of a mixture of gasses in the following composition: 75% of N 2 , 20% of O 2 , and 5% of CO 2 . Eventually, we irradiated it by means of the star simulator with a solar (G2 V) spectrum with three different intensities: 30, 45, and 95 µmole m −2 s −1 (from here, µmole is used for µmole of photons). The organisms exposed to different light regimes grew with good photosynthetic efficiency. Description of the test and of the developed method to evaluate the growth of bacteria without any interferences by the operators are fully detailed in Battistuzzi et al. (2020) [59]. Selected Organisms On Earth, the rise of photosynthetic organisms transformed in billions of years the primordial planet into our beloved and green planet. 
A review of the fundamental processes at play in photosynthetic organisms with a set of algae and plants on Earth is presented in Ref. [23]. Among microbes, the most relevant in changing the characteristics of our planet were cyanobacteria that evolved oxygenic photosynthesis deeply influencing the Earth's atmosphere and leading to the "oxygen-breathing life" development. The recent finding of cyanobacteria able to use FR light for oxygenic photosynthesis due to the synthesis of chlorophylls d and f , extending in vivo light absorption up to 750 nm [20,61], suggests the possibility of exotic photosynthesis in planets around M stars. We took advantage of the availability of a large selection of extremophiles, including cyanobacteria from soils, thermal spring muds and cave rocks, and cyanobacteria with UV-absorbing pigments. Extremophile cyanobacteria from environments characterized by low irradiances, rich in FR wavelengths [20], are selected for M dwarf star simulations. Some of these cyanobacteria are already known for coping with conditions not occurring in their natural environments, such as space and martian simulated conditions in low Earth orbit [62][63][64]. These astrobiological experiments pointed out that the limits of life have not been established well yet, and that extremophiles may have the potential to cope with the simulated environments planned in our experiment and, hence, can be a good source to identify potential astronomical biosignatures. The selected extremophiles are expected to perform photosynthesis using the simulated M star light radiations, considering that the minimum light level for photosynthesis is about 0.01 µmole m −2 s −1 , i.e., less than 10 −5 of the direct solar flux at Earth in the PAR wavelength range (2000 µmole m −2 s −1 ) (Reference [65,66] and References therein). This implies that, even at the orbit of Pluto, light levels exceed this value by a factor of 100. In particular, for our experiment, we chose two cyanobacteria species, Chlorogloeopsis fritschii PCC 6912 and Chroococcidiopsis thermalis PCC 7203, which are able to perform FR light photoacclimation ((FaRLiP) [20,67]), and Synechococcus PCC 7335, a peculiar cyanobacterium strain capable of both FaRLiP and Chromation Acclimation (CA). We compared the responses of these species with those of Synechocystis sp. PCC 6803, a cyanobacterium unable to activate FaRLiP or CA, hence used as control organism. Chlorogloeopsis fritschii PCC 6912 is an organism that can thrive under various environmental conditions in terms of intensity and temperature. On Earth, its favorable habitats are thermal springs and hyper-salty lakes. The peculiarity of this strain is its ability to synthesize chlorophylls a, d, and f . Chlorophylls d and f are produced in a larger quantity when Chlorogloeopsis fritschii grows in the FR light [68]. On the other hand, Chroococcidiopsis thermalis PCC 7203 is a cyanobacterium isolated from a soil sample in Germany and ascribed to the species Chroococcidiopsis thermalis Geitler, whose type locality is Sumatra hot springs (according to Algaebase website: https://www.algaebase.org/). Moreover, members of the Chroococcidiopsis genus are widespread and can be found in freshwaters, salt waters, and hot and cold deserts [69]. Due to their capability of withstanding different and extreme conditions, Chroococcidiopsis strains are utilized in astrobiology studies [63]. 
Like Chlorogloeopsis fritschii, Chroococcidiopsis thermalis performs FaRLiP; both grow continuously and photoautotrophically in FR light and are used as model organisms to study cyanobacterial acclimation mechanisms to this light source [70]. Synechococcus sp. PCC 7335 was originally isolated from a snail shell in an intertidal zone and is thus adapted to changes in light regime and in hydration/dehydration. This organism is unique in its capability to activate the FaRLiP response when grown under FR light and to visibly change its pigmentation complementarily to the incident light spectrum, due to the so-called CA acclimation response [71]. Synechocystis sp. PCC 6803, finally, is a well-known cyanobacterium used as a model strain because its genome has been completely sequenced. Synechocystis sp. PCC 6803 was selected as the control organism: it does not possess the gene cluster responsible for FaRLiP, and it does not acclimate when exposed to FR light. Growth and Photosynthetic Efficiency To perform the first part of the experiment (see Section 2.2), the selected cyanobacteria were grown in BG-11 medium [72] or in ASN-III medium [71], depending on the species, in both liquid and solid (with the addition of 10 g l−1 of agar) cultures. The liquid cultures were exposed to terrestrial atmospheric air in a climatic chamber maintained at a temperature between 28 and 30 °C under a continuous cool white fluorescent light of 30 µmole m−2 s−1 (L36W-840, OSRAM). Once the organisms were in the exponential growth phase, we subdivided them into spots over agarized solid medium (BG-11 or ASN-III) in Petri plates, to be exposed to the different light sources: solar (SOL; see Figure 3), the simulated M dwarf (M7) spectrum, and FR light (see Table 2). For the experiment, we arranged plates with several 20 µL spots at different cell concentrations, with optical densities (OD) of 0.3, 0.5, 0.7, and 1 measured at 750 nm, for each organism (see Figure S3 in the Supplementary Materials). The plates were illuminated by the three different sources for 240 h and analyzed after 72 h and at the end of the experiment (see Figure S4 in the Supplementary Materials). All the experiments were performed in three biological replicates. The top panel of Figure 5 shows the phenotypes before irradiation and after 72 h and 240 h of irradiation with the three different light sources. An increase in the optical density of the spots of all the species, which also appear greener than at the beginning of the experiment, can be observed in Figure 5. This trend is visible both for the samples irradiated by the solar light and for the samples irradiated by the simulated M star spectrum. As expected, the bacteria behave differently when exposed to FR light. In fact, it is possible to see that, unlike the other strains, the control organism (PCC 6803) does not grow under the monochromatic FR light. These results are confirmed and quantified by the values of the F0 incremental ratios (Figure 5, bottom panel). F0 is the ground chlorophyll fluorescence measured with Pulse-Amplitude Modulation (PAM) on cells adapted to the dark for 20-30 min and then exposed to a pulse-modulated light that does not trigger the photosynthetic process [73]. The chlorophyll ground fluorescence (F0) is proportional to the chlorophyll content and, thus, to the number of cells in the considered spot, so an increase of the F0 parameter is a measure of the growth of the culture [74].
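For concreteness, here is a minimal sketch (in Python, with hypothetical variable names and placeholder numbers; this is not part of the paper's analysis pipeline) of how the two quantities used below, the F0 incremental ratio (F0(72H) − F0(0H))/F0(0H) and the PSII efficiency FV/Fm with FV = Fm − F0, can be computed from PAM readings:

```python
# Hypothetical helpers; the numeric values below are placeholders, not measured data.
def f0_incremental_ratio(f0_0h: float, f0_72h: float) -> float:
    """Relative increase of the ground chlorophyll fluorescence F0, used as a growth proxy."""
    return (f0_72h - f0_0h) / f0_0h

def fv_over_fm(f0: float, fm: float) -> float:
    """Maximum PSII photosynthetic efficiency from dark-adapted F0 and Fm."""
    fv = fm - f0  # variable fluorescence
    return fv / fm

# Example usage with made-up readings for a single spot:
print(f0_incremental_ratio(f0_0h=120.0, f0_72h=300.0))  # -> 1.5 (the culture grew)
print(fv_over_fm(f0=120.0, fm=400.0))                   # -> 0.7
```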
The measurements are considered reliable when the F0 value increases linearly with increasing OD. For this reason, we performed experiments with 4 different initial culture concentrations (OD) for each organism, and we repeated them with 3 independent biological replicates. The initial OD values best meeting these conditions proved to be 0.5, 0.7, and 1 for each organism. The measurements were performed at the beginning of the experiment (F0(0H)) and after 72 h (F0(72H)). We did not consider the measurement at 240 h, as the fluorescence signals with the initial PAM settings were saturated for most of the spots due to the very high cell concentrations. The F0 chlorophyll fluorescence was measured on the entire spot at 0 and 72 h, analyzing the same area and maintaining the same acquisition settings each time [74]. The F0 detected at the beginning (F0(0H)) and after 72 h (F0(72H)) were used to calculate the F0 incremental ratio, (F0(72H) − F0(0H))/F0(0H), and to estimate the growth. The results reported in Figure 5 (bottom panel) are derived from the spots with 0.7 initial OD and show the increase of F0 after 72 h of exposure to the different light sources. The histograms indicate that all the cyanobacteria strains are capable of growing well under the M7 simulated light, with F0 incremental ratios comparable to those of the spots exposed to the solar light. PCC 6803, which does not possess the FaRLiP gene cluster, has an F0 incremental ratio under FR light close to zero and negligible with respect to the values registered under M7 and solar light. All the other organisms are shown to be capable of exploiting FR light to different extents depending on the strain. To evaluate the efficiency of photosynthesis, we analyzed each plate by means of PAM imaging (FluorCam FC 800MF). We evaluated the PSII photosynthetic efficiency through chlorophyll fluorescence measurements, before exposing the cultures to the various light sources and after 72 h. The photosynthetic efficiency is defined by the ratio FV/Fm, where FV = Fm − F0, with F0 the basal fluorescence and Fm the maximum fluorescence assessed after 20 min of dark adaptation followed by a flash of saturating light. The averaged values of the FV/Fm parameter obtained by the PAM analysis from three independent biological replicates for each organism, derived from the spots with 0.5, 0.7, and 1 initial OD, are reported in Table 2. These values show that all the organisms maintain a similar photosynthetic efficiency under the different light regimes (with values typical of cyanobacteria strains; Reference [73,75] in the Supplementary Materials), while PCC 6803 shows a decline of the FV/Fm parameter under FR light, as expected.
Figure 5 (caption, partial): The phenotype variation of the spotted strains refers to spots with 0.7 initial optical density (OD). Bottom panel: values of the F0 incremental ratio of the different organisms. The F0 incremental ratio is defined as (F0(72H) − F0(0H))/F0(0H), with F0(0H) the F0 value at 0 h and F0(72H) the F0 value at 72 h. The F0 incremental ratio values are reported in Table S1 in the Supplementary Materials.
Discussion and Conclusions M stars are very popular in the astrobiology community due to their ubiquitous presence in the Galaxy and their small radii, which provide larger-amplitude transit signals than solar-like stars. So far, they are recognized as the most frequent hosts of super-Earths discovered orbiting in the HZ of their star.
This sparked a great theoretical debate about the possibility of having life, particularly photosynthetic life, on these planets. Several efforts have been devoted to modeling the upper wavelength limit of putative photoautotrophs on exoplanets. It has been hypothesized that oxygenic photosynthetic organisms could have developed pigments that do not utilize PAR light but the more abundant NIR light, or that employ photosystems using up to 3 or 4 photons per carbon fixed (instead of 2), as well as more photosystems in series (3, 4, or even 6), allowing them to exploit photons of wavelengths up to 2100 nm [21,23,76,77]. Moreover, the prospects for photosynthesis on habitable exomoons via reflected light from the giant planets that they orbit have been theoretically evaluated, suggesting that such photosynthetic biospheres are potentially sustainable on these moons, except for those around late-type M dwarfs [78]. However, up to now, no experimental data (survival, growth, and photosynthetic activity) about the behavior of oxygenic photosynthetic organisms exposed to simulated environmental conditions of exoplanets orbiting in the HZ of M dwarfs, in particular exposed to an M-dwarf light spectrum, have been produced. Numerous investigations (see Reference [67] for a review) have instead been carried out in the field of oxygenic photosynthesis "beyond 700 nm", especially after the discoveries of cyanobacteria able to synthesize chl d and f [79,80]. However, they were devoted to understanding the molecular and biochemical mechanisms behind the functioning of the photosynthetic apparatus under a FR light spectrum, rather than to testing it under exotic light spectra [70,71,81-85]. Here, to the best of the authors' knowledge, we present for the first time experimental data obtained by directly exposing photosynthetic organisms to a simulated M dwarf spectrum. We compared the results with the responses of the same species under simulated solar and FR light, using innovative laboratory instrumentation. As expected, in FR light only the cyanobacteria able to synthesize chlorophylls d and f could grow. Surprisingly, all strains, whether able or unable to use FR light, grew and photosynthesized under the M dwarf spectrum in a way similar to that under solar light, and much more efficiently than under the FR one. In particular, we compared the responses of strains able to perform FaRLiP with those of the control microorganism PCC 6803, which cannot. The growth estimated by the F0 incremental ratio parameter for all the cyanobacteria in our study shows values that are very similar or equal, considering the error bars, to the values measured for the spots irradiated with the solar light. In the case of irradiation with monochromatic far-red (FR) light, only PCC 6803 is unable to acclimate to it, while all the others show a normal photosynthetic efficiency under this light as well. This suggests that PCC 6803 grows very well under the simulated M7 light by using only the visible part of the spectrum. The ability of the other organisms to exploit FR light does not seem to be beneficial for growth under the M7 simulated light. Furthermore, all the tested strains, except PCC 6803, have very similar values of FV/Fm under all the irradiation spectra. This highlights that they are able to acclimate to all the light sources used.
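As a side note on the wavelength-limit argument above, the energy bookkeeping behind using more photons (or more photosystems in series) at longer wavelengths can be made concrete with a short, generic calculation (our illustration, not taken from the paper): the energy of a photon is E = hc/λ, so a 2100 nm photon carries roughly one third of the energy of a 700 nm photon, and roughly three times as many such photons are needed to deliver the same energy.

```python
# Generic physics illustration (not from the paper): photon energy E = h*c/lambda.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a wavelength given in nanometers."""
    return H * C / (wavelength_nm * 1e-9) / EV

e_700 = photon_energy_ev(700.0)    # ~1.77 eV, red edge of PAR
e_2100 = photon_energy_ev(2100.0)  # ~0.59 eV, hypothesized NIR limit
print(e_700, e_2100, e_700 / e_2100)  # energy ratio ~3
```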
Our findings emphasize the importance of simulating both the visible and the FR light components of an M dwarf spectrum to correctly evaluate the photosynthetic performance of oxygenic organisms exposed to such an exotic light condition. Moreover, in a previous work [59], we demonstrated that, with our experimental set up, we can measure the consumption of CO2 and the production of O2 by the PCC 6803 cyanobacterium under solar irradiation. This serves as a prelude to the future analysis of the cyanobacterial photosynthetic gas exchanges in real time during growth under M star spectral irradiation. Last but not least, we realized an experimental set up that allows us to reproduce, in the laboratory, an alien environment with the possibility of varying the thermal and physical conditions. In this way, we are carrying out experiments on photosynthetic organisms to verify their capacity to thrive in and acclimate to extraterrestrial conditions. We developed new and original laboratory devices (e.g., the star irradiation simulator) and novel measurement methods (see Reference [59]) that will allow new experiments in the future. To prepare for the next step of our research plan, we have already produced several models of stable super-Earth atmospheres to be used in the laboratory. We have started to monitor the evolution of oxygen and the fixation of carbon dioxide in cyanobacteria exposed to very different irradiations and simulated atmospheres. Hence, if the evolutionary tracks on a habitable planet in the HZ of an M star are quite similar to those on Earth, photosynthetic microorganisms could likewise produce O2 and fix CO2 into organic matter on planets orbiting such cool stars. Will it be possible to observe the released oxygen remotely? The answer to this question is not simple, because it depends not only on the efficiency of oxygen production by photosynthetic organisms but also on the efficiency of the possible oxygen sinks at work on that planet. The reverse reaction, the oxidation of photosynthetic products, depletes the atmospheric oxygen. The net release of oxygen into the atmosphere, resulting from this balance, is regulated by the burial of organic matter in the sediments. If the level of O2 in the atmosphere is low, reactions with reducing gases from volcanism (H2 and H2S) and submarine weathering [86,87] can deplete O2. If the O2 production rate is greater than the depletion rate, its build-up in the atmosphere is possible [88], and the Fe2+ oxidation process becomes an important one. Catling and Kasting (2017) and Kaltenegger et al. (2010) (References [86,87], respectively, and References therein) discuss the build-up of oxygen in the atmosphere of a planet in greater depth. Oxygen depletion is a time-dependent process. The atmospheric oxygen is recycled through respiration and photosynthesis in less than 10,000 years. In the case of a total extinction of the Earth's biosphere, the atmospheric O2 would disappear in a few million years [87]. Thus, we conclude that only observations can give us the right answer. Brand new ground- and space-based instruments are planned to become operative with the aim of finding and characterizing extrasolar planets. In the near future, dedicated space missions and space telescopes, like the James Webb Space Telescope (JWST) and the Origins Space Telescope (OST), and huge ground-based telescopes will be the right tools to search for life on other worlds.
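To make the production-versus-sink balance sketched above more tangible, here is a toy, zero-dimensional box model (our own illustrative construction, not a model used in the paper; all rate constants are arbitrary placeholders): atmospheric O2 accumulates only while the biological source outpaces the combined volcanic/weathering sinks, and decays on the sink timescale once the source shuts off.

```python
# Toy one-box O2 budget: dO2/dt = source - k_sink * O2 (arbitrary units and rates).
def integrate_o2(source: float, k_sink: float, o2_0: float = 0.0,
                 dt: float = 1.0, steps: int = 10_000) -> float:
    """Forward-Euler integration of a zero-dimensional oxygen budget."""
    o2 = o2_0
    for _ in range(steps):
        o2 += (source - k_sink * o2) * dt
    return o2

# Source outpacing the sinks: O2 builds up toward source / k_sink.
print(integrate_o2(source=1.0, k_sink=1e-3))
# Source switched off (biosphere extinction): O2 decays on a ~1/k_sink timescale.
print(integrate_o2(source=0.0, k_sink=1e-3, o2_0=1000.0))
```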
Supplementary Materials: The following are available online at https://www.mdpi.com/2075-1729/11/1/10/. Figure S1: M7 V star input spectrum as it appears before (in light gray) and after (in red) the smoothing process. The emitted spectrum (in blue) is superimposed. The smoothing process reduces the resolution of the input spectra; hence, the stellar simulator is able to better reproduce the input spectrum, following the depletion of flux due to large atomic absorptions or molecular bands. Figure S2: Emitted FR light measured by the FLAME VIS-NIR spectrograph. The central wavelength is 720 nm and the full width at zero level is 130 nm, with a wavelength range between 650 nm and 780 nm. The photon flux of this lamp is 2.3 µmole m−2 s−1 in the PAR and 20 µmole m−2 s−1 over the total working range. Figure S3: Different cultures of the selected cyanobacteria with different optical densities, before the 20 µL spots were deposited on the Petri plates. Figure S4: Examples of a BG-11 agar plate with S. sp. PCC 6803, C. fritschii PCC 6912, and C. thermalis PCC 7203 spots and a BG-11-ASN III agar plate with S. sp. PCC 7335. Plates are shown after 72 and 240 h of exposure under the M-dwarf simulated spectrum. Table S1: Averaged values (n = 6) of the F0 incremental ratio obtained for the various organisms under the different light sources. The considered error is 1σ. Acknowledgments: The authors would like to thank T. Morosinotto and G. Galletta for their very useful comments and suggestions, as well as F. Z. Majidi for her fundamental help in editing the final version of the paper. The authors would also like to thank the two referees for their useful comments. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Abbreviations The following abbreviations are used in this manuscript:
9,165.8
2020-12-24T00:00:00.000
[ "Physics", "Geology" ]
Comparative Analysis of the Structural Variety of Complex Syntactic Units in the English and Azerbaijani Languages The article has been written on the basis of the comparative-typological method in the study of two languages (English and Azerbaijani) belonging to different language systems (English belonging to the analytical type of languages, Azerbaijani to the synthetic type). The main aim of the investigation is to find out similarities and distinctions between variant and invariant expressions of meaning in both languages. We have aimed at studying this problem from the point of view of varieties at different language levels. The varieties at different levels are regulated by laws within the system of a language and comprise positional, combinatory, and distributional variants, which belong to the structural varieties within the system of the language. In this research work, the problem of variants and invariants, the problem of the dichotomy of language and speech, the problems related to them, social and individual problems in both languages, system and norm, and other problems have undergone scientific investigation. In the introduction of the article it is pointed out that the varieties (mainly phonological) have been investigated in their functional aspect in both compared languages. Here the terms "variant" and "invariant" have found their linguistic interpretation. The coauthors of the article, basing their work on the variant and invariant problems in the English and Azerbaijani languages, have attempted to throw light on the problem of variation in complex syntactic units in the compared languages. Within the scope of the investigated problem, the coauthors have introduced discussions of scholars dealing with the problem of variation, including variants and invariants, in both compared languages. Aiming at experimental evidence, the coauthors have introduced tables showing variation in the compared languages, in which varieties of the vowel (i) in complex syntactic units pronounced in different positions in both languages are indicated (see Tables 1, 2, 3). The tables have been taken from (Yunusov, 2005; 2007). In the conclusion of the research work the authors present the generalized considerations touched upon in the process of investigation. Introduction The variants of different units may oppose only their own units, but not others. Varieties have been investigated either in the functional plan of language (e.g., the role of variants in forming the literary language) or in the system of phonetic and grammatical aspects. On this account, the interpretation of varieties at different language levels, especially structural, formal, and semantic varieties, their interrelationship with one another, and other problems still await profound systematic solutions. By a variant we understand two or more formal modifications not related to definite changes of meaning. Variants may appear depending on the speaker's aim and object, on the situation of speech communication, on the particular language material, etc.
Without going into details, we can say that each complex syntactic unit may take part in various variants in the complex sentence paradigms.The languages belonging to different language families (English and Azerbaijani) cover these variants and these variants appear depending upon the syntactic -structural and semantics of the verb-predicate properties (Stepanov, 1979). However it is highly important to consider the complex syntactic unit from all these points of views.The problem of classifying subordinate clauses is one of the vexed questions of syntactic theory.Several systems have been tested at various times, and practically each of them has seemed to possess some drawbacks.Some of the classifications so far proposed, have been inconsistent, that is to say, they do not base on any firm principle of division which equally can be applied to all clauses to be considered. Scope of the Study Investigating the varieties in different language levels, the majority of scholars show that the varieties of different levels are regulated by the laws of inner system and comprise positional, combinatory and distributional variants which belong to inner structural varieties.Investigating the problem of variants and invariants should help the problem of dichotomy of language -speech, the problems related with it, social and individual problems in languages, system and norm and other problems.It is clear that the unity of different points determine the place of units in the related systems.The description of those non-differential relations puts forward the system of variants.The system of variants doesn't have less importance than the system of invariants in the interpretation of the linguistic phenomena. Units exist only in their variants.It should be mentioned here that the system of variants is restricted with the language norms.The units existing in their variants should oppose to one another in this frame (Stepanov, 1979). Undoubtedly, intonation variations of the languages may help the scholars to solve this problem.As to Stepanov (1979), variation in itself is one of the main problems of the language.It is not possible to ignore the thought of Q. P. Torsuev, saying that "constancy and variation are one of the most important features of language structure" (Torsuev, 1977). A. Martine points out that "in the expression plan variant and invariant confrontation has been determined by the scholars of Prague linguistic school, and have been named as the invariant and as a phoneme" (Martine, 1970). As to Wallace Chafe the representatives of the Prague linguistic school, glossematics and American descriptives tried to affirm taxonomic method in the science on linguistics (Chafe, 1975) L. Bloomfield who headed American descriptive linguistics repeatedly stressing the importance of meaning, stated that it was too early to study it (Bloomfield, 1968). N. Chomskyi mentioned that one of the defective points of the structuralists was the fact that they ignored the "phonetic bias" (Chomsky, 1962).This problem has been studied in the Azerbaijani linguistics as well.Speaking on the role of tone in the pronunciation of the syntactic units within the complex sentences majority of these linguists shared the view on the determination of tones in the complex sentences.As to them within the complex sentences, the components of the syntactic units, the principal and the subordinate clauses have two tones: subordinating and subordinated tones depending upon the type of sentences. 
In the complex sentences the first coming sentence is pronounced with rising tone, while the second one is pronounced with descending tone (Abdullayev, 1999;Abdullayev, 1974). Under variant we understand two or more formal modifications without being subjected to certain changes of meaning.Variants may appear depending on the speaker's aim and object, under the situation of speech communication and under special language material etc.The considered material makes it possible for us to say that each complex syntactic unit may take part in various variants in the complex sentence paradigms (Yunusov, 2005;2008). Usually the classification of complex syntactic units is based on the type of the subordinate clause, its function and meaning, and also on the conjunctions and connecting words.This method, though being highly important for the understanding and analysis of various types of subordinate clauses, is not satisfactory, in that the structure of the complex sentence as a whole and the structure of the main clause as its basic element is not taken into consideration.Besides, this classification doesn't take into consideration the fact that subordination as a way of connecting two clauses may vary from the view of a close relation to a very loose connection, with many gradations in between; then, the relative importance of the main and the subordinate clause may also be different, sometimes it is the subordinate clause which contains the main idea, the role of the main clause being tended to expressing modal shades of meaning -doubt, sureness and so on. Research Methodology We have used in this article comparative typological method in the investigation of the problem of variants and invariants, problem of dichotomy of language-speech, the problem of system and norm in the structural varieties of complex syntactic units in the English and Azerbaijani languages. Variation of the Complex Syntactic Units We base on the first postulation and continue our investigation.We suppose that separately each of these complex syntactic units having different realization forms gets only one invariant form.The mentioned theoretical problem should be explained like this. If we take one of the types of complex syntactic units in any compared languages and show the realization of their phonetic variants in one type, this type will keep its stability in different spoken situations, so these types of complex syntactic units, let's suppose subordinate clauses of object, differ from the types of attributive subordinate clauses both by their formal-structural and semantical points and oppose to them.Formally, though the object subordinate clauses appear in some variants, semantically, they perform the function of an object to the predicate -verb of the principal clause or it may also refer to a non-finite form of the verb, to an adjective, or to a word belonging to the part of speech expressing state.But the meaning of the attributive subordinate clauses is to qualify the antecedent.Unlike the object subordinate clauses, the attributive subordinate clauses serve as an attribute to a noun (pronoun) in the principal clause. As the meaning of object differs from the meaning of attribute semantically, the variants of object will differ from the corresponding variants of attribute formally as well. 
In other words these two complex syntactic units like two abstract models oppose to each other in the syntactic level of the language.They keep their independence like object and attributive subordinate clauses in the language system.Accordingly each of these complex syntactic units has its own possible variants.Those possible variants of the complex syntactic units have got their own realization forms in speech.Those realization forms are related with different concrete types etc. Summing up all above mentioned we may come to the conclusion that the object and attributive subordinate clauses like different complex syntactic units oppose to each other in language system and they are opposed like two communicative models.If the predicate -verb of the principal clause of object subordinate clauses is expressed by the verbs of "saying", this complex syntactic unit has got minimum ten realization forms.These realization forms separately have got a lot of chances of usage in different situations.The same realization forms can be observed in the attributive subordinate clauses as well.What kind of features make these variants appear and how these related to each other by their intonation -structural points are coordinated?It should be mentioned that the individual features of realization of each complex syntactic unit is less important.On this account study of "individuals" is of great importance in studying "common" points.If the "individuals" differ from each other, they always reveal the properties of "common" and "individual" problems.In other words, if the "individuals" don't have the relations tending to "commons", they can't exist in the language."Commons" can only exist by means of "individuals" going through the "special" points.Each "common" is considered to be an "individual".Each "common" approximately comprises all the "individual" objects.Each "individual", though incomplete, includes the "common" and so on and so forth.If we interprete above mentioned philosophical conception concerning to complex syntactic units, we can show that the object subordinate clauses as "common", possess their variants as "special" and their patterns as "individual".This interpretation has got a great importance in investigating complex syntactic units from the point of constancy and variety in languages of different systems. But what results should be gained by the contrastive structure studies of the variety of complex syntactic units in the non-kindred English and Azerbaijani languages?The experimental analysis prove that it is possible to find out a lot of morphs and allomorphs between these compared languages. 
Comparative analysis of the melodic structure variety of complex syntactic units in the English and Azerbaijani languages proves that under different factors, i.e., the syntactic structure, the position of the components of complex syntactic units, the semantic side of each component and the expanding of components by the addition of lexical elements, the melodic structure variety of different complex syntactic units in the compared languages differ from one another.While investigating the English and Azerbaijani complex syntactic units, we have found out that the syndetic elements, i.e., the conjunctions, conjunctive pronouns, conjunctive adverbs, relative pronouns and relative adverbs also affect on the melodic structure variety of complex syntactic units in languages of different systems.Under the use of those above mentioned syndetic elements the syntagmatic division of complex syntactic units takes place not only between the juncture of the components of complex syntactic units but also within syntagms or clauses (components).They are called either within syntagmdivision or within (inner) component (clause) division.The results of experimental-phonetic investigation show that in the English language the frequent usage of those syndetic elements is shown in the usage of the following conjunctions: "that", "what", "how", "who", "why", "where", "if", "whether", "when", "which" and "whose". Comparative analysis of the melodic structure variety in the languages of different systems shows that the syntagmatic division takes place after the verbs expressing "wish", "desire" offered by L.L.Iofik (Iofik, 1953).The analysis of the melodic structure variety of complex syntactic units in the English and Azerbaijani languages shows that the components "he knew" in the complex syntactic unit "He knew that Carrie listened to him pleasurably" and "Cahan bilirdi ki" in the sentence "Cahan bilirdi ki, çətini belə sözlər yayılıncadır" differ from each other by their melodic structure variety.It should be mentioned that they differ from each other formerly.If in the English complex syntactic unit, the conjunction "that" is used with the subordinate clause, in the Azerbaijani language the conjunction "ki" joins the principal clause.And by virtue of it the structure of those complex syntactic units undergoes the melodic variety.If the main tone of the component "he knew" is changed between 230 hz and 125 hz, the main tone of the component "Cahan bilirdi ki" is changed between 250 hz and 130hz.The width of diapazone in the English sentence is 115 hz but in Azerbaijani it equals 140 hz.As it is seen in the Azerbaijani language, the width of diapazone is larger than the width of diapazone in the English language.It should be mentioned that though the conjunctions "that", "ki" can't make up a communicative centre, generally they can occupy the melodic peak in different complex syntactic units in the languages of different systems. 
Comparative analysis of the structure of complex syntactic units "The man looked at him and saw that he was deathly pale", "He saw clearly that this was her idea", "Nəhayət anladım ki, öz üzərimə gücümdən çox-çox böyük yük götürürəm", "Sonra bildim ki, bunların hərəsi on manatlıqdır" show that the expanding of each components with different lexical elements influences not only on the melodic structure variety of complex syntactic units in the compared languages but also it changes the syntagmatic division in each of those above-mentioned subordinate clauses.It should be noted that here grammatical and semantical relations of the components of complex syntactic units, the modal-emotional colouring of the situation, the structural point and the position of the components within the complex syntactic units should be taken into account as well. Comparative analysis of the melodic structure variety of complex syntactic units shows that the syndetical types "He recollected with satisfaction that he had bought that house over Jame's head", "You know very well that that has nothing to do with it", "You know very well that I couldn't tell anyone the reason", "Nəhayət anladım ki, öz üzərimə gücümdən çox-çox böyük yük götürürəm", "İndi başa düşürəm ki, cinayət eləmişəm" and "Sonra bildim ki, bunların hərəsi on manatlıqdır", the adverbial word groups in each of them accordingly "with satisfaction", "very well", "nəhayət", "indi" and "sonra" increase the semantic meaning of the verb-predicate in the principal clauses.By virtue of the use of those adverbial word groups in the principal clauses of the complex syntactic units, the melodic structure variety of the principal clauses of these complex syntactic units is pronounced with the rising tone.Unlike the Azerbaijani complex syntactic units, the English complex syntactic unit "You know very well that that has nothing to do with it" acquires one more distinguished feature.That's the adverbial word group "very well" differs from other adverbial word groups such as "then", "at last", "now" by its character.The adverbial word group "very well" is closely connected with the verb-predicate "know" expressing the meaning of mental activity. 
The experimental analysis of the phoneme (i) in the complex syntactic unit "He was just a bit stupid you know not very bright", of the same phoneme in the Azerbaijani sentence "Nəriman düz deyir bu kağız şübhəli kağızdır", and of Table 3, illustrating the dynamic varieties of the complex syntactic unit "I'll tell you I'll do", may show us differentiations between the progradient and terminal syntagms in the phoneme varieties (see the tables).
Table 1. Varieties of the vowel (ı) in the complex syntactic unit "He was just a bit stupid you know not very bright" in different positions.
Comparative analysis of the melodic structure variety of complex syntactic units in the English and Azerbaijani languages proves that investigating any structural type of syntactic unit helps to widen the investigation of the problem of constancy and variety in general linguistics. The main and positive aspect of this approach lies in its experimental-phonetic study. Comparative analysis shows that though the complex syntactic units in the English and Azerbaijani languages take on many forms in their melodic varieties, they never reach the level of the invariant. Though these variants take different apparent forms, they are determined only by their constituent parts, and not by others. Those variants are united by those common points which are the same for all, that is, by the invariant. But at the same time, according to the melodic structure variety, these different apparent forms are grouped into variants. These variants have some common and some different characteristics (Yunusov, 2007). Conclusion Scientific investigation of the comparative analysis of the structural variety of complex syntactic units in the English and Azerbaijani languages makes it possible for us to come to the following conclusions: 1. discussions of scholars in both English and Azerbaijani linguistics, grounded in different language systems, show that the approaches to the problem in the two languages are different; 2. each of the complex syntactic units in the phonetic plan is opposed to its intonational counterpart; the appearance of any variant mostly depends upon the syntactic-semantic and syntagmatic structure of the complex syntactic units in languages of different systems; 3. experimental analysis of the dynamic varieties of the complex syntactic unit "I'll tell you that I'll do" in the English example, in the pronunciation of English diphthongs and monophthongs (ai, e, u: 5, a9, u) by two announcers, shows that they differ as to progradient and terminal syntagms; 4. different points are observed in asyndetic and syndetic structural types of complex syntactic units depending upon the peak of the communicative center and the whole syntactic units as well; 5. comparative analysis of the melodic structure variety of complex syntactic units in the English and Azerbaijani languages shows that in Azerbaijani syntactic units the width of diapazone in progradient syntagms is much greater than in English; the introduced Tables 1 and 2 may serve to justify this thesis; 6. the melodic structure analysis of the complex syntactic units in the English and Azerbaijani languages proves that not all the syntactic units acquire the syntagmatic division at the juncture of the components in the compared languages; 7.
comparative analysis of the melodic structure variety of complex syntactic units in the English and Azerbaijani languages proves that if the verb-predicate of the principal clause of complex syntactic units in the compared languages is expressed by verbs of "mental activity", the melodic structure of the first components in the compared languages is characterized by a rising tone, but if it is expressed by verbs of "speech" this pattern cannot be observed; 8. unlike the structural type of asyndetic complex syntactic units, in the structural type of syndetic complex syntactic units the conjunction "that" carries a much greater phonetic load, so the conjunctions cannot be considered simply formal grammatical means between the components of the complex syntactic units in the compared languages; removing the conjunctions from the structural type of syndetic complex syntactic units sometimes throws the meaning into confusion.
Table 3. Dynamic varieties of the complex syntactic unit "I'll tell you that I'll do"
4,684.4
2015-01-27T00:00:00.000
[ "Linguistics" ]
Pickl’s Proof of the Quantum Mean-Field Limit and Quantum Klimontovich Solutions . This paper discusses the mean-field limit for the quantum dynamics of N identical bosons in R 3 interacting via a binary potential with Coulomb type singularity. Our approach is based on the theory of quantum Klimontovich solutions defined in [F. Golse, T. Paul, Commun. Math. Phys. 369 (2019), 1021–1053]. Our first main result is a definition of the interaction nonlinearity in the equation governing the dynamics of quantum Klimontovich solutions for a class of interaction potentials slightly less general than those considered in [T. Kato, Trans. Amer. Math. Soc. 70 (1951), 195–211]. Our second main result is a new operator inequality satisfied by the quantum Klimontovich solution in the case of an interaction potential with Coulomb type singularity. When evaluated on an initial bosonic pure state, this operator inequality reduces to a Gronwall inequality for a functional introduced in [P. Pickl, Lett. Math. Phys. 97 (2011), 151–164], resulting in a convergence rate estimate for the quantum mean-field limit leading to the time-dependent Hartree equation. Introduction and Notation In classical mechanics, the motion equations for a system of N identical point particles of mass m with positions q j (t) ∈ R 3 and momenta p j (t) ∈ R 3 for all j = 1, . . ., N is ∇V (q j (t) − q k (t)) = −∇ pj H N (p 1 (t), . . ., q N (t)) , where the N -particle classical Hamiltonian is Assuming that V ∈ C 1,1 (R 3 ), this differential system has a unique global solution for all initial data.If V is even 1 , the phase space empirical measure is an exact, weak solution of the Vlasov equation with self-consistent, mean-field potential This remarkable observation is due to Klimontovich, and solutions of the Vlasov equation ( 3) of the form (2) are referred to as "Klimontovich solutions".Thus, if µ N (0) → f in dxdξ weakly in the sense of probability measures as N → ∞, where f in is a probability density on R 3 x × R 3 ξ , one has µ N (t, dxdξ) → f (t, x, ξ)dxdξ weakly in the sense of probability measures for all t ≥ 0 as N → ∞, where f is the solution of the Vlasov equation ( 4) Thus, the mean-field limit in classical mechanics is equivalent to the continuous dependence for the weak topology of probability measures of solutions of the Vlasov equation in terms of their initial data.See [4] for a proof of this result.For instance, the weak convergence of the initial data can be realized by a random choice of (q j (0), p j (0)), independent and identically distributed with distribution f in . The mean-field limit for bosonic systems in quantum mechanics has been formulated in different settings, by using the so-called BBGKY hierarchy [20,2,1,6], or in the second quantization setting [18].Interestingly, these techniques allow considering singular potentials such as the Coulomb potential, instead of C 1,1 potentials as in the classical case.(The mean-field limit with Coulomb potentials in classical mechanics is still an open problem at the time of this writing; see however [19] in the special case of monokinetic particle distributions.See also [9,10] for potentials less singular than the Coulomb potential). 
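The displayed equations referred to in this section (the N-body Hamiltonian, the phase-space empirical measure, and the Vlasov equation) did not survive extraction; as a hedged reconstruction, their standard textbook forms, with the usual mean-field 1/N normalization of the interaction and consistent with the surrounding description, read:

```latex
% Hedged reconstruction of the missing displays (standard mean-field conventions).
\begin{align*}
  H_N(p_1,\dots,q_N) &= \sum_{j=1}^{N}\frac{|p_j|^2}{2m}
      + \frac{1}{N}\sum_{1\le j<k\le N} V(q_j-q_k),
  &&\text{(classical $N$-body Hamiltonian)}\\
  \mu_N(t) &= \frac1N\sum_{j=1}^{N}\delta_{(q_j(t),\,p_j(t))},
  &&\text{(phase-space empirical measure)}\\
  \partial_t f + \frac{\xi}{m}\cdot\nabla_x f
      &- \nabla_x\Big(\int V(x-y)\,f(t,y,\eta)\,dy\,d\eta\Big)\cdot\nabla_\xi f = 0.
  &&\text{(Vlasov equation)}
\end{align*}
```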
The quantum mean-field equation analogous to the Vlasov equation ( 4) is the (time-dependent) Hartree equation (5) In [16,14], an original method, close to the second quantization approach in [18], but avoiding the rather heavy formalism of Fock spaces, was proposed and successfully applied to singular potentials including the Coulomb potential. All these approaches noticeably differ from the classical setting used in [4] for lack of a quantum notion of phase-space empirical measures.However, a quantum analogue of the notion of phase-space empirical measure was recently proposed in [8], along with an equation analogous to (3) governing their evolution.This notion was used in [8] to prove the uniformity of the mean-field limit in the Planck constant ̵ h > 0. However, the discussion in [8] only considers regular potentials (specifically ).Even writing the equation analogous to (3) satisfied by the quantum analogue of the phase-space empirical measure requires V ∈ F L 1 (R d ) in the setting of [8]. The purpose of the present paper is twofold: (a) to extend the formalism of quantum empirical measures considered in [8] to treat the case of singular potentials including the Coulomb potential, which is of particular interest for applications to atomic physics (see Theorem 3.1 in section 3), and (b) to explain how the ideas in [16,14] can be couched in terms of the formalism of quantum empirical measures defined in [8] (see Theorem 4.1 and Corollary 4.2 in section 4). Specifically, we prove an inequality between operators on the N -particle Hilbert space, of which the key estimates in [16,14] leading to the quantum mean-field limit are straightforward consequences. The next section briefly recalls only the essential part of [8] used in the sequel.The main results obtained in the present paper are Theorems 3.1 and 4.1 from sections 3 and 4 respectively.The proofs of these results are given in the subsequent sections. Quantum Klimontovich Solutions Consider the quantum N -body Hamiltonian (6) H ). Henceforth it is assumed that V is a real-valued function such that H N has a (unique) self-adjoint extension to H N , still denoted H N .A well-known sufficient condition for this to be true has been found by Kato (see condition (5) in [12]): there exists R > 0 such that (7) z ≤R In particular, these conditions include the (repulsive) Coulomb potential in R 3 .In fact, H N has a self-adjoint extension to H N under a condition slightly more general than Kato's original assumption recalled above: (see Theorem X.16 and Example 2 in [17], and Theorem V.9 with m = 1 in [15]). In the sequel, we adopt the notation in [8].In particular, we set (9) The dynamics of the morphism M in N is defined by conjugation with the N -particle dynamics as follows: for each A ∈ L(H), ) is henceforth referred to as the quantum Klimontovich solution. Assume henceforth that V is even: The first main result in [8] where is the quantum kinetic energy, and where ( 14) , where E ω ∈ L(H) is the operator defined by ( 16) Since the integrand of the right-hand side of ( 15) takes its values in the non separable space L(H N ), it is worth mentioning that this integral is a weak integral for the ultraweak topology in L(H N ) (see footnote 3 on p. 1032 in [8]). 
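For reference, the time-dependent Hartree equation referred to as equation (5), and Kato's condition on the two-body potential recalled above, are standardly written as follows (again a hedged reconstruction of missing displays, not a verbatim quote of the paper's numbered equations):

```latex
% Hedged reconstruction (standard statements).
\begin{align*}
  &\text{Hartree equation:}\quad
  i\hbar\,\partial_t\psi(t,x) = -\frac{\hbar^2}{2m}\Delta_x\psi(t,x)
     + \Big(\int_{\mathbf R^3} V(x-y)\,|\psi(t,y)|^2\,dy\Big)\psi(t,x),\\
  &\text{Kato's condition:}\quad
  \int_{|z|\le R}|V(z)|^2\,dz<\infty
  \quad\text{and}\quad
  \sup_{|z|\ge R}|V(z)|<\infty
  \quad\text{for some }R>0,
\end{align*}
```

that is, V belongs to L2 + L∞ on R3, a class that contains the Coulomb potential 1/|z|.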
At variance with the classical case recalled in (3), the differential equation ( 13) satisfied by the quantum Klimontovich solution t ↦ M N (t) is not formally identical to the mean-field, time-dependent Hartree equation (5).The relation between ( 5) and ( 13) is explained in Theorem 3.5, the second main result in [8], recalled below. If ψ is a solution of the the time-dependent Hartree equation ( 5) satisfying the normalization condition is a solution of (13). Extending the Definition of Our first task is to extend the definition (15) of the term C(V, M N (t), M N (t)) to a more general class of potentials V , including the Coulomb potential in R 3 . Since 2 Throughout this paper, we adopt the Dirac bra-ket notation.Thus a wave function ψ ∈ H viewed as a vector of the linear space H is denoted ψ⟩, whereas ⟨ψ designates the linear functional and ⟨ψ φ⟩ ∶= ⟨ψ I H φ⟩ is the inner product on H. and to take advantage of the decay of S N [φ] in ω, assuming that φ is regular enough.Our argument does not use any regularity on Φ in N or Ψ in N .This is quite natural, since anyway Kato's condition (8) on the interaction potential V does not entail higher than (Sobolev) H 2 regularity for U N (t)Φ in N or U N (t)Ψ in N , as observed in Note V.10 of [15]. Our first main result in this paper is the following result, leading to a definition of C(V, M N (t), M N (t))( φ⟩⟨φ ) in the case of singular, Coulomb-like potentials V , and for bounded wave functions φ.This theorem can be regarded as an extension to the case of singular, Coulomb like potentials V of the formalism of quantum Klimontovich solutions in [8]. Theorem 3.1.Assume that V is a real-valued measurable function on R 3 satisfying the parity condition (12), and The integral on the right-hand side of the equality above is to be understood as a weak integral and defines as a continuous map from R to L(H N ) endowed with the ultraweak topology, which is moreover bounded on R for the operator norm on L(H N ). Obviously, condition (17) is stronger than Kato's condition (8).However, the repulsive Coulomb potential z ↦ 1 z in R 3 obviously satisfies (17), since its Fourier transform ζ ↦ C ζ 2 belongs to L 1 (R 3 ) + L 2 (R 3 ).In particular, H N has a selfadjoint extension to H N under condition (17) Observe first that Without loss of generality, consider the term where with the notation We shall prove that where the last inequality is the Cauchy-Schwarz inequality for the inner integral. On the other hand and and so that for each t ∈ R by polarization, and the function is bounded on R with values in L(H N ) for the norm topology, and continuous on R with values in L(H N ) endowed with the weak operator topology, and therefore for the ultraweak topology (since the weak operator and the ultraweak topologies coincide on norm bounded subsets of L(H N )). Remark.In the sequel, we shall also need to consider terms of the form (I) ∶= 1 The term (III) is the easiest of all.Indeed, The terms (I) and (II) are slightly more delicate, but can be treated by the same method already used in the proof of the theorem above.First, Thus, we have proved that , which leads to a definition of (I). The case of (II) is essentially similar.Observe that And (II) ∶= 1 with a similar conclusion for F 3 .On the other hand Therefore the map , thereby leading to a definition of (II). An Operator Inequality. 
Application to the Mean-Field Limit First consider the Cauchy problem for the time dependent Hartree equation ( 5).Assuming that the potential V satisfies ( 8) and ( 12), for each φ in ∈ H 2 (R 3 ), there exists a unique solution φ ∈ C(R, H 2 (R 3 )) of ( 5) by Theorems 1.4 and 1.3 of [11]. Pickl's key idea in his proof of the mean-field limit in quantum mechanics is to consider the following functional (see Definition 2.2 and formula (6) in [16], with the choice n(k) ∶= k N , in the notation of [16]): Assuming that ψ ≡ ψ(t, x) is a solution of (5) while Ψ N (t, ⋅) ∶= U N (t)Ψ in N , Pickl studies in section 2.1 of [16] the time-dependent function t ↦ α N (Ψ N (t, ⋅), ψ(t, ⋅)), and proves that it satisfies some Gronwall inequality. Observe first that Pickl's functional α N (Ψ N (t, ⋅), ψ(t, ⋅)) can be recast in terms of the quantum Klimontovich solution M N (t) as follows (18) . This identity suggests therefore to deduce from ( 13) and ( 5) the expression of in terms of the interaction operator C defined in (15).This is done in the first part of the next theorem, which is our second main result in this paper. Let ψ in ∈ H 2 (R 3 ) satisfy ψ in H = 1, let ψ be the solution of the Cauchy problem (5) for the time-dependent Hartree equation, and set (2) the operator C(V, M N (t) − R(t), M N (t))(P (t)) is skew-adjoint on H N and satisfies the operator inequality and where C S is the norm of the Sobolev embedding The operator inequality for quantum Klimontovich solutions in the case of potentials with Coulomb type singularity obtained in part (2) of Theorem 4.1 can be thought of as the reformulation of Pickl's argument in terms of the quantum Klimontovich solution M N (t). Indeed, we deduce from parts (1) and ( 2) in Theorem 4.1 the operator inequality (21) 3 We recall that, if E, F are Banach spaces Then, evaluating both sides of this inequality on the initial N -particle state Ψ in N , and taking into account the identity (18) leads to the Gronwall inequality . This last inequality corresponds to inequality (11) and Lemma 3.2 in [16]. In the sequel, we shall denote by L p (H) for p ≥ 1 the Schatten two-sided ideal of L(H) consisting of operators T such that In with L given by (20). Let us briefly indicate how one arrives at the operator inequality in part (2) of Theorem 4.1.Let Λ 1 , Λ 2 ∈ L(L(H), L(H N )) be such that 12) and ( 17), define In other words, where the integral on the right hand is to be understood in the ultraweak sense (see footnote 3 on p. 1032 in [8]). For each A ∈ L(H), denote by Λ j (•A) and Λ j (A•) the linear maps where the first equality follows from the fact that Λ 1 and Λ 2 are *-homomorphisms, while the second equality uses the fact that V is real-valued, since V is real-valued and even. 
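Stepping back for orientation: Pickl's counting functional introduced above, with the weight n(k) = k/N, is usually written in the following standard form (stated here as a hedged reconstruction from the mean-field literature, since the paper's own display did not survive extraction):

```latex
% Hedged reconstruction: standard form of Pickl's functional (weight n(k)=k/N).
\begin{align*}
  \alpha_N(\Psi_N,\psi)
   &= \Big\langle \Psi_N,\ \frac1N\sum_{j=1}^{N} q_j^{\psi}\,\Psi_N\Big\rangle,
   \qquad q_j^{\psi} := \mathrm I - \big(|\psi\rangle\langle\psi|\big)_j,\\
   &= 1-\big\langle\psi,\ \gamma_N^{(1)}\,\psi\big\rangle
   \quad\text{for symmetric (bosonic) }\Psi_N,\\
  \alpha_N &\le \operatorname{trace}\big|\gamma_N^{(1)}-|\psi\rangle\langle\psi|\big|
   \le C\,\sqrt{\alpha_N}\quad\text{for a universal constant }C,
\end{align*}
```

where $\gamma_N^{(1)}$ denotes the one-particle reduced density operator of $\Psi_N$; thus $\alpha_N\to 0$ is equivalent to trace-norm convergence of $\gamma_N^{(1)}$ to the Hartree projector $|\psi\rangle\langle\psi|$.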
An easy consequence of (24) and of this lemma is that, for each The key observations leading to Theorem 4.1 are summarized in the two following lemmas.In the first of these two lemmas, the interaction operator is decomposed into a sum of four terms.Lemma 4.4.Under the same assumptions and with the same notations as in Theorem 4.1, the interaction operator satisfies the identity All the terms involved in this decomposition can be defined by the same method already used in the proof of Theorem 3.1.Indeed, one can check that all these terms involve only expressions of the type (I), (II) or (III) in the Remark following Theorem 3.1.This easy verification is left to the reader, and we shall henceforth consider this matter as settled by the detailed explanations concerning (I), (II) and (III) given in the previous section. Each term in this decomposition satisfies an operator inequality involving only the operator norm of the "mean-field squared potential" (V 2 ) R(t) , instead of the "bare" interaction potential V itself. Then Remarks on ℓ(t) in (26) and L(t) in (20). ( , which is the multiplication operator by the function where we recall that C S is the norm of the Sobolev embedding (2) If V satisfies (8), then V (I − ∆) −1 ≤ M for some positive constant M (see the discussion in §5.3 of chapter V in [13], so that In this remark, we shall make a slightly more restrictive assumption, namely that In space dimension d = 3, the Hardy inequality, which can be put in the form 4 1 x 2 ≤ 4(−∆) implies that the Coulomb potential satisfies the assumption above on V .If the potential V satisfies the (operator) inequality ( 27), then . 4 To see that 4 is optimal, minimize in α > 0 the expression (3) A bound on ℓ(t) in terms of ψ(t, ⋅) H 1 (R 3 ) instead of ψ(t, ⋅) H 2 (R 3 ) is advantageous since the former quantity can be controlled rather explicitly by means of the conservation of energy for the Hartree equation ( 5).This explicit control is useful in particular to assess the dependence in ̵ h of the convergence rate for the mean-field limit obtained in Corollary (4.2). Clearly, the convergence rate for the quantum mean-field limit in Corollary 4.2 is not uniform in the semiclassical regime, in the first place because of the factor 3 ̵ h on the right hand side of the upper bound for F N ∶m (t) − R(t) ⊗m 1 , which comes from the i ̵ h∂ t part of the quantum dynamical equation.However, one should expect that the function ℓ(t), or at least the upper bound for ℓ(t) obtained in ( 2), grows at least as 1 ̵ h, since it involves ∇ x ψ(t, ⋅) L 2 , expected to be of order 1 ̵ h for semiclassical wave functions ψ (think for instance of a WKB wave function, or of a Schrödinger coherent state). 
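The conserved Hartree energy invoked in the following paragraph (formula (5.2) of [3], whose display is not reproduced in this extraction) has the standard form below, stated as a hedged reconstruction:

```latex
% Hedged reconstruction of the conserved Hartree energy.
\begin{equation*}
  \mathcal E[\psi(t,\cdot)]
  := \frac{\hbar^2}{2m}\int_{\mathbf R^3}|\nabla_x\psi(t,x)|^2\,dx
   + \frac12\iint_{\mathbf R^3\times\mathbf R^3}
       V(x-y)\,|\psi(t,x)|^2\,|\psi(t,y)|^2\,dx\,dy
  = \mathcal E[\psi(0,\cdot)],
\end{equation*}
```

together with conservation of mass, $\|\psi(t,\cdot)\|_{L^2}=1$. For a nonnegative (repulsive) potential both terms are nonnegative, so the initial energy controls $\hbar\|\nabla\psi(t,\cdot)\|_{L^2}$ uniformly in time; when the potential energy has indefinite sign, only weaker bounds survive, which is exactly the dichotomy discussed next.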
We shall discuss this issue by means of the conservation of energy satisfied by the Hartree solution ψ (see formula (5.2) in [3]): Observe that (where F designates the Fourier transform on R d ), so that the conservation of mass and energy for the Hartree solution implies that Typical states used in the semiclassical regime (WKB or coherent states, for instance) satisfy ̵ h ∇ψ in L 2 = O(1).Thus, in that case Things become worse if the potential energy is a priori of indefinite sign.With (28), the energy conservation implies that and thus Therefore, the exponential amplifying factor in Corollary 4.2 is exp(Kt ̵ h 5 2 ) in the first case, and exp(Kt ̵ h 3 ) in the second.These elementary remarks suggest that Pickl's clever method for proving the quantum mean-field limit with singular potentials including the Coulomb potential (see [16,14]) is not expected to give uniform convergence rates (as in [7,8] in the case of regular interaction potentials) for the mean field limit in the semiclassical regime. Proof of part (1) in Theorem 4.1 For each σ ∈ S N and each ] is a bounded operator on H.According to formula (25) in [8], denoting by V kl the multiplication operator The core result in the proof of Theorem 3.1 is that the function , and more generally, using a spectral decomposition of the trace-class operator With the definition of C in Theorem 3.1, we conclude that the operator for each operator F N ∈ L(H N ) satisfying (31).One easily checks that for all D N ∈ L(H N ) satisfying (32).Since any trace-class operator on H N is a linear combination of 4 density operators, we conclude that On the other hand Finally, by condition ( 17) on V , one has so that, returning to (34), one arrives at the equality which proves part (1) in Theorem 4.1.Remark.In [8], the equality This argument cannot be used here since V ∉ F L 1 (R 3 ).Besides, the definition of the operator C(V, M N (t), M N (t))(R(t)) in Theorem 3.1 makes critical use of the fact that R(t) = ψ(t, ⋅)⟩⟨ψ(t, ⋅) with ψ(t, ⋅) ∈ L 2 ∩ L ∞ (R 3 ).This is the reason for the rather lengthy justification of (33) in this section. Proof of Lemma 4.4 In the sequel, we seek to "simplify" the expression of the interaction operator This will lead to rather involved computations which do not seem much of a simplification.However, we shall see that the final result of these computations, reported in Lemma 4.4, although algebraically more cumbersome, has better analytical properties. and observe that ]dω .All the terms in the right hand side of the equality above are either similar to the one considered in Theorem 3.1, or of the type denoted (III) in the Remark following Theorem 3.1. An elementary computation shows that, for all ) -see formula before (41) on p. 1041 in [8].On the other hand (12).With the formula (36), we conclude that (39) . By (35), one can further simplify the term Observe again that all the integrals in the right hand side of the equalities defining T 1 and T 3 are of the form defined in Theorem 3.1, or of the form (I), (II) or (III), or their adjoint, in the Remark following Theorem 3.1. That follows from (12) and the definition (23).This concludes the proof of Lemma 4.4. Proof of Lemma 4.5 In the sequel, we shall estimate these four terms in increasing order of technical difficulty.-see the formula following (41) on p. 
Thus ..., where the equality follows from the fact that $R(t)=R(t)^*$, which implies that (41) ... On the other hand, by Jensen's inequality ... for each $A\in\mathcal{L}(\mathcal{H})$, so that (44) ... Then (41) and (44) imply that ... so that $T_4^*=-T_4$. Hence $\pm iT_4$ are self-adjoint operators on $\mathcal{H}_N$, so that (45) ... One has ... By cyclicity of the trace, for each $F^{in}_N$ satisfying (31), denoting ... By (44), ... Next we use the following elementary observation.

Proof. Indeed, we seek to prove that $\langle\Psi|T\Psi\rangle\ge 0$ for each $\Psi\in\mathcal{H}_N$. For each $\Psi\in\mathcal{H}_N$ such that $\|\Psi\|_{\mathcal{H}_N}=1$, set ... Then $F$ satisfies (31), so that ... Therefore, by cyclicity of the trace, for each ... and we study the quantity ... where ... so that, by the Cauchy-Schwarz inequality, ... First, one has ... (the second equality follows from the fact that $J_k(P(t))$ commutes with $\Pi_N(t)$ and $\Pi_N(t)^{-1}$), so that ... The inequality above follows from the fact that $\mathrm{trace}_{\mathcal{H}_N}\big(\Pi_N(t)^{-1}(J_kP(t))^2F_N(t)\big)=\mathrm{trace}_{\mathcal{H}_N}\big(F_N(t)\,\dots\big)$. For instance, if $F^{in}_N=(|\psi^{in}\rangle\langle\psi^{in}|)^{\otimes N}$ with $\psi^{in}\in\mathcal{H}$ and $\|\psi^{in}\|_{\mathcal{H}}=1$, one has
$$\alpha_N(0)=\mathrm{trace}_{\mathcal{H}_N}\big(R(0)^{\otimes N}M^{in}_N(P(0))\big)=\mathrm{trace}_{\mathcal{H}}\big(R(0)P(0)\big)=0,$$
so that ...

8.2.2. Pickl's functional and the trace norm. How the inequality above implies the mean-field limit is explained by the following lemma, which recaps the results stated as Lemmas 2.1 and 2.2 in [14], and whose proof is given below for the sake of keeping the present paper self-contained. If $F^{in}_N\in\mathcal{L}(\mathcal{H}_N)$ satisfies (31), for each $m=1,\dots,N$, we denote by $F_{N:m}(t)$ the $m$-particle reduced density operator deduced from $F_N(t)=U_N(t)F^{in}_NU_N(t)^*$, i.e.
$$\mathrm{trace}_{\mathcal{H}_m}\big(F_{N:m}(t)\,A_1\otimes\dots\otimes A_m\big)=\mathrm{trace}_{\mathcal{H}_N}\big(F_N(t)\,(J_1A_1)\dots(J_mA_m)\big)$$
for all $A_1,\dots,A_m\in\mathcal{L}(\mathcal{H})$. This completes the proof of Corollary 4.2.

Lemma 4.5. Under the same assumptions and with the same notations as in Theorem 4.1, set ...

7.1. Bound for $T_4$. The easiest term to treat is obviously $T_4$. We first recall that (40) $\|M_N(t)(A)\|\le\|A\|$ for each $A\in\mathcal{L}(\mathcal{H})$.
Copper-Catalyzed Eglinton Oxidative Homocoupling of Terminal Alkynes: A Computational Study

The copper(II) acetate mediated oxidative homocoupling of terminal alkynes, namely, the Eglinton coupling, has been studied with DFT methods. The mechanism of the whole reaction has been modeled using phenylacetylene as substrate. The obtained results indicate that, in contrast to some classical proposals, the reaction does not involve the formation of free alkynyl radicals and proceeds by the dimerization of copper(II) alkynyl complexes followed by a bimetallic reductive elimination. The calculations demonstrate that the rate-limiting step of the reaction is the alkyne deprotonation and that more acidic substrates provide faster reactions, in agreement with the experimental observations.

Introduction Conjugated diynes are recurring building blocks in a great range of industrial intermediates and materials [1][2][3][4][5]. Besides their very well-known antifungal properties [6], they have been widely employed to prepare optical [7] and organic materials [8][9][10] and molecular devices [7,11]. Acetylenic coupling has become a powerful tool to obtain 1,3-diynes and has experienced great development in recent years. Nevertheless, the first acetylenic coupling dates from 1869 and was reported by Glaser; he observed that copper(I) phenylacetylide smoothly underwent homocoupling under aerobic conditions to deliver diphenyldiacetylene [12,13]. This process was further developed later in the so-called Hay modification, which included nitrogen donor ligands such as N,N,N′,N′-tetramethylethylenediamine (TMEDA) that facilitated the whole process and allowed the reaction to be carried out under homogeneous conditions [14,15]. Some years later other similar catalytic procedures, leading to asymmetric diynes, were proposed, for example, the Sonogashira [16,17] and Cadiot-Chodkiewicz [18] cross-coupling reactions. However, many of these protocols require expensive metal sources and external oxidants in order to recover the active catalyst, which clearly is a disadvantage. One way to circumvent this issue is the copper-mediated oxidative homocoupling of terminal alkynes reported by Eglinton and Galbraith in the late 1950s [19,20]. This reaction, shown in a general form in Scheme 1, employs inexpensive copper(II) acetate as the metal source in a (super)stoichiometric amount. This coupling is usually fast, clean, and completely homogeneous and tolerates mild reaction conditions. The solvent of choice is usually a 1:1 methanolic pyridine mixture, but other solvents can be employed. In addition, unlike many other metal-mediated reactions, the Eglinton coupling does not require any other external ligand. In recent years the Eglinton reaction has been widely employed in the synthesis of cyclic bisacetylenes [21][22][23] and macrocycles [24,25] such as annulenes [26,27], rotaxanes [28], catenanes [29,30], conjugated long structures [31], poly-diyls [32], and molecular wires [11]. Although this coupling method has been known for a long time and is still widely applied, the mechanism governing this reaction is not completely understood. The first mechanistic proposal was reported by Salkind and Fundyler [33] (Scheme 2).
There, the terminal alkyne is deprotonated (step (i)) and then oxidized by copper(II) to form the alkynyl radical (step (ii)), which can afterwards dimerize to deliver the final 1,3-diyne (step (iii)). The authors propose that the first two stages probably involve copper derivatives rather than isolated anions and indicate that the rate-limiting step is the first one, based on experimental observations that the most acidic acetylenes provide the fastest reactions [34].

Scheme 1: General form of the Eglinton oxidative homocoupling of alkynes.

Nevertheless, it has never been demonstrated that the reaction follows a radical mechanism, and the identity of the base remains to be determined, since in some reports this role is attributed to the acetate ligands while other sources propose the pyridine solvent as the deprotonating agent. Some years later a more elaborate proposal was reported by Bohlmann and coworkers [35] (Scheme 3). Based on their scheme, the reaction starts with the π-coordination of the triple bond to a copper species, facilitating the activation of the terminal C-H bond by a base. The final diyne product is obtained by reductive elimination from a "dinuclear" copper(II) acetylide species. In this report the mechanism of the Eglinton oxidative homocoupling of terminal alkynes is studied with the aim of determining whether the proposed mechanisms are plausible; the radical character of the reaction, the nature of the base, and the influence of the substrate on the reaction rate will also be studied. Other similar copper-catalyzed reactions have been studied computationally with very successful outcomes [36][37][38][39], showing the value of computational approaches in mechanistic studies of this kind.

Computational Details All the structures have been fully optimized using the Gaussian 09 package [40] with the B97D density functional [41,42]. This functional has been successfully employed in other computational reports involving systems similar to the one studied in this communication [43,44]. All the calculations involving radical systems, such as those including copper(II) cations, have been performed using unrestricted wave functions. In the optimization process the standard 6-31G(d) [45][46][47] basis set was used for all H, C, N, and O atoms, while the Stuttgart triple-zeta basis set (SDD) [48,49], along with the associated ECP to describe the core electrons, was employed for Cu. All the optimizations have been carried out in solvent employing the IEF-PCM continuum dielectric solvation model [50,51], including the radii and nonelectrostatic SMD terms developed by Marenich and coworkers [52]. Experimentally, a 1:1 mixture of pyridine and methanol was employed as solvent. In the calculations only the former was used, because Gaussian 09 does not allow a mixture of solvents and, in addition, pyridine is sometimes used as an explicit ligand. Nevertheless, the impact of using pyridine alone on the calculated free energies is expected to be small. In all cases frequency calculations were carried out to confirm the nature of the stationary points and transition states.
Additional single-point calculations on the previously optimized geometries were employed to obtain improved solvated free energy values with larger basis sets. The aug-cc-pVTZ-PP basis set, including polarization and the associated electron core potential [53], was employed for Cu, while the 6-311+G** all-electron basis set [47,54,55] was used for all the other atoms. The solvation model was kept the same as in the optimization process. Unless otherwise stated, all the free energy values in the text correspond to those obtained with the larger basis sets including solvation at 25 °C.

Results and Discussion In this section the most plausible mechanism for the Eglinton oxidative homocoupling of phenylacetylene, a representative example of the terminal alkynes usually employed, is described (Scheme 4). Alternative pathways have been computed whenever possible in order to check that the best option is always selected. The detailed structures of all the computed copper intermediates can be found in Figure 1. The catalytic cycle starts with the coordination of the terminal alkyne to copper(II) acetate (I) to form intermediate II. In this complex two new interactions are established, one between one of the acetate groups and the proton of the incoming alkyne and another between the proximal carbon atom of the alkyne and the copper center. The O-H and Cu-C distances are 2.45 and 2.17 Å, respectively. Additionally, the alkyne C-H distance elongates to 1.08 Å after the coordination, making that bond slightly longer than that found in free phenylacetylene (1.04 Å). This process is not thermodynamically favored, and almost 10 kcal mol⁻¹ is required to attach the triple bond to the copper; this can probably be attributed to the poorer donating ability of the triple bond compared with the bidentate acetate group. Since intermediate II is higher in energy than I and a strong structural rearrangement is required to reach the former, this step would be expected to proceed through a transition state. All attempts to directly locate this transition state failed, and thus a linear transit energy scan was carried out to elucidate this part of the mechanism. This procedure, consisting of a series of optimizations in which the distance between the substrate and the copper atom is fixed at values between 2.2 and 2.9 Å, shows a monotonically uphill energy profile as the distance between both moieties decreases. These results seem to indicate that, in principle, the addition of phenylacetylene onto I is not governed by a transition state. The particular arrangement of ligands in complex II facilitates the proton transfer between the alkyne and the pendant acetate; in fact the deprotonation transition state (TS CH) is less than 9 kcal mol⁻¹ above II, indicating that this process should be quite fast. In TS CH the Cu-C distance is reduced to 2.01 Å, while the C-H and O-H distances become similar: 1.33 and 1.27 Å, respectively.
After the deprotonation, intermediate III is obtained; this complex is slightly less stable than the previous one but remains at a reasonable energy. In contrast, the direct deprotonation using pyridine as the base, as proposed in some reports, requires more than 35 kcal mol⁻¹. This result is not surprising, since acetate is a stronger base than pyridine. In addition, the presence of the metal, once coordinated to the triple bond, contributes to enhancing the acidity of the C-H bond and seems to be crucial in the proton transfer process. Once III is formed, the reaction proceeds by the replacement of the newly formed acetic acid moiety by a pyridine molecule (IV). This process is thermodynamically favored and, since pyridine is the solvent and is present in great excess, it is expected to be fast. The square planar intermediate IV is, in fact, the same species found on the left side of equation (ii) in Scheme 2. This complex, like all the other neutral mononuclear copper(II) species studied, has one unpaired electron and thus lies on the doublet free-energy surface. The spin distribution in IV indicates that the unpaired electron is mostly localized on the copper atom, although some spin polarization is found on the terminal carbon of the alkynyl ligand. However, generating a free alkynyl radical and the corresponding [Cu(OAc)(py)] complex from IV is nearly impossible because that process requires more than 30 kcal mol⁻¹, probably because the alkynyl radical formed is not stable enough. In contrast, the dimerization of IV to deliver the dinuclear alkynyl-bridged complex V is thermodynamically viable. The formation of this complex forces the unpaired electrons to couple, taking the reaction to the singlet free-energy surface, which is, in turn, lower in energy than the triplet surface. Other dinuclear complexes, for example acetate-bridged ones, produced higher-energy intermediates. Both copper atoms in V have distorted square pyramidal structures, with one of the oxygen atoms of the acetate ligand occupying the axial position; the Cu-O(eq) and Cu-O(ax) distances are 1.98 and 3.00 Å, respectively. The Cu2C2 core in complex V is not completely planar but forms a wedge with alternating Cu-C distances of 1.98 and 2.00 Å and a C-C distance of 2.59 Å. Since V is a singlet, the spin distribution cannot be obtained; the open-shell analogue cannot be correctly computed with the B97D functional, so the only way to estimate the spin distribution is to compute V in the triplet state. This calculation shows that the unpaired electrons remain mainly on the copper atoms, while only some spin delocalization is found on the bridging alkynyl carbon atoms, ruling out the formation of the organic radicals proposed by Salkind and coworkers. From V the bimetallic reductive elimination process (TS CC) is quite straightforward and requires only 2.6 kcal mol⁻¹ to deliver the final diphenyldiacetylene product and the complex [Cu(OAc)(py)] (VI). The geometry of this transition state is very similar to the one found for intermediate V; obviously the main differences are found in the Cu-C and C-C distances, which shrink to 1.97 and 1.96 Å, respectively. Although the bimetallic reductive elimination is not a very common process, it has already been proposed in the literature [35,39] and features in the mechanism of Bohlmann and coworkers, which appears to be the right one to describe the reactivity of the studied reaction.
The calculations indicate that the reaction is exergonic by 25.9 kcal mol⁻¹. The highest barrier is 18.1 kcal mol⁻¹, computed as the free energy difference between I and TS CH and corresponding to the deprotonation process, which was proposed by Eglinton and Galbraith in their original paper as the rate-limiting step. This could explain why more acidic acetylenes produce faster reactions. In order to check the validity of this statement, the catalytic cycle was recomputed for two different substituted phenylacetylenes: p-NO2C6H4C≡CH and p-MeC6H4C≡CH. The computed results can be found in Figure 2. As may be observed, the computed profiles for the three substrates follow a very similar trend. This should not be surprising because the geometries obtained for the catalytic cycle of the substituted phenylacetylenes are quite similar to those shown in Figure 1. In all cases the addition of the substrate onto the copper(II) species I is endergonic and can be related to its donating ability; that is, the formation of complex II is more favorable for the most electron-rich substrate, p-MeC6H4C≡CH. In contrast, forming the same complex with p-NO2C6H4C≡CH requires an additional 3.5 kcal mol⁻¹. The activation of the terminal C-H bond follows the reverse order, in agreement with the inductive effect of the para group on the phenylacetylene. The deprotonation process requires 4.9, 8.5, and 10.7 kcal mol⁻¹ for p-NO2C6H4C≡CH, C6H5C≡CH, and p-MeC6H4C≡CH, respectively, indicating that the most electron-withdrawing substituents contribute to lowering this transition state. In all cases a pyridine solvent molecule easily displaces the newly formed acetic acid, allowing the formation of complex IV. The dimerization of complex IV, as well as the reductive elimination transition state, has practically the same energy requirements for the three substituted phenylacetylenes. In all cases the highest barrier corresponds to the deprotonation step, that is, the free energy difference between I and TS CH, which is 18.1, 17.0, and 19.3 kcal mol⁻¹ for phenylacetylene, p-nitrophenylacetylene, and p-methylphenylacetylene, respectively. The observed trend can be directly related to the acidity of the alkyne; as proposed by Eglinton and Galbraith, the more acidic p-nitro-substituted substrate provides the lowest reaction barrier, while the more electron-rich p-methyl substrate produces slower reactions because its deprotonation step has a higher free energy barrier. Conclusions The Eglinton oxidative homocoupling of terminal alkynes has been successfully studied using phenylacetylene as a model substrate. The calculations demonstrate that the coordination of the triple bond to the metal center enhances the acidity of the terminal C-H bond and facilitates its activation by the acetate ligand. The alternative deprotonation pathway using pyridine, the solvent of the reaction, as the base has a much higher free energy profile and can consequently be ruled out. The formation of the C-C bond is achieved from a dinuclear copper(II) complex with diradical character centered mainly on the metal atoms. This indicates that, in contrast to some of the classical proposals, free organic alkynyl radicals are not formed. Consequently, the reaction proceeds following a mechanism that resembles the one proposed by Bohlmann and coworkers.
The highest free energy barrier for phenylacetylene corresponds to the deprotonation process, in agreement with the experimental observations. Calculations using different para-substituted phenylacetylenes also confirm that the most acidic substrates provide faster reactions because the deprotonation barrier is lower.

Figure 1: Detailed structures of V and TS CC (distances in Å; Cu = brown, N = blue, O = red, C = gray, and H = white; for clarity most H atoms have been omitted).
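As a back-of-the-envelope illustration of how the computed deprotonation barriers translate into the relative rates discussed above, the following sketch applies the Eyring equation to the three barrier values reported in the text. It is a hypothetical post-processing step, not part of the reported computational workflow, and assumes a temperature of 298.15 K.

```python
import math

# Eyring equation: k = (kB*T/h) * exp(-dG_act / (R*T))
KB_OVER_H = 2.083661912e10  # kB/h in s^-1 K^-1
R = 1.987204e-3             # gas constant in kcal mol^-1 K^-1
T = 298.15                  # assumed temperature in K

# Free-energy barriers (I -> TS CH) reported in the text, in kcal/mol
barriers = {
    "p-NO2-C6H4-C≡CH": 17.0,
    "C6H5-C≡CH":       18.1,
    "p-Me-C6H4-C≡CH":  19.3,
}

def eyring_rate(dg_act_kcal: float) -> float:
    """Return the transition-state-theory rate constant (s^-1) for a given barrier."""
    return KB_OVER_H * T * math.exp(-dg_act_kcal / (R * T))

k_ref = eyring_rate(barriers["C6H5-C≡CH"])
for substrate, dg in sorted(barriers.items(), key=lambda kv: kv[1]):
    k = eyring_rate(dg)
    print(f"{substrate:18s}  dG = {dg:4.1f} kcal/mol  k relative to PhC≡CH = {k / k_ref:7.2f}")
```

Under this simple transition-state-theory picture, the 1-2 kcal mol⁻¹ spread in barriers translates into roughly order-of-magnitude differences in rate, consistent with the qualitative trend discussed above.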
Efficacy of Organic Component on Hardness and Fracture Toughness of Human Dentin Using Heat Treatment – In vitro Study

The study was performed to determine the influence of the organic matrix on the hardness and fracture toughness of human dentin. Mid-sagittal sections of dentin were subjected to Vickers hardness testing after removal of the organic component by heat treatment. The heat treatment temperature for human dentin was determined as 800°C by thermogravimetric analysis in order to facilitate complete removal of the organic component. The removal of proteins from dentin was confirmed by Fourier transform infrared spectroscopy. A significant difference in hardness and fracture toughness values was found between the heat-treated and untreated dentin (p<0.001). This study confirms the role of the organic component of dentin in its mechanical properties and thus helps in the research and development of novel reparative materials.

INTRODUCTION Mature human dentin is composed of 70% inorganic component, 20% organic material and 10% water by weight. 1 The inorganic material is mainly composed of calcium phosphate related to hexagonal hydroxyapatite, whose chemical formula is Ca10(PO4)6(OH)2. 2 The organic matrix is composed of collagen (85-90%) and a variety of non-collagenous proteins. 3,4,5 Most of the collagen is type I. 6 Dentin serves as an elastic foundation for the enamel and as a protective enclosure for the pulp, and therefore its mechanical properties are of utmost importance in determining tooth strength. The most striking morphological feature of dentin is the tubule, whose hypermineralized peritubular cuff influences the mechanical properties of dentin. 7 The elastic behavior of dentin is due to the intertubular dentin matrix, and not the dentinal tubules. 8 Any mineral imbalance caused by caries or developmental disorders compromises the mechanical integrity of the tooth. However, the influence of the organic component of dentin on the mechanical properties has not been clearly established. Hence, the objective of the present study was to obtain detailed information on the influence of the organic matrix on the hardness and fracture toughness of human dentin.

Sample preparation Twenty human mandibular premolars free of caries, extracted for orthodontic reasons from young individuals, were used for this study. The premolars were rinsed with saline after extraction, stored at -40°C, and used within one month. The samples were mounted in acrylic resin at room temperature and cut in the mid-sagittal plane using a hard tissue microtome (Leica, Rotterdam) to obtain specimens with a thickness of 1.5 mm. The study included two groups, the control group being the samples with the organic component intact (n=8). The experimental group was heat treated to remove the organic content from the dentin specimens (n=8).

Thermal treatment The heat treatment temperature for human dentin was determined as 800°C by thermogravimetric analysis in order to facilitate complete removal of the organic component (Fig. 1). The heat treatment was performed on 8 specimens in a tubular furnace (Indfur, India) under an oxygen atmosphere for 4 hours at 800°C to remove the organic matrix from the dentin. The removal of proteins from dentin was confirmed by Fourier transform infrared spectroscopy (Perkin Elmer, USA) (Fig. 2). The 8 untreated samples were used as a negative control.
Hardness and fracture toughness analyses The hardness of the dentin was evaluated using a Vickers diamond tip. The tests were carried out at room temperature with different loads from 0.98 N to 49 N, until crack formation, with a dwell time of 15 seconds. For each sample, 8 indentations were made in the circumpulpal dentin. A distance of at least two times the impression diagonal was kept between the indentations to minimize interactions between neighboring indentations. The Vickers hardness was calculated according to the equation HV = 1.854 × P/D², where HV is the Vickers hardness, P the applied load, and D the indentation diagonal (µm). 9 The fracture toughness was computed according to the equation K_IC = P/(l^(3/2) β₀), where K_IC is the fracture toughness, P the load, l the crack length, and β₀ an indenter constant equal to 7 for a Vickers indenter. 10

RESULTS In the mid-sagittal plane, the lowest hardness values were observed in the heat-treated specimens when they were subjected to different loads from 0.98 N to 49 N (Table, Fig. 3a). In both treated and untreated specimens, a pronounced load-dependent hardness behavior was evident. The hardness decreased as the load increased (Table). A decrease in hardness of up to 55% was observed in the samples from which the organic matrix had been removed by heat treatment. Optical micrography revealed smaller indentation impressions in the untreated specimens than in the samples lacking the organic matrix. The fracture toughness and hardness values were directly correlated with each other and inversely proportional to the load (Table, Fig. 3). The fracture toughness is related to the crack length. Optical micrography of the samples revealed that cracks formed at a greater load in the untreated than in the heat-treated specimens. Cracks formed in the untreated dentin samples when a load of 49 N was applied, whereas in the samples from which the organic matrix had been removed, crack formation occurred at a load of 9.8 N (Table, Fig. 3). The removal of organic matter led to a substantial decrease in the fracture toughness, of about 57%, in the specimens. Student's t-test was used to test for significant differences in hardness and fracture toughness values between the heat-treated and untreated dentin specimens. A statistically significant difference (p<0.001) was found between the control and experimental groups in both parameters, hardness and fracture toughness.

DISCUSSION Human dentin is essentially a hydrated composite composed of nanocrystalline carbonated (calcium-phosphate-based) apatite mineral (45% by volume), type I collagen fibrils (33% by volume) and water (22% by volume). The results of this study demonstrated that, although dentin comprises only 20% organic matter by weight, this fraction significantly influences its mechanical properties, confirming a previous hypothesis. 11 With the objective of testing the validity of this hypothesis, we removed the organic matrix from the dentin to check its influence on hardness and fracture toughness. The thermal treatment method of organic removal was chosen since wet chemical techniques have been shown to alter the mineral content of bone, which is similar to dentin. 12 Heating at high temperature (700°C-900°C) removed the organic constituents of cortical bone and of coralline hydroxyapatite without, apparently, affecting the interlocking framework of the hydroxyapatite crystallites. 13,14,15
The results of the present study showed a statistically significant difference (p<0.001) in the mechanical properties tested between the control and experimental groups. This can be explained by the structure of dentin: the collagen fibrils are roughly 50-100 nm in diameter; they are randomly oriented in a plane perpendicular to the direction of dentin formation, 16 and the mineral occupies two sites within this collagen scaffold: intrafibrillar (inside the periodically spaced gap zones in the collagen fibril) and extrafibrillar (in the interstices between the fibrils). The apatite crystals are believed to nucleate initially in the gap zone, followed by secondary mineralization of the interstitial positions between the fibrils. 17 The hydroxyapatite crystals, which average 0.1 µm in length, form along the fibers with their long axes oriented parallel to the collagen fibers. 18 It is generally believed that collagen fibrils form a felt-work structure laid down perpendicular to the tubules and in the plane of the advancing mineralization front, 19 and that the intertubular dentin matrix, and not the dentinal tubules, dominates the elastic behavior. 8 Hence, the removal of the organic component of human dentin significantly decreased the hardness and fracture toughness and weakened the entire structure of the dentin, since type I collagen acts as a scaffold that accommodates a large proportion of the mineral (56%) in the holes and pores of the fibrils. 1

CONCLUSION Our findings show that the removal of the organic matrix resulted in decreases of up to 55% in hardness and 57% in fracture toughness, which implies that the organic matrix is not only essential during hard tissue formation but also has a functional role in the mature tissue, emphasizing its importance in maintaining the integrity of human dentin.

Fig. 1: The thermogram of dentin shows a mass loss of 8% at about 210°C and a 20% weight loss at about 800°C, which correspond to water and the organic component, respectively.

Fig. 2: The differences in the ratios of the peak intensities of amide I (the major protein absorbance band) and PO4 were used to monitor organic matrix removal. The amide I absorbance at 1600-1700 cm⁻¹ was significantly reduced in the heated sample, due to protein removal.

Fig. 3b: Comparison of fracture toughness in heat-treated and untreated samples.

Table 1: Mechanical characteristics of untreated and heat-treated human coronal dentin.
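As a numerical illustration of the two equations used in the hardness and fracture toughness analyses above, the following sketch evaluates them for made-up indentation measurements. The load, diagonal, and crack-length values are hypothetical placeholders chosen only to show the functional form of the equations; they are not data from this study, and unit handling follows the equations as given in the text.

```python
def vickers_hardness(load, diagonal):
    """Vickers hardness HV = 1.854 * P / D**2, with P and D in consistent units."""
    return 1.854 * load / diagonal**2

def fracture_toughness(load, crack_length, beta0=7.0):
    """Indentation fracture toughness K_IC = P / (l**1.5 * beta0); beta0 = 7 for a Vickers indenter."""
    return load / (crack_length**1.5 * beta0)

# Hypothetical example values (not measured data): load in N, lengths in mm.
P = 9.8    # applied load
D = 0.12   # indentation diagonal
l = 0.05   # crack length

print(f"HV   = {vickers_hardness(P, D):.1f}")
print(f"K_IC = {fracture_toughness(P, l):.1f}")
```

The sketch only illustrates the functional form: hardness falls off with the square of the indentation diagonal, and toughness falls off with the 3/2 power of the crack length, which is why both quantities decrease as the indent and crack sizes grow with increasing load.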
Chitin-rich heteroglycan from Sporothrix schenckii sensu stricto potentiates fungal clearance in a mouse model of sporotrichosis and promotes macrophages phagocytosis Background Fungal cell wall polysaccharides maintain the integrity of fungi and interact with host immune cells. The immunomodulation of fungal polysaccharides has been demonstrated in previous studies. However, the effect of chitin-rich heteroglycan extracted from Sporothrix schenckii sensu stricto on the immune response has not been investigated. Results In this study, chitin-rich heteroglycan was extracted from S. schenckii sensu stricto, and immunomodulation was investigated via histopathological analysis of skin lesions in a mouse model of sporotrichosis and evaluation of the phagocytic function and cytokine secretion of macrophages in vitro. The results showed that the skin lesions regressed and granulomatous inflammation was reduced in infected mice within 5 weeks. Moreover, heteroglycan promoted the fungal phagocytosis by macrophages and modulated the cytokine secretion. Heteroglycan upregulated TNF-α expression early at 24 h and IL-12 expression late at 72 h after incubation, which might result from moderate activation of macrophages and contribute to the subsequent adaptive immune response. Conclusions Chitin-rich heteroglycan extracted from S. schenckii sensu stricto potentiated fungal clearance in a mouse model of sporotrichosis. Moreover, chitin-rich heteroglycan promoted fungus phagocytosis by macrophages and modulated cytokines secretion. These results might indicate that chitin-rich heteroglycan could be considered as an immunomodulator used in the treatment of sporotrichosis. Introduction Sporotrichosis is a subcutaneous mycosis caused by traumatic inoculation of the dimorphic fungus Sporothrix schenckii complex, including S. schenckii s str, S. globosa, S. brasiliensis, and S. luriei [1]. S. schenckii s str, which is the most common fungus in this complex, manifests as mycelial form in the soil and plant debris and yeast form in infected animals [2]. The fungal cell wall is composed of many polysaccharides and glycoproteins that show dynamic change due to the impact of culture media and growth conditions [3]. These polysaccharides might be exposed to the surface of the fungal cell wall or released into the circulation during infection and thus have the chance to interact with host immune cells. An increasing number of studies have reported the immunomodulation of chitin and glucan from Candida albicans and Aspergillus fumigatus which produced inconsistent results [4][5][6][7]. Chitin, glucan, and mannan have also been used as nontoxic immune or vaccine adjuvants to enhance the immune response to several antigens in recent years [8]. Chitin and glucans are covered by mannan and glycoprotein on the cell wall of S. schenckii s str [9]. Heat killing and drug treatment of fungi were used to investigate the immune activity of polysaccharides from the fungal cell wall in previous studies [3,10,11]. We previously found that chitin exposure on the cell wall of S. schenckii s str upon curcumin treatment favored the antifungal response in infected mice [12]. Individual purified polysaccharides are extremely difficult to isolate since the fungal cell wall is present in the form of a chitin-glucan complex. Until now, few studies have investigated the immune activity of purified polysaccharides from S. schenckii s str. The alkali-insoluble glucan complex from S. 
schenckii s str could increase nitric oxide (NO) production by peritoneal macrophages alone and stimulate secretion of the proinflammatory cytokines IL-1β, IL-18 and IL-17 when cocultured with splenocytes and peritoneal macrophages [13,14]. There are many polysaccharides on the cell wall of S. schenckii complex, and their distribution results in different virulence levels in the Galleria mellonella model [15]. This finding also indicates that these polysaccharides might play important roles in modulating the immune response to the S. schenckii complex. The aims of this study were to explore the immune activity of heteroglycan in the pathogenesis of sporotrichosis. We observed that chitin-rich heteroglycan exerted an antifungal response in a mouse model of sporotrichosis and modulated fungal phagocytosis and cytokines secretion by macrophages. The results here demonstrated that chitin-rich heteroglycan from S. schenckii s str might act as an immunomodulator in a mouse model of sporotrichosis. Results The components and size of the heteroglycan extracted from S. schenckii s str We extracted heteroglycan from the mycelial form of S. schenckii s str and then detected its components by highperformance liquid chromatography (HPLC). The heteroglycan samples were white microparticles in various sizes by microscope (Fig. 1C). The results of HPLC showed that the heteroglycan microparticles were composed of chitin (89%), mannose (8.5%), and glucosamine (2.5%) (Fig. 1B). According to the flow cytometry results, most heteroglycan microparticles were less than 10 μm, and half of them were less than 2 μm in size (Fig. 1D). The heteroglycan microparticles were protein-and lipopolysaccharide-free, and was also negative for microbiological culture. Animal model of sporotrichosis Mice were infected with conidia of S. schenckii s str to investigate the immune activity of heteroglycan in vivo. Since the proportion of chitin accounted for 89%, we chose 100 μg as the treatment dose of heteroglycan according to a previous study [4]. All the mice in the three groups survived after 5 weeks. We monitored the lesions of conidia-infected mice weekly; they became nodules and reached their maximum size in the 1st week. The nodule sizes did not show significant differences within 3 weeks with or without heteroglycan treatment. However, the lesions of the infected mice with heteroglycan treatment (0.61 ± 0.11 cm, 0.58 ± 0.10 cm, respectively) regressed significantly compared to those of the untreated mice (0.71 ± 0.09 cm, 0.69 ± 0.11 cm, respectively) at the end of the 4th and 5th weeks (p = 0.035, p = 0.011, respectively) (Fig. 2a). Additionally, yellow pus was extruded from the lesions of the untreated mice (Fig. 2b). Nodules did not develop in the PBS blank group. Fungal culture of skin lesions Local skins with lesions were harvested from conidia-infected mice at the end of the 5th week and homogenized. The homogenates were then grown in SDA at 25°C for 5 days to determine fungal colonies. More fungal colonies were found in the lesions of the untreated group (5 × 10 4 ± 1 × 10 4 CFU/skin) than in the heteroglycan-treated group (300 ± 117 CFU/skin) (p < 0.001) (Fig. 2c). No fungal colonies from lung, liver, spleen, or kidney samples were found in the three groups. Histopathological analysis of skin lesions Lesions from infected mice were assessed by histopathological analysis with hematoxylin and eosin (H&E) and periodic acid-Schiff (PAS) staining. 
The lesions of the heteroglycan-treated group displayed retracted granulomatous inflammation with reduced immune cell infiltration. Fungus was rarely observed in the nodules of the heteroglycan-treated group following PAS staining (Fig. 3A). In contrast, the lesions of the untreated group exhibited suppurative granulomatous inflammation, which was surrounded by neutrophils, mononuclear cells, and lymphocytes in the outer layer. Many oval- and cigar-shaped yeast forms of S. schenckii s str were observed in the nodules of the untreated group following PAS staining (Fig. 3B).

Fig. 1 The components and size of the heteroglycan extracted from S. schenckii s str. A The HPLC chromatogram of carbohydrate standards (GlcN, glucosamine; Man, mannose; Glc, glucose). B The components of heteroglycan analyzed by HPLC included chitin (89%), mannan (8.5%), and glucan (2.5%). C The microscopic image of heteroglycan microparticles. D Heteroglycan size was determined by flow cytometry, which showed that most heteroglycan were less than 10 μm and half of them were less than 2 μm

Fig. 2 The nodule sizes were monitored weekly in infected mice for 5 weeks (n = 20). (a) The nodule sizes of the infected mice with heteroglycan treatment (n = 10) decreased significantly compared to those of untreated mice (n = 10) at the end of the 4th and 5th weeks. *, p < 0.05. (b) Lesions in the untreated group (A, black arrow) were worse compared to those in the heteroglycan-treated group (B, black arrow) at the end of the 5th week. Yellow pus was extruded from the lesions of the untreated group (C, black arrow), whereas no pus extrusion was observed in the heteroglycan-treated group (D) at the end of the 5th week. (c) There were more fungal colonies from lesions in the untreated group than in the heteroglycan-treated group at the end of the 5th week. ***, p < 0.001

Phagocytosis assay Conidial phagocytosis by macrophages from uninfected mice was observed after incubation for 48 h (Fig. 4A, B), since we wanted to observe the survival of fungus in the macrophages. The phagocytic index (PI) was higher in the heteroglycan-treated group (7.40 ± 1.09) than in the control group (2.68 ± 0.74) (p < 0.001), which suggested that heteroglycan promoted the ingestion of conidia by macrophages (Fig. 4C). In addition, ingested conidia could survive and transform to the yeast phase inside the macrophages (Fig. 4A, B). This finding indicated that macrophages played limited roles in the fungicidal ability. Cytokine induction in vitro To evaluate the impact of heteroglycan on the pattern of cytokines in the incubation supernatant of macrophages and conidia, we next performed ELISAs of TNF-α, IL-12, and IL-10. The results showed that TNF-α expression was significantly upregulated at 24 h and maintained at a high level at 48 h upon heteroglycan treatment compared to that of the untreated group (p = 0.048, p = 0.037, respectively) (Fig. 5A). IL-12 expression was significantly upregulated at 24 h but subsequently reduced with conidia stimulation compared to heteroglycan treatment (p = 0.018). However, the IL-12 levels gradually increased within 72 h upon heteroglycan treatment compared to the control levels (p = 0.013) (Fig. 5B). IL-10 expression was significantly upregulated with conidial stimulation at 24 h and was reduced upon heteroglycan treatment (p = 0.048) (Fig. 5C).
The upregulation of TNF-α and IL-12 expression and the reduction in IL-10 expression upon heteroglycan treatment might favor the antifungal response since macrophages alone could not exert fungicidal effects.

Fig. 3 Histopathologic analysis by H&E and PAS staining in infected mice after 5 weeks. A represents the heteroglycan-treated group (n = 10). A1 exhibited reduced granulomatous inflammation with a few mononuclear cells, epithelioid cells, and lymphocyte infiltration in the dermal tissue, as assessed by H&E staining; A2 showed that fungi were rarely observed in the lesion, as assessed by PAS staining. B represents the untreated group (n = 10). B1 exhibited suppurative granulomatous inflammation with numerous neutrophils, mononuclear cells, and lymphocytes in the outer layer of the lesions, as assessed by H&E staining; B2 showed that round, oval and cigar-shaped yeast forms of S. schenckii s str (black arrow) were observed in the lesion, as assessed by PAS staining. Scale bar, 50 μm

Fig. 4 Conidial phagocytosis by macrophages from uninfected mice after incubation for 48 h. Conidia and cigar-shaped yeast forms of S. schenckii s str (black arrow) were observed in the macrophages. However, the percentage of phagocytosing cells and the mean number of ingested fungi in macrophages were lower in the untreated group (A) than in the heteroglycan-treated group (B). Scale bar, 50 μm. The phagocytic index of macrophages was higher in the heteroglycan-treated group than in the untreated group (C). ***, p < 0.001. The phagocytosis assay was performed in triplicate

Discussion The fungal cell wall is composed of many polysaccharides that can be recognized by immune cells, regulate cytokine secretion, and modulate immune responses [16,17]. The culture media, growth conditions and drug treatment may impact fungal morphologies and cell wall composition [11,15,18]. There are three morphologies in S. schenckii s str: conidia, germlings and yeast-like cells [9]. In the present study, the immune activity of chitin-rich heteroglycan extracted from S. schenckii s str was investigated. Chitin-rich heteroglycan was extracted from the mycelial form of S. schenckii s str since it was difficult to collect enough conidia to extract the amount of polysaccharide required for our study. The heteroglycan included chitin, mannan, and glucan, which indicated that all these polysaccharides might affect immune activity. However, chitin accounted for 89% of the heteroglycan, indicating a greater chance of interacting with immune cells. The particle size of the heteroglycan was determined by flow cytometry since the immune activity of chitin and glucan might be affected by different sizes [19,20]. In previous studies, chitin was reported to stimulate various immune responses, including pro- and anti-inflammatory immune responses and "allergic" responses, which might be related to differences in size [4,[21][22][23]. Da SC et al. reported that intermediate-sized chitin (40-70 μm) and small chitin (< 40 μm, largely 2-10 μm) stimulated TNF-α secretion, while only small chitin induced IL-10 secretion. Large chitin fragments were nonimmunogenic [19]. Approximately 99.5% of the heteroglycan was less than 10 μm in our study, indicating that it could be easily internalized by macrophages and thus modulate the immune response. Therefore, we further investigated the immunomodulatory effect of heteroglycan in vivo and in vitro. Heteroglycan was commonly administered via intraperitoneal inoculation or intranasal instillation in previous studies [4,7,24].
The heteroglycan was administered via intraperitoneal inoculation in the mouse model of sporotrichosis in our study. Our data showed that the skin lesions regressed and granulomatous inflammation was reduced with less inflammatory cell infiltration, tissue impairment and fungal burden in the heteroglycan-treated group than in the untreated group. The results above suggested that heteroglycan might play protective roles in mouse defense against fungal infection. These findings were consistent with the results of a previous study showing that chitin elevated the survival of murine candidiasis [5]. However, chitin coupled with glucan from Aspergillus niger was associated with eosinophilic allergic inflammation in the lungs of mice, which was reduced following fungal challenge [23]. In fact, polysaccharides derived from different fungi might result in various immune responses that influence the outcome or state of diseases. Therefore, the immune activity of polysaccharides needs to be specifically investigated in species. To explore the underlying mechanisms for the protective immunity of chitin-rich heteroglycan, we further investigated fungal phagocytosis and cytokine secretion by macrophages upon heteroglycan treatment. One study reported that mannan, glucan, and glucosamine inhibited fungal phagocytosis by phagocytes since they blocked the receptor for recognition on the surface of phagocytes [25]. However, we found that heteroglycan could promote fungal ingestion by macrophages in the present study. This finding might be related to the size of heteroglycans since different sizes and lengths of chitin could bind intracellular or extracellular pattern recognition receptors (PRRs) and thus induce pro-or anti-inflammatory cytokines, which might impact the phagocytosis function of macrophages [4,21,22]. Goodridge et al. also reported that particulate glucans but not soluble glucans could activate the dectin-1 signaling pathway [20]. In addition, the heteroglycan used in this study was composed of chitin, mannan, and glucan. These polysaccharides could be recognized by PRRs and thus activated macrophages, including increased PRR expression and chitinase secretion, which might promote fungal phagocytosis in previous studies [16]. For example, Barbara et al. reported that chitin upregulated TLR2 and TLR4 transcription in keratinocytes [26]. Chitin linked covalently to glucan from A. fumigatus induced TNF-α production compared with individual polysaccharides [7]. The alkali-insoluble glucans from S. schenckii s str increased NO production by peritoneal macrophages from infected mice [13]. Goncalves et al. also reported that alkali-insoluble glucans from S. schenckii s str stimulated proinflammatory cytokine IL-1β, IL-18 and IL-17 secretion by coculture with splenocytes and peritoneal macrophages [14]. Furthermore, the upregulated PRR expression of activated macrophages showed a cooperative and synergistic effect in pathogen-associated molecular patterns (PAMPs) recognition. One study reported higher cytokine production in macrophages upon the synergistic response from dectin-1 recognition of glucan and TLR-4 recognition of mannan [27]. These results indicate that innate immunity can be "trained" to acquire a higher capacity to defend against invasive infections, which makes sense to evolution [28,29]. Indeed, we observed that more conidia were ingested by a single macrophage upon heteroglycan treatment. 
After incubation for 48 h, the ingested conidia survived and transformed to the yeast phase in macrophages, indicating the limited fungicidal ability of macrophages in vitro. We selected the 48 h time point due to the slow transformation and propagation of S. schenckii s str based on our experience. We further investigated the cytokines produced by macrophages with or without heteroglycan treatment. Usually, Th1/Th17 immune responses are considered protective, whereas Th2 immune responses might induce the dissemination of fungi [30]. Cytokines play critical roles in the immune responses of macrophages and the subsequent adaptive response. TNF-α expression was significantly upregulated early upon heteroglycan treatment, suggesting that macrophages were activated and proinflammation might favor fungal clearance. IL-12 expression was temporally upregulated at 24 h with conidia stimulation, which might be insufficient to maintain the efficiency of the adaptive immune response. However, IL-12 expression following co-stimulation of heteroglycan and conidia gradually increased within 72 h, which might have sufficient time for the activation of immune cells. The upregulation of anti-inflammatory IL-10 expression by conidia stimulation at 24 h, which was consistent with the timeframe of IL-12 elevation, might be detrimental to fungal clearance. Wagener et al. reported that chitin could stimulate IL-10 secretion by macrophages, which dampened inflammation and maintained immune homeostasis after the pathogen was defeated [4]. However, the IL-10 level was reduced upon heteroglycan treatment which might partly result from the survival of fungi within macrophages in our study. Chitin-rich heteroglycan increased pro-inflammatory cytokine secretion and reduced anti-inflammatory cytokine secretion at different time points in our study, which might involve macrophage polarization and thus modulate the adaptive immune response. This study had three main limitations. First, the type of macrophages activated by chitin-rich heteroglycan was not investigated. Second, the heteroglycan extracted from S. schenckii s str included chitin, mannan, and glucan, and therefore, we could not determine which component was the most important. Third, the heteroglycan used in our study was extracted from the mycelial form of S. schenckii s str; thus, the role of polysaccharides from the yeast form needs to be investigated in the future. Conclusions In this study, we investigated the immunomodulation of chitin-rich heteroglycan extracted from S. schenckii s str in vivo and in vitro. Chitin-rich heteroglycan potentiated fungal clearance, and the lesions regressed in a mouse model of sporotrichosis. Moreover, chitin-rich heteroglycan promoted fungal phagocytosis by macrophages and elaborately modulated cytokine secretion, which might enhance protection against S. schenckii s str infection. Overall, this study demonstrates the immune modulation of chitin-rich heteroglycan which might extend our knowledge about the immune activity of heteroglycan from S. schenckii s str. Animals Female BALB/c mice, weighing between 20 and 25 g, were purchased from the Animal Center of Sun Yat-sen University. During the experimental period, five or six mice were placed in each mouse cage. These mice were housed under stable conditions (temperature 23-25°C, relative humidity 50-70%, 12 h light/dark cycle) with unrestricted access to water and diet at the Experimental Animal Center of Sun Yat-sen University. 
The animal experiments were implemented according to the Guide for the Care and Use of Laboratory Animals of Sun Yatsen University (Permit Numbers: SCXK 2009-0011) and approved by the Ethics Committee on Animal Experiment in the Faculty of Sun Yat-sen University. Microorganisms and inoculum preparation S. schenckii s str CBS359.36 (CBS, Utrecht, Netherlands) was used in this study. The fungus was cultured at 25°C in sabouraud agar medium (SDA) for 7 days to activate the strain and was then inoculated in potato dextrose agar medium (PDA) for an additional 10-14 days to produce conidia. Conidial suspensions were acquired by gently washing the colonies with 5 ml of sterile 1× phosphate buffered saline (1× PBS) (Gibco, USA). Collected conidia were filtered using sterile gauze and then adjusted to the designed concentration in sterile 1× PBS using a hemocytometer. The conidial suspensions were stored at 4°C and used within 3 days after preparation. Conidial suspension viability was confirmed by culture for 7 days at 25°C on SDA. Conidia and yeast-like cells are frequent morphologies used for infection in vivo and in vitro. Conidia were used as an infection source in our study since conidia could provide the opportunity to observe the morphological transition within macrophages. Heteroglycan extract from S. schenckii s str Heteroglycan was isolated from the mycelial form of S. schenckii s str according to previous studies [4,21] with some adaptations. Briefly, conidia were inoculated on SDA medium at 25°C for 10-14 days. The fungus was harvested carefully without the medium, washed three times with deionized water, ground to debris, resuspended in 5% (w/v) NaOH and boiled at 100°C for 5-6 h. The samples were resuspended in 30% hydrogen peroxide/glacial acetic acid solution (1:1) and boiled at 100°C for 2-3 h. Finally, samples were collected by centrifugation (5000 rpm) and washed three times with deionized water before resuspension in sterile 1× PBS and stored in a 4°C refrigerator. The samples were hydrolyzed with 13 M (99% (w/v)) trifluoroacetic acid (TFAA) (Jianyang Biotech, Guangdong, China) at 100°C for 4 h and evaporated at 70°C. The samples were washed twice with deionized water by evaporation and then resuspended in deionized water. To investigate the components of the heteroglycan, samples were analyzed together with carbohydrate standards by highperformance liquid chromatography according to a peak area normalization method in a carbohydrate analyzer system (Aglient 1100, USA). Before the infection experiments, heteroglycan samples were tested for endotoxin contamination and protein using the limulus amebocyte lysate (LAL) assay (QCL-1000; Lonza) and BCA Protein Assay Kit (CWbio, China) respectively. The heteroglycan samples were inoculated on SDA for 96 h or LB broth for 24 h to detect possible fungal and bacterial contamination. The heteroglycan size was determined by flow cytometry using 2 μm and 10 μm latex beads as size standards (SPH-ACBP, USA), as described in previous studies [4]. Heteroglycan stimulation in a mouse model of sporotrichosis Chitin-rich heteroglycan (100 μg/ml) and conidial suspensions of 1 × 10 7 cfu/ml in sterile 1 × PBS were prepared. Thirty BALB/c mice were randomly assigned into three groups (n = 30). Ten mice were subcutaneously injected with 0.10 ml of PBS on the back skin as the blank group (n = 10). 
Another ten mice were subcutaneously injected with 0.10 ml of conidial suspension on the back skin and intraperitoneally injected with 1× PBS as the untreated group (n = 10). The remaining ten mice were subcutaneously injected with 0.10 ml conidial suspension on the back skin and intraperitoneally inoculated with heteroglycan (100 μg) as the heteroglycantreated group (n = 10). All mice were observed for 5 weeks, and the lesion sizes were measured weekly. At the end of the experiment, the mice were sacrificed by cervical dislocation. The skin lesions, lung, liver, and spleen were harvested under aseptic conditions for fungal culture. The pathologic differences and fungal burden in local skin were evaluated by H&E and PAS staining. Macrophages isolation from mice Peritoneal macrophages of 36 healthy mice that were sacrificed by cervical dislocation were collected from the peritoneal cavity lavage using sterile 1× PBS. More than 95% of the collected cells were macrophages, as demonstrated by Giemsa staining. Macrophages were adjusted to 1 × 10 6 cells/well in 6-well flat-bottom culture plates (Corning, Beijing). The plates with cells were incubated for 2 h at 37°C in an incubator containing 5% CO 2 and 95% air. After incubation, the wells were washed with DMEM (low glucose) (Gibco, USA) 3 times to remove nonadherent cells, and the adherent cells were cultured in DMEM (low glucose) containing 10% fetal bovine serum/newborn calf serum (Gibco, USA) for the following phagocytosis and cytokine induction assay. Phagocytosis assay Conidia (5 × 10 6 cfu/well) with or without chitin-rich heteroglycan (10 μg/ml) were incubated with macrophages (1 × 10 6 cells/well) in round-bottom 6-well plates with slides at the bottom (MOI 5:1). All plates were incubated at 37°C in 5% CO 2 and 95% air for 48 h. The supernatant was removed and washed with 1× PBS 3 times. The slides were stained with eosin (Sigma, USA) and analyzed under an optical microscope (400× magnification; Axiostar plus). The phagocytic index (PI) was calculated as the percentage of phagocytosing cells multiplied by the mean number of ingested fungi in a total of 200 macrophages per well [31]. Cytokine induction assay Conidial suspensions (1 × 10 6 cfu/well) were incubated with macrophages (1 × 10 6 cells/well) in 6-well plates. Chitin-rich heteroglycan (10 μg/ml) was added to 4 wells as the heteroglycan-treated group while the same volume of DMEM (low glucose) was added to another 4 wells as the conidia group. The remaining 4 wells contained macrophages without conidia or heteroglycan as the control group. All plates were incubated for 24 h, 48 h, and 72 h at 37°C in an incubator containing 5% CO 2 and 95% air. After the appropriate incubation time, supernatants were collected to evaluate TNF-α, IL-12, and IL-10 secretion using commercial enzyme-linked immunosorbent assays (ELISAs) (R&D System, USA) following the manufacturer's instructions. Colorimetric reactions were measured at 450 nm with wavelength correction at 570 nm on a Multiskan Ascent ELISA reader (Labsystems, Helsinki, Finland). Statistical analysis Statistical analysis was performed using SPSS software 22.0 (SPSS Inc., Chicago, IL, USA). Data represented cumulative results of all experiments and were showed as mean ± standard deviation (SD). The Mann-Whitney U test was used to establish statistical significance of phagocytosis and fungal burden in two groups. Kruskal-Wallis test was used to establish statistical significance of cytokines secretion in three groups. 
A value of p < 0.05 was considered statistically significant.
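For readers who wish to reproduce this kind of comparison, the sketch below shows how the two tests named above can be applied with SciPy instead of SPSS. All numerical values are hypothetical placeholders rather than the study's raw data, and the group names are illustrative only.

```python
from scipy import stats

# Hypothetical phagocytic-index replicates (placeholder values, not study data)
pi_heteroglycan = [7.1, 7.6, 7.5]
pi_untreated    = [2.5, 2.9, 2.6]

# Two-group comparison (e.g., phagocytic index or fungal burden): Mann-Whitney U test
u_stat, p_two_groups = stats.mannwhitneyu(pi_heteroglycan, pi_untreated,
                                          alternative="two-sided")

# Hypothetical cytokine concentrations (pg/mL) for three groups at one time point
tnf_control      = [110.0, 95.0, 120.0, 105.0]
tnf_conidia      = [260.0, 240.0, 280.0, 255.0]
tnf_heteroglycan = [410.0, 380.0, 430.0, 405.0]

# Three-group comparison (cytokine secretion): Kruskal-Wallis test
h_stat, p_three_groups = stats.kruskal(tnf_control, tnf_conidia, tnf_heteroglycan)

print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {p_two_groups:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_three_groups:.4f}")
# A p value below 0.05 would be reported as statistically significant, as in the text.
```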
ADVANCED ENCRYPTION STANDARD USING FPGA OVER NETWORK

The number of eavesdroppers and crackers attacking information and violating people's privacy keeps increasing, so the essential issue is building a system capable of ciphering information at high speed. Advances in computing have also made it easier for eavesdroppers and crackers to analyze ciphering schemes quickly. The development of computers, and especially of fast processors over the last decade, has made the breaching of any system a matter of time, since most breaching approaches are based on analyzing the target system and applying brute force to crack it. Earlier processors, limited in the number of instructions they could execute per second, lacked the power to breach systems: brute force was not practical, and the time required was far too long to recover valuable messages while they still mattered. For these reasons, this research focuses on building a rapid system that ciphers information quickly and changes the ciphering every few milliseconds. Changing the ciphering every millisecond helps prevent eavesdroppers and crackers from mounting brute-force attacks on the system and from stealing messages and images. The system created here is based on the Advanced Encryption Standard (AES), which performs very well in ciphering and deciphering since it does not need complex mathematical formulas. The research is about designing a system capable of performing AES using a high-speed processor designed on a Field Programmable Gate Array (FPGA). Performing AES ciphering on the FPGA helps minimize the time required to cipher the information. The research also focuses on the ciphering and deciphering of images by AES using the FPGA.

Introduction The evolution of computers and the fast processors that have appeared over the last decade make the breaching of any system a matter of time [1], since most breaching approaches are based on analyzing the system to be breached and applying brute force to crack it [2,3]. Earlier processors, limited in the number of instructions they could execute per second, lacked the power to breach systems: brute force was not practical [4], and the time required was far too long to recover valuable messages while they still mattered [5,6]. For these reasons, this research focuses on building a rapid system capable of ciphering information quickly and changing the ciphering every few milliseconds [7,8]. Changing the ciphering every millisecond helps prevent eavesdroppers and crackers from mounting brute-force attacks on the system and from stealing messages and images [9]. The National Institute of Standards and Technology (NIST) adopted the Advanced Encryption Standard (AES) as a replacement for 3DES and IDES, which were the ciphers most used at the time. AES differs from 3DES and IDES in that it is not based on a Feistel structure; the advantage of this structure is that AES can process a whole block of data as a single matrix [9]. The system created here is based on AES, which performs very well in ciphering and deciphering since it does not need complex mathematical formulas [3,10].
AES is also flexible, since it can be used to cipher messages or images [10,11]. AES was chosen for its performance: it has been at the top of the security scoreboard over the past two decades, and it has evolved over the years into one of the best ciphers for network and banking security [12]. The system created in this research implements AES on an FPGA. An FPGA is a device on which a special-purpose processor can be designed to perform a particular algorithm, according to the program loaded into it. The processor designed on the FPGA performs AES on images and messages supplied to the FPGA as input [13]. The FPGA reduces the time required to cipher a text message to microseconds and improves AES ciphering and deciphering times thanks to its capability for parallel processing [14]. Because only microseconds are needed to cipher or decipher a message, the FPGA also makes it possible to change the cipher every millisecond [10]; information sent through the network is therefore very unlikely to be breached, owing to the rapid change of cipher. AES is used in a huge number of applications because of its ability to cipher data, its resistance to being broken, and its flexibility in use [15,16]. It also does not require a massive amount of resources, which makes it powerful for ciphering and deciphering. AES consists of four elements, or steps, as shown in Fig. 1 [17]. As shown in Fig. 1, each round consists of several steps. SubBytes divides the message into bytes and replaces each byte (or each pixel, for an image) with the value located at the corresponding position in the S-Box. ShiftRows shifts each row of the message or image by an amount that may differ from the other rows. MixColumns mixes the four bytes of each column and is explained later in the ciphering section. The last step, AddRoundKey, adds a key in every round of the ciphering process. The complexity of these steps is what gives the AES algorithm its strength. AES with FPGA The AES algorithm is composed of the four steps mentioned above, as shown in Fig. 2. As an initial step, the algorithm examines the data to be ciphered: if the data are plain text, a first operation converts the plain text into a state array [15]; if the data are an image, no such conversion is needed, since the image is already represented as an array. This research uses 256×256 images for ciphering and deciphering, so the initial conversion is not required and the image goes directly to the next process, which substitutes every pixel with the value located at the corresponding position in the S-Box. The S-Box is a 16×16 array containing 256 values, each different from the others in the matrix.
The substitution of every pixel in the array with its corresponding value in the S-Box is shown in Fig. 3. ShiftRows is the next step, performed once all pixels have been substituted. As its name suggests, ShiftRows shifts each row of the state array to the left; the shift differs from row to row within the same state array and depends on the row's position. The shifting is performed anti-clockwise, as shown in Fig. 4. MixColumns, as its name suggests, transforms the columns: each column is treated as a 4-byte block, represented as a polynomial over the Galois field GF(2^8) and multiplied modulo x^8 + x^4 + x^3 + x + 1, as shown in Fig. 5 [18]. The last step is key generation and AddRoundKey: a round key is generated according to the algorithm's key schedule, and its 16 bytes are XORed with the corresponding 16 bytes of the image array. Fig. 6 shows the AddRoundKey process. Once all these steps have been applied to the state array, they are repeated for ten more rounds to encrypt the data, as shown in Fig. 7. The deciphering process reverses the ciphering steps. It starts with AddRoundKey and works back through all previous rounds. The result of the first round is then passed to the inverse of MixColumns, which returns each 4-byte block to its original value by applying the inverse column transformation over GF(2^8). The result goes to the next step, inverse ShiftRows, which shifts each row back to its original order and location before the row shifting. Finally, inverse SubBytes returns each value of the state to its original value based on the S-Box, recovering the original image or plain text.
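For readers who want to see the round operations in software form, the following is a minimal, illustrative Python sketch of the four transformations described above, operating on a 4×4 byte state. It is not the VHDL/FPGA design used in this work, and S_BOX here is a placeholder identity table rather than the real 256-entry AES substitution box.

```python
# Illustrative software model of one AES round's transformations.
# NOTE: S_BOX below is a placeholder identity table, NOT the real AES S-Box.
S_BOX = list(range(256))

def sub_bytes(state):
    """SubBytes: replace every byte of the 4x4 state with its S-Box entry."""
    return [[S_BOX[b] for b in row] for row in state]

def shift_rows(state):
    """ShiftRows: rotate row i of the state left by i positions."""
    return [row[i:] + row[:i] for i, row in enumerate(state)]

def xtime(b):
    """Multiply a byte by x in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1,
    the reduction polynomial used by MixColumns."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def mix_single_column(col):
    """MixColumns applied to one 4-byte column of the state."""
    t = col[0] ^ col[1] ^ col[2] ^ col[3]
    out = list(col)
    out[0] ^= t ^ xtime(col[0] ^ col[1])
    out[1] ^= t ^ xtime(col[1] ^ col[2])
    out[2] ^= t ^ xtime(col[2] ^ col[3])
    out[3] ^= t ^ xtime(col[3] ^ col[0])
    return out

def add_round_key(state, round_key):
    """AddRoundKey: XOR every state byte with the matching round-key byte."""
    return [[b ^ k for b, k in zip(s_row, k_row)]
            for s_row, k_row in zip(state, round_key)]
```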
Material and methods A Spartan 3AN is used as the device for implementing the proposed algorithm and has been programmed in the Very High-level Design Language (VHDL). VHDL is used in this project to configure the FPGA and implement the AES algorithm on it; it is one of the most widely used languages for programming and configuring FPGAs. Although FPGAs are complicated devices, particularly to code and work with, they give excellent results compared with other devices in the same field. VHDL is used for synthesis, simulation and generation of the programming file for each structure and section of the AES algorithm on the FPGA, as shown in Fig. 7. Fully utilizing and programming the FPGA provides the flexibility needed to achieve good ciphering and deciphering results with AES. Results Rapid ciphering of information is essential today, since the massive volume of information being exchanged must be ciphered before transmission to its recipients. Using an FPGA improves the speed of ciphering images with AES: the AES algorithm requires considerable resources, and implementing it on the FPGA accelerates image ciphering. The time required to cipher a 256×256 image is only 20 nanoseconds, as shown in Fig. 8. This is very helpful, since multiple images can be ciphered and deciphered in almost no time. The AES implementation also consumes only a small share of the FPGA's resources, as shown in Table 1. The FPGA used in this experiment is the Spartan 3AN. The FPGA speeds up image ciphering because it uses parallel processing, performing multiple tasks at the same time when none of the tasks depend on each other; the AES algorithm takes advantage of this because most of its procedures are independent of one another. The resources allocated on the FPGA when AES is loaded onto it are detailed below. In addition, the program changes the cipher every millisecond, which gives the system additional confidence and would not be possible without the FPGA. The proposed design is compared with different algorithms implemented on different types of FPGA devices, which use different approaches to encrypting data; the results obtained with the proposed algorithm compare favourably, as shown in Table 2. The table shows that the other approaches use 128 bits for ciphering, whereas the proposed approach uses 256 bits, which is more secure and more reliable; however, this costs the proposed approach more ciphering time because of the larger size of the algorithm. The results also show that the number of slices used is small compared with the other designs, even though the algorithm uses 256 bits compared with 128 bits in the other methods. The throughput of the algorithm is lower than that of Harshali Zodpe and Ashok Sapkal for two reasons: they used a faster FPGA device, and the proposed algorithm uses 256-bit ciphering compared with 128-bit ciphering in the other methods. Discussion of experimental results Protecting data is of the utmost importance, whether for nations or for individuals. Every year, many algorithms are either created or extended with new features to improve security. This research used one of the most effective algorithms, AES. An FPGA was used to speed up the algorithm, with a major effect on its speed: only 20 ns are needed to complete the ciphering over nine rounds. The algorithm uses 256 bits for ciphering, which greatly strengthens the security of the ciphertext, because an eavesdropper needs more time to decipher it and a more sophisticated computer to break the system than with 64- or 128-bit AES. Using 256-bit AES imposes no practical limitation in this research, because the FPGA's parallel processing allows multiple operations to be performed at the same time, saving the time needed for 256-bit AES; without the FPGA, 256-bit AES ciphering and deciphering would be somewhat slower, since it requires more processing than 64- or 128-bit AES. Future enhancements of this research are to apply the design to higher-quality images, to try compressing the images, and to try using multiple single-board processors, such as several Raspberry Pi or Arduino boards, to achieve results similar to the FPGA. Conclusion The volume of information grows every day, as does the sophistication of the computers used by eavesdroppers to attack these data.
This makes it essential to find a way to cipher rapidly and to change the cipher from time to time, making it practically impossible for an eavesdropper to hack the system. This is achieved by combining the FPGA and AES: AES has the best record on the security scoreboard, and the FPGA is a special device that allows a dedicated processor to be created for the AES operations. The FPGA accelerates AES ciphering to only 20 nanoseconds for a 256×256-pixel image, thanks to the FPGA's parallel processing capability and the suitability of the AES algorithm for parallel operation. The next steps in this research will be to apply the algorithm to larger image sizes and to run it on other devices, such as Raspberry Pi and Arduino boards operating in parallel.
3,591
2021-01-29T00:00:00.000
[ "Computer Science" ]
Assessing the Value of Transfer Learning Metrics for Radio Frequency Domain Adaptation : The use of transfer learning (TL) techniques has become common practice in fields such as computer vision (CV) and natural language processing (NLP). Leveraging prior knowledge gained from data with different distributions, TL offers higher performance and reduced training time, but has yet to be fully utilized in applications of machine learning (ML) and deep learning (DL) techniques and applications related to wireless communications, a field loosely termed radio frequency machine learning (RFML). This work examines whether existing transferability metrics, used in other modalities, might be useful in the context of RFML. Results show that the two existing metrics tested, Log Expected Empirical Prediction (LEEP) and Logarithm of Maximum Evidence (LogME), correlate well with post-transfer accuracy and can therefore be used to select source models for radio frequency (RF) domain adaptation and to predict post-transfer accuracy. Introduction Modern day radio communications systems (Figure 1) allow users to send information across vast distances at near instantaneous speeds.The introduction of ML and DL techniques to modern radio communications systems has the potential to provide increased performance and flexibility when compared to traditional signal processing techniques.For example, cognitive radios (CRs) are capable of autonomously modifying parameters such as the modulation scheme, center frequency, bandwidth, and power in response to the external RF environment to provide continuous, high quality service to the end-user while complying with system and regulatory constraints [1].While RFML and CR approaches inevitably overlap, RFML differs from CR in that RFML only aims to utilize autonomous feature learning from raw RF data to learn the characteristics to detect, identify, and recognize signals-of-interest [2] and is sometimes used off-board the radio itself and without the intent to re-configure the radio.In other words, RFML approaches can be seen as a component of a larger CR system.Nevertheless, both CR and RFML have broad utility in both the commercial and defense sectors [2][3][4] and are expected to be critical components of the upcoming 6G standard [5]. 
The RF system overview shown in Figure 1 identifies the parameters/variables that each component of an RF system impacts.Such components make up the domain that may differ significantly across transmitters, receivers, and propagation environments (also known as channels), as well as over time, impacting RFML performance [6].For example, preliminary results given in [7] showed that the performance of convolutional neural network (CNN) and long short-term memory (LSTM)-based signal classification algorithms trained on data from one transmitter/receiver pair dropped as much as 8% when tested on data captured from other transmitter/receiver pairs even when augmentations were applied to improve performance.Similarly, [8] showed that performance of a CNNbased transmitter identification algorithm degraded significantly when tested on data captured at different times, as well as when tested on data captured in different locations, because the propagation environment had changed.However, the vast majority of existing RFML research focuses on using supervised learning techniques trained from random initialization to perform tasks such as detecting and classifying signals-of-interest [9], without consideration for the changes in domain that will almost certainly be encountered during deployment causing unpredictable and unwanted changes in performance.Transfer learning (TL) is a means to mitigate such performance degradations by reusing prior knowledge learned from a source domain and task to improve performance on a "similar" target domain and task, as shown in Figure 2.However, the use of TL in RFML algorithms is currently limited and not well understood [6].Prior work began to address this gap by investigating how the RF domain and task impact learned behavior, facilitating or preventing successful transfer [10].More specifically, RF TL performance, as measured by post-transfer top-1 accuracy, was evaluated as a function of several metadata parametersof-interest for a signal classification or automatic modulation classification (AMC) use-case using synthetic datasets.While post-transfer top-1 accuracy provides the ground truth measure of transferability, in the scenario where many source models are available for transfer to an alternate domain, evaluating the post-transfer top-1 accuracy for each source model may be too time consuming and computationally expensive.This work continues to examine RF TL performance across changes in domain specifically using two existing transferability metrics-LEEP [11] and LogME [12]-that provide a measure of how well a source model will transfer to a target dataset using a single forward pass through the source model. The primary contribution of this work is the application of LEEP and LogME to RFML.Though LEEP and LogME are designed to be modality agnositic, they have not been used in RFML previously.This work shows that both LEEP and LogME strongly correlate with post-transfer top-1 accuracy in the context of this AMC use-case, as well as with each other, and that results are consistent with those shown in the original publications.The application of these metrics to RFML also provides additional insight into RF TL performance and trends, building off of the results given in prior work [10]. 
Second, we present a method for using transferability metrics such as these to predict post-transfer accuracy within a confidence interval and without further training.More specifically, given a labeled raw In-phase/Quadrature (IQ) target dataset and a selection of pre-trained source models, we show that transferability metrics such as LEEP and/or LogME can be used to provide a lower and upper bound on how well each source model will perform once transferred to the target dataset, without performing head re-training or fine-tuning.This paper is organized as follows: Section 2 provides requisite background knowledge in RFML, and discusses related and prior works in TL for RFML, transferability metrics, and transfer accuracy prediction in other modalities such as CV and NLP.In Section 3, each of the key methods and systems used and developed for this work are described in detail, including the simulation environment and dataset creation, the model architecture and training, and the transferability metrics.Section 4 presents experimental results and analysis, as well as the proposed post-transfer accuracy prediction method.Section 5 highlights several directions for future work including extensions of this work performed herein using alternative transferability metrics and captured and/or augmented data, generalizations of this work to inductive TL settings, and the development of more robust or RF-specific transferability metrics.Finally, Section 6 offers conclusions about the effectiveness of TL and existing transferability metrics for RFML and the next steps for incorporating and extending TL techniques in RFML-based research.A list of the acronyms used in this work is provided in the appendix for reference. Radio Frequency Machine Learning (RFML) While the term RFML can be loosely defined as the application of ML or DL to the RF domain, in this work, we use the more rigorous definition of RFML developed by DARPA: the use of DL techniques to reduce the amount of expert-defined features and prior knowledge needed to perform the intended application.This typically means that little-to-no pre-processing is applied to the received signal, which can also reduce latency and computational complexity. The vast majority of RFML literature has focused on delivering state-of-the-art performance on spectrum awareness and cognitive radio tasks such as signal detection, signal classification or AMC, and spectrum anomaly detection while substantially reducing the expert knowledge needed to perform tradition signal processing techniques.One of the most common, and arguably the most mature spectrum awareness or cognitive radio application explored in the literature, and the example use-case examined in this work is AMC, which is described further in the following sub-section. Automatic Modulation Classification (AMC) AMC is the classification of the format or modulation scheme of a signal-of-interest and is a necessary step in demodulating or recovering the data encoded in a signal [9]. 
Traditional signal processing approaches to AMC typically consist of a feature extraction stage using hand-crafted "expert features" and a pattern recognition stage [13].These expert features are pre-defined and designed by a human domain-expert to statistically distinguish between the modulation classes-of-interest and can be time intensive and computationally expensive to extract.Pattern recognition is then performed on these signal features, extracted from the raw RF data during pre-processing, using algorithms such as decision trees, support vector machines (SVMs), or simple neural networks (NNs) to identify the modulation class of the signal-of-interest. RFML-based approaches both replace the use of hand-crafted expert features using deep NNs, typically CNNs or recurrent neural networks (RNNs), and combine the feature extraction and pattern recognition steps into a single architecture [7,14].Replacing traditional signal processing techniques with RFML allows for blind and automatic feature learning and classification with little-to-no pre-processing and less prior knowledge and has achieved state-of-the-art performance. RF Domain Adaptation The recent RFML and TL taxonomies and surveys [6,9] highlight the limited existing works that successfully use sequential TL techniques for domain adaptation-transferring pre-trained models across channel environments [15,16], across wireless protocols [17,18], and from synthetic data to real data [19][20][21][22]-for tasks such as signal detection, AMC, and specific emitter identification (SEI).Until recently, little-to-no work has examined what characteristics within RF data facilitate or restrict transfer [6], outside of observing a lack of direct transfer [7,23,24], restricting RF TL algorithms to those borrowed from other modalities, such as CV and NLP.While correlations can be drawn between the vision or language spaces and the RF space, these parallels do not always align, and therefore, algorithms designed for CV and NLP may not always be appropriate for use in RFML. Our prior work systematically evaluated RF TL performance as a function of signal-tonoise ratio (SNR), frequency offset (FO), and modulation type for an AMC use-case using the same synthetic dataset used herein and a post-transfer top-1 accuracy as the performance metric.(The impact of changing SNR and/or FO on the RF domain is discussed further in Section 3.1.)Across both changes in domain, modeled using changes in SNR and FO, and changes in task, modeled using changes in modulation type, results indicated that source/target similarity was key to successful transfer, as well as domain/task difficulty.More specifically, transfer is more often successful when the source domain/task is more challenging than the target (i.e., the source domain has a lower SNR, or the source task has more output classes than the target task).Discrepancies in the channel environment, as modeled by changes in SNR, were shown to be more challenging to overcome via TL than discrepancies in the RF hardware or platform, as modeled by changes in FO.Additionally, in the cases when TL provided a performance benefit over training from random initialization, head re-training generally outperformed fine-tuning.In this work, post-transfer top-1 accuracy is paired with existing transferability metrics, LEEP and LogME, to further identify how changes in the RF domain impact transferability and for model selection and post-transfer accuracy prediction. 
Transferability Metrics TL techniques use prior knowledge obtained from a source domain/task to improve performance on a similar target domain/task.More specifically, TL techniques aim to further refine a pre-trained source model using a target dataset and specialized training techniques.However, not all pre-trained source models will transfer well to a given target dataset.Though it is generally understood that TL is successful when the source and target domains/tasks are "similar" [25], this notion of source/target similarity is ill-defined.The goal of a transferability metric is to quantify how well a given pre-trained source model will transfer to a target dataset.While the area of transferability metrics is growing increasingly popular, to our knowledge, no prior works have examined these metrics in the context of RFML.Transferability metrics developed and examined in the context of other modalities can broadly be categorized into one of two types: those requiring partial re-training and those that do not. Partial re-training methods such as Taskonomy [26] and Task2Vec [27] require some amount of training to occur, whether that be the initial stages of TL, full TL, or the training of an additional probe network, in order to quantify transferability.Partial re-training methods are typically used to identify relationships between source and target tasks and are useful in meta-learning settings but are not well suited to settings where time and/or computational resources are limited.Though the computational complexity of partial re-training methods varies, it vastly exceeds the computational complexity of methods that do not require any additional training, such as those used in this work. This work focuses on methods that do not require additional training, which typically use a single forward pass through a pre-trained model to ascertain transferability.Methods such as these are often used to select a pre-trained model from a model library for transfer to a target dataset, a problem known as source model selection.LEEP [11] and LogME [12] were chosen as example metrics for this work because they are often used as baselines in the transferability metric literature [28][29][30][31], are designed to be modality agnostic, and have outperformed similar metrics such as Negative Conditional Entropy (NCE) [32] and Hscores [33] in CV and NLP-based experiments.Moreover, LEEP and LogME are intuitive to understand, gauging transferability using the source model's response to the target dataset in the form of logits or layer activations.More recent transferability metrics that also show promise include Optimal Transport-based Conditional Entropy (OTCE) [29] and Joint Correspondences Negative Conditional Entropy (JC-NCE) [30], TransRate [28], and Gaussian Bhattacharyya Coefficient (GBC) [31] and may be examined as follow-on work.The success of both LEEP and LogME in the context of RFML shown herein suggests that other modality agnostic transferability metrics such as these would also likely be appropriate for use in RFML. 
Related works examine source model ranking or selection procedures [34,35], which either rank a set of models by transferability or select the model(s) most likely to provide successful transfer. However, source model ranking or selection methods are less flexible than transferability metrics in online or active learning scenarios. More specifically, source model ranking or selection methods are unable to identify how a new source model compares to the already ranked/selected source models without performing the ranking/selection procedure again. Related works also include methods for selecting the best data to use for pre-training [36] or during the transfer phase [37], and approaches to measuring domain, task, and/or dataset similarity [38].

Predicting Transfer Accuracy
The problem of predicting transfer accuracy is still an open field. To the best of our knowledge, no prior works have examined predicting transfer accuracy specifically for RFML, but approaches have been developed for other modalities. Most similar to our work is the approach given in [39], where the authors showed a linear correlation between several domain similarity metrics and transfer accuracy, using statistical inference to derive performance predictions for NLP tools. In this work, we show that LEEP and LogME can be used in place of the domain similarity metrics considered in [39] to similar effect. Similarly, work in [40] used domain similarity metrics to predict performance drops as a result of domain shift. However, the domain similarity metrics used in [40] are not easily applicable to RF data, which are high-dimensional, fast-changing, and highly dependent on the underlying bit pattern. More recently, [41] proposed using a simple multi-layer perceptron (MLP) to determine how well a source dataset will transfer to a target dataset, again in an NLP setting. However, the method proposed in [41] required the training of an additional model, as well as the use of domain similarity metrics specific to NLP.

Methodology
This section presents the experimental setup used in this work, shown in Figure 3, which includes the data and dataset creation process, the model architecture and training, and the transferability metrics. These three key components and processes are each described in detail in the following subsections.

Dataset Creation
This work uses the same custom synthetic dataset used in our prior work [10], which is publicly available on IEEE DataPort [42]. The dataset creation process, shown in Figure 3a, began with the construction of a large "master" dataset containing 600,000 examples of each of the signal types given in Table 1, for a total of 13.8 million examples. For each example in the master dataset, the SNR is selected uniformly at random within [−10 dB, 20 dB], and the FO is selected uniformly at random within [−10%, 10%] of the sample rate. All further signal generation parameters, such as filtering parameters and symbol order, are specified in Table 1 (for example, the AM-DSBSC, AM-LSB, and AM-USB signals use a modulation index in [0.5, 0.9], and an AWGN class is included). Then, as in [10], subsets of the master dataset were selected to create different RF domains varying (1) only SNR, (2) only FO, or (3) both SNR and FO, as detailed with Table 1 below. These three parameter sweeps address each type of RF domain adaptation discussed in the RFML TL taxonomy [6].
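As a small illustration of the per-example parameter draws described above (a sketch under the stated ranges, not the actual generation code behind [42]), each example's SNR and FO could be sampled as follows:

```python
import numpy as np

rng = np.random.default_rng()
snr_db = rng.uniform(-10.0, 20.0)   # SNR drawn uniformly in [-10 dB, 20 dB]
fo = rng.uniform(-0.10, 0.10)       # FO drawn uniformly in [-10%, 10%] of the sample rate
print(f"SNR = {snr_db:.1f} dB, FO = {fo:+.3f} of the sample rate")
```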
Simulation Environment
All data used in this work were generated using the same noise generation, signal parameters, and signal types as in [10]. More specifically, in this work, the signal space has been restricted to the 23 signal types shown in Table 1, observed at complex baseband in the form of discrete time-series signals, s[t], as given in Equation (1), where α[t], ω[t], and θ[t] are the magnitude, frequency, and phase of the signal at time t, and ν[t] is the additive interference from the channel; any values subscripted with a ∆ represent imperfections/offsets caused by the transmitter/receiver and/or synchronization. Without loss of generality, all offsets caused by hardware imperfections or lack of synchronization have been consolidated onto the transmitter during simulation. Signals are initially synthesized in an additive white Gaussian noise (AWGN) channel environment with unit channel gain, no phase offset, and frequency offset held constant for each observation, and SNR is defined as in [10]. With the exception of the AWGN signal, which has a Nyquist rate of 1, all signals have a Nyquist rate of either 0.5 or 0.33 (twice or three times the Nyquist bandwidth).

Model Architecture and Training
The aim of this work is to use the selected metrics to quantify the ability to transfer the features learned by a single architecture trained across pairwise combinations of source/target datasets with varying (1) SNRs, (2) FOs, or (3) SNRs and FO, in order to identify the impact of these parameters-of-interest on transferability. Given the large number of models trained for this work, training time was a primary concern when selecting the model architecture. Therefore, this work uses a simple CNN architecture, shown in Table 2, based on the architectures used in [7]. The model pre-training and TL process is shown in Figure 3b and represents a standard training pipeline. For pre-training, the training dataset contained 5000 examples per class, and the validation dataset contained 500 examples per class; these dataset sizes are consistent with [43] and adequate to achieve consistent convergence. Each model was trained using the Adam optimizer [44] and cross-entropy loss [45], with the PyTorch default hyper-parameters [46] (a learning rate of 0.001, without weight decay), for a total of 100 epochs. A checkpoint was saved after the epoch with the lowest validation loss and was reloaded at the conclusion of the 100 epochs. As in the prior work [10], both head re-training and model fine-tuning methods are examined for transfer. For both methods, the training dataset contained 500 examples per class, and the validation dataset contained 50 examples per class, representing a smaller sample of available target data. Both methods also used the Adam optimizer and cross-entropy loss, with checkpoints saved at the lowest validation loss over 100 epochs. However, during head re-training, only the final layer of the model was trained, again using the PyTorch default hyper-parameters, while the rest of the model's parameters were frozen. During fine-tuning, the entire model was trained with a learning rate of 0.0001, an order of magnitude smaller than the PyTorch default of 0.001 used during pre-training.
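As a hedged illustration of the two transfer procedures just described (not the exact training code used in this work), the PyTorch fragment below shows how head re-training freezes everything except the final classification layer while fine-tuning updates the whole model at the smaller learning rate; the attribute name model.head is a hypothetical stand-in for the final layer of the CNN in Table 2.

```python
import torch
import torch.nn as nn

def head_retraining_setup(model: nn.Module) -> torch.optim.Optimizer:
    """Freeze all parameters except the final classification layer ('head')."""
    for param in model.parameters():
        param.requires_grad = False
    for param in model.head.parameters():      # 'head' is an assumed attribute name
        param.requires_grad = True
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=1e-3)  # PyTorch default learning rate

def fine_tuning_setup(model: nn.Module) -> torch.optim.Optimizer:
    """Train the entire model with a learning rate 10x smaller than the default."""
    for param in model.parameters():
        param.requires_grad = True
    return torch.optim.Adam(model.parameters(), lr=1e-4)

def count_trainable(model: nn.Module) -> int:
    """Number of parameters actually updated during transfer (cf. Section 4.2.2)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

criterion = nn.CrossEntropyLoss()  # cross-entropy loss, as in pre-training
```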
Transferability Metrics
As previously discussed, while transfer accuracy provides the ground truth measure of transferability, calculating transfer accuracy requires performing sequential learning techniques such as head re-training or fine-tuning to completion, in addition to the labeled target dataset. LEEP [11] and LogME [12] are existing metrics designed to predict how well a pre-trained source model will transfer to a labeled target dataset without performing transfer learning techniques and using only a single forward pass through the pre-trained source model. These metrics in particular were shown to outperform similar metrics, NCE [32] and H-scores [33], and are designed to be modality agnostic. Therefore, though neither metric is known to have been shown to correlate with transfer accuracy in the context of RFML, the success both metrics showed in CV and NLP applications bodes well for the RF case.

LEEP [11] can be described as the "average log-likelihood of the expected empirical predictor, a simple classifier that makes prediction[s] based on the expected empirical conditional distribution between source and target labels," and is calculated as

LEEP(f_S, X_T) = (1/n) Σ_{i=1..n} log( Σ_{y_S ∈ Y_S} P(y_i | y_S) f_S(x_i)_{y_S} ),

such that f_S is the pre-trained source model, X_T is the target dataset, n is the number of examples in the target dataset, Y_T is the set of all target labels, and Y_S is the set of all source labels. P(y_T | y_S) is computed using P(y_T, y_S) and P(y_S) with

P(y_T | y_S) = P(y_T, y_S) / P(y_S),

where P(y_T, y_S) = (1/n) Σ_{i: y_i = y_T} f_S(x_i)_{y_S} and P(y_S) = (1/n) Σ_{i=1..n} f_S(x_i)_{y_S} are the empirical joint and marginal distributions obtained from the source model's outputs on the target dataset. Log Expected Empirical Prediction (LEEP) has been shown to correlate well with transfer accuracy using image data, even when the target datasets are small or imbalanced. The metric is bounded between (−∞, 0], such that values closest to zero indicate best transferability, though the scores tend to be smaller when there are more output classes in the target task. The calculation does not make any assumptions about the similarity of the source/target input data, except that they are the same size. For example, if the source data are raw IQ data of size 2 × 128, then the target data must also be of size 2 × 128 but need not be in raw IQ format (i.e., the target data could be in polar format). Therefore, the metric is suitable for estimating transferability when the source and target tasks (output classes) differ. However, the calculation of the metric does assume the use of a Softmax output layer, limiting the technique to supervised classifiers.

In comparison to LEEP, which measures the expected empirical distribution between the source and target labels, Logarithm of Maximum Evidence (LogME) [12] estimates the maximum evidence, or marginal likelihood, of a label given the features extracted by the pre-trained model at some layer j, using a computationally efficient Bayesian algorithm. Letting y be the ground-truth labels of the target dataset, X_T, of size n, and D be the dimensionality of the feature space F extracted from the pre-trained model at layer j given X_T as input, LogME is computed as

LogME(f_S, X_T) = (1/n) max_{α,β} L(α, β),

where L(α, β) is the logarithm of the evidence, computed using A = αI + βF^T F and m = βA^{−1}F^T y; the full derivation of the evidence expression (Equation (7)) can be found in [12]. Maximization of L(α, β) is achieved by iteratively evaluating m, together with a term built from σ, the singular values of F^T F, and updating α and β until they converge, generally in 1-3 iterations. Finally, argmax_{α,β} L(α, β) is scaled by n to compute the average maximum log evidence of y_i given F_i for all i ∈ {1, . . . , n}, or LogME. Like LEEP, this calculation only assumes that the source and target input data are the same size. The metric is bounded within [−1, 1], such that values closest to −1 indicate worst transferability, and values closest to 1 indicate best transferability. LogME does not require the use of a Softmax output layer and is therefore appropriate in unsupervised settings, regression settings, and the like. Further, LogME was shown to outperform LEEP in an image classification setting, better correlating with transfer accuracy, and has also shown positive results in an NLP setting.
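To make the LEEP calculation concrete, the sketch below computes a LEEP score from the source model's softmax outputs on the target dataset; probs and targets are assumed inputs (an n × |Y_S| probability array and integer target labels), and this is an illustration of the definition above rather than the reference implementation from [11].

```python
import numpy as np

def leep_score(probs: np.ndarray, targets: np.ndarray) -> float:
    """LEEP computed from source-model softmax outputs on the target dataset.

    probs:   shape (n, |Y_S|), each row a softmax distribution over source labels.
    targets: shape (n,), integer target labels in {0, ..., |Y_T| - 1}.
    """
    n = probs.shape[0]
    num_target = int(targets.max()) + 1

    # Empirical joint distribution P(y_T, y_S): average source-label probability
    # mass accumulated over the examples carrying each target label.
    joint = np.zeros((num_target, probs.shape[1]))
    for y_t in range(num_target):
        joint[y_t] = probs[targets == y_t].sum(axis=0) / n

    marginal = joint.sum(axis=0) + 1e-12   # P(y_S), guarded against empty classes
    conditional = joint / marginal         # P(y_T | y_S)

    # Expected empirical predictor: mix P(y_T | y_S) by the source model's
    # outputs, then average the log-probability assigned to the true labels.
    eep = probs @ conditional.T            # shape (n, num_target)
    return float(np.mean(np.log(eep[np.arange(n), targets])))
```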
Experimental Results and Analysis
The product of the experiments performed herein is 82 data subsets, each with a distinct RF domain, 82 source models trained from random initialization, and 4360 transfer-learned models, half transferred using head re-training and the remaining half transferred using fine-tuning. Associated with each of the 4360 transfer-learned models is a top-1 accuracy value, a LEEP score, and a LogME score. The following subsections present the results obtained from the experiments performed and discuss how well LEEP and LogME perform in the RF modality and how to use transferability metrics to predict post-transfer performance, as well as some insights and practical takeaways that can be gleaned from the results given, including a preliminary understanding of when and how to use TL for RF domain adaptation.

Transferability Metrics for Model Selection in RF Domain Adaptation
When evaluating whether a transferability metric is accurate, the primary consideration is how well the metric reflects or correlates with the performance metric(s) used. Therefore, identifying whether LEEP and/or LogME can be used to select models for RF domain adaptation amounts to identifying how well LEEP and LogME correlate with post-transfer top-1 accuracy. To this end, Figures 4-6 show LEEP and LogME versus the achieved transfer accuracy for each of the parameter sweeps described in Section 3.1. These figures qualitatively show that both LEEP and LogME correlate well with top-1 accuracy after transfer learning, whether through head re-training or fine-tuning, for all domain adaptation settings studied. To quantify whether or not the metrics are useful, two correlation measures are also examined, the Pearson correlation coefficient [47] and the weighted τ [48], specified in the shaded boxes of Figures 4-6. The Pearson correlation coefficient, or Pearson's r, is a measure of linear correlation between two variables used in a wide variety of works, including the original LEEP paper. However, Pearson's r makes a number of assumptions about the data, some of which may not be met by these data. Most notably, Pearson's r assumes that both variables (LEEP/LogME and post-transfer top-1 accuracy, herein) are normally distributed and have a linear relationship. Alternatively, weighted τ, a weighted version of the Kendall rank correlation coefficient (Kendall τ), is used in the original LogME work. Weighted τ is a measure of correspondence between pairwise rankings, where higher performing/scoring models receive higher weight, and it only assumes the variables (LEEP/LogME and post-transfer top-1 accuracy, herein) are continuous. Both Pearson's r and weighted τ have a range of [−1, 1]. These correlation coefficients confirm the results discussed above, and Figure 7 further confirms that the LEEP and LogME scores are highly linearly correlated with each other. From these figures and metrics, it can be concluded that both LEEP and LogME are strong measures for selecting models for RF domain adaptation. However, head re-training is more consistent with LEEP and LogME scores than fine-tuning, as evidenced by higher correlation coefficients. Therefore, when using LEEP or LogME for model selection, using head re-training as a TL method would be more reliable than using fine-tuning. In contrast, fine-tuning, while less reliable than head re-training when used in conjunction with LEEP or LogME for model selection, offers potential for small performance gains over head re-training. In practice, this indicates that unless top performance is of more value than reliability, head re-training should be used for TL when using LEEP or LogME for model selection. In the setting where model accuracy is of the utmost importance, it may be advantageous to try both head
re-training and fine-tuning. It should also be noted that the results shown in Figures 4-6 are consistent with the results presented in the original LEEP and LogME publications where the metrics were tested in CV and NLP settings, supporting the claim that these metrics are truly modality agnostic.Therefore, other modality agnostic metrics seem likely to perform well in RFML settings as well and may be examined as follow-on work. When and How RF Domain Adaptation Is Most Successful 4.2.1. Environment Adaptation vs. Platform Adaptation Recalling that the sweep over SNR can be regarded as an environment adaptation experiment and the sweep over FO can be regarded as a platform adaptation experiment, more general conclusions can be drawn regarding the challenges that environment and platform adaptation present.Results given in prior work [6] indicated that changes in FO are easier to overcome than changes in SNR.That is, environment adaptation is more difficult to achieve than platform adaptation, and changes in transmitter/receiver hardware are likely easier to overcome using TL techniques than changes in the channel environment.This trend is also shown in Figure 7, which presents the LogME scores as a function of the LEEP scores for each of the parameter sweeps performed, showing both the LEEP and LogME scores are significantly higher for the FO sweep than the SNR sweep or SNR and FO sweep, indicating better transferability.Of course, this conclusion is dependent upon the results presented in Section 4.1, which show that LEEP and LogME correlate with post-transfer accuracy.Therefore, in practice, one should consider the similarity of the source/target channel environment before the similarity of the source/target platform, as changes in the transmitter/receiver pair are more easily overcome during TL. Head Re-Training vs. Fine-Tuning In our prior work, results showed that in the cases when TL provided a performance benefit over training from random initialization, head re-training generally outperformed fine-tuning.This trend is also evident in Figures 4-6.However, in the sweep over FO, especially when the LEEP and LogME scores were low, the fine-tuned models markedly outperformed head re-trained models.A low LEEP/LogME score indicates a significant change between the source and target domains, and in this case, a large change in FO.As a result, new features are needed to discern between modulation types, and modifications to the earlier layers of the pre-trained source model, where feature learning occurs, are needed in order to best adapt to the new target domain.However, head re-training is more time efficient and less computationally expensive than fine-tuning, making a strong case for using head re-training over fine-tuning for RF domain adaptation.The computational complexity of using head re-training versus fine-tuning is architecture-and training algorithm-dependent, but as an example, for the CNN architecture used in this work and shown in Table 2, the number of trainable parameters for head re-training and fine-tuning is 1518 and 7,434,243, respectively. 
Transferability Metrics for Predicting Post-Transfer Accuracy Having confirmed that LEEP and LogME strongly correlate with post-transfer top-1 accuracy, it can be concluded that these metrics can be used to compare the transferability of n source models to a single target dataset (i.e., whichever model provides the highest LEEP/LogME score is most likely to provide the best transfer).What follows is an approach to not only select or compare models for RF domain adaptation but also to predict the post-transfer top-1 accuracy without any further training.The approach is time-and resource-intensive to initialize, but once initialized, is fast and relatively inexpensive to compute and shows the predictive capabilities of these metrics.It should be noted that the cost of initialization can be mitigated somewhat by using only a subset of the available source/target pairs.However, the more source/target pairs used, the better the quality of the transfer accuracy prediction and confidence interval. Given n known domains and assuming a single model architecture, to initialize the approach: 1. Run baseline simulations for all n known domains, including pre-training source models on all domains, and use head re-training and/or fine-tuning to transfer each source model to the remaining known domains 2. Compute LEEP/LogME scores using all n pre-trained source models and the remaining known domains. 3. Compute post-transfer top-1 accuracy for all n transfer-learned models, constructing datapoints like those displayed in Figures 4-6. 4. Fit a function of the desired form (i.e., linear, logarithmic, etc.) to the LEEP/LogME scores and post-transfer top-1 accuracies.For example, a linear fit of the form y = β 0 x + β 1 is shown in Figures 4-6 such that x is the transferability score and y is the post-transfer top-1 accuracy. 5. Compute the margin of error by first calculating the mean difference between the true post-transfer top-1 accuracy and the predicted post-transfer top-1 accuracy (using the linear fit), and then multiply this mean by the appropriate z-score(s) for the desired confidence interval(s) [49]. Then, during deployment, given a newly labeled target dataset: 1. Compute LEEP/LogME scores for all pre-trained source models and the new target dataset. 2. Select the pre-trained source model yielding the highest LEEP/LogME score for TL. 3. Use the fitted linear function to estimate post-transfer accuracy, given the highest LEEP/LogME score, and add/subtract the margin of error to construct the confidence interval. Optionally, after transferring to the new labeled target dataset, add this dataset to the list of known domains and update the linear fit and margin of error, as needed. The error in the predicted post-transfer accuracy using the proposed method is shown in Figures 8-10.These plots show that not only are LEEP/LogME highly correlated with post-transfer top-1 accuracy (as shown in Figures 4-6), but the error in the predicted posttransfer top-1 accuracy using a linear fit to the LEEP and LogME scores, respectively, is also highly correlated.More specifically, when the proposed method constructed using LEEP predicts a lower/higher post-transfer accuracy than ground truth, the proposed method constructed using LogME will do the same with the frequencies shown in Table 3.This indicates that these scores could be combined to create a more robust transferability metric and more robust post-transfer accuracy prediction with relative ease, which is left for future work. 
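The initialization and deployment steps above reduce to a simple fit-and-bound computation. The sketch below is one possible rendering under the stated procedure (array names and the example numbers are hypothetical): it fits the linear function to known (score, accuracy) pairs, derives the margin of error from the mean deviation scaled by a z-score, and bounds the predicted accuracy for a new LEEP or LogME score.

```python
import numpy as np

def fit_accuracy_predictor(scores: np.ndarray, accuracies: np.ndarray, z: float = 1.96):
    """Fit y = b0*x + b1 to (transferability score, post-transfer accuracy) pairs
    from the known domains, and compute the margin of error for the desired
    confidence level (z = 1.96 for ~95%)."""
    b0, b1 = np.polyfit(scores, accuracies, deg=1)
    residual = np.abs(accuracies - (b0 * scores + b1))
    margin = z * residual.mean()
    return (b0, b1), margin

def predict_accuracy(score: float, coeffs, margin: float):
    """Bound the post-transfer accuracy for a new target dataset from its
    LEEP or LogME score, without any further training."""
    b0, b1 = coeffs
    estimate = b0 * score + b1
    return estimate - margin, estimate, estimate + margin

# Hypothetical example: scores/accuracies gathered during initialization.
scores = np.array([-1.8, -1.2, -0.9, -0.5, -0.3])
accuracies = np.array([0.42, 0.55, 0.63, 0.78, 0.85])
coeffs, margin = fit_accuracy_predictor(scores, accuracies)
print(predict_accuracy(-0.7, coeffs, margin))
```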
Future Work
As previously mentioned, several new transferability metrics were developed concurrently with this work and are suitable as replacements for LEEP and LogME in any of the above experiments. Therefore, the first direction for future work is replicating this work using alternative metrics, such as OTCE [29], JC-NCE [30], TransRate [28], and GBC [31], to identify if these metrics are also suitable for use in the context of RFML and if these metrics might outperform those used herein. Given that this work supports the claim that LEEP and LogME are modality agnostic, it seems likely that additional transferability metrics that are also modality agnostic by design will also follow this trend. Additionally, the concept of transferability metrics and the experiments performed herein should be extended to inductive TL settings, including multi-task learning and sequential learning settings in which the source and target tasks differ (i.e., adding/removing output classes), as well as to different model architectures, as this work only considered RF domain adaptation. Another direction for future work is the development of new transferability metrics that are more robust than LEEP or LogME alone or are RFML-specific. Most apparently, results discussed previously in Section 4.3 indicate that LEEP and LogME could be combined to create a more robust transferability metric with relative ease. However, while modality agnostic metrics such as LEEP and LogME are shown herein to be suitable for use in RFML, a transferability metric purpose-built for the RF space would likely be more widely accepted amongst traditional RF engineers [9]. Towards such an approach, transferability can be gauged in numerous, potentially synergistic, ways. While metrics such as LEEP and LogME measure the source model's activations in response to the target dataset, a supplementary approach could include quantifying the similarity between the source and target datasets, independent of the source model. Finally, this work provided additional conclusions about how best to use TL in the context of RFML, which should be verified and refined with experiments using captured and augmented data [23]. If verified, these metrics could be used to select RFML models or predict transfer performance for online or incremental learning during continuous deployment to help overcome the highly fluid nature of modern communication systems [9].

Conclusions
TL has yielded tremendous performance benefits in CV and NLP, and as a result, TL is all but commonplace in these fields. However, the benefits of TL have yet to be fully demonstrated and integrated into RFML systems. While prior work began addressing this deficit by systematically evaluating RF domain adaptation performance as a function of several parameters-of-interest, this work introduced two existing transferability metrics, LEEP and LogME. Results presented herein demonstrated that LEEP and LogME correlate well with post-transfer accuracy and can therefore be used for model selection in the context of RF domain adaptation. The addition of these metrics also provided further insight into RF TL performance trends, generally echoing the guidelines for when and how to use RF TL presented in [10]. Finally, an approach was presented for predicting post-transfer accuracy using these metrics within a confidence interval and without further training.
Figure 1. A system overview of a radio communications system. In a radio communications system, the transmitter and receiver hardware and synchronization will be imperfect, causing non-zero values of α_∆[t], ω_∆[t], and θ_∆[t]. The wireless channel provides additive noise, ν[t].
Figure 2. In traditional ML (a), a new model is trained from random initialization for each domain/task pairing. TL (b) utilizes prior knowledge learned on one domain/task, in the form of a pre-trained model, to improve performance on a second domain and/or task. A concrete example for environmental adaptation to signal-to-noise ratio (SNR) is given in blue.
Figure 3. A system overview of the (a) dataset creation, (b) model pre-training and TL, and (c) model evaluation and transferability metric calculation processes used in this work.
Figure 4. The LEEP (a) and LogME (b) scores versus post-transfer top-1 accuracy for the sweep over SNR. The dashed lines present the linear fits for all target domains.
Figure 5. The LEEP (a) and LogME (b) scores versus post-transfer top-1 accuracy for the sweep over FO. The dashed lines present the linear fits for all target domains.
Figure 6. The LEEP (a) and LogME (b) scores versus post-transfer top-1 accuracy for the sweep over both SNR and FO. The dashed lines present the linear fits for all target domains.
Figure 7. The LEEP versus LogME scores for the sweep over SNR, FO, and both SNR and FO. The dashed lines present the linear fit.
Figure 8. The error in the predicted post-transfer accuracy using a linear fit to the LEEP scores (x-axis) and LogME scores (y-axis) for the sweep over SNR.
Figure 9. The error in the predicted post-transfer accuracy using a linear fit to the LEEP scores (x-axis) and LogME scores (y-axis) for the sweep over FO. Note the change in scale compared to Figures 8 and 10.
Figure 10. The error in the predicted post-transfer accuracy using a linear fit to the LEEP scores (x-axis) and LogME scores (y-axis) for the sweep over both SNR and FO.
Table 1. Signal types included in this work and generation parameters.
• Only SNR—Varying only SNR represents an environment adaptation problem, characterized by a change in the RF channel environment (i.e., an increase/decrease in the additive interference, ν[t], of the channel). Twenty-six source data subsets were constructed from the larger master dataset, with SNRs selected uniformly at random from a 5 dB range sweeping from −10 dB to 20 dB in 1 dB steps (i.e., [−10 dB, −5 dB], [−9 dB, −4 dB], ..., [15 dB, 20 dB]), and for each data subset in this SNR sweep, ... the transmitting/receiving devices. Twenty-five source data subsets were constructed from the larger master dataset containing examples with SNRs selected uniformly at random from a 10 dB range sweeping from −10 dB to 20 dB in 5 dB steps (i.e., [−10 dB, 0 dB], [−5 dB, 5 dB], ..., [10 dB, 20 dB]) and with FOs selected uniformly at random from ...
9,041.2
2024-07-25T00:00:00.000
[ "Computer Science", "Engineering" ]
Role of Plasmid in Production of Acetobacter Xylinum Biofilms Acetobacter xylinum has the ability to produce cellulosic biofilms. Bacterial cellulose is expected to be used in many industrial or biomedical materials because of its unique characteristics. A. xylinum contains a complex system of plasmid DNA molecules, and a 44 kilobase (kb) plasmid was isolated from the wild type of A. xylinum. To improve the cellulose-producing ability of A. xylinum, the role of the plasmid in cellulose production was studied. Comparison between wild-type and cured cells of A. xylinum showed a considerable difference in cellulose production. In order to study the relationship between the plasmid and the rate of cellulose production, bacteria were screened for plasmid profile using a modified method for plasmid preparation. This method yields high levels of pure plasmid DNA that can be used for common molecular techniques, such as digestion and transformation, with high efficiency. INTRODUCTION Cellulose is the most abundant biopolymer on earth, recognized as the major component of plant biomass, but also a representative of microbial extracellular polymers [1]. Bacterial Cellulose (BC) belongs to the specific products of primary metabolism and serves mainly as a protective coating, whereas plant cellulose plays a structural role [2]. Cellulose is synthesized by bacteria belonging to the genera Acetobacter, Rhizobium, Agrobacterium and Sarcina. Its most efficient producers are the Gram-negative acetic acid bacteria Acetobacter xylinum (reclassified as Gluconacetobacter xylinus), which have been used as model microorganisms for basic and applied studies on cellulose [3]. Acetobacter xylinum produces an extracellular gel-like material or pellicle, which comprises a random assembly of cellulose ribbons composed of a number of microfibrils. Extracellularly synthesized microbial cellulose differs from plant cellulose in its high crystallinity and purity (free of lignin and other biogenic products), high water-absorption capacity and mechanical strength in the wet state [4]. Several applications have been proposed for this cellulosic layer. Because of its unique properties, resulting from its ultrafine reticulated structure, BC has found a multitude of applications in the paper, textile and food industries and as a biomaterial in cosmetics and medicine [5]. One application of bacterial cellulose is the treatment of burn infections and the healing of lesions; modern medical biotechnology has accepted artificial skins as a valid prospect, and several properties advantageous for its use as a temporary skin substitute have been recognized for the biofilms [3,5]. Its successful application by dermatologists and plastic surgeons has included human second- and third-degree skin burns, skin grafts, face peeling and infections; enhanced proportions of fibroblasts, collagen, blood vessels and granulation tissue were seen in the healing wound as a result of using the cellulosic pellicle. Wider application of this polysaccharide obviously depends on the scale of production and its cost. Therefore, basic studies run alongside intensive research on strain improvement and production process development. In this research, the role of the plasmid in cellulose production was studied by eliminating the plasmid DNA (curing). Curing is an important step in identifying the phenotypic traits encoded by a given plasmid.
Plasmid curing studies have been focused on Escherichia coli, Salmonella typhimurium, Staphylococcus aureus, to a lesser extent, strains of degradative Pseudomonads. A multitude of different chemicals including inhibitors of DNA replication, intercalating drugs, bacterial surface agents and Sodium Dodecyl Sulphate (SDS) at elevated temperature has been used as curing agents. However, apart from its spontaneous loss, very little is known about how to cure the plasmids of A. xylinum. The plasmids may prove biotechnologically important as it has been implicated in the production of microbial cellulose [6] . In this study, Acridin orange properties were exploited for curing of the A. xylinum plasmid. MATERIALS AND METHODS Bacterium: A freeze-dried culture of A. xylinum was obtained from the Persian Culture Type Collection of Microorganisms. Following recultured in Schramm Hestrin (SH) medium , the culture was found to be pure and biochemical and morphological tests revealed that the organism conforms to the generic description listed for A. xylinum in Bergey's Manual of Systematic Bacteriology [7] . Culture conditions: The bacteria was inoculated with 10 8 to 10 9 cell per mL in SH medium at 28°C for a week under static conditions to produce cellulose pellicles. For plasmid isolation, Acetobacter xylinum was grown in SH agar. Determination of wet and dry weights of pellicles:The pellicle was lifted from the plate with a bent glass rod and allowed to drip for 30 min. The pellicle was then placed in a pre weighed plastic weighing plate. The weighing plate and pellicle were then placed in a drying oven for 6 h at 80ºC. After being dried and removed from the oven, the weighing boat and pellicle were weighed immediately. Pellicles were also washed with running tap water prior to the wet and dry weight determinations. There were no percentage differences between the wet and dry weight comparisons of pellicles that had not been washed before being allowed to drip for 30 min [8] . Rapid screening for plasmids: A small number of cells were picked by touching a colony with a toothpick. They were then inoculated into a microfuge tube containing 300 µL of SH broth. After overnight growth, the cells were pelleted in a microfuge (10,000 rpm, 2 min). The cells were then suspended by vortexing in 20 µL of gel-loading mix (0.25% bromophenol blue and 30% glycerol). Then 40 µL each of chloroform and phenol (saturated with 1.0 M Tris-HCl, pH 8.0) was added. The mixture was vortexed at full speed for 1 min followed by centrifugation for 10 min at 12,000 rpm. Then 10 µL of the aqueous fraction was subjected to electrophoresis on 0.7% agarose minigel (5.2×6.0 cm) with TAE buffer (40 mM Tris-acetate, pH 8.0 containing 2 mM Na 2 -EDTA) at 100 volts for 30 min. The gel was stained with ethidium bromide (0.5 µgmL -1 ) and the DNA bands were visualized under a UV transilluminator. Plasmid isolation: Plasmid isolation was performed by a modified method of Sambrook [9] . The protocol is as follows: Because Acetobacter xylinum produces cellulose in broth culture, SH medium was used. The bacterium were grown in SH medium supplemented with chloramphenicol, bismuth nitrate and sodium salicylate . The bacteria were suspended by mixing with a vortex mixer in suspension buffer, TE buffer (10 mM Tris, 1 mM EDTA), pH 8.0. 
An aliquot (200 µL) of freshly prepared lysis buffer (0.2 N NaOH containing 3% SDS) was added and after the contents were rapidly and thoroughly mixed without vortexing, the Eppendorf tube was allowed to stand on ice for 5 min. An aliquot (150 µL) of ice-cold neutralizing solution (acidic potassium acetate constituted by mixing 11.5 mL of glacial acetic acid, 28.5 mL of distilled water and 60 mL of 5 M potassium acetate) was added and the contents were gently mixed by inversion to neutralize the pH in the cell lysate which was noted to be viscous. The SDS and associated proteins as well as cell debris were then removed by centrifugation at 12000 rpm for 5 min at 4°C. The supernatant containing the plasmid DNA and possibly some RNA was transferred into a fresh Eppendorf tube containing 20 µL of DNase-free RNase (10 mgmL -1 ) to ensure the removal of any traces of RNA. After the addition of DNase-free RNase and incubation for 20 min in 37ºC, an equal volume of phenol/chloroform/isoamyl alcohol (DNA grade) was added to the supernatant. The contents of the Eppendorf tube were mixed gently but thoroughly and then centrifuged at 12000 rpm for 5 min at 4 °C in a microfuge. With the aid of a pipette, the supernatant (upper layer) was transferred to a fresh Eppendorf tube and great care was taken to avoid the transfer of the white precipitate present at the interface, if the upper layer was not clear owing to the presence of large quantities of polysaccharides, this step was repeated. Two volumes of ice-cold ethanol were then added to the supernatant and after thorough mixing and incubation at ¯7 0 °C or room temperature for 60 min, the plasmid DNA was precipitated by centrifugation at maximum speed in a microfuge at room temperature for 10 min, the supernatant was discarded and the Eppendorf tube was then allowed to stand inverted on a paper towel to drain away any excess ethanol. The plasmid DNA pellet was then washed twice with 1 mL of ethanol (70%) at 4 °C and after discarding the supernatant and removing large droplets of ethanol with the aid of sterile cotton buds or fine tissues, any traces of ethanol were further removed. Electrophoresis of the plasmid was carried out overnight on 0.7% agarose gels at 3.0 Vcm -1 in TAE buffer. Restriction fragments, however, were electrophoresed using TBE buffer at 6.0 Vcm -1 for 4 h as previously described [8,10,11] . Curing of Acetobacter xylinum: Acetobacter xylinum cured by growth at 28ºC in SH medium containg a subinhibitory concentration of Acridin orange (5 to 600 g). Cultures containing the highest concentration of Acridin orange which growth was clearly visible were diluted and spread into SH plates. Resulting clonies were tested for chloramphenicol sensitivity and electrophoresis. A cell lysate of A .xylinum and its cured derivative were cultivate as described in culture conditions of materials and methods and the presence or absence of plasmid DNA was tested using agarose gel electrophoresis. A purified sample of A. xylinum plasmid as well as lambda digested with HindIII) were also included. Restriction analysis of isolated plasmid DNA: Plasmids isolated from A. xylinum were digested with two restriction enzymes (BamHI and HindIII) at the same time. Aliquots of A. xylinum plasmid DNA, prepared as described above, were digested overnight at 37°C with ten units of various restriction enzymes and the fragments thus generated were fractionated by electrophoresis on 0.7% agarose gel as described above. 
The bands, which were visualized under ultraviolet light, were compared with a standard marker. RESULTS AND DISCUSSION A. xylinum is an acetic acid bacterium which has the ability to produce cellulose extracellularly. To improve the cellulose-producing ability of A. xylinum, recombinant DNA techniques seem to be a proper approach. The genus Acetobacter contains several species which are being extensively studied in connection with acetic acid production and cellulose formation [12]. There have been several reports of spontaneous mutation in A. xylinum strains affecting both morphological and physiological properties, including loss of the ability to produce cellulose [13,14]. Bacteria of this genus are not genetically well characterized and genetic investigations of the phenotypic instabilities have not been reported. To improve the cellulose-producing ability of A. xylinum, we studied the role of the plasmid in the production of cellulose. Most strains of A. xylinum examined contain a rather complex system of plasmids [15]. Alterations in the plasmid profile were found in both cellulose-negative mutants and cellulose-producing revertants of A. xylinum. Cells of A. xylinum produce large amounts of exopolysaccharide, which interferes with the extraction of plasmid DNA [16,17]. To repress capsular polysaccharide production, bacteria were cultured in SH medium containing bismuth nitrate and sodium salicylate. After treatment, bacterial cells were more readily lysed in alkaline detergents. The resulting plasmid preparations contained virtually no capsular polysaccharide and relatively small quantities of lipopolysaccharide and protein, yet they produced yields of nucleic acids similar to those of conventional plasmid preparations. Conventional preparations from encapsulated organisms were largely insoluble and appeared as smears following agarose gel electrophoresis, with indefinite plasmid banding. Plasmids prepared by the new method were highly soluble in conventional buffers and exhibited high-resolution plasmid banding patterns in agarose gels. The method proved effective with encapsulated organisms; however, the plasmid isolation method is not suitable for every bacterium because of the inhibitory effects of bismuth. Thus, removal of contaminating bacterial surface structures enabled the rapid isolation and characterization of plasmids from mucoid isolates, without the use of organic solvents, CsCl gradients, or expensive, disposable columns [16]. In this study, a simple and rapid method for screening plasmids of A. xylinum has been developed. It is especially useful when sample sizes are large. This method can be achieved in one step simply by phenol/chloroform treatment of the cells pelleted from a volume as small as 300 µL. The experiment from growing cells to the completion of plasmid extraction can be carried out using the same microfuge tube. The whole procedure takes about 40 min from harvesting the bacterial cultures to visualization of the DNA bands in agarose gel. Using this method, plasmids in A. xylinum were detected. In conclusion, this new method for the isolation of plasmid DNA from lysis-resistant A. xylinum has major advantages over the classic method that requires phenol extraction and cesium chloride gradient centrifugation. This new method does not employ highly toxic reagents, is considerably cheaper and is easy to perform. This method provides high yields of pure DNA, which can be used for restriction analysis, transformation and other molecular techniques. 
This plasmid isolation technique is also applicable to other Gram-negative bacteria and will improve the efficient engineering of biotechnologically and therapeutically useful A. xylinum recombinants for cellulose production. To construct a physical map, the isolated plasmid was cut with different restriction endonucleases and subjected to agarose gel electrophoresis. The results showed that it carried single restriction sites for BamHI and HindIII. The molecular size of this plasmid was estimated to be 4.4 kb based on the sum total of the restriction fragment lengths (Fig. 1; lanes include a size marker and plasmid digested with BamHI). Isolation of the plasmid has been performed reproducibly several times, from both stationary- and log-phase cells. This result demonstrates that the band pattern observed in agarose gel electrophoresis is a stable characteristic. From the present data the following conclusions can be drawn: (I) the studied wild-type A. xylinum contains a 4.4 kb plasmid; (II) cured strains have identifiable changes in their cellulose production compared to the wild type, indicating that this plasmid may be involved in cellulose biosynthesis. Successful isolation of the plasmid shows that it allows insertion of foreign genes, demonstrating the potential of A. xylinum to be developed into a vector for gene cloning.
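The plasmid size above is obtained by summing restriction fragment lengths read from the gel against the lambda/HindIII marker. Below is a minimal sketch of that estimation, assuming the usual log-linear calibration between fragment size and migration distance; the ladder migration readings and the sample band positions are hypothetical placeholders, not measurements from this work.

```python
import numpy as np

# Standard lambda/HindIII ladder sizes (bp) with hypothetical migration distances (mm).
ladder_sizes_bp = np.array([23130, 9416, 6557, 4361, 2322, 2027, 564])
ladder_migration_mm = np.array([12.0, 18.5, 22.0, 26.5, 34.0, 35.5, 48.0])  # assumed readings

# Calibration curve: log10(size) is approximately linear in migration distance.
slope, intercept = np.polyfit(ladder_migration_mm, np.log10(ladder_sizes_bp), 1)

def estimate_size_bp(distance_mm: float) -> float:
    """Interpolate a fragment size from its migration distance via the calibration fit."""
    return 10 ** (slope * distance_mm + intercept)

# Hypothetical band positions for the plasmid double-digested with BamHI and HindIII.
sample_bands_mm = [30.0, 40.0]
fragment_sizes = [estimate_size_bp(d) for d in sample_bands_mm]
plasmid_size_kb = sum(fragment_sizes) / 1000.0  # plasmid size = sum of fragment lengths

print([round(s) for s in fragment_sizes], round(plasmid_size_kb, 1), "kb")
```

With real migration readings, the printed sum would correspond to the estimated plasmid size reported from the gel.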
3,277.4
2005-09-30T00:00:00.000
[ "Biology", "Environmental Science", "Materials Science" ]
Protection Principle for a Dc Distribution System with a Resistive Superconductive Fault Current Limiter A DC distribution system, which is suitable for access to distributed power generation and DC loads, is one of the development directions in power systems. Furthermore, it could greatly improve the energy efficiency and reduce the loss of power transportation. The huge short circuit current is always a great threat to the safety of the components, especially the capacitors and diodes. A resistive superconductive fault current limiter (SFCL), which could respond quickly once a fault happens and limit the fault current to a relatively low level, becomes a good solution to this problem. In this paper, the operational principle of the resistive SFCL is introduced first, and then, the DC short-circuit fault characteristic of the DC distribution system with the SFCL is analyzed and the effectiveness of the SFCL verified. In order to realize the selectivity of the protection in the DC distribution system with SFCL, a new transient current protection principle based on Ip (the peak value of the current) and tp (the transient time that the current takes to reach its peak value) is proposed. Finally, a model of a 10-kV DC distribution system with an SFCL is established and simulated in PSCAD/METDC. Simulation results have demonstrated the validity of the analysis and protection principle. Introduction Nowadays, more and more electric appliances, such as electric vehicles, LED lamps, mobile phones and computers, are becoming DC consumers with the development of power electronics technology.Scholars are now putting forward the DC distribution system based on the widely studied and applied distributed generation, which supplies DC power.Compared with the AC distribution system, a DC distribution system based on a voltage source converter (VSC) presents much more advantages, such as better power quality, larger power transportation capacity, higher reliability, being more economical, having lower energy waste, and so on [1][2][3][4][5].Therefore, the research on DC the distribution system has drawn more and more attention. However, in the DC distribution system based on a VSC, the Insulated Gate Bipolar Transistor (IGBTs) will be blocked from self-protection during the DC short-circuit fault.In this case, the VSC turns into an uncontrolled rectifier, as the freewheeling diodes will feed the fault [6].In the meantime, the DC-link capacitance discharges, and the DC current rises rapidly to a relatively large value, which may be dozens of times the normal one and poses a considerable threat with respect to the safety issue.As a result, a current-limiting device is necessary, and the resistive superconductive fault current limiter (SFCL) is a good choice because of its fast response and low power loss characteristics [7]. 
Currently, SFCLs are mainly applied to the AC system. For example, a 220-kV saturated iron-core superconductive fault current limiter (SISFCL) has been installed in a high voltage transmission system in Tianjin, China. There are few studies on how the resistive SFCL is to be used in the DC system: some have evaluated its performance based on the effect of limiting the voltage and current only [8,9], while some studied the suitable location for the SFCL in the DC system considering the influence of the current only [7,9,10]. However, the influence of the SFCL on the fault characteristic and relay protection of the DC distribution system has not been investigated until now. At present, most of the research about relay protection of the DC distribution system does not take current limiters into consideration [11][12][13][14][15][16][17][18]. These protection schemes all rely on a large short-circuit current and a low voltage. The impact of the SFCL on the protection of the DC distribution system needs to be further investigated, for introducing the SFCL will bring about problems with respect to protection. In this paper, the fault characteristic of the DC distribution system with the SFCL and the influence of the SFCL on protection of the DC distribution system are studied. Then, a new transient current protection principle based on Ip (the peak value of the current) and tp (the transient time that the current takes to reach its peak value) is introduced and investigated in detail. Finally, many fault simulations are completed to verify the accuracy of the protection principle. Resistive Superconducting Fault Current Limiter A resistive SFCL is composed of a superconducting cable and a shunt resistance. The equivalent model of the SFCL is a variable resistance RSFCL and a shunt resistance in parallel, which is shown in Figure 1. The shunt resistance is necessary for reducing the overvoltage. The resistive SFCL uses the transformation between the superconducting state and the normal resistive state to limit the increase of the current. When a fault happens, the resistive SFCL responds rapidly, so that the fault current is limited [8]. In this paper, we choose Bi2212 as the material of the SFCL. The transformation of the resistive SFCL can be divided into three states as follows [8,19,20]: (1) Superconducting state: In this state, the current density going through the SFCL is below the critical value, and the resistance of the SFCL is very small. The electric field is a power-law function of the ratio J/Jc(T), where Ec = 1 µV/cm; 5 ≤ α ≤ 15; J is the current density; and Jc(T) is the critical current density, which is dependent on the temperature of the material. Jc(T) is determined by the temperature relative to the critical temperature, where Tc is the critical temperature expressed in units of K and T0 is the initial temperature. Here, Tc = 95 K; T0 = 77 K; and Jc(77) = 1.5 × 10⁷ A/m². (2) Flux flow state: When the current density exceeds the critical value, the SFCL enters this state, and the electric field starts increasing. Then, the resistance of the SFCL begins increasing, and as a result, the current starts to be limited and the temperature of the material begins to increase. Additionally, the increase of temperature makes Jc(T) decrease, so the electric field will increase continuously. In this state, the electric field follows a steeper power law in the current density, where E0 = 0.1 V/m is the electric field during the transition from the superconducting state to the flux flow state and 2 ≤ β ≤ 4. 
(3) Normal resistive state: After the temperature gets higher than the critical temperature, the SFCL enters the normal resistive state. In this state, the resistance and the electric field are mainly dependent on the current density and the temperature, with the electric field proportional to the current density through the normal conducting resistivity ρ(Tc); here, ρ(Tc) = 1 × 10⁻⁶ Ω·m. Overall, the resistance of the material is RSFCL = E·lsc/isc, where isc is the current going through the superconducting wire and lsc is the length of the wire. Furthermore, in all three states, the temperature evolves according to a heat diffusion equation, where θsc = 1/[κ(lsc·2πr + 2S)]; κ = 1.5 × 10³ W/(K·m²); r is the radius of the superconducting wire; and C = 1.58T (in units of J/(kg·K)). Here, the length of the superconducting wire is lsc = 200 m, and the volumetric density is ρv = 6 g/cm³. When the critical current is chosen, the cross-sectional area can be calculated by S = Ic/Jc. The resistive SFCL model is established in PSCAD/EMTDC. In order to limit the surge current discharged by the capacitor, the resistance of the SFCL needs to become large enough quickly, before the current gets to the peak value. In this paper, the length of the superconducting wire is assumed to be 200 m. In the DC distribution system, when a fault happens at 0.5 s, the variation of the resistance of the SFCL with respect to time is as shown in Figure 2. Analysis of Fault Characteristic in the DC Distribution System with the SFCL When a DC short-circuit fault happens on the DC side of the VSC, the IGBTs are blocked from self-protection. All of the diodes are blocked because of the reverse voltage. As a result, the AC system is separated from the DC side, and the AC current drops to zero [6]. If an SFCL is not used, the fault characteristic can be divided into four stages [16], which are shown in Figure 3. In the first stage, the capacitor discharges rapidly, and the DC current gets to a large value. This is a great threat to the safety of the capacitors. Additionally, in the third stage, the AC system is equivalent to a three-phase short circuit. These are unfavorable conditions for the system, and the large AC current is harmful for the diodes. Whether the third stage appears is decided by the damping property of the circuit. However, the installation of a resistive SFCL can limit the discharge of the capacitor and make the third stage disappear. Therefore, the resistive SFCL can limit the fault current on both the AC and DC sides, and the system will be safer. Then, the breaking capacity of the DC circuit breakers could be reduced. The fault characteristic under a particular situation of the simulation system is shown in Figures 4 and 5. As shown in Figures 4 and 5, the peak current is reduced from 14 kA down to 2.20 kA after the SFCL is installed, about an 84.3% decrease. The peak current is limited because of the fast operation of the SFCL. Furthermore, due to the SFCL, the time that the current takes from the beginning of the fault to the peak becomes shorter. In the steady state of the fault, the current is limited to a very low level by the increasing resistance of the SFCL. At this time, the current is close to 0.3 kA, even smaller than the normal value. 
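The three-state description above can be collected into a single E–J characteristic from which the SFCL resistance follows. The sketch below is a minimal Python implementation assuming the widely used power-law forms for the superconducting, flux-flow, and normal states; only the numerical constants are taken from the text, so the exact functional forms, the chosen exponents α and β, and the cross-section S should be read as assumptions rather than the paper's own equations.

```python
# Parameter values quoted in the text; functional forms below follow a commonly
# used three-state power-law description and are an assumption of this sketch.
Ec = 1e-4        # critical electric field, V/m (1 uV/cm)
E0 = 0.1         # flux-flow transition field, V/m
alpha, beta = 10.0, 3.0   # exponents within the quoted ranges 5-15 and 2-4
Tc, T0 = 95.0, 77.0       # critical and initial temperatures, K
Jc77 = 1.5e7     # critical current density at 77 K, A/m^2
rho_Tc = 1e-6    # normal-state resistivity at Tc, Ohm*m
l_sc = 200.0     # superconducting wire length, m
S = 1e-5         # assumed cross-sectional area, m^2 (S = Ic/Jc for a chosen Ic)

def Jc(T):
    """Critical current density; a linear fall-off toward Tc is assumed here."""
    return Jc77 * (Tc - T) / (Tc - 77.0)

def electric_field(J, T):
    """Piecewise E(J, T): superconducting, flux-flow, and normal-resistive states."""
    if T >= Tc:                                   # normal resistive state
        return rho_Tc * (T / Tc) * J
    if J <= Jc(T):                                # superconducting state
        return Ec * (J / Jc(T)) ** alpha
    # flux-flow state (standard interpolating form, assumed)
    return E0 * (Ec / E0) ** (beta / alpha) * (Jc77 / Jc(T)) * (J / Jc77) ** beta

def R_sfcl(i_sc, T):
    """Equivalent resistance of the superconducting element, R = E * l_sc / i_sc."""
    J = abs(i_sc) / S
    if J == 0.0:
        return 0.0
    return electric_field(J, T) * l_sc / abs(i_sc)

# Example: resistance seen by a 2 kA fault current once the element has warmed to 80 K.
print(R_sfcl(2000.0, 80.0))
```

Coupled with the heat-balance equation quoted above, such a model reproduces the rapid resistance rise plotted in Figure 2.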
In addition, the decrease of the DC voltage is also limited by the SFCL. The rate of descent is reduced greatly. Additionally, the steady value of the voltage rises from about 1 kV up to 4.4 kV, about a 77% increase. Furthermore, once the SFCL is installed, the voltage will remain at a relatively high value, and the third stage disappears. Therefore, the resistive SFCL overcomes the equivalent three-phase short circuit on the AC side. For different distances of faults, the voltage of the steady state changes little. The analysis of the fault characteristic is carried out in detail as follows. The Transient State of the DC Short-Circuit Fault with the Resistive SFCL Stage 1: In the initial period of the fault, when the resistive SFCL is included and the IGBTs are blocked, the capacitor, the cable and the SFCL compose the discharge circuit on the DC side. During this period, the voltage begins to decrease, and the DC current will rise to the maximum, then decrease. The equivalent circuit is shown in Figure 6. According to the circuit, the voltage and current of the DC side can be calculated from the equations of this discharge loop, where idc is the current of the circuit; udc is the voltage of the capacitor; C is the capacitance; R and L are the equivalent parameters of the cable from the exit of the VSC to the fault position; and RSFCL is the resistance of the SFCL, which is nonlinear. Additionally, it is easy to see that the rate of descent of the voltage and the peak current are both reduced greatly. The fault waveforms of Stage 1 are shown in Figure 5. Stage 2: With the decrease of the DC voltage, the diodes start to turn on after the DC voltage is less than the AC voltage. The AC current begins to rise from zero. Then, the fault reaches the steady state gradually. The equivalent circuit at this stage is the same as the circuit in the steady state, which is shown in Figure 7. However, the discharge of the capacitor plays a major role at this stage. Additionally, the fault waveform of Stage 2 is shown in Figure 5. The Steady State of the DC Short-Circuit Fault with the Resistive SFCL In the steady state, the VSC changes into an uncontrolled rectifier, as shown in Figure 7. The output voltage is impacted by the flow angle of the diodes. Additionally, the voltage changes between 2.34U and 2.45U, where U is the RMS value of the phase voltage of the AC system. Because the resistance of the SFCL is much greater than the impedance of the cable, the analysis can be done ignoring the inductance. Then, the flow angle (θ) of the diodes is influenced by ωRC, and the relationship is shown in Figure 8. From Figure 8, it is known that the flow angle (θ) will decrease with the increase of ωRC. The critical condition is ωRC = √3. When ωRC > √3, the AC current of the uncontrolled bridge rectifier will be discontinuous, as shown in Figure 5. With the large increase of the resistance in the fault circuit, the current in the steady state may be close to or less than the value in the normal condition. Additionally, in the steady state of the fault, the voltage increases greatly compared with the condition that the SFCL is not installed. On the other hand, the current in the steady state of the fault cannot be used for protection; otherwise, the protection will be incorrectly tripped. 
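Stage 1 is governed by the DC-link capacitor discharging through the cable resistance and inductance in series with the time-varying SFCL resistance. The following is a minimal numerical sketch of that loop, assuming a simple explicit-Euler integration and a placeholder R_SFCL(t) ramp rather than the coupled electro-thermal model; the capacitance, fault distance, and initial conditions are illustrative values, with only the per-kilometre line parameters taken from the case study.

```python
# Stage-1 equivalent circuit: capacitor C discharging through cable (R, L) and the SFCL.
C = 10e-3               # DC-link capacitance, F (assumed)
dist_km = 5.0           # assumed distance to the fault
R = 0.078 * dist_km     # cable resistance, Ohm (0.078 Ohm/km, from the case study)
L = 0.48e-3 * dist_km   # cable inductance, H (0.48 mH/km, from the case study)

def R_sfcl(t):
    """Placeholder SFCL resistance: ramps from ~0 to 10 Ohm within 2 ms of the fault."""
    return 10.0 * min(t / 2e-3, 1.0)

dt, t_end = 1e-6, 20e-3
u, i = 10e3, 0.35e3          # initial DC voltage (10 kV) and pre-fault current (assumed)
i_peak, t_peak = i, 0.0
for step in range(int(t_end / dt)):
    t = step * dt
    du = -i / C * dt                               # C du/dt = -i
    di = (u - (R + R_sfcl(t)) * i) / L * dt        # L di/dt = u - (R + R_SFCL) i
    u, i = u + du, i + di
    if i > i_peak:
        i_peak, t_peak = i, t

print(f"peak current {i_peak/1e3:.2f} kA reached after {t_peak*1e3:.2f} ms")
```

With a sufficiently fast resistance rise, the peak current and the time to reach it both drop sharply, which is the behaviour exploited by the protection principle introduced next.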
Transient Current Protection and Its Coordination Overcurrent protection using the peak current is widely used in the DC distribution system. As we know, before the integration of the SFCL, overcurrent protection using the peak current could coordinate very well [16]. It can satisfy the requirements for protection. However, after the integration of the SFCL, the difference among the peak currents of different fault positions becomes small. In this situation, if we still apply the overcurrent protection, the reliability coefficient will be too small to satisfy the requirements for sensitivity and reliability. Therefore, the overcurrent protection needs to be improved. In this paper, it is assumed that the fault resistance of the DC short-circuit fault between the cables is very small. According to the analysis above, we can get the relationship between the fault position and peak current, as shown in Figure 9. If the SFCL is included, the peak current has an approximately linear relationship with the position of the fault. However, the peak current changes only a little with the increase of the fault distance. In contrast, the time that the current takes to reach the maximum varies greatly when a fault happens at different positions, as shown in Figure 10. Then, if the ratio of the peak current to the time is used for the protection, as shown in Figure 11, the sensitivity and reliability can be ensured. Transient Current Protection Under the condition that the SFCL is installed, transient current protection is set according to the principle that the threshold is above the ratio of Ik,peak,end (the peak of the short circuit current for a fault at the end of the line) to ∆tk,peak,end (the time that the current takes to reach that maximum from the beginning of the fault). The operating criterion is that the measured ratio Ipeak/∆t exceeds the threshold, where Ipeak is the maximum of the short circuit current during the fault; ∆t is the time that the current takes to reach the maximum from the start of the fault; and Krel is the reliability coefficient, which can be 1.2-1.3. Therefore, transient current protection can only protect about 80% or less of the line. However, the protection will operate without delay once the ratio exceeds the threshold. Time-Limited Under Voltage Protection Started by the Transient Current Time-limited overcurrent protection coordinates with the downstream line by the time delay. The time delay can be 0.1 s or more in the DC system [16], considering the coordination with the downstream line and the operation time that the DC circuit breaker takes. However, when the SFCL is installed, the current has already reached the steady state at 0.1 s, and its value is close to the normal value after the fault. Therefore, traditional time-limited overcurrent protection cannot be applied directly to the DC system with the SFCL. 
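A minimal sketch of the transient current element described above is given below, assuming the threshold is Krel times the end-of-line ratio; since the displayed criterion and threshold equations are only summarized in the text, this exact form is an assumption consistent with the surrounding description, and the numbers in the example are illustrative rather than values from Table 1.

```python
def transient_current_threshold(i_peak_end_ka: float, dt_peak_end_s: float,
                                k_rel: float = 1.25) -> float:
    """Assumed threshold for transient current protection: K_rel * (Ik,peak,end / dt,peak,end).

    i_peak_end_ka : peak short-circuit current for a fault at the far end of the line (kA)
    dt_peak_end_s : time for the current to reach that peak (s)
    k_rel         : reliability coefficient, 1.2-1.3 per the text
    """
    return k_rel * i_peak_end_ka / dt_peak_end_s

def transient_current_trip(i_peak_ka: float, dt_s: float, threshold: float) -> bool:
    """Instantaneous trip: operate when the measured ratio Ipeak/dt exceeds the threshold."""
    return (i_peak_ka / dt_s) > threshold

# Illustrative numbers only (not taken from Table 1):
thr = transient_current_threshold(i_peak_end_ka=1.8, dt_peak_end_s=2.0e-3)
print(thr, transient_current_trip(i_peak_ka=2.2, dt_s=1.0e-3, threshold=thr))
```

Because Krel > 1, a fault near the end of the line produces a ratio below the threshold, which is why the element covers only about 80% of the section and needs a backup function.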
However, the DC voltage will decrease to a relatively low level at 0.1 s after the fault. When the resistive SFCL is installed, the voltage in the steady state changes little with the change of the fault position. Therefore, the under voltage protection alone cannot ensure the selectivity. Then, the transient current protection is combined with the under voltage protection, called the time-limited under voltage protection started by the transient current. This kind of protection still protects the whole line and does not exceed the range of the transient current protection of the downstream line. It coordinates with the downstream line by a time delay. The starting value is the threshold of the transient current protection, and it can guarantee the selectivity of protection. After the time delay, whether the breaker operates is decided by the under voltage protection. The operating criterion is that, once the element is started by the transient current, the breaker trips after the time delay if the DC voltage remains below the under voltage threshold. The reliability coefficient of this element can be 1.1-1.2; the under voltage threshold is set with respect to the rated voltage UN; the inherent opening time of the transient current protection is accounted for in the coordination; and ∆t is the time delay, always ∆t = 0.1-0.3 s. Case Study A 10-kV DC distribution system with the resistive SFCL is simulated by PSCAD/EMTDC. The structure of the system is shown in Figure 12. It is assumed that a DC short-circuit fault occurs at 0.5 s at different positions on the DC side. The fault resistance is 0.001 Ω. Each of the lines in the system is 10 km long. The equivalent parameters of the line are R = 0.078 Ω/km and L = 0.48 mH/km. When the system operates in the normal condition, the currents that go through each line are Iline1 = 0.35 kA, Iline2 = 0.252 kA and Iline3 = 0.157 kA. According to the analysis above, the thresholds of protection are listed in Table 1. Because Line 3 is the last section of the transmission line, traditional overcurrent protection can be applied for Line 3. Therefore, the threshold can be several times the load current, and the instantaneous overcurrent protection protects the whole line. In this situation, the time-limited overcurrent protection is the backup protection of Line 3. Then, the principle of the protection mentioned above is verified in this system, and the result is listed in Table 2. From the result, it can be seen that the presented protection principle can clear the fault quickly and accurately. The transient current protection can protect about 80% of the length of the section. Additionally, it operates without a time delay. With the increase of the fault distance, the rise of the current slows down. Therefore, the operation time of the transient current protection becomes longer, but it still operates very quickly. On the other hand, the time-limited under voltage protection started by the transient current can protect the whole length of the line with a time delay. It can operate at about 0.2 s or more after the fault. This kind of protection may operate slowly, but it can guarantee the selectivity between the upstream line and the downstream line. Therefore, for the DC distribution system including the SFCL, this principle of protection can operate within a short time for feeder faults, and the coordination of the upstream and downstream protective relays is ensured. 
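The coordination logic of the backup element can be summarized in a few lines. The sketch below assumes the decision structure described above (started by the transient current element, delayed, then decided by the voltage); the under voltage setting used in the example is an assumed value, not a setting from Table 1.

```python
def undervoltage_backup_trip(started: bool, elapsed_s: float,
                             u_dc_kv: float, u_set_kv: float,
                             delay_s: float = 0.2) -> bool:
    """Time-limited under voltage protection started by the transient current element.

    started    : True once the transient-current ratio has exceeded its starting value
    elapsed_s  : time since the element started (s)
    u_dc_kv    : measured DC voltage (kV)
    u_set_kv   : under voltage setting (kV) -- assumed here, e.g. a fraction of UN
    delay_s    : coordination time delay, 0.1-0.3 s per the text
    """
    return started and elapsed_s >= delay_s and u_dc_kv < u_set_kv

# Example: steady-state fault voltage of ~4.4 kV with an assumed 5 kV setting.
print(undervoltage_backup_trip(started=True, elapsed_s=0.25, u_dc_kv=4.4, u_set_kv=5.0))
```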
Conclusions A resistive SFCL has good performance for current limiting in the DC distribution system. It is suitable for a system with a high fault current and a high current rising speed. However, the integration of the SFCL has a serious influence on the coordination of the protection. In this paper, the influence of the SFCL on the protection and the fault characteristic is analyzed in detail. Then, the transient current protection is presented, and the time-limited under voltage protection started by the transient current is proposed. Finally, the principle of the protection is verified in PSCAD/EMTDC. The result demonstrates that this principle of protection can ensure the coordination of the upstream and downstream protective relays. Nevertheless, a DC short-circuit fault with fault resistance is not considered in this paper, which would be the main work in the future. Figure 2. Variation of the resistance of the SFCL. Figure 3. The four stages for the rectifier during a DC short-circuit fault without the SFCL. Figure 4. The fault characteristic of the voltage source converter (VSC) without the SFCL during a DC short-circuit fault. Figure 5. The fault characteristic of the VSC with the SFCL during a DC short-circuit fault. Figure 6. The equivalent circuit of the VSC in the initial phase of the fault. Figure 9. Peak currents of different fault positions. Figure 10. The time that the current takes to reach the peak versus the fault position. Figure 11. The ratio of the peak current to the time versus the fault position. Figure 12. The structure of the DC distribution system, including the SFCL. Table 1. The thresholds of protection. Table 2. The action situation of each CB: percentage of the line and the action of the breakers (time for receiving the trip instruction, s). Note: # indicates that the circuit breaker (CB) of the line will not operate.
4,807.6
2015-05-26T00:00:00.000
[ "Physics" ]
Biomimetic Phospholipid Membrane Organization on Graphene and Graphene Oxide Surfaces: A Molecular Dynamics Simulation Study: Supported phospholipid membrane patches stabilized on graphene surfaces have shown potential in sensor device functionalization, including biosensors and biocatalysis. Lipid dip-pen nanolithography (L-DPN) is a method useful in generating supported membrane structures that maintain lipid functionality, such as exhibiting specific interactions with protein molecules. Here, we have integrated L-DPN, atomic force microscopy, and coarse-grained molecular dynamics simulation methods to characterize the molecular properties of supported lipid membranes (SLMs) on graphene and graphene oxide supports. We observed substantial differences in the topologies of the stabilized lipid structures depending on the nature of the surface (polar graphene oxide vs nonpolar graphene). Furthermore, the addition of water to SLM systems resulted in large-scale reorganization of the lipid structures, with measurable effects on lipid lateral mobility within the supported membranes. We also observed reduced lipid ordering within the supported structures relative to free-standing lipid bilayers, attributed to the strong hydrophobic interactions between the lipids and support. Together, our results provide insight into the molecular effects of graphene and graphene oxide surfaces on lipid bilayer membranes. This will be important in the design of these surfaces for applications such as biosensor devices. Dip-pen nanolithography (DPN) 1 with phospholipids (L-DPN) 2 and the use of polymer pen lithography (PPL) 3 and other stamping techniques for the generation of lipid membranes on supports 4−6 have gained increasing interest in recent years. 7 Supported lipid bilayers have diverse applications in biomedical research, 8−11 sensor and device functionalization, 12−17 protein crystallization, 18 and in generating lipid sensor structures. 18−20 In L-DPN, the tip of an atomic force microscope (AFM) probe is covered with phospholipids and brought into close contact with a solid support, allowing the lipid ink to transfer to the substrate and self-assemble into stacks of membranes. 21 This method has the benefit of direct and precise spatial control during lipid deposition onto the support surface, with the ability to tailor the lipid mixture in the ink to a desired composition. Other methods of lipid deposition, including imprinting, have proved useful in elucidating the spreading behavior of lipid membranes on support surfaces of varying degrees of hydrophilicity and roughness and have indicated that different spreading mechanisms and velocities may occur depending on the nature of the surface. 22,23 An interesting difference when generating membrane patches by L-DPN compared with other methods (e.g., vesicle fusion, 24 micropipettes, 25,26 or spreading membranes in microfluidic systems 27,28 ) is that the fabrication of the lipid patches generally takes place in air (i.e., under ambient conditions). Therefore, the structures will usually be transferred into liquid only after the lithographic step is completed, as most applications take place in liquid phase. This suggests that the lipid structures may rearrange, and different structural models have been proposed for the molecular organization of the lipid membrane in air and water and on surfaces with varying hydrophilicity. 
13,28−30 Computational methods such as molecular dynamics (MD) simulations have been used to study lipid membrane organization and dynamics on different support materials. 31−35 Coarse-grained (CG) simulations of lipid bilayers on hydrophilic model surfaces indicate that preformed lipid patches are more ordered on these surfaces compared to free-standing bilayers, affecting lateral lipid diffusion in the lipid leaflet that directly contacts the support and resulting in reduced lipid mobility. 35 Increasing surface roughness reduces lipid ordering and results in higher lipid mobility relative to smooth surfaces, confirming that lipid membrane organization is directly influenced by the underlying surface topology of the support. 35 Additionally, CG simulations modeling lipid self-assembly on supports of differing hydrophilicity suggest that lipid diffusion is more strongly affected, and reduced, on hydrophilic supports compared to on hydrophobic supports, due to stronger attractive forces. 36 The nature of the supporting surface is thus important. Recent studies have focused on parametrizing models of various support materials, for example, graphite, to more accurately represent interactions of organic molecules, such as long-chain alkanes, with the support. 37 These models were shown to reproduce phase transitions and molecular organization of organic molecules on the surface, in line with experimental data. 31,37 MD simulations are therefore useful in studying the molecular properties of lipid−surface interactions, allowing microscopic details to be characterized. (Figure 1 caption, partial: ...graphene surface in vacuum. The lipid choline, phosphate, glycerol, and carbon groups are shown as licorice representations colored in green, red, yellow, and gray, respectively. The graphene surface is shown as black van der Waals spheres. The Hyperballs program 41 was used for image generation. Partial density profiles were calculated for the last 25% of the simulation time for the inverted bilayer (C) and monolayer (D) simulations. (E) and (F) The time evolution of the average COM distance between the lipid phosphate head-groups and the graphene surface (dashed line) for the inverted bilayer and monolayer simulations in (A) and (B), respectively. See also Movie S1 and Movie S2.) In this study, we examine the organization of supported lipid bilayers as generated by L-DPN using CG molecular dynamics (CG-MD) simulations and correlate the results with experimental evidence. Our simulation results are in close agreement with AFM measurements of lipid layers on graphene and graphene oxide supports. The AFM experiments suggested that inverted bilayer configurations are formed on pristine graphene, while altered lipid structures can be formed on a graphene oxide surface. Our simulations reproduce this behavior, capturing the dynamic process underlying the altered lipid configurations. Furthermore, the simulations revealed effects on lipid ordering and diffusion within the bilayer structures relative to free-standing bilayers in water, depending on the support surface. Together, the combined experimental and simulation results provide an integrated study of the dynamical properties of lipid membranes on graphene-based supports. RESULTS AND DISCUSSION Previous studies have shown that stable lipid structures, such as multilayered lipid membranes, can be generated by L-DPN on graphene, silicon dioxide, and other substrates and are influenced by surface morphology and experimental conditions. 
13,38−40 Here, we used CG-MD simulations to investigate the molecular details of lipid interactions with pristine graphene and with graphene oxide surfaces. The simulated systems were constructed to match the experimental conditions used during L-DPN and AFM studies as well as currently possible. These experiments were either performed for systems in air or systems immersed in liquid. 13 Experiments in air were typically performed at 20−30% relative humidity (RH). Even for the largest systems simulated (30 × 20 × 20 nm 3 ) this corresponds to only 2 to 3 water molecules, i.e. <1 CG water particle, within the simulation box. The experiments in air were, therefore, reasonably well approximated by simulations in vacuum. Lipid Interactions with Graphene. L-DPN generated lipid structures on pristine graphene surfaces in air are thought to form inverted bilayer structures. 13,39 We investigated the stability of these inverted bilayer structures on a graphene surface in vacuum and in water using CG-MD. Two different graphene surface areas were modeled: a small (10 × 10 nm 2 ) and large (30 × 30 nm 2 ) surface. Preformed inverted bilayers consisting of 512 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) molecules (256 lipids per layer) were individually placed above the small and larger graphene surfaces and simulated for 0.5−1 μs. Simulations of the systems in vacuum (4 replicate simulations each of 4 μs duration) resulted in rapid adsorption of the lipid layers on the surface, mediated by strong hydrophobic interactions between the graphene surface and the lipid tails ( Figure 1A). The lipid structures maintained their inverted configurations ( Figure 1C), as shown by monitoring the center of mass (COM) distance between the lipid phosphate headgroup and the graphene surface ( Figure 1E). Visual inspection also indicated some degree of enhanced ordering of the lipid tails in the graphene-proximal layer. Polar interactions between the head-groups resulted in clustering of the lipid head-groups, stabilizing the inverted bilayer topology. Furthermore, CG-MD simulations of lipid monolayers with the small graphene surface (again, 4 replicate simulations each of 4 μs duration) resulted in spontaneous lipid reorganization to form an inverted bilayer structure in vacuum ( Figure 1B,D,F). 13 The bottom profile section reveals a bilayer membrane on top of a lipid monolayer (wetting layer). The wetting layer is similar to the layers observed on silicon dioxide, though thinner, probably due to reduced layer density. 28 Article These results corroborate AFM measurements of L-DPN deposited lipid layers on pristine graphene surfaces, 13,39 revealing that phospholipids form a flat inverted bilayer of uniform thickness (Figure 2A) on the hydrophobic support in vacuum (simulation) or air (experiment). The measured thickness of the inverted bilayer structure from the simulations compares well to the height measurements from the AFM experiments, corresponding to ∼4 nm. This can be seen from the density peaks of lipid tail groups as shown in the partial density profiles in Figure 1B,D. This matches the average height of ∼4−4.5 nm of the inverted bilayer as measured by AFM ( Figure 2A). However, the inverted bilayer topology was less stable (in terms of maintenance of the inverted bilayer structure after interactions with the support surface) in CG-MD simulations with the large graphene surface (30 × 30 nm 2 ) compared to the small graphene surface (10 × 10 nm 2 ). 
In simulations with the large graphene surface, the strong hydrophobic interactions between the lipid tails and graphene beads dominated lipid organization, resulting in disassembly of the layer and spreading of lipids across the available surface area, aligning with the surface in a lateral fashion ( Figure S2A). The same behavior was also observed in simulations of preformed regular bilayers with the large graphene surface ( Figure S2C). This contrasts with our initial observations of stable inverted bilayer configurations during simulations with the small graphene surface area, which did not disassemble. The difference between these simulations is the overall surface area of the graphene model. The preformed inverted lipid structures are roughly 10.5 × 10.5 nm 2 in x,y dimensions (prior to simulation) and thus occupy a slightly larger surface area than the small graphene surface (10 × 10 nm 2 ). Therefore, the larger density of lipids on this small graphene surface may affect the stability of the observed lipid configuration compared to the large graphene surface, resulting in maintenance of the inverted structure. Experimentally, inverted bilayer patches generated by L-DPN are stable on more extended graphene surfaces (20 × 20 μm 2 ), for which lipid spreading is thought to be a self-limiting process. 39 Consequently, the difference in the lipid density on the large graphene surface areas within the AFM experiment compared the simulations may be a deciding factor in the observed disassembly of the inverted bilayer structure (512 lipids) on the large graphene surface in the simulation. Interestingly, simulation of larger inverted bilayers (2110 lipids) exhibited higher stability and maintained the inverted configuration to a larger extent compared to the smaller bilayer systems (512 lipids) ( Figure S2E). However, lipid tail entanglement and lateral interactions with the surface, as well as headgroup clustering, resulted in deviation from the more "ideal" inverted bilayer configuration observed in simulations of the smaller bilayers on the small graphene surface in vacuum (10 × 10 nm 2 ; Figure 1). Importantly, the increased stability of larger lipid layers suggests that lipid density on the graphene surface is another determinant of the topology of a supported membrane. Lipid Interactions with Graphene Oxide. Surface polarity is known to affect the molecular properties of supported lipid bilayers, with effects on topology, 29,30 ACS Nano Article diffusion, 42 and lipid order. 36,43 To investigate these effects, L-DPN was also performed on graphene oxide substrates in air. AFM height measurements of the deposited lipids on the hydrophilic substrate indicated that the lipids organized into a "1.5 bilayer" configuration ( Figure 2B), as has also been suggested for lipid structures on silicon dioxide surfaces. 13 Two distinct lipid layers could be distinguished: a "wetting" layer composed of phospholipids with their head-groups oriented toward the hydrophilic support surface, and a second inverted bilayer that formed on top of this wetting layer. The same lipid organization is seen in CG-MD simulations (of duration 5 μs) of (non-inverted) bilayers with graphene oxide in vacuum. The lipid bilayer interacted with the surface, undergoing substantial reorganization within ∼0.2 μs to form a 1.5 bilayer on top of the oxidized surface ( Figure 3A). 
Simulation (also of duration 5 μs) of a preformed inverted bilayer configuration positioned above the graphene oxide surface converged to a similar 1.5 bilayer structure (Figure 3B), indicating that the outcome was robust to the initial bilayer (inverted vs non-inverted) model used. The significant rearrangements facilitating formation of the 1.5 bilayer structure are thought to be driven by polar headgroup interactions with the hydrophilic graphene oxide surface. (Figure caption, partial: Partial density profiles were calculated for the last 25% of the simulation time, shown for the graphene (E, G) and graphene oxide (F, H) systems. The same color scheme is used as in Figure 1. The graphene oxide surface is shown in black (carbons) and red (oxygens). The graphene surface is shown in black.) Partial density profiles calculated for the last 25% of CG-MD simulation time show very similar peaks in lipid headgroup and tail densities for both the inverted and non-inverted bilayer systems (Figure 3C,D), confirming that the differing starting structures converged to similar end structures. These results support the interpretation of a 1.5 bilayer configuration by AFM height measurements for supported lipid membranes on graphene oxide, suggesting that this arrangement is the preferred molecular state of lipid layers on polar oxide surfaces in air. In contrast, the 1.5 lipid bilayer configuration (which exposes the hydrophobic tails of the lipid molecules at the "uppermost" layer) was not observed in CG-MD simulations of the lipid–graphene oxide systems in water. Instead, preformed lipid bilayers reorganized to form bicelle-like configurations on both small and larger graphene oxide surfaces in water (Figure 4E−H). The rearrangement of inverted bilayers was driven by lipid headgroup interactions with the underlying support, either directly or through bridging water particles, as well as by interactions with the surrounding solvent, resulting in a stable bicelle structure which does not subsequently move as a whole on the graphene oxide surface. Conversely, the addition of water to lipid-covered graphene systems in the CG-MD simulations resulted in destabilization of the previously formed inverted bilayers (formed in vacuum) and triggered disassembly of the structures (Figure 4A−D). Subsequently, the lipids spread over the available surface area with head-groups oriented toward the surrounding solvent molecules. Importantly, similar behavior is observed for lipid-covered graphene transferred to an aqueous environment by AFM measurements. Two observations can be made from the combined simulation and experimental data. First, the addition of aqueous solvent can result in different lipid configurations on the same surface, for example, bicelle-like formations on graphene oxide in an aqueous environment compared to a 1.5 bilayer configuration in air (or vacuum). 13,21,28 Second, the polarity difference resulting from the oxygen-containing functional groups in graphene oxide compared to the hydrophobic surface presented by pristine graphene resulted in stabilization of different lipid configurations, for example, 1.5 layers vs inverted bilayers. These observations imply that the configuration of the supported membrane can be tuned as a function of system solvation as well as underlying surface polarity, resulting in distinctly different end structures. Direct observation of lipid reorganization on graphene and silicon dioxide in liquid by AFM has been reported by a previous study. 
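Partial density profiles along the membrane normal are the main quantity used above to compare head-group and tail organization between systems. A minimal sketch of such a calculation from CG bead coordinates is given below; in practice this would be run over the final 25% of each trajectory (for example with MDAnalysis or gmx density), and the coordinates, bead mass, and box dimensions here are placeholders rather than data from these simulations.

```python
import numpy as np

def partial_density_profile(z_coords_nm, masses_amu, box_xy_nm2, z_max_nm, n_bins=100):
    """Mass density profile along z for one particle group (e.g. phosphate beads).

    z_coords_nm : z coordinates of the selected beads (nm)
    masses_amu  : matching particle masses (amu)
    box_xy_nm2  : box cross-sectional area (nm^2)
    z_max_nm    : box height (nm)
    Returns bin centres (nm) and densities in amu/nm^3 for the supplied coordinates
    (divide by the number of frames when accumulating over a trajectory).
    """
    hist, edges = np.histogram(np.asarray(z_coords_nm), bins=n_bins,
                               range=(0.0, z_max_nm), weights=np.asarray(masses_amu))
    bin_vol = box_xy_nm2 * (edges[1] - edges[0])
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, hist / bin_vol

# Illustrative use with random coordinates standing in for, e.g., lipid tail beads:
rng = np.random.default_rng(0)
z = rng.uniform(2.0, 6.0, size=5000)      # hypothetical bead z positions, nm
m = np.full_like(z, 72.0)                 # MARTINI-style bead mass, amu
centres, rho = partial_density_profile(z, m, box_xy_nm2=100.0, z_max_nm=20.0)
print(rho.max())
```

Distinct head-group and tail peaks in such profiles are what distinguish, for example, an inverted bilayer from a 1.5 bilayer arrangement.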
13 These observations necessitated the use of bovine serum albumin (BSA), which binds to the surface, to ACS Nano Article block excessive lipid spreading and stabilize the membrane patches in aqueous solution. In this study, direct AFM measurements of the lipid layer structures on graphene oxide in liquid have been hindered by the blocking layers of BSA needed to stabilize the small membrane patches during scanning. Given that specific binding interactions in L-DPN applications are usually conveyed by functionalization at the lipids headgroup, it was inferred that the lipid patches undergo reorganization on graphene oxide, exposing the (otherwise buried functional groups) to the liquid phase, similar to the lipid configurations identified from the CG simulations of the graphene oxide systems in water. 13,44 Lipid Layer Properties on Graphene and Graphene Oxide: Lipid Order Parameters. So far, we have provided an overview of the topological differences generated in lipid organization on surfaces of varying polarity and system solvation. How does this affect the dynamic properties of the lipids within the observed structures? Calculation of lipid order parameters (S) as a function of simulation time provides a metric for lipid rearrangements on a support surface (see Methods). Briefly, the lipid order parameter is calculated by estimating the angle between the bonds connecting the lipid particles and the z-axis of the simulation box (i.e., the normal to the bilayer). Thus, alignment of the lipid particles with the zaxis yields an order parameter of S = 1, whereas a value of S = 0 corresponds to randomly orientated lipids. Order parameters were calculated for the phosphatidylcholine lipids of selected systems and compared to those for a free-standing bilayer (i.e., no supporting surface) simulated in water (Table S1). Systems that underwent substantial lipid reorganization showed major changes in the lipid order parameters over time, particularly for the simulations starting from an inverted bilayer on the graphene oxide surface in vacuum ( Figure 5). In this system, the average order parameter values change significantly during the first 2 μs of simulation time, particularly for the phosphate-glycerol bond and consecutive bonds for both sn-1 and sn-2 lipid tails up to the third CG particle. The changes represent altered configurations of the lipids as the molecules reorganized to form the 1.5 bilayer structure. Specifically, the average order parameters for particular lipid bonds decrease from around 0.3 to ∼0.1 during the first microsecond of the simulation time, indicating a random arrangement for many lipids during formation of an inverted bilayer on top of the monolayer (wetting layer). Toward the end of the rearrangement (∼2 μs), the average order parameters, for example, the phosphate-glycerol bond, return to 0.3, suggesting lipid alignment with the z-axis following the transition ( Figure 5). Other order parameter values, such as the C2−C3 bond, however, did not completely recover their initial value, which may be attributed to a degree of bilayer distortion brought about by micelle-like lipids within the bilayer formed on top of the wetting layer ( Figure 3A). This value (∼0.1) is reflected in those for subsequent bonds (e.g., C3−C4) in both sn-1 and sn-2 chains, characterizing the random arrangement of lipids tails within the 1.5 bilayer structure on graphene oxide. 
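The order parameter defined above (S = 1 for bonds aligned with the bilayer normal, S = 0 for randomly oriented bonds) corresponds to the second-Legendre-polynomial form commonly used for CG lipid bonds; the sketch below assumes that definition and uses synthetic bond vectors purely to illustrate the two limits.

```python
import numpy as np

def bond_order_parameter(bond_vectors):
    """Order parameter S = <(3 cos^2(theta) - 1)/2> for a set of CG bond vectors.

    theta is the angle between each bond vector and the z axis (bilayer normal);
    S = 1 for perfect alignment with z, S = 0 for randomly oriented bonds.
    bond_vectors: (N, 3) array of vectors between consecutive CG beads of a lipid.
    """
    v = np.asarray(bond_vectors, dtype=float)
    cos_theta = v[:, 2] / np.linalg.norm(v, axis=1)
    return float(np.mean(0.5 * (3.0 * cos_theta ** 2 - 1.0)))

# Illustrative check with synthetic vectors:
aligned = np.tile([0.0, 0.0, 1.0], (100, 1))        # all along z -> S close to 1
rng = np.random.default_rng(1)
random_dirs = rng.normal(size=(10000, 3))            # isotropic -> S close to 0
print(bond_order_parameter(aligned), bond_order_parameter(random_dirs))
```

Evaluating this per bond (phosphate-glycerol, C1-C2, and so on) over simulation time gives the curves discussed above, where drops toward ~0.1 mark transient disordering during lipid reorganization.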
In general, the average lipid tail order parameters for all systems suggest that the tails were less ordered on both graphene and graphene oxide surfaces compared to a freestanding bilayer in water ( Figure 6; Table S1). For example, an average order parameter of 0.18 is obtained for lipid tails in an inverted bilayer-graphene simulation in vacuum (small surface area), compared to 0.36 for lipids within the free-standing bilayer. However, inspection of the time evolution of the order parameters from the inverted bilayer-graphene simulation in vacuum reveals a slight gain in order over the last 0.5 μs of simulation time, particularly for the sn-2 chain of the lipids (e.g., C1B−C5B) ( Figure S3A). This transition indicates the increased alignment of the lipid tails, as the inverted bilayer configuration becomes more stable on the pristine graphene surface. Furthermore, simulations of different starting lipid structures with graphene oxide resulted in similar order parameter values, particularly toward the end of the simulations, further indicating convergence to the 1.5 bilayer organization on this surface in vacuum ( Figure S3C and D). The overall values compare well with lipid tail order parameters calculated from CG simulations of related systems. 36 The overall reduced lipid order on the graphene (oxide) supports could be a result of lateral interactions between the lipid tails and the surface (partial alignment of the lipid tails with the surface), as is observed in the pristine graphene systems (due to strong hydrophobic forces), and/or headgroup interactions with the graphene oxide surface, disrupting lipid order in the layer. Lipid Layer Properties on Graphene and Graphene Oxide: Lipid Diffusion. Characterization of the lateral diffusion of lipids within supported membrane systems is fundamental to understanding the dynamic properties of phospholipids within these membranes. 45−48 Experimental studies report both linear and anomalous diffusion regimes of lipids depending on the systems and increasingly highlight the importance of the time and length scale over which the diffusion data are collected when making a distinction between anomalous and linear diffusion. 42,46,47,49 To characterize lateral diffusion of lipids within our simulated supported bilayer systems, both linear and anomalous diffusion models were considered. Mean square displacements (MSDs) were collected by tracking the displacement of lipids (see Methods). The resultant MSDs were then fitted with either a linear 50 or an anomalous 51 diffusion equation. This analysis revealed differences in the lateral displacement of lipids on surfaces relative to those free-standing bilayers. The diffusion of lipids within supported membranes, on either graphene or graphene oxide, was slower in the presence of the support compared to free-standing bilayers (Figures 7A and S5). 52 The presence of water appeared to result in increased lipid diffusion on the graphene oxide surface (Figures 7B and ACS Nano Article S6), suggesting the water layers could act as a lubricant for lipid diffusion within the layer. Indeed, bridging water particles between the lipids and the support surface were often observed within the simulations with this surface. Similar effects have also been observed in experimental systems of supported lipid membranes on hydrophilic surfaces in an aqueous environment. 
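The lateral diffusion analysis described above fits MSD curves with either a linear or an anomalous model and, for comparison with experiment, divides the CG diffusion coefficient by the conventional factor of 4. The sketch below assumes the standard two-dimensional forms MSD = 4Dt and MSD = 4D_α t^α and uses synthetic MSD data standing in for the trajectory-derived curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def msd_linear(t, D):
    """2D linear (Fickian) diffusion: MSD = 4 D t."""
    return 4.0 * D * t

def msd_anomalous(t, D_alpha, alpha):
    """2D anomalous diffusion: MSD = 4 D_alpha t**alpha (alpha < 1 means subdiffusion)."""
    return 4.0 * D_alpha * t ** alpha

# Synthetic MSD data (nm^2 vs ns) as a placeholder for lateral lipid displacements.
t = np.linspace(1.0, 500.0, 200)                       # ns
rng = np.random.default_rng(2)
msd = 4.0 * 0.03 * t ** 0.8 * (1.0 + 0.05 * rng.normal(size=t.size))

(D_lin,), _ = curve_fit(msd_linear, t, msd)
(D_a, alpha), _ = curve_fit(msd_anomalous, t, msd, p0=(0.05, 0.9))

# Convert the linear fit from nm^2/ns to cm^2/s and apply the factor-of-4 scaling
# used in the text to compare CG dynamics with experimental/atomistic values.
D_cm2_s = D_lin * 1e-14 / 1e-9
print(round(alpha, 2), D_cm2_s / 4.0)
```

An α fitted below 1, as here, corresponds to the subdiffusive behaviour reported for the supported membranes.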
Furthermore, fitting MSD data using the anomalous diffusion equation produced α values of 0.7−0.9 for all the supported membrane systems, suggesting that the lipids within the supported membranes may exhibit different diffusion regimes as a consequence of their interactions with the surface (Figure 8). In particular, lipids exhibited subdiffusion (i.e., α < 1) for all supported membrane simulations. Importantly, the displacement of lipids within the free-standing bilayer was broadly consistent with a linear diffusion model, producing a value of D_lin = 3.8 × 10⁻⁷ cm²/s (±0.5). This value should be scaled by a factor of 4 to account for the reduced degrees of freedom within CG simulations 53 in order to compare with experimental estimates, resulting in D_lin = 0.98 × 10⁻⁷ cm²/s, which compares well to atomistic simulations of DOPC bilayers in water (1.5 × 10⁻⁷ cm²/s). 54,55 However, it does remain challenging to compare diffusion data for membranes on surfaces with experimental values, or even other simulation studies, given the dependence on both the time scale over which diffusion is measured and the length scale of the system. 42,45,47,49 Nonetheless, we can reasonably conclude from the MSD data and fits across all of the supported membrane systems (see Figures S5−6) that in general D is lower for lipids in the supported bilayers relative to a free-standing bilayer, with the largest reduction (more than 2-fold, with α = 0.7) for a membrane on pristine graphene or on graphene oxide in vacuum. CONCLUSIONS The dynamic organization of lipid membranes on graphene supports has been investigated using both experimental and computational techniques. As has been observed in a number of other studies of supported lipid membranes, the dynamic properties of the lipid configurations presented here were affected by the polar/apolar nature of the surface as well as system hydration. 32,[34][35][36]56 AFM measurements suggested the formation of an inverted bilayer topology on pristine graphene, while a 1.5 bilayer structure was observed on graphene oxide in air (Figure 2). 13 CG simulations of these systems supported initial observations of the differing lipid topologies on the different surfaces. Encouragingly, different preformed starting lipid structures spontaneously rearranged to converge to the same end structure, as observed by the AFM studies. For example, regular bilayers formed 1.5 bilayers on graphene oxide, while single lipid monolayers formed inverted bilayer structures on pristine graphene, converging to the observed end structures despite different starting configurations (Figures 1 and 3). Furthermore, the CG simulations suggested that stability of the lipid organization was related to lipid density on the surface. While small inverted bilayers were observed to disassemble on large graphene surfaces due to strong hydrophobic interactions with the surface, larger bilayers were more stable (Figure S2). Importantly, initial AFM characterization of phospholipid patches on pristine graphene in air indicated that lipids are more mobile on this surface than on hydrophilic surfaces, such as silicon dioxide, which was also attributed to the strong hydrophobic interactions between lipid hydrocarbon tails and graphene. 13,39 Spreading of a very small membrane patch on a large graphene surface is, thus, perhaps not unexpected. System hydration was an important factor affecting lipid stability and organization on support surfaces.
Specifically, lipid organization on graphene oxide is affected by the addition of water, resulting in formation of stable bicelle-like structures on graphene oxide surfaces, in contrast to the preferred 1.5 bilayer lipid organization observed on the same surface in vacuum (simulation) or in air (AFM). Given the apparent high stability of bicelle structures on graphene oxide in water, patterning of hydrophilic surfaces in a discrete manner should be possible in aqueous support systems. A recent study investigating lipid localization across lipid monolayer−bilayer junctions exploited precisely this topological difference in lipid layering, which results from the different energies of interaction of lipids with surfaces of varying hydrophobicity. 57 Thus, lipid monolayer−bilayer junctions were established by patterning glass surfaces with hydrophobic molecules, resulting in formation of lipid monolayers on the hydrophobic sections and lipid bilayers on the surrounding hydrophilic (glass) sections. 57 Lipid bilayer formation on graphene oxide in solution has also been observed by Okamoto et al., who used AFM measurements to propose that both single and double lipid membranes are thermally stable on graphene oxide supports after vesicle fusion with the surface. 30 Furthermore, a recent study investigating lipid assembly on oxidized graphene surfaces confirmed that planar lipid bilayers were stable on this support. 58 The same study also observed that, in contrast with lipid bilayer formation on oxidized graphene, lipid monolayers were formed on a pristine graphene surface in solution, in which lipid tails interacted with the underlying graphene support. This is similar to what is suggested by our CG-MD simulations of preformed lipid bilayers on larger graphene surfaces, which disassembled into monolayers in the presence of water, again suggesting that the hydrophobic interactions between the lipid carbon tails and the graphene surface dominate lipid organization on this support. A recent functional study of L-DPN-generated membranes on pristine graphene observed the same lipid behavior upon the addition of buffer solution, in which fluorescence was used to verify the solvent-exposed orientation of the lipid headgroups with respect to the surface. 39 The study also indicated that lipid spreading on pristine graphene is self-limiting and is more favorable than lipid interactions with silicon dioxide surfaces, which was attributed to the contrast in hydrophilicity between the two supports. 39 A similar trend in lipid spreading behavior has also been observed on other support surfaces, including glass (hydrophilic) and n-octadecyltrichlorosilane (hydrophobic). 59 Using ellipsometry to determine lipid film heights, it was suggested that lipids preferentially spread as monolayers on hydrophobic surfaces, while lipid bilayers form on more hydrophilic surfaces. 59,60 We also observed that lipids may undergo dynamic changes in their configurations on either support surface, as is evidenced by the time evolution of lipid tail order parameters. This is particularly true for simulations of regular (i.e., non-inverted) lipid bilayers on the small graphene oxide surface, in which complete restructuring of the lipid layers occurred to form the apparently more favorable 1.5 bilayer configuration (Figure 5). Furthermore, characterization of lipid mobility within the supported membrane structures suggested that lipid diffusion was anomalous, exhibiting subdiffusion for most systems.
This is expected given that interactions between the lipids and the support alter MSD values over the longer time sampling windows. Tero et al. also identified subdiffusive behavior of DOPC molecules within supported lipid membranes on hydrophilic titanium oxide and related this to the atomic topology of the support surface. 42 Furthermore, other experimental studies of lipid diffusion in supported membranes have reported both linear and anomalous diffusion regimes and related this to lipid interactions with the support surface, surface topology of the support, membrane composition, and topology (e.g., planar bilayers vs vesicles) as well as system hydration. 42,46,47,49 The distinction between diffusion regimes is, however, also related to the temporal and spatial scales at which diffusion is measured. 45 It would, therefore, be of interest in the future to extend our simulations to substantially larger length scales and longer time scales, enabling more direct comparison with experimental data for similar systems. 61 The use of CG resolution simulations of the lipid membrane−graphene systems has allowed us to characterize supported membrane properties for longer (i.e., several-microsecond) simulation times than would be readily accessible with, for example, all-atom simulations. A number of previous studies have indicated the utility of CG simulation methods for studying lipid behavior on surfaces. 31,34−36 It is important to note that the mapping scheme used to model the CG graphene and graphene oxide surfaces (2 atoms:1 CG particle) differs from the 4:1 mapping scheme used for the lipid molecules. However, the Martini force field was parametrized to remain internally consistent and compatible between parametrization efforts (e.g., comparing Martini v2.1 and Martini v2.2). 37,53 Consequently, the parameters used to model the interactions between the lipid molecules and graphene/graphene oxide surfaces are largely independent of the mapping schemes employed to generate the graphene and lipid models. This is because the interactions between the different components are modeled using an interaction matrix, with different values assigned to the interactions of each bead type. 53,54 The original parametrization of the graphite surface on which the graphene (oxide) model in this study is based involved reproducing thermodynamic values and adsorption behavior of long-chain alkanes on the graphite support. 37 The nonbonding interaction parameters between the different beads within the Martini models were adjusted to reproduce this behavior, so that the different mapping schemes did not affect the overall behavior of the systems. Fully atomistic simulations would be of interest to capture the interactions between lipid molecules and the support surface in more detail. Importantly, the electronic structure of graphene and the polarizability of interacting molecules, which may play key roles in biomolecular adsorption and interactions, 62−64 are not captured by the CG model. With the development of polarizable force fields 62,65,66 and further characterization of the unique properties of graphene supports, 63 future simulations encompassing these aspects, while starting from the CG models described in the current study, would provide enhanced insight into the interaction forces underlying different supported membrane configurations.
It would be of interest to explore the free energy landscape underlying the different lipid configurations observed on these support surfaces, as this would be likely to provide further insights into the origin of stability of the lipid structures observed for each of the surfaces. Characterization of the energetic differences between the different lipid structures formed on pristine graphene and graphene oxide, depending on system solvation, might also be expected to provide insights into the link between surface wettability and the eventual lipid structure formed. However, accurate estimation of free energy landscapes for lipid/graphene interactions would be challenging in terms of achieving adequate sampling of the system to reach convergence. In a previous study, we have shown that both the choice of a suitable collective variable and adequate sampling are key parameters in obtaining a converged free energy landscape even for relatively simple systems, such as single protein−lipid interactions. 67 In conclusion, our combined simulation and experimental results highlight the effects of both surface polarity and the solvent environment on phospholipid membrane organization when interacting with a support surface. We demonstrate that lipids can form multilayered structures such as 1.5 bilayers on hydrophilic surfaces, such as graphene oxide, and can spontaneously rearrange to form a preferred topology even when starting from different structures (e.g., regular bilayers). Hydrophobic surfaces such as pristine graphene interact highly favorably with lipids, affecting lipid bilayer membrane stability on this surface. Characterizing these interactions will provide important insight into the applications of graphene/graphene oxide within biotechnology, including sensor devices and drug delivery systems. 68,69 METHODS Coarse-Grained Graphene (Oxide) Models. CG simulations were performed using the Martini force field. 53,70 The Martini model for graphite 37 was used to construct the graphene and graphene oxide model surfaces (single layer; Figure S1). Briefly, the model involved a 2:1 mapping of atomistic carbon atoms onto hexagonally packed graphene beads (SG4) making up the sheet. The published parametrization of this model was based on reproducing the adsorption and topological behavior of long-chain alkane molecules on graphite, suggesting the model is suitable to study the behavior of lipids on graphene surfaces. 37 The CG graphene model was used to build the graphene oxide model. To do this, the carbon beads (SG4) comprising the pristine graphene surface were randomly substituted by oxygen beads (SP1), which represent either a C−O−C (epoxy) group or a C−O−H group, 71 eventually reproducing the oxygen content of the graphene oxide substrates used in the experiments (2.82% C−O−C; 49.3% C−O−H, based on X-ray photoelectron spectroscopy data) (Figure S1). Simulation Details. The GROMACS 4.6 (www.gromacs.org) simulation suite 72 and the Martini 2.2 force field 53,54,71 were used to perform all CG simulations. Lipid bilayers consisting of DOPC molecules were constructed by self-assembly simulations; this lipid composition reflects the main lipids employed in the experiments and related studies. 13,39 Lipid molecules were randomly inserted into a simulation box of dimensions 10 × 10 × 5 nm³ (small bilayers) or 20 × 20 × 5 nm³ (large bilayer); the systems were then energy minimized using the steepest descent algorithm for 10,000 steps.
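Returning briefly to the graphene oxide model construction described above, the sketch below illustrates one way the random SG4 → SP1 substitution could be performed to reach a target oxidation level before continuing with the remaining simulation details. It is not the authors' build script: the bead bookkeeping, the interpretation of the 2.82%/49.3% figures as per-bead substitution fractions, and the NumPy-based random selection are all assumptions.

```python
import numpy as np

def oxidize_graphene(n_beads, frac_epoxy=0.0282, frac_hydroxyl=0.493, seed=1):
    """Randomly relabel SG4 graphene beads as SP1 'oxygen' beads.

    Each substituted SP1 bead stands for either a C-O-C (epoxy) or a C-O-H
    (hydroxyl) group; the two fractions are assumed here to apply per bead of
    the pristine sheet. Returns a list of per-bead labels.
    """
    rng = np.random.default_rng(seed)
    labels = ["SG4"] * n_beads

    n_epoxy = round(frac_epoxy * n_beads)
    n_hydroxyl = round(frac_hydroxyl * n_beads)
    chosen = rng.choice(n_beads, size=n_epoxy + n_hydroxyl, replace=False)

    for i in chosen[:n_epoxy]:
        labels[i] = "SP1 (epoxy)"        # C-O-C groups
    for i in chosen[n_epoxy:]:
        labels[i] = "SP1 (hydroxyl)"     # C-O-H groups
    return labels

labels = oxidize_graphene(n_beads=2000)
print(sum(l.startswith("SP1") for l in labels) / len(labels))  # overall oxidized fraction, ~0.52
```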
The z-dimension of the box was then extended to 15 or 30 nm. The system was subsequently solvated and simulated for 100 ns to allow bilayer self-assembly to occur. The inverted bilayers that mimic lipid structures seen in previous AFM studies 13,39 were constructed by placing two lipid monolayers on top of each other, with the lipid headgroups pointing toward each other, using the editconf tool from GROMACS. These lipid configurations were then placed above either graphene or graphene oxide surfaces. The polarizable Martini water model 73 was employed in simulations containing water. A round of equilibration simulations was performed for the solvated systems. This involved performing 10,000 steps of equilibration using incremental simulation time steps of 1, 2, 5, 10, and 20 fs. The equilibrated systems were used to initiate production simulations performed at constant temperature and pressure (NPT ensemble). Temperature was regulated using the Berendsen thermostat 74 and a coupling constant of 0.3 ps at 298 K. Pressure was regulated using the Berendsen barostat 74 with a coupling constant of 3.0 ps, applying anisotropic pressure coupling with a compressibility of 0.5 × 10⁻⁵ bar⁻¹ in x and y and 3.0 × 10⁻⁵ bar⁻¹ in the z-direction. Simulations of systems in vacuum were performed at constant temperature and volume (NVT ensemble). The lipids, water, and graphene were coupled to separate external baths. Non-bonding interactions were modeled using shift functions; both LJ and Coulombic interactions were evaluated within a 1.2 nm cutoff, with the shift applied from 0.9 nm. These parameters reflect those applied in parametrization studies of the CG graphene model. 37 Lipid Order Parameter Analysis. Second-rank order parameters were calculated for all of the lipids in the selected CG systems (Figures S3 and S4). These were calculated according to the second-rank order parameter equation S = ½⟨3 cos²θ − 1⟩, where θ is the angle between a bond connecting consecutive lipid bead particles (Bₙ − Bₙ₋₁) and the z-axis of the simulation box, normal to the bilayer. The angle brackets denote the average over all of the lipids in the system (ensemble average). The term "second rank" refers to the second-order Legendre polynomial used to define the order parameter. 75 This has been used in many experimental and computational studies of similar bilayer systems. 55,76−78 Diffusion Analysis. The diffusion of lipids within the simulated systems was analyzed using documented open-source code (http://dx.doi.org/10.5281/zenodo.11827). This code employs an algorithm which calculates the mean square displacement (MSD) of individual lipid centroids over a range of time sampling windows: 1, 2, 5, 10, 20, 50, 70, 100, and 200 ns. The diffusion coefficients are then calculated by fitting the MSD vs time data using a linear diffusion equation 50 or an anomalous diffusion equation. 51 The linear approximation uses a least-squares first-degree fit of the data, whereas the anomalous approximation uses a nonlinear least-squares fit to a two-parameter equation of the form MSD(t) = 4Dα·t^α, where Dα is the fractional diffusion coefficient, measured in units of length²·time^−α. For the linear diffusion fit, standard deviations of the diffusion coefficients were estimated as the difference of the slopes from the first and second halves of the MSD vs time data. This is the approach used by the GROMACS analysis tool g_msd.
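Complementing the fitting sketch given earlier, a minimal illustration of this slope-difference error estimate is shown below. The data arrays are hypothetical, the factor of 4 again assumes two-dimensional lateral diffusion, and the snippet mirrors the description above rather than the actual cited code.

```python
import numpy as np

def linear_d_with_error(t, msd):
    """Fit MSD = 4*D*t and estimate the error as the difference between the
    diffusion coefficients obtained from the first and second halves of the
    data (the approach described for the GROMACS g_msd tool)."""
    slope_full = np.polyfit(t, msd, 1)[0]
    half = len(t) // 2
    slope_a = np.polyfit(t[:half], msd[:half], 1)[0]
    slope_b = np.polyfit(t[half:], msd[half:], 1)[0]
    d = slope_full / 4.0                    # D from the full fit
    d_err = abs(slope_a - slope_b) / 4.0    # spread between the two half-fits
    return d, d_err

# Hypothetical MSD data (nm^2) over the sampling windows listed above (ns)
t = np.array([1, 2, 5, 10, 20, 50, 70, 100, 200], dtype=float)
msd = np.array([0.4, 0.75, 1.6, 2.9, 5.1, 10.5, 13.8, 18.0, 31.0])
print(linear_d_with_error(t, msd))
```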
For the anomalous diffusion fit, the scaling exponent (α) was estimated as the slope of the log MSD vs log time data. The standard deviations of both parameters (Dα and α) were calculated from the square root of the diagonal of the covariance matrix from the anomalous fit. This code has been used in previous simulation studies of lipid diffusion in virus particles. 79,80 Diffusion constants are reported without correction for the reduced degrees of freedom resulting from applying the Martini CG model. A simple scaling factor of 4 has previously been applied to compare diffusion of lipid and water particles in CG systems to experimental measurements. 54 Lipid Dip-Pen Nanolithography (L-DPN). L-DPN was performed on graphene and graphene oxide samples. Lithography took place in a DPN5000 system (Nanoink, U.S.A.) with a single cantilever probe (A-Type, ACST, U.S.A.). The cantilever was coated with 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) using a microfluidic inkwell (ACST, U.S.A.) at high humidity (∼70%). The tip was then conditioned by writing sacrificial patterns to strip off excess ink. Membranes used for AFM imaging were written at 25% humidity. Atomic Force Microscopy (AFM). AFM imaging was performed on a Dimension Icon setup (Bruker) in tapping mode. NSC15 cantilevers (MikroMasch) with a nominal force constant of 46 N/m and a resonance frequency of 325 kHz were used. Supporting Information. The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsnano.6b07352. Additional details of the simulated graphene and graphene oxide models, as well as lipid diffusion and order parameter analyses (PDF). Movie S1 corresponds to the simulation in Figure 1A (AVI). Movie S2 corresponds to the simulation in Figure 1B (AVI). Movie S3 corresponds to the simulation in Figure 3A (AVI). Movie S4 corresponds to the simulation in Figure 3B (AVI).
9,128
2017-02-14T00:00:00.000
[ "Chemistry", "Biology", "Physics" ]
Physical shearing imparts biological activity to DNA and ability to transmit itself horizontally across species and kingdom boundaries Background We have recently reported that cell-free DNA (cfDNA) fragments derived from dying cells that circulate in blood are biologically active molecules and can readily enter into healthy cells to activate DNA damage and apoptotic responses in the recipients. However, DNA is not conventionally known to spontaneously enter into cells or to have any intrinsic biological activity. We hypothesized that cellular entry and acquisition of biological properties are functions of the size of DNA. Results To test this hypothesis, we generated small DNA fragments by sonicating high molecular weight DNA (HMW DNA) to mimic circulating cfDNA. Sonication of HMW DNA isolated from cancerous and non-cancerous human cells, bacteria and plant generated fragments 300–3000 bp in size which are similar to that reported for circulating cfDNA. We show here that while HMW DNAs were incapable of entering into cells, sonicated DNA (sDNA) from different sources could do so indiscriminately without heed to species or kingdom boundaries. Thus, sDNA from human cells and those from bacteria and plant could enter into nuclei of mouse cells and sDNA from human, bacterial and plant sources could spontaneously enter into bacteria. The intracellular sDNA associated themselves with host cell chromosomes and integrated into their genomes. Furthermore, sDNA, but not HMW DNA, from all four sources could phosphorylate H2AX and activate the pro-inflammatory transcription factor NFκB in mouse cells, indicating that sDNAs had acquired biological activities. Conclusions Our results show that small fragments of DNA from different sources can indiscriminately enter into other cells across species and kingdom boundaries to integrate into their genomes and activate biological processes. This raises the possibility that fragmented DNA that are generated following organismal cell-death may have evolutionary implications by acting as mobile genetic elements that are involved in horizontal gene transfer. Electronic supplementary material The online version of this article (doi:10.1186/s12867-017-0098-8) contains supplementary material, which is available to authorized users. Background Several hundred billion to a trillion cells are known to die in the adult human body daily and a considerable amount of resulting fragmented DNA and/or nucleosomes enter the blood stream [1,2]. These DNA and/or nucleosome fragments have been variously termed as circulating nucleic acids (CNAs), cell-free nucleic acids (cfNAs), cellfree DNA (cfDNA), DNA fragments (DNAfs) and chromatin fragments (Cfs) [3][4][5][6]. It is well known that bacteria can take up extraneous DNA [7]. However, introduction of DNA into mammalian cells require permeabilization of the cell membrane [8]. On the other hand, several reports suggest that apoptotic DNA can be spontaneously taken up by mammalian cells. Bergsmedh et al. [9] showed transfer of oncogenes following co-culture of apoptotic bodies with mouse fibroblast cells which led to their oncogenic transformation [9]. DNA released from leukemic cells can get integrated into chromosomes of surrounding stromal cells which result in DNA damage and apoptosis [10]. Mutated K-ras oncogene present in plasma of colon cancer patients were shown to be taken up by mouse fibroblast cells, leading to their cancerous transformation; and K-ras sequences could be detected in them by PCR and FISH [11,12]. 
Reconstitution of genes in vitro into chromatin were reported to be readily taken up by cells in culture to localize in the nuclei of recipient cells, which led to the suggestion that chromatinized DNA maybe an efficient means for gene delivery [13]. The detection of male foetal DNA in maternal brain cells suggests that circulating foetal DNA can cross the blood-brain barrier [14]. We have recently reported that fragmented cfDNA that circulate in human blood are not only readily taken up by mammalian cells but that they can evoke biological responses in the recipients [3,4,6,15]. When DNA was isolated from blood of cancer patients and healthy volunteers and added to cells in culture, they were rapidly taken up by the recipient cells to accumulate in their nuclei [3,4,6]. The internalized DNA associated themselves with host cell chromosomes triggering a DNA damage repair response which facilitated their genomic integration [4]. Damage to DNA also activated an apoptotic response resulting in death of some cells [4].Taken together, the above studies support the observation that fragmented cfDNAs can be internalized by healthy cells with potentially oncogenic consequences. In order to explain the above findings, we hypothesized that cellular entry and acquisition of biological activity are functions of the size of DNA. We show that while HMW DNAs were incapable of entering into cells, sonicated DNA (sDNA) from various sources were taken up by each other indiscriminately without heed to species or kingdom boundaries. The internalized sDNAs associated themselves with host cell chromosomes and integrated into their genomes. They also evinced biological activities in the form of their ability to phosphorylate H2AX and activate the pro-inflammatory cytokine NFκB. Our results raise the possibility that fragmented DNA from dying cells may act as mobile genetic elements with evolutionary implications by being involved in horizontal gene transfers. Acute environmental stress leading to organismal cell-death and DNA fragmentation may have played an important role in evolutionary processes. . Leaves were cut into small pieces and were ground with liquid nitrogen using a mortar and pestle pre-chilled to −20 °C. The yield of DNA from the four different sources was as follows: WI-38: 16.30 µg from 10 6 cells; MDA-MB-231: 18 µg from 10 6 cells; bacteria: 57 µg from 1 ml of overnight culture (OD = 0.8); plant: 30.6 µg from 40 mg banana leaf. HMW DNA (500 ng) in 50 µl of Tris-EDTA buffer, pH 7.5 from all four sources were sonicated at 10% amplitude (64 μ or 0.0025 in.) for 10 s using a double step 1/8″ microtip with coupler and a sonicator (Branson Digital Sonifier ® 250/450). This amplitude and timing of sonication was arrived at by hit-and-trial method until we got fragment sizes which were similar to those obtained from sera of human subjects [4]. The HMW and sDNA (500 ng each) were characterized by running on a 1% agarose gel at 100 V for 60 min. A 1 kb DNA ladder (Gene Ruler; Thermo Scientific; Cat. No. SM0311) was also run to ascertain the size range of sDNA. Flourescent labeling of DNA and confocal microscopy Fluorescent labeling In order to fluorescently label DNA, WI-38 and MDA-MB-231 cells were grown in DMEM medium containing BrdU (Sigma Chemicals USA, Catalogue No. B5002-100MG) at a final concentration of 10 μM for 24 h. Cells were washed ×3 with PBS, and the labeled HMW and sDNAs were isolated as described above. 
Bacteria, grown in Luria Broth were labeled with BrdU for 30 min at the above concentration of BrdU and labeled HMW and sDNA were extracted. Thirty minutes of labeling produced satisfactorily observable fluorescent signals under confocal microscope. Unlabeled DNA was not quantified. Since we did not grow plant cells in culture, DNA from plant was non-enzymatically labeled in vitro after HMW DNA extraction. We used Platinum Bright ™ 550 Red/ Orange Nucleic Acid Labelling Kit (Kreatech Diagnostics, The Netherlands. Catalogue No. GLK-004) in 50 μl reactions. Labelling was done as per manufacturer's protocol followed by sonication. Treatment of recipient NIH3T3 cells and laser scanning confocal microscopy (LSCM) NIH3T3 mouse fibroblast cells grown in 35 mm 3 dishes on cover slips (10 × 10 4 cells) were treated with fluorescently-labeled HMW and sDNA (10 ng each) isolated from WI-38, MDA-MB-231, bacterial and plant sources. After 30 min of treatment, cells were washed ×3 with PBS and, with exception of cells treated with plant DNA (see below), were processed for LSCM using anti-BrdU antibody (1:100 dilution) (Abcam, USA, Catalogue No. ab6326) and FITC-labeled secondary antibody (1:500 dilution) (Abcam, USA, Catalogue No. ab102263), mounted onto clean glass slides with Vecta-shield and were visualized using Zeiss differential LSCM platform. Since plant DNA was non-enzymatically labeled in vitro, the treated cells were directly visualized under LSCM platform. Fifty nuclei were randomly chosen for analysis in each case and the mean nuclear fluorescence intensity was measured using LSM Image Examiner 4.0 software (Carl Zeiss Jena GmbH, Germany). Treatment of recipient bacterial cells and fluorescent microscopy Fluorescently-labeled MDA-MB-231, WI-38, bacterial and plant DNA (10 ng each) were added to 2 ml of bacterial culture, incubated for 30 min, processed as above and examined under a fluorescent microscope. One hundred bacteria were analyzed and the number of cells showing positive signals was determined. Chromosomal association and genomic integration NIH3T3 cells were treated with fluorescently labeled HMW and sDNA (10 ng DNA each) as described above and metaphase spreads were prepared after 6 h of treatment and observed under fluorescent microscope. Fifty metaphases were analyzed and the average number of signals per metaphase was recorded. To investigate possible genomic integration of human DNA (WI-38 and MDA-MB-231), 10 ng DNA each of HMW and sDNA were added to NIH3T3 cells and metaphase spreads were prepared at 10th passage. FISH was performed using human whole genomic and pancentromeric probes as described by us earlier [4]. The human probes do not cross-react with mouse DNA. Fifty metaphases were analyzed for detection of human DNA signals. Detection of γ-H2AX and NFκB by immuno-fluorescence NIH3T3 cells were treated with HMW and sDNA from all four sources for 6 h and processed for detection of γ-H2AX signals by indirect immuno-fluorescence as described by us earlier [4]. Control cells were treated with 100 μl PBS. Experiments were done in duplicate and 100 DAPI-stained nuclei were randomly analyzed in each case. The average percentage of cells showing positive signals was determined. NFκB was analyzed by indirect immuno-fluorescence using specific antibodies against NFκB (Abcam ® , UK. Catalogue No. ab16502). Anti-rabbit secondary antibodies labeled with FITC (Abcam UK. Catalogue Nos. ab6717/ab6785/ab7121) were used. 
Mean nuclear fluorescent intensity (MFI) was determined in duplicate in 50 cells in each case. Background MFI of untreated cells was deducted from that in treated cells and results were depicted as mean ± S.E. Results We isolated HMW DNA from normal human lung fibroblasts (WI-38 cells), human breast carcinoma cells (MDA-MB-231), bacteria (E. coli) and plant (banana) and sonicated them to generate small fragments. Gel analysis of sDNAs revealed a smear with a fragment size range of ~300 to ~3000 bp. However, some fragments >3000 and <300 bp were presumably also present (Additional file 1: Figure S1). This fragment size range is similar to that of nucleic acids isolated from human plasma reported by us and others [4,5,16]. We have also shown that these cfDNA fragments are biologically active in doses of 5-10 ng in cell culture assays in 35 mm dishes [4]. We first examined if fluorescently labeled HMW and sDNAs (10 ng each) could access mouse fibroblast cells, especially their nuclei. HMW DNA from none of the above sources showed any intracellular fluorescent signals when examined by laser scanning confocal microscopy (LSCM) (Fig. 1). On the other hand, fluorescent signals could be clearly detected both in cytoplasm and nuclei of mouse cells after treatment with sDNA from all four sources by 30 min (Fig. 1). Bacterial sDNA was taken up by nuclei most avidly while plant sDNA was the least efficient (p = 0.0001) (Fig. 2 and Additional file 2). Although many unknown factors are likely to be involved, variations in methylation patterns between mammalian, bacteria and plant DNA might be an important factor responsible for differential DNA uptake [17,18]. Since DNA uptake by bacteria is well documented, we examined if bacteria could take up DNA from nonbacteria sources [19,20]. When examined by fluorescent microscopy, bacterial cells were found to have taken up sDNAs from mammalian (WI-38 and MDA-MB-231), bacterial (self ) and plant sources which were seen to be associated with their DNA by 30 min (Fig. 3a). Approximately 10% of cells showed intracellular signals in case of the two mammalian and bacterial DNA while 2% of the bacteria showed positive signals when treated with plant DNA (Fig. 3b and Additional file 2). HMW DNA were not taken up by bacteria (data not shown). The transformation rate (10%) in case of mammalian cells may seem high; this could be due to the experimental conditions used in the current study. We observed that by 30 min, the ingested intra-nuclear sDNAs from all four sources had associated themselves with mouse cell chromosomes (Fig. 4a). Once again, the average number of fluorescent signals per metaphase was the highest in case of bacterial DNA and lowest in case of plant DNA (p = 0.0001) ( Fig. 4b and Additional file 2). FISH analysis of metaphase preparations from mouse cells that had been treated with HMW and sDNA from the two human sources at 10th passage showed several positive signals indicating that sDNA had stably integrated into the host cell chromosomes (Fig. 5a). It is likely that incorporation of only larger sonicated fragments were detected by FISH, with smaller fragments escaping detection. Interestingly, significantly more signals were detected per metaphase with respect to sDNA from cancerous human cells (MDA-MB231) than that from normal human cells (WI-38) (Fig. 5b and Additional file 2). 
This finding is keeping with our earlier report that circulating nucleic acids isolated from cancer patients are biologically more active than those isolated from healthy volunteers [4]. No signals were seen in metaphases from HMW DNA treated cells (Fig. 5b and Additional file 2). We next investigated comparative biological activities of HMW and sDNAs from different sources. The endpoints examined were phosphorylation of H2AX and activation of the pro-inflammatory transcription factor NFκB. Treatment of recipient cells with 10 ng of sDNA from all four sources clearly showed up-regulation of both γ-H2AX and NFκB in a time dependent manner that was seen maximally at 6 h with respect to all four sDNAs (Fig. 6a-h and Additional file 2). A dose-response analysis of H2AX and NFκB activation at 6 h showed progressively increasing activation with increasing concentration of sDNA (Fig. 7a-h and Additional file 2). The dose response data indicated that these activities of sDNAs were biological in nature. HMW DNA showed minimal activation of H2AX at high concentrations and little activation of NFκB (Fig. 7a-h and Additional file 2). Taken together, our results support the hypothesis that cellular/nuclear entry and acquisition of biological activities are functions of size of DNA. These findings are consistent with our earlier observation that fragmented nucleic acids can freely access healthy cells of the body to activate H2AX and active Caspase-3 [4]. The surprising finding in the present report is that, once sonicated, DNA from different species and taxonomical kingdoms could indiscriminately enter into nuclei (or DNA in case of bacteria) of each other without heed to species or kingdom boundaries. Our earlier report that activation of a DDR is necessary for facilitating genomic integration of circulating DNA suggests that a similar DDR activation might be responsible for genomic integration of sDNAs [4]. Discussion There is considerable early literature from 1950s and 1960s that suggest that HMW DNA can be taken up by a variety of cells in culture. These have included cells derived from human bone marrow, rat hepatoma, human cervical cancer, mouse embryo and human promyelocytic leukemia, amongst others [21][22][23]. The internalized DNA reportedly integrated into host cell genomes and could be transcribed and translated into proteins [24]. Chromosomal damage and karyotype alterations were also reported [25]. However, we failed to demonstrate spontaneous uptake of HMW DNA by any of the recipient cells in the present study. This leads us to hypothesize that at least in some of these earlier experiments the The present study was undertaken to investigate our surprising observation that cfDNA derived from dying cells that circulates in human blood are biologically active molecules and can be readily taken up by healthy cells to integrate into their genomes and activate DNA damage and apoptotic responses in the recipients [3,4,6]. Since DNA is not conventionally known to be taken up by cells, we hypothesized that ability of DNA to enter into cells is a function of the size of DNA [8,15]. In order to test this hypothesis, we generated small DNA fragments by sonicating HMW DNA to mimic cell-free DNA that circulates in blood. We have shown here that sDNA from different sources can be indiscriminately taken up by other cells without heed to species or kingdom boundaries. 
We also show that the uptaken sDNA accumulate in the nuclei of host cells which is followed by their integration into host cell chromosomes. As we have earlier reported that most of the uptaken intra-cellular fragmented cfDNA isolated from plasma are rapidly degraded, the same presumably applies to the uptaken sDNA, with a small fraction that are integrated into host cell chromosomes escaping degradation [4]. Further research is clearly required to address several unanswered questions that are raised by our findings. For example, it remains to be seen if enzymatic digestion of DNA would produce similar results as sonication. Furthermore, the mechanism(s) by which sDNA are taken up by cells is not explained by our study. We observed differences in rates of uptake of sDNA by different cells. For example, sDNAs derived from MDA-MB-231 human cancer cells were taken up more avidly (Figs. 1, 2) and integrated to mouse chromosomes more efficiently than sDNAs from WI-38 normal human fibroblasts (Fig. 5a). Although, this finding is in accordance with our earlier report that circulating DNA fragments isolated from plasma of cancer patients are more active than those from healthy volunteers [4], the reason underlying this differential uptake/activity remains obscure at present. Although the size of cfDNA derived from cancer patients have been reported to be smaller than those from healthy individuals, this cannot explain the differential uptake observed by us, since sonication of HMW DNA from cancerous and non-cancerous sources produced similar sized sDNA (Additional file 1: Figure S1) [26]. It is possible that differences in methylation pattern between normal and cancerous DNA may be responsible for this differential uptake [27]. Our observation that bacterial sDNA was taken up most avidly and associated with chromosomes most efficiently while plant sDNA was least efficient begs for a mechanistic explanation. It should be noted that plant DNA, unlike mammalian and bacterial DNA, was labeled in vitro; however, differential labeling is unlikely to affect rate of uptake of DNA. The possibility that the DNA label itself was being taken up is excluded by our chromosomal association experiments wherein we show discrete, rather than diffuse, fluorescent signals on chromosomes (Fig. 4a). The mechanism by which sDNAs get integrated into host cells genomes will also require further research. A recent report by Overballe-Petersen et al. [28] is relevant to our study. These authors showed that degraded DNA isolated from 43,000-year old woolly mammoth bone could be taken up by naturally competent environmental bacteria to incorporate small fragments (>20 bp) into their genomes [28]. These transformations involved DNA recombination which was independent of RecA recombinase. The authors suggested that such a natural genetic exchange may have a strong possibility of driving bacterial evolution [28]. We have proposed, based on our earlier findings, that cfDNA that circulate in blood may act as a new class of intra-corporeal mobile genetic elements in light of their ability to enter into healthy cells, integrate into their genomes and activate biological responses [3]. Our present findings extend this possibility by showing that fragmented DNA, irrespective of their source, can be indiscriminately taken up by other cells without heed to species or kingdom boundaries. 
This leads us to propose that fragmented DNAs may act as mobile genetic elements with evolutionary implications and be involved in horizontal gene transfer (HGT). HGT is known to occur in bacteria and unicellular eukaryotes and thought to play an important role in their evolution [29][30][31]. Whether HGT occurs in higher organisms is less clear [32][33][34]. A recent study has, however, provided evidence that for HGT in vertebrate and invertebrate genomes has also occurred in a large scale [35]. However, the mechanism(s) that underlie the transfer of genes between organisms or animals has remained unclear. Our study may help to provide a mechanistic explanation of HGT by suggesting that small fragments of DNA from unicellular organisms and cells of higher species and taxa can be horizontally transferred to others. Our results also suggest a possible mechanism for vertical transfer of genes whereby fragmented extraneous DNA could get incorporated into germ line cells. Apoptosis, or apoptosis-like processes, are an evolutionarily conserved and are known to occur in bacteria [36,37], metazoa [38] as well as in higher organisms [39]. Environmental stress is thought to shape adaptation and evolution of species and the role of extreme environmental stress have been emphasized by many [40][41][42]. We propose that extreme environment stress leading to organismal cell-death and DNA fragmentation/degradation may have played an important role in HGT. The incoming DNA fragments may cause mutations, activate resident genes or even lead to expression of novel genes if the DNA fragments were to be large enough to accommodate such genes. Conclusion We show that while HMW DNAs from mammalian (cancerous and non-cancerous), bacteria and plant sources were incapable of entering into cells, sDNA from the above sources could do so indiscriminately without heed to species or kingdom boundaries. Thus, sDNA from human cells and those from bacteria and plant could enter into nuclei of mouse cells and sDNA from human, bacterial and plant sources could spontaneously enter into bacteria. The intracellular sDNA associated themselves with host cell chromosomes and integrated into their genomes. Furthermore, sDNA, but not HMW DNA, from mammalian, bacterial and plant sources could phosphorylate H2AX and activate the pro-inflammatory transcription factor NFκB in mouse cells, indicating that sDNAs had acquired biological activities.
5,019
2017-08-09T00:00:00.000
[ "Biology" ]
Completing the eclectic flavor scheme of the $\boldsymbol{\mathbb Z_2}$ orbifold We present a detailed analysis of the eclectic flavor structure of the two-dimensional $\mathbb Z_2$ orbifold with its two unconstrained moduli $T$ and $U$ as well as $\mathrm{SL}(2,\mathbb Z)_T\times \mathrm{SL}(2,\mathbb Z)_U$ modular symmetry. This provides a thorough understanding of mirror symmetry as well as the $R$-symmetries that appear as a consequence of the automorphy factors of modular transformations. It leads to a complete picture of local flavor unification in the $(T,U)$ modulus landscape. In view of applications towards the flavor structure of particle physics models, we are led to top-down constructions with high predictive power. The first reason is the very limited availability of flavor representations of twisted matter fields as well as their (fixed) modular weights. This is followed by severe restrictions from traditional and (finite) modular flavor symmetries, mirror symmetry, CP and $R$-symmetries on the superpotential and Kaehler potential of the theory. Introduction In this paper we extend our previous discussion [1] of the T 2 /Z 2 orbifold. T 2 /Z 2 is the only two-dimensional orbifold with two unconstrained moduli T , U that transform under SL(2, Z) T × SL(2, Z) U and under mirror symmetry, which interchanges T and U . Hence, it can serve as a building block for the discussion of six-dimensional orbifolds. 1 In our previous study, ref. [1], we had identified the traditional flavor symmetries and the finite modular symmetries Γ N for the T 2 /Z 2 orbifold. The groups Γ N (for small N ) are isomorphic to groups like S 3 , A 4 , S 4 and A 5 that could be suitable for a description of discrete flavor symmetries in particle physics [7][8][9]. Modular symmetries, however, require more than just a discussion of the finite modular groups Γ N . In addition, we have to include automorphy factors corresponding to the explicit modular weights of matter fields. In the present paper, we discuss the implications of these automorphy factors in the case of the T 2 /Z 2 orbifold. Once they are taken into account, we find an extension of the finite modular flavor symmetry in form of an R-symmetry, which implies further restrictions to the superpotential and Kähler potential of the theory. This is one of the reasons why a modular flavor symmetry has more predictive power than traditional flavor symmetries. In the top-down approach (which we adopt here), this extension of the symmetry reflects the symmetries of the underlying string theory, which restrict the modular weights to well-defined specific values. 2 In the bottom-up approach to modular flavor symmetries, the choice of the modular weights of matter fields is part of model building and can be used to obtain so-called "shaping symmetries" that appear as additional accidental symmetries for specific choices of the modular weights [10,11]. The main results of the paper can be summarized as follows: • We identify the full eclectic flavor symmetry [12] of the T 2 /Z 2 orbifold to be It includes a Z R 4 R-symmetry that originates from the discussion of the automorphy factors and extends the order of the eclectic flavor group from 2304 to 4608. With CP, the order of the eclectic flavor symmetry is further enhanced to a group of order 9216. • We provide a discussion of the landscape of flavor symmetries in (T, U )-moduli space and identify the local unified flavor groups at specific points and lines in this moduli space. 
The results are given in figure 3, accompanied by an explicit discussion of the flavor symmetries in the cases of two specific geometrical shapes (the tetrahedron and the squared raviolo) as well as T ↔ U mirror symmetry in section 4. • We observe a specific relation between mirror symmetry and the allowed values of modular weights of matter fields (discussed explicitly in section 3). • The additional R-symmetry is closely related to the modular symmetry and leads to further constraints on the allowed values of modular weights of matter fields. Hence, it further restricts the form of superpotential and Kähler potential, as explicitly discussed in section 5. • We discover the appearance of continuous gauge symmetries for specific configurations in moduli space. The paper is structured as follows. In section 2, we recall the results of our previous study [1]. Section 3 discusses the automorphy factors and modular weights of matter fields. We identify the additional R-symmetry and the extended eclectic flavor group accordingly. This includes a discussion of the interplay of the modular weights with both, T ↔ U mirror symmetry and the R-symmetry. In section 4 we analyze the unified local flavor groups that appear at specific points, lines and other hyper-surfaces in moduli space. The results including CP are illustrated in figure 3. Section 5 is devoted to the discussion of the superpotential and Kähler potential. We observe the appearance of continuous gauge symmetries for certain configurations of the moduli (naïvely, they might appear as accidental symmetries, but they are consequences of underlying symmetries in string theory). In section 6 we give conclusions and outlook. Technical details are relegated to several appendices that complete the general discussion of ref. [1]. What do we know already? Technical details of the eclectic flavor symmetries of T 2 /Z K orbifolds (K = 2, 3, 4, 6) have been given in section 2 of ref. [3]. In the cases K > 2, the complex structure modulus U has to be fixed to allow for the orbifold twist. For T 2 /Z 2 , in contrast, we have two unconstrained moduli T and U with the corresponding modular transformations SL(2, Z) T × SL(2, Z) U . For generic values of the moduli, we find the traditional flavor symmetry (we use the Small Group notation from GAP [13]) (D 8 × D 8 )/Z 2 ∼ = [32,49] (2.1) as the result of geometry and string selection rules (see refs. [14,15]) or, equivalently, as a result of the outer automorphisms of the (Narain) space group that describes the orbifold [4,5]. Furthermore, the finite modular symmetry for the T 2 /Z 2 orbifold is shown to be the multiplicative closure of Γ T 2 × Γ U 2 = S T 3 × S U 3 and mirror symmetry (which exchanges T and U ), as discussed in ref. [1]. The full mirror symmetry acting on the matter fields turns out to be ZM 4 (which acts on the moduli as Z 2 , cf. [16]). This leads to the finite modular group [144,115]. If we include a CP-like symmetry acting on the moduli as T → −T and U → −U , the finite modular group enhances to In combination with the generators of the traditional flavor symmetry [32,49], we obtained an eclectic flavor group with 4608 elements (2304 without CP). Only some of the eclectic flavor symmetries are linearly realized. For generic values of the moduli just the traditional flavor group [32,49] remains unbroken. For specific "geometrical" configurations, this symmetry is enhanced to a larger subgroup of the eclectic flavor group (via the so-called stabilizer subgroups). 
The generators of the unbroken groups are displayed explicitly in figure 7 of ref. [1]. Relevant values correspond to the moduli U = i (the squared raviolo) and U = exp(πi/3) (the tetrahedron) as well as the line T = U as a consequence of mirror symmetry. At T = U , we find the enhancement of [32,49] to [64,257]. For the tetrahedron, the group [32,49] is enhanced to [96,204] as discussed in section 4.2 of ref. [1], while for the raviolo, we shall see here in section 4 that [32,49] is enhanced to [128,523]. If we include the CP-like transformation, we gain a further enhancement of the number of elements by a factor of two. The largest linearly realized subgroup of the eclectic flavor group (including CP) was found (in ref. [1]) to be [1152,157463] at T = U = exp(πi/3). So far the results are based on the finite modular groups. A full discussion of modular symmetries should, however, not only include the finite symmetries Γ N (here Γ T 2 × Γ U 2 = S T 3 × S U 3 ), but also the so-called automorphy factors that arise from the non-trivial (fractional) modular weights (n T , n U ) of SL(2, Z) T × SL(2, Z) U . This leads to further restrictions on the action (given by Kähler and superpotential) of the theory with an enhancement of the symmetries. As discussed in refs. [2,3], these automorphy factors lead to discrete phases resulting in R-symmetries. In our previous paper [1], for the sake of clarity and simplicity, we had not included these automorphy factors in our discussion. We shall include them in the following in full detail. Discrete R-symmetries and mirror symmetry In this section, we show that a T 2 /Z 2 orbifold sector gives rise to a Z R 4 symmetry that originates from modular transformations, where the automorphy factors of certain modular transformations give rise to the R-charges. As the T 2 /Z 2 orbifold sector is equipped with two moduli, T and U , there exists a modular group for each of them, SL(2, Z) T and SL(2, Z) U , each associated with a modular weight (n T , n U ). Since R-charges can be defined in terms of both modular groups, these modular weights are highly constrained. Furthermore, we give a detailed discussion about the action of mirror symmetry on matter fields and discover a new relation between mirror symmetry and the R-symmetry. Automorphy factors of modular transformations Let us consider a general matter field Φ (n T ,n U ) originating from string theory with modular weights n T and n U corresponding to SL(2, Z) T and SL(2, Z) U . Then, under a (non CP-like) modular transformationΣ ∈ Oη(2, 2, Z), the field transforms as Here, j (n T ,n U ) (Σ, T, U ) is the automorphy factor of the modular transformation and ρ r (Σ) is the representation matrix ofΣ that forms a representation r of the finite modular group, as derived in appendix D. The modular weights of matter fields can be computed in string theory, as reviewed in appendix B, and it turns out that, apart from n T = n U , there is also the possibility in string theory. In order to determine the automorphy factor j (n T ,n U ) (Σ, T, U ), we might use as a first step the analogy to Siegel modular forms based on Sp(4, Z). However, Siegel modular forms are defined for parallel weights n := n T = n U only. In this case, following refs. [16,17], we have as defined in ref. [18]. Then, eq. (3.3) yields Using the dictionary [18] that relates Sp(4, Z) with the modular group Oη(2, 2 + 16, Z) of our string setting, the Sp(4, Z) element M (γ T ,γ U ) is equivalent toΣ (γ T ,γ U ) ∈ Oη(2, 2+16, Z) defined in appendix A.1. 
We take this as a motivation to define also for the case n T = n U . It turns out that for the other casesΣ =Σ (γ T ,γ U ) one can use the automorphy factor eq. (3.3) for the element M ∈ Sp(4, Z) that corresponds toΣ using again the dictionary of ref. [18]. It is important to note that the resulting automorphy factor will be independent of the specific choice n = n T or n = n U , since n T = n U mod 2. In the following, we will see this explicitly at some examples. Discrete R-symmetry In the T 2 /Z 2 orbifold sector, a Z 2 sublattice rotation is given bŷ i.e. in the Narain formulation,Θ (2) = −1 4 is a left-right symmetric 180 • rotation in the T 2 /Z 2 orbifold sector that leaves the orthogonal compact dimensions invariant, see refs. [2,3]. It is an outer automorphism of the full Narain space group of the six-dimensional orbifold and, bulk matter twisted matter As the generalized metric H(T, U ) is invariant under a transformation (A.29) withΘ (2) , this Z 2 sublattice rotation leaves T and U invariant. Hence, the sublattice rotationΘ (2) corresponds to a traditional flavor symmetry. Still, the transformation withΘ (2) originates from a modular transformation,Θ (2) =Ĉ 2 S =K 2 S . So, we expect the appearance of an automorphy factor. Since R ∈ SL(2, Z) T and R ∈ SL(2, Z) U are identified in Oη(2, 2, Z), we have to ensure that we compute the automorphy factor correctly: we can use either the factor (c T T + d T ) n T = (−1) n T or (c U U + d U ) n U = (−1) n U for the transformation R. Yet, the resulting automorphy factor must coincide in both cases, (−1) n T = (−1) n U . Hence, we see that This relation is satisfied for all (massless) matter from the T 2 /Z 2 orbifold sector, as one can see from table 1. Moreover, eq. (3.9) also holds for all massive strings, as shown in appendix B. Consequently, having control over the automorphy factor, we can choose R ∈ SL(2, Z) U and the modular weight n U in the following. The action ofΘ (2) on matter fields Φ (n T ,n U ) with SL(2, Z) U modular weights n U ∈ {0, −1, −1 /2, 1 /2, −3 /2}, as listed in table 1, is given by (3.11) Here, we used that ρ r (Θ (2) ) = ρ r (K S ) 2 = ρ r (Ĉ S ) 2 = 1. For the allowed modular weights n U ∈ {0, −1, −1 /2, 1 /2, −3 /2} the multivalued phase factor gives rise to Z R 4 R-charges R ∈ {0, 2, 3, 1, 1}, respectively. Thus, for the T 2 /Z 2 orbifold sector we find that the R-charge R is given in terms of the modular weight n U (or n T ) as cf. ref. [19]. Note that due to the fractional modular weights n U , (Θ (2) ) 2 gives a non-trivial transformation with charges 2R = 4n U mod 4 that turns out to be equivalent to the point group selection rule of eq. (D.24a). Since the R-symmetry transformation acts trivially on all moduli, it belongs to the traditional flavor symmetry, which gets enhanced to where the Z 2 in the latter quotient identifies the point group selection rule of T 2 /Z 2 contained in both the Z R 4 and the traditional symmetry (D 8 × D 8 ) /Z 2 . In string theory, modular symmetries are anomaly-free (see e.g. [20,21] for details on anomaly cancellation for modular symmetries). Hence, since the Z R 4 R-symmetry arises from modular symmetries, it is anomaly-free. Due to the Z R 4 R-symmetry, the eclectic flavor group of a T 2 /Z 2 orbifold sector gets extended to which results in a group of order 4608. Including a CP-like transformation, the order of the eclectic flavor group is further enhanced to a group of order 9216. 
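Because several displayed equations in this section did not survive text extraction, the following LaTeX block restates, in hedged form, the two relations that the argument above relies on: the transformation of a matter field under a modular transformation with its automorphy factor, and the resulting Z₄^R charge. Both are reconstructed from the surrounding definitions and from the listed weight/charge pairs (n_U ∈ {0, −1, −1/2, 1/2, −3/2} mapping to R ∈ {0, 2, 3, 1, 1}); the precise normalization conventions are assumptions, not quotations of the original equations.

```latex
% Schematic transformation of a matter field under a modular transformation
% \hat\Sigma associated with (\gamma_T,\gamma_U) \in SL(2,\mathbb{Z})_T \times SL(2,\mathbb{Z})_U:
\Phi_{(n_T,n_U)} \;\to\;
  (c_T\,T + d_T)^{n_T}\,(c_U\,U + d_U)^{n_U}\;
  \rho_{\boldsymbol r}(\hat\Sigma)\,\Phi_{(n_T,n_U)}\,,
% where the product of the two factors plays the role of the automorphy factor
% j_{(n_T,n_U)}(\hat\Sigma,T,U) and \rho_{\boldsymbol r}(\hat\Sigma) is the
% representation matrix of the finite modular group.

% Z_4^R charge consistent with the listed modular weights and R-charges
% (using n_T = n_U mod 2, so that the assignment is unambiguous):
R \;\equiv\; 2\,n_U \;\equiv\; 2\,n_T \pmod{4}\,.
```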
The action of mirror symmetry on matter fields In order to analyze the action of mirror symmetryM ∈ Oη(2, 2, Z) on matter fields in our string setup, we have to determine the automorphy factor first. Using the results of section 3.1, we consider the mirror element in Sp(4, Z), which reads Thus, the automorphy factor (3.3) of a mirror transformation is given by (−1) n . Since n T = n U mod 2 as derived in eq. (3.9), one can assign n = n U and the automorphy factor of a mirror transformationM is given as (−1) n U , without loss of generality. Moreover, note that the Z R 4 R-charges given in eq. (3.11) are analogously defined. Thus, the automorphy factor ofM can be removed using the Z R 4 R-symmetry, as we will do in the following. Now, let us assume that under a mirror transformationM we have for a matter field Φ (n T ,n U ) that transforms in the representation r of the finite modular group 115] (where we have absorbed the automorphy factor (−1) n U using Z R 4 ). In the following, we will see that the transformation (3.16) is correct for n T = n U , but must be modified in the case n T = n U . To do so, let us consider the following chain of transformations under the assumption that eq. (3.16) were correct. However, a mirror transformation maps an elementΣ (1 2 ,γ U ) ∈ SL(2, Z) U to an elementΣ (γ U ,1 2 ) ∈ SL(2, Z) T , see eq. (A.25). Thus, eq. (3.17c) must be equal to where the 2 × 2 matrix γ T ∈ SL(2, Z) T inΣ (γ T ,1 2 ) has to be equal to the matrix γ U used in eq. (3.17b). Now, we have to distinguish between two cases: First, in the case of so-called parallel weights (i.e. if n T = n U ) the representation matrices ρ r (Σ (γ U ,1 2 ) ) and ρ r (Σ (1 2 ,γ U ) ) have to be related as follows and eq. (3.17c) coincides with eq. (3.18). In the second case (i.e. if n T = n U ), the (preliminary) chain of transformations given in eq. (3.17) has to be modified Then, we have to impose condition (3.19) and, consequently, eq. (3.20c) coincides with eq. (3.18) usingΣ (γ T ,1 2 ) with γ T equal to γ U . In other words, for each matter field Φ (n T ,n U ) with n T = n U (satisfying the constraint (3.9)) there must exist a partner field, denoted by Φ (n U ,n T ) , which coincides in all properties with Φ (n T ,n U ) but has interchanged modular weights. Then, a mirror transformation has to act on matter fields (Φ (n T ,n U ) , Φ (n U ,n T ) ) as and eq. (3.19) has to hold. On the other hand, the transformationsK S ,K T ,Ĉ S andĈ T act diagonally on (Φ (n T ,n U ) , Φ (n U ,n T ) ). For example, after an appropriate basis change one can check using the character (see table 1). Finally, in appendix B we show in string theory why a mirror transformation interchanges Φ (n T ,n U ) and Φ (n U ,n T ) if n T = n U , as we have seen in this bottom-up discussion that has lead to eq. (3.21). At these points, the couplingsŶ where c T , d T , c U , d U are integers that define γ. This relation can be straighforwardly extended to include stabilizer elements that involve the mirror symmetryM and the CP-likeΣ * generators, as they do not induce automorphy factors, see the discussion in section 3.3 and appendix D.3.2. Since the representation ρ 4 3 (γ) of the finite modular group is unitary, its eigenvalues and hence the automorphy factors must be phases, see also ref. [3, section 6]. Note that eq. (4.5) corresponds to the mechanism of flavon alignment in the context of modular flavor symmetries, as also discussed in e.g. refs. [23][24][25]. 
A consequence of the automorphy factors being phases at ( T , U ) is that the modular transformations from H ( T , U ) act linearly on matter fields. Hence, the stabilizer enhances the traditional flavor symmetry to the multiplicative closure of the traditional flavor group and the stabilizer modular subgroup, i.e. to the so-called unified flavor group (4.6) Explicitly, from eqs. (3.1) and (3.6), the action of a (non-CP-like) stabilizer element γ ∈ H ( T , U ) on a field Φ (n T ,n U ) with modular weights (n T , n U ) is given by where ρ t,( T , U ) is a t-dimensional representation t of the unified flavor group, whereas r is a representation of the finite modular group [144,115]. We stress here the presence of the automorphy factors in the transformation (4.7), which can enhance the order of the unbroken transformations due to the possibility of fractional weights of matter fields. Unified flavor groups at generic points in moduli space Even for generic values of the moduli, the traditional flavor group is enhanced for T = U . In this case, the mirror transformationM is left unbroken. Considering its ZM 4 action on matter fields, as given by eqs. (3.16) or (3.21), we find that the unified flavor group in this case is given by [128,2316]. We can consider also the case that one of the moduli has a generic value while the second modulus is fixed at one of the special points, i or e πi /3 . In figure 1, we display the different unified flavor groups achieved by incorporating the unbroken modular transformations at those special points in moduli space. Let us consider the results for generic U , as presented in figure 1a. At T = i, we know (cf. ref. [1]) that the stabilizer subgroup is generated byK S , which becomes a Unified flavor groups of the raviolo At U = i (cf. ref. [11]), the geometry of the T 2 /Z 2 orbifold adopts the form of a square raviolo, where the corners correspond to the singularities of the orbifold and the edges are perpendicular and have the same length. As just mentioned, in this case the traditional flavor There are two special points in moduli space for the raviolo, where further enhancements occur if the CP-like modular transformationΣ * is considered. First, at T = i we find the stabilizer H (i,i) = Ĉ S ,M ,Σ * , which enhances the traditional flavor symmetry to [32,49] is the traditional flavor group without Z R modulus fixed at T = i Im T > i is left invariant by Ĉ S ,Σ * . Acting on matter fields along with their corresponding automorphy factors, this yields the unified flavor group [256,25882]. Furthermore, the traditional flavor symmetry is enhanced to [256,6341] along the regions of the locus λ T where T = e iϕ with π /3 < ϕ < π /2, and T = 1 /2 + i Im T with Im T > √ 3 /2. In these regions, the stabilizers are given by Ĉ S ,K SΣ * and Ĉ S ,K TΣ * , respectively. Unified flavor groups of the tetrahedron When the complex structure is stabilized at U = e πi /3 , the T 2 /Z 2 orbifold sector has the shape of a tetrahedron, cf. ref. [1, figure 5]. As can be read off from figure 1b, in the tetrahedron with a generic value for T , theĈ TĈS modular transformation leaves the moduli invariant and enhances the traditional flavor group to the unified flavor group [192,1509], the generic flavor symmetry of the tetrahedron. This contains discrete R-symmetries due to the inclusion of the discrete rotation in the compact space generated byĈ TĈS . Along the locus λ T , there are two more enhancements of the traditional flavor group. 
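The identification of SL(2,Z)_T and SL(2,Z)_U elements under mirror conjugation, which underlies the enhancement at T = U, can be verified at the level of the moduli alone (ignoring automorphy factors and representation matrices): conjugating the action of an SL(2,Z)_U element by the mirror map T ↔ U reproduces the action of the corresponding SL(2,Z)_T element, cf. eq. (A.25). A minimal numerical sketch, with an arbitrary point (T, U) chosen purely for illustration:

```python
def moebius(gamma, tau):
    """Action of gamma = ((a, b), (c, d)) in SL(2,Z) on a modulus tau."""
    (a, b), (c, d) = gamma
    return (a * tau + b) / (c * tau + d)

def act_U(gamma, moduli):      # an element Sigma_(1, gamma) of SL(2,Z)_U
    T, U = moduli
    return (T, moebius(gamma, U))

def act_T(gamma, moduli):      # an element Sigma_(gamma, 1) of SL(2,Z)_T
    T, U = moduli
    return (moebius(gamma, T), U)

def mirror(moduli):            # the mirror transformation M: T <-> U
    T, U = moduli
    return (U, T)

gamma = ((1, 1), (0, 1))                 # the T generator of eq. (A.22), as an example
point = (1.3 + 0.9j, 0.4 + 2.1j)         # an arbitrary point (T, U) in moduli space

# M o Sigma_(1, gamma) o M^{-1} acts like Sigma_(gamma, 1): SL(2,Z)_U is mapped
# to SL(2,Z)_T by mirror conjugation (M is its own inverse)
lhs = mirror(act_U(gamma, mirror(point)))
rhs = act_T(gamma, point)
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```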
For T = i Im T > i, not onlyĈ TĈS leaves the moduli invariant but alsoĈ TΣ * . This implies that the flavor symmetry of the tetrahedron [192,1509] is enhanced to the unified flavor group [384,20097]. Besides, for T = e iϕ with π /3 < ϕ < π /2, the stabilizer is Ĉ TĈS ,K SĈSΣ * and leads to the unified flavor group [384,20098]. The same flavor enhancement is obtained if the Kähler modulus sits at T = 1 /2 + i Im T with Im T > √ 3 /2, where the stabilizer is generated byĈ TĈS andK TĈSΣ * . A 4 flavor symmetry from the tetrahedron Let us turn back to the case of the tetrahedron, U = e πi /3 , with a generic value for the Kähler modulus. In this case, according to eq. (4.7), the stabilizer modular generatorĈ TĈS acts on matter fields as where due to the automorphy factor (c U U + d U ) n U we get The admissible modular weights of massless matter fields are n U ∈ {0, −1} for bulk matter, and n U ∈ { −3 /2, −1 /2, 1 /2} for twisted matter. Hence, eq. (4.10) describes a Z 6 transformation that we can write as Z 2 × Z 3 , generated by respectively. Using the admissible modular weights, we note that the Z 2 factor in eq. (4.12a) corresponds to the Z (PG) 2 point group selection rule of the T 2 /Z 2 orbifold sector. Moreover, the Z 3 factor eq. (4.12b) acts on the superpotential W as which is a discrete Z R 3 R-symmetry using the definition ω := exp ( 2πi /3). The group generated by the traditional flavor group elements ρ 4 (h 1 ), ρ 4 (h 2 ) from eq. (D. 21) together with the Z (PG) 2 factor from eq. (4.12a) and the Z R 3 factor from eq. (4.12b) turns out to be (4.14) Here, the alternating group A R 4 is a non-Abelian R-symmetry as it arises from ρ 4 (h 1 ), ρ 4 (h 2 ) and the Z R 3 R-symmetry, cf. ref [26] for a general discussion on non-Abelian R-symmetries. The matter fields and the superpotential W build the following representations of A R where we denote the irreducible representations of Z 2 by 1 0 and 1 1 and the ones of A R 4 by 3, 1, 1 and 1 , see appendix E.4. Combined with the Z R 4 symmetry associated with the sublattice rotationΘ (2) and the Z 2 × Z 2 generators ρ 4 (h 3 ) and ρ 4 (h 4 ) of the traditional flavor group, associated with the space group selection rule of the T 2 /Z 2 orbifold sector, the group A R 4 × Z 2 enhances to the unified flavor group of the tetrahedron, [192,1509], see figure 2b. Compared to the literature (see e.g. refs. [27][28][29]), we see that in the consistent string approach the naïve A 4 symmetry obtained by the compactification of two extra dimensions on a tetrahedron has to be extended in two ways: First, the Z 3 generator of A R 4 turns out to be an R-symmetry. This can be understood equivalently either as a discrete remnant of the extra-dimensional Lorentz symmetry or as a discrete remnant of an SL(2, Z) U modular symmetry. In addition, A R 4 is enhanced in the full string approach by stringy selection rules to [192,1509] of order 192, which still contrasts with previous results [14]. symmetry arising from embedding the orbifold sector in higher dimensions, ii) the CP-like modular transformationΣ * , and iii) the phases associated with the automorphy factors of modular transformations acting on matter fields Φ (n T ,n U ) . We use as reference axes the straightened lines describing the boundaries λ T and λ U of the T and U moduli spaces, respectively. Mirror symmetryM acts on the moduli and the modular generators as U ↔ T ,Ĉ S ↔K S , andĈ T ↔K T . 
As a first consequence, the points along the diagonal in figure 3, defined by T = U , are left invariant byM . Furthermore, the points below and above this diagonal are connected by a mirror transformation. It then follows thatM identifies the unified flavor groups in these two sectors of moduli space. Other CP-enhanced unified flavor groups Focusing on the lower half of the plane, below the diagonal of figure 3, we see that the sole enhancements that have not been discussed in the preceding subsections are those that lie at the diagonal, and those that are valid in the squared and triangular regions of figure 3. Let us consider two examples. In the lowest part of the diagonal, the stabilizer modular subgroup is H (ix,ix) = M ,Σ * , with x > 1. Considering the associated transformations on matter fields with their corresponding automorphy factors, as given in appendix D.3.2, we find that the traditional flavor group [64,266] is in this case enhanced to [256,56079]. Similarly, in the bottom triangle of the figure, the stabilizer is given just by H (ix,iy) = Σ * , with x, y > 1 and y > x, which yields the unified flavor group [128,2326]. Other cases can be easily determined by using the proper stabilizer subgroups provided in our previous work [1, figure 7]. Effective field theory of the Z 2 orbifold In this section, we focus on four 4-plets of twisted matter fields that we label by the winding numbers (n 1 , n 2 ) ∈ {(0, 0), (1, 0), (0, 1), (1, 1)} and not by the modular weights n T and n U . Hence, each twisted matter field φ i (n 1 ,n 2 ) is localized at one of the four fixed points of the T 2 /Z 2 orbifold sector, see appendix A and ref. [1, figure 1]. In other words, we consider four 4-plets We assume that the 4-plets differ in some additional charges, for example with respect to the unbroken gauge group from E 8 × E 8 (or SO (32)). Then, we use the eclectic flavor symmetry to write down the most general Kähler and superpotential to lowest order in these fields. The Kähler potential The Hermitian Kähler potential K of a single twisted matter field Φ ( −1 /2, −1 /2) reads to leading order [30] K whereK is an Hermitian bilinear polynomial of the formK = c n 1 n 2 n 1 n 2 φ (n 1 ,n 2 )φ(n 1 ,n 2 ) . In the following, we constrainK by imposing the traditional flavor symmetry step by step. First, the space and point group selection rules (D.24) enforce n 1 = n 1 and n 2 = n 2 , resulting iñ K = c n 1 n 2 φ (n 1 ,n 2 )φ(n 1 ,n 2 ) . Then, invariance under eq. (D.21a) forces all coefficients c n 1 n 2 to be equal (and we normalize them to 1). Hence, Now, we can generalize this easily to four 4-plets of twisted matter fields Φ i ( −1 /2, −1 /2) , for i ∈ {1, 2, 3, 4}. Due to our assumption of additional (gauge) charges that distinguish between φ i (n 1 ,n 2 ) and φ j (n 1 ,n 2 ) for i = j, we obtaiñ Consequently, for the T 2 /Z 2 orbifold sector the traditional flavor symmetry already enforces the Kähler potential to be diagonal in twisted matter fields. Hence, this diagonal structure can not be changed by the full eclectic flavor symmetry: Since additional terms involving modular formsŶ (T, U ) (as suggested by ref. [31]) are singlets of the traditional flavor group, the Kähler potential must remain diagonal, cf. ref. [6]. Yet, additional corrections to the Kähler potential that involve flavons are still possible, cf. ref. [32]. 
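The step in which invariance under eq. (D.21a) forces all coefficients c_{n_1 n_2} to be equal only uses the fact that the corresponding flavor generator permutes the four fixed points transitively. The sketch below illustrates this with a hypothetical cyclic permutation matrix standing in for the matrix of eq. (D.21a), which is not reproduced here; only its fixed-point-permuting character enters:

```python
import numpy as np

# hypothetical flavor generator that cyclically permutes the four fixed points;
# it stands in for the transformation of eq. (D.21a)
rho_h = np.roll(np.eye(4), 1, axis=0)

def is_invariant(coeffs, rho=rho_h, tol=1e-12):
    """Is the diagonal Kaehler metric K = diag(c) invariant, rho^dagger K rho = K?"""
    K = np.diag(np.asarray(coeffs, dtype=float))
    return np.allclose(rho.conj().T @ K @ rho, K, atol=tol)

print(is_invariant([1, 1, 1, 1]))    # True : equal coefficients survive
print(is_invariant([2, 1, 1, 1]))    # False: any unequal choice is projected out
print(is_invariant([1, 2, 1, 2]))    # False
```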
The superpotential To lowest order in twisted matter fields Φ i ( −1 /2, −1 /2) , the superpotential reads schematically Here, we have imposed the Z R 4 R-symmetry and the fact that the modular weights of matter fields and couplings have to add up to (−1, −1) for the superpotential, see table 1. Thus, Y (0) (T, U ) has to carry modular weights (0, 0), while the modular formŶ (2) (T, U ) has modular weights (2, 2). There exists a unique modular form of weight (2, 2), which we denote bŷ Y (2) 4 3 (T, U ) in the following (see eq. (4.1)). In addition, the superpotential has to be covariant under the full eclectic flavor symmetry, i.e. it has to be invariant simultaneously under the traditional non-R symmetries and the finite modular flavor symmetry but transform with the appropriate phases (automorphy factors) under the R-symmetry (modular symmetry). Constraints from the traditional flavor symmetry Let us start with invariance under the traditional flavor symmetry (D 8 × D 8 ) /Z 2 ∼ = [32,49]. First, we consider the product of two twisted matter fields Φ i ( −1 /2, −1 /2) Φ j ( −1 /2, −1 /2) needed for the terms (5.4a) in the superpotential W. The fields Φ i ( −1 /2, −1 /2) transform as irreducible 4-plets of the traditional flavor group [32,49]. Hence, we need to consider the tensor product This tensor product contains one trivial singlet 1 ++++ , which corresponds to the terms for i, j ∈ {1, 2, 3, 4}. The total R-charge is 2 as one can see easily from table 1. As a remark, one can check the invariance of the terms I ij 0 explicitly using the orthogonality of the representation matrices given in eq. (D.21). Since Φ (0,0) is a trivial singlet 1 ++++ of [32,49] with R-charge 0, the terms Φ (0,0) I ij 0 ⊂ W are allowed by both, the traditional flavor symmetry and Z R 4 . Next, we study the product of four twisted matter fields in order to construct the superpotential terms in eq. (5.4b). Since we know from eq. (5.5) that there are 16 invariant combinations I i , i ∈ {1, . . . , 16}. We list them in appendix E.2. Consequently, out of the 4 4 = 256 possible terms from , invariance under the traditional flavor symmetry [32,49] allows only 16. Constraints from the modular symmetry As explained in ref. of the finite modular group [144,115]. Only the two 4 3 representations yield invariant terms when they are combined with the modular formŶ (5.10b) Note that the quartic polynomial Q 1 is antisymmetric when The invariant terms in the superpotential then read Hence, the number of unfixed superpotential parameters in eq. (5.4b) is reduced from 16 in the case of imposing only the traditional flavor symmetry to 2 (i.e. c 1 and c 2 ) when we include the constraints from the full eclectic flavor symmetry. This is in contrast to the leading order Kähler potential, see eq. (5.3), where the finite modular symmetry did not yield additional constraints compared to the traditional flavor symmetry. Finally, the superpotential of eq. (5.4) is thus explicitly given by where Q 1 and Q 2 are the quartic polynomials in the twisted matter fields Φ i ( −1 /2, −1 /2) , given in eq. (5.10). Gauge symmetry enhancement in moduli space Let us analyze the "accidental" continuous symmetries of the superpotential eq. (5.12) that appear at special points in (T, U ) moduli space, cf. ref. [3] for the analogous discussion in the case of the T 2 /Z 3 orbifold sector. We assume that the four twisted matter fields Φ i transform identically under the enhanced symmetry. 
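As an aside to the invariant counting used in the preceding subsections -- a single trivial singlet in 4 ⊗ 4 and 16 invariants in the product of four 4-plets -- these numbers follow from character averaging in any explicit realization of the traditional flavor group (D_8 × D_8)/Z_2 ≅ [32, 49] on a 4-plet. The sketch below builds such a realization from Kronecker products of the standard two-dimensional D_8 matrices; this is a convenient illustrative basis, not the basis of eq. (D.21):

```python
import numpy as np

# two-dimensional D8 matrices: a 90-degree rotation r and a reflection s
r = np.array([[0, -1], [1, 0]])
s = np.array([[1, 0], [0, -1]])

# realize (D8 x D8)/Z2 on C^2 (x) C^2 via Kronecker products
gens = [np.kron(g, np.eye(2, dtype=int)) for g in (r, s)] + \
       [np.kron(np.eye(2, dtype=int), g) for g in (r, s)]

group = {tuple(np.eye(4, dtype=int).flatten())}
frontier = list(group)
while frontier:                                   # close the set under multiplication
    new = []
    for flat in frontier:
        m = np.array(flat, dtype=int).reshape(4, 4)
        for g in gens:
            prod = tuple((g @ m).flatten())
            if prod not in group:
                group.add(prod)
                new.append(prod)
    frontier = new

print("group order:", len(group))                 # 32

def n_singlets(k):
    """Number of trivial singlets in the k-fold tensor power of the 4-plet."""
    chars = [np.trace(np.array(flat, dtype=int).reshape(4, 4)) for flat in group]
    return round(sum(int(c) ** k for c in chars) / len(group))

print("trivial singlets in 4 x 4        :", n_singlets(2))   # 1
print("trivial singlets in 4 x 4 x 4 x 4:", n_singlets(4))   # 16
```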
To identify continuous symmetries, we define a general U(4) transformation that leaves the Kähler potential of Φ i Table 2: Gauge symmetry enhancements by Lie algebra elements t in the case of the T 2 /Z 2 orbifold sector at special points in moduli space, uncovered as "accidental" symmetries of the superpotential due to the alignment of couplings in flavor space. Note that the field basis of twisted matter fields Φ i ( −1 /2, −1 /2),g with well-defined gauge charges differs from the field basis of localized twisted strings Φ i ( −1 /2, −1 /2) using the basis change M g in eq. (5.14). symmetries for general values of the moduli (T, U ), we do observe subgroups of U(4) being unbroken at special values of the moduli. We discuss such cases in the following. Note that from the top-down perspective of string theory, the appearance of continuous symmetries is expected. As discussed in appendix C, these "accidental" symmetries are actually gauge symmetries: at special points in moduli space some winding strings become massless, giving rise to the gauge bosons of enhanced gauge symmetries. Consequently, the enhanced symmetries that we uncover in this section are exact symmetries to all orders in the superpotential. In the following, we briefly discuss the results for three special configurations: i) T = U for generic U , ii) T = U = ω, and iii) T = U = i. In each case, we first evaluate the superpotential eq. (5.12) in the respective vev configuration by analyzing the alignment of the couplingsŶ (2) 4 3 ( T , U ) in flavor space. Then, we identify the unbroken Lie algebra elements t := α a T a from the transformation (5.13). Afterwards, we perform a (unitary) basis change such that the unbroken Lie algebra elements t g := M g t M −1 g are (block-)diagonalized. Finally, we identify the continuous symmetry and the charges (or representations) of the twisted matter fields Φ i ( −1 /2, −1 /2),g . The results are summarized in table 2. At T = U , there appears an enhanced U(1) symmetry. Note that the traditional flavor subgroup Z 4 ⊂ (D 8 × D 8 )/Z 2 generated by h 1 h 3 is a subgroup of this U(1). Hence, one can verify that the traditional flavor symmetry gets enhanced to where the Z 2 quotient identifies (h 1 h 3 ) 2 from the left factor with (h 2 h 4 ) 2 from the right factor. At T = U = i in moduli space, there is an enhanced U(1) 2 symmetry. In this case, the traditional flavor subgroup generated by the order 4 elements h 1 h 3 and h 2 h 4 is a subgroup of this U(1) 2 symmetry. Hence, the traditional flavor symmetry is enhanced to where the Z 2 quotient identifies (h 1 h 3 ) 2 with (h 2 h 4 ) 2 , as before. Conclusions and Outlook We performed a detailed analysis of the modular symmetries of the T 2 /Z 2 orbifold which (among others) might be relevant for the (discrete) flavor symmetries of string compactifications with an elliptic fibration. The T 2 /Z 2 case has two unconstrained moduli with SL(2, Z) T × SL(2, Z) U modular symmetry and allows contact with previous bottom-up constructions that have more than one modulus [16,17,[33][34][35]. In the present paper, we completed the discussion of our earlier work [1] now including the automorphy factors of modular symmetry. This leads to an additional R-symmetry Z R 4 (for the given modular weights of matter fields) that plays the role of a so-called "shaping symmetry" and extends the discrete flavor symmetry. In more detail, the traditional flavor symmetry of ref. 
[1] is extended from [32,49] via Z R This picture reveals the fact that the top-down discussion of modular flavor symmetry constitutes an extremely restrictive scenario, which is confirmed in other top-down scenarios [36][37][38][39][40]. As in the case of the bottom-up discussion, firstly the role of (otherwise freely chosen) flavons is played by the moduli T and U , and secondly we arrive at a specific finite modular group, being Γ T 2 × Γ U 2 = S T 3 × S U 3 for the T 2 /Z 2 orbifold. In addition, we have to consider the restrictions from the automorphy factors with modular weights fixed from string theory (in contrast to the bottom-up case where these values can be chosen freely). Moreover, in addition to the finite modular symmetry, string theory provides a traditional flavor symmetry, which gives severe restrictions for Kähler-and superpotential of the theory (discussed in section 5). Finally, the representations of the relevant matter fields of the traditional and modular flavor symmetries are determined by the theory. We summarize them (along with the corresponding modular weights) in table 1. Compared to the earlier discussions [4,5] where one modulus was frozen, the two-modulus case allows a full understanding of mirror symmetry (as discussed in section 3 and appendix B), including the situation of matter fields whose modular weights n T and n U differ from each other, see eq. (3.9). In this case, mirror symmetry requires the presence of matching representations where n T and n U are interchanged. We observe enhancements of the traditional flavor group at specific locations in moduli space. These unified flavor groups are discussed in section 4 and summarized in figure 3. The largest group is located at T = U = exp( πi /3) and has 2304 elements (including CP). We also provide a detailed discussion of the tetrahedral T 2 /Z 2 orbifold, which leads to the group [192,1509] as extension of the traditional flavor group. It includes a "geometrical" A 4 as an R-symmetry, where twisted matter fields transform as 3 ⊕ 1 of this A R 4 , as explained in section 4.3.1. The restrictions on Kähler-and superpotential are discussed in section 5. The traditional flavor symmetry is extremely powerful towards the restrictions on the Kähler potential. As in the T 2 /Z 3 discussed earlier [6], the traditional flavor group restricts the Kähler potential to its trivial diagonal form (5.3): a fact that seems to hold in full generality. In contrast, both symmetries are relevant for the form of the superpotential given in eq. (5.12). The traditional flavor symmetry reduces the 256 terms in eq. (5.4b) down to 16, and the modular flavor symmetry reduces the remaining 16 to 2, see eq. (5.12b). A further special feature of string theory is the possible appearance of continuous gauge (flavor) symmetries in moduli space. At special points in moduli space, winding modes of the string can become massless and are candidates for the gauge bosons, as discussed in section 5.3 (see table 2) and appendix C (with table 3). These symmetries, of course, reflect themselves in the symmetries of the superpotential. From a bottom-up perspective they might appear as accidental symmetries, but from the top-down point of view they correspond to continuous gauge symmetries of string theory. 
Together with our earlier discussion [3] of the T 2 /Z K orbifolds with K = 3, 4, 6, we now have uncovered the basic properties of the flavor symmetries of two-dimensional orbifold compactifications for the case of up to two unconstrained moduli. One might expect that some of these properties will generalize from the case of toroidal orbifolds to more general string compactifications with an elliptic fibration. Moreover, from the string theory point of view, the next step would be the consideration of orbifolds with Wilson lines as additional moduli. This would require the embedding of SL(2, Z) T × SL(2, Z) U and mirror symmetry in the Siegel modular group Sp(4, Z), as discussed in ref. [18], see also refs. [16,17], where bottom-up model building based on Sp(4, Z) has been initiated. We focus here on the case without background Wilson lines. In the Narain formulation of the heterotic string [44][45][46], the string coordinate in extra-dimensional space y is split into right-and left-moving string modes y R and y L , respectively. Then, we define and eq. (A.1) is extended to Here, k ∈ {0, . . . , K − 1} for a Narain twist Θ, with θ R , θ L ∈ SO(2), that is of order K, i.e. Θ K = 1 4 . The Narain twist generates the Narain point group P Narain ∼ = Z K and the orbifold action Y → Θ k Y + EN defines the so-called Narain space group S Narain , Each (conjugacy class) g ∈ S Narain defines a closed string and, therefore, we call g the constructing element. We focus on symmetric orbifolds by setting θ R = θ L = θ and choose Therefore, the Narain lattice Γ is an even, integer, self-dual lattice of signature (2,2). In the absence of Wilson lines, the Narain vielbein E can be parameterized in terms of the geometrical vielbein e, its inverse transposed e −T , the geometrical metric G = e T e and the B-field background B, see for example refs. [47,48] (where we changed the convention from B to −B). Then, a twotorus compactification can be parameterized by a Kähler modulus T and a complex structure modulus U , defined as In the last equation of U , we have taken both two-dimensional column vectors e i of the geometrical vielbein e to be complex numbers, e i ∈ C, so that e 2/e 1 is defined. Note that T determines the strength of the B-field and the area of the extra-dimensional two-torus, while U specifies the shape of the two-torus. It is convenient to associate a generalized metric H to the Narain vielbein E and express H in terms of the moduli T and U , (A.8) see for example ref. [3]. Furthermore, we define the Narain twist in the lattice basis aŝ Let us focus in the following on bulk strings, i.e. on strings that close under the identification eq. (A.3) with constructing element (1 4 , EN ) ∈ S Narain . Then, right-and left-moving momenta p R and p L of a string have to be quantized, because the extra dimensions are compact. As the Narain lattice Γ is self-dual, p R and p L must belong to Γ, too. Hence, In order to identify (massless) string states from the bulk, one has to consider the right-and left-moving mass equations where In eq. (A.13a), q = (q 0 , q 1 , q 2 , q 3 ) denotes the bosonized momentum of the right-moving worldsheet fermions. It is called the H-momentum. q has to be an element of one of the following weight lattices of SO(8): either the vector lattice 8 v or the spinor lattice 8 s , see for example ref. [49]. The shortest H-momenta q satisfy q 2 = 1, i.e. 
Here, in the first case (8 v ), the underline denotes all permutations and, in the second case (8 s ), the number of plus-signs must be even. The first component q 0 of q defines the fourdimensional chirality. For example, q 0 = 0 yields a scalar. Note that in the four-dimensional effective quantum field theory, we use the convention that the scalar components of left-chiral superfields φ from the bulk are associated with string states having q ∈ { 0, +1, 0, 0 }, such that string states with q ∈ { 0, −1, 0, 0 } give rise to their CPT partners. Furthermore, we demand that Σ leaves the metric η invariant, In other words, we demand that Σ leaves any Narain scalar product P T ηP for P, P ∈ Γ invariant. The resulting transformationsΣ form a group Oη(2, 2, Z) := Σ Σ ∈ GL(4, Z) withΣ TηΣ =η , the so-called modular group of the Narain lattice Γ. It is easy to see that Oη(2, 2, Z) contains two factors of SL(2, Z), i.e. we can definê As a remark,Σ (γ T ,γ U ) satisfies the property of being a representation, for all γ T , δ T ∈ SL(2, Z) T and γ U , δ U ∈ SL(2, Z) U . The generators S and T of the modular group SL(2, Z) can be represented by the 2 × 2 matrices S = 0 1 −1 0 and T = 1 1 0 1 , (A. 22) respectively. Then, we can definê where one can easily show that mirror symmetryM interchanges the SL(2, Z) factors, Having identified the modular symmetries of a toroidal compactification, the modular symmetries of an orbifold are given by the rotational outer automorphisms of the Narain space group S Narain that preserve the Narain metric η. They can be understood as those modular transformationsΣ ∈ Oη(D, D, Z) (with D = 2 in the present case) that are also from the normalizer of the Narain point group, Note that the Narain twist is not an outer automorphism, but an inner automorphism of S Narain . For example, for the T 2 /Z 2 orbifold we haveΘ = −1 4 andP Narain ∼ = Z 2 . Hence, Oη(2, 2, Z)/Z 2 is the modular group of the T 2 /Z 2 orbifold. However, we consider the twodimensional T 2 /Z 2 orbifold to be contained in a full six-dimensional orbifold. Hence, we assume that the underlying six-dimensional torus is factorized as T 6 = T 2 ⊕ T 2 ⊕ T 2 and that the Narain twist of the (6, 6)-dimensional Narain lattice takes the form Here,Θ (K i ) denotes an order K i Narain twist of the i-th (2, 2)-dimensional Narain sublattice for i ∈ {1, 2, 3}, where K 1 = 2 andΘ (2) = −1 4 . Then, we can define a so-called sublattice rotationΘ (2) ⊕ 1 4 ⊕ 1 4 which is an outer automorphism of the Narain space group of the full six-dimensional orbifold. Consequently, the modular group in the T 2 /Z 2 orbifold sector is Oη(2, 2, Z). A.2 Transformation of bulk fields under modular symmetries In this section, we analyze the action of modular transformations from Oη(2, 2, Z) on those fields of the effective four-dimensional theory that originate from the bulk of the extra dimensions. The transformation of twisted matter fields will be discussed later in appendix D. First, we discuss the moduli (i.e. the Kähler modulus T and the complex structure modulus U , see eq. (A.7)). According to eq. (A.16), a modular transformationΣ ∈ Oη(2, 2, Z) acts as on the Narain vielbein E. Consequently, we can use the generalized metric to compute the transformation of the moduli, This can be used to show thatΣ (γ T ,γ U ) from eq. (A. 19) acts on the moduli as Moreover, using eq. 
(A.29) we can confirm that the mirror transformationM interchanges the moduli, T ↔ U , while the CP-like transformationΣ * acts as As a remark, we can now understand the conditions (A.10) on a Narain twist as follows: A Narain twistΘ ∈P Narain must be a modular transformation (Θ ∈ Oη(2, 2, Z)) that leaves the moduli invariant (compare eq. (A.29) to eq. (A.10)). Next, we consider a general (massive) bulk field φ (N ) labeled by its winding and KK numbersN ∈ Z 4 that corresponds to a closed string with boundary condition (A.4) given by the constructing element (1 4 , EN ). Its total mass M 2 (N ; T, U ) is moduli dependent viâ N T H(T, U )N , as shown in eq. (A.14). Then, the corresponding mass terms in the superpotential read schematically Under a (non-CP-like) modular transformationΣ ∈ Oη(2, 2, Z), moduli and bulk fields trans- where we suppress the automorphy factor for φ (N ) . In addition, we haveN =Σ −1N as shown in eq. (A.16) and the factor ±1 of φ (N ) will be derived later in eq. (D.15). Then, due to its moduli-dependence, the total string mass M 2 (N ; T, U ) transforms as where ∆N i := N i −Nī ∈ N 0 is the total number of holomorphic minus anti-holomorphic oscillators with internal index i = 1 orī =1 in the direction of the T 2 /Z 2 orbifold sector, i.e. and n T and n U coincide if the associated string state carries no oscillator excitations. For example, a massless matter field from the bulk (k = 0) has no oscillators and q ∈ { 0, +1, 0, 0 }, so that n T = n U ∈ {0, −1}, while a massless twisted matter field (k = 1) without oscillators has q 1 sh = 1 2 and n T = n U = −1 /2, see table 1. In addition, there are twisted string states (massless or massive) that are excited by oscillators. According to eq. (B.2), the modular weights are increased/decreased by adding oscillator excitations, add holomorphic oscillator : n T → n T − 1 , n U → n U + 1 (B.4a) add anti-holomorphic oscillator : n T → n T + 1 , In both cases, the resulting string states transform identically under the Z 2 orbifold projection (this is a special property of Z 2 orbifolds and not true for Z K orbifolds with K = 2). Hence, for each matter field Φ (n T ,n U ) with n T = n U , there exists a partner with exactly the same mass and identical quantum numbers except for interchanged weights, i.e. Φ (n T ,n U ) has a partner Φ (n U ,n T ) if n T = n U , cf. table 1. Mirror symmetry interchanges holomorphic and anti-holomorphic left-moving oscillators. In order to see this, we rewriteM (given in eq. (A.24)) into the left-right coordinate basis (y R , y L ) at T = U in moduli space. This results in Recall that a general transformation Σ := EΣ E −1 acts on the coordinate Y eq. Hence, the mirror transformation M acts on the complex leftmoving string coordinate z 1 in the direction of the T 2 /Z 2 orbifold sector as Hence, a mirror transformation interchanges holomorphic and anti-holomorphic oscillators resulting in eq. (3.21). C Gauge symmetry enhancement It is a well-known feature of string theory that at special points (T, U ) in moduli space, additional gauge symmetries arise whose gauge bosons are associated with massless winding strings. These massless strings become massive by moving in moduli space away from the special points. Hence, the enhanced gauge symmetry gets broken spontaneously by the moduli vevs. In order to identify the enhanced gauge symmetries, we look for additional massless strings from the orbifold bulk that become massless only at certain points in moduli space. 
We do this in two steps: first, we construct the massless string states on the torus T 2 and then move on to the T 2 /Z 2 orbifold by projecting the massless torus states onto Z 2 -invariant states. In general, a massless string has to satisfy M 2 R = M 2 L = 0. Then, from eq. (A.13a) together with q 2 = 1 it follows that N R = 0 and p R = 0. Hence, for p R = 0 eqs. where the indices i = 1 andī =1 lie in the two-torus that will be orbifolded by the Z 2 action. Note that the string states (C.7) correspond to the Cartan generators, while the string states (C.6) correspond to raising operators (with +N ∈ N g (T, U )) and lowering operators (with −N ∈ N g (T, U )) of some non-Abelian, enhanced gauge symmetry. The root lattice of this symmetry group is spanned by the left-moving momenta p L that correspond to the solutionsN ∈ N g (T, U ) using eqs. where ±p L is given by ±N ∈ N g (T, U ), respectively. We analyze three special points in moduli space: 4 i) T = U , ii) T = U = i and iii) T = U = ω and summarize the results in table 3. Consequently, the enhanced continuous symmetries identified in section 5.3 are actually gauge symmetries. D Vertex operators of the Z 2 Narain orbifold The spectrum of the T 2 /Z 2 orbifold sector includes untwisted strings, associated with constructing elements (1,N ) ∈Ŝ Narain , and twisted strings constructed by elements (Θ,N ) ∈ S Narain . In this appendix, we study how the symmetries of the theory act on these strings by inspecting the transformations of their corresponding vertex operators. D.1 Untwisted vertex operators The zero-mode vertex operator corresponding to a bosonic string on a toroidal background with winding and Kaluza-Klein numbersN = (n, m) T ∈ Z 4 is given by [50, eq. (3.41)] where the string coordinate operator Y results from promoting E −1 Y to an operator (see eq. (A.2)). Y satisfies the commutation relations 5 (derived from the action of the sigma model) is the symplectic structure in the Narain basis. The nonzero value of the commutator (D.2) is a result of intrinsic non-commutative effects of closed strings [50]. The zero-mode vertex operators (D.1) in combination with the commutator (D.2) are subject to the so-called Weyl quantization relation According to ref. [51], this relation is instrumental to evaluate the time ordering of operators as required in the computation of scattering amplitudes. The quantization relation (D.3) must hold independently of whether the vertex operators have been affected by modular transformations. As we shall shortly see (cf. eqs. (D.13)), this helps determine the phases required for the modular generators to act consistently on twisted vertex operators [52]. Using eq. (D.1), one finds that the Z 2 orbifold-invariant untwisted vertex operators are given by To determine the transformation of VN 0 under a translationĥ i , we observe that This implies that V (N ) acquires a Z 2 phase, which is identical for V (−N ). Consequently, the orbifold invariant vertex operator (D.4) inherits the same phase. It thus follows that the untwisted vertex operator class VN 0 gets a Z 2 phase too, Under a rotational outer automorphismΣ,N transforms toΣ −1N . Then, we expect that the vertex operator V (N ) transforms according to 1N ) . (D.10) Here, we propose, due to the nontrivial commutation relations (D.2), a phase ϕΣ(N ) that is given by the ansatz The 4 × 4 matrix AΣ (with only half-integral off-diagonal entries) and the vector CΣ ∈ Z 4 will be determined next. 
Note that, with these conditions, ϕΣ(N ) can only be a Z 2 phase. By demanding that the Weyl quantization relation (D.3) be preserved byΣ and using the abbreviationμ = 1 2 (η +ω), one arrives at In contrast, CΣ cannot be constrained by the quantization condition. However, the effect of CΣ is equivalent to the one of a translationĥ i in the Narain lattice, given in eq. (D.9). These translations generate the traditional flavor symmetry, which is unbroken independently of the moduli. Therefore, the traditional flavor symmetry allows for a free choice of the vector CΣ. D.2 Operator product expansions of twisted vertex operators Even though vertex operators of twisted string states are more involved than untwisted vertex operators, OPEs of two twisted states in two-dimensional orbifolds are known. Let us consider the twisted vertex operators φ n a and φ n b of twisted strings localized at orbifold fixed points given by the winding numbers n a , n b ∈ {(0, 0) T , (0, 1) T , (1, 0) T , (1, 1) T }. Up to a constant overall factor, they satisfy the OPE [53] φ n a φ n b = These expressions together with the transformation properties of untwisted operators, eqs. (D.9) and (D. 16), can lead to the corresponding transformations of the twisted vertex operators φ n , as we now discuss. D.3 Transformations of twisted vertex operators With the help of the explicit relations (D. 19) between the OPEs of twisted string states and the untwisted states, and the transformations (D.9) and (D.16) of the latter, we can deduce the action of Out(Ŝ Narain ) onφ n a φ n b . One can then infer the action of those transformations on the single twisted operators, arranged in a twisted multiplet (φ (0,0) , φ (1,0) , φ (0,1) , φ (1,1) ) T . Note that, since no oscillator excitation is present in these twisted string states, this multiplet must correspond to the components of the twisted matter field Φ ( −1 /2, −1 /2) , i.e. with n T = n U = −1 /2. In these terms, the transformations of twisted states can be encoded in transformation matrices ρ r (Σ) or ρ r (h i ), which denote r-dimensional representations of the outer automorphismsΣ and h i . Our goal here is to present those transformation matrices. Since twisted strings transform in the (real) representation 4 of [32,49], the OPEsφ n a φ n b can be associated with the tensor product Finally, since oscillator excitations are not affected by the transformations associated withĥ i , all other twisted matter fields Φ (n T ,n U ) (see table 1) must transform in the same 4-dimensional representation defined by eq. (D.21). E.4 A 4 character table We denote as a, b and c the abstract A 4 generators associated with h 1 , h 2 and (Ĉ TĈS ) 2 , respectively, see section 4.3.1. In these terms, A 4 is defined by the presentation A 4 = a, b, c a 2 = b 2 = c 3 = (ab) 2 = 1, bca = bacb = c , (E. 12) and has four conjugacy classes: (E.13) The character table of A 4 is shown in table 5, where we also present the order and number (size) of the elements in each conjugacy class.
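The class structure underlying table 5 -- four conjugacy classes of sizes 1, 3, 4 and 4, matching the three one-dimensional representations and the triplet 3 used in section 4.3.1 -- can be checked in a few lines with sympy, realizing A_4 as the even permutations of four objects rather than through the abstract presentation (E.12):

```python
from sympy.combinatorics.named_groups import AlternatingGroup

A4 = AlternatingGroup(4)
elements = list(A4.generate())
print("group order:", len(elements))                   # 12

classes, seen = [], set()
for x in elements:
    if x in seen:
        continue
    cls = {g * x * g**-1 for g in elements}            # conjugacy class of x
    seen |= cls
    classes.append(cls)

print("number of conjugacy classes:", len(classes))        # 4
print("class sizes:", sorted(len(c) for c in classes))     # [1, 3, 4, 4]

# as many irreducible representations as classes; their dimensions are 1, 1, 1 and 3,
# and the squared dimensions add up to the group order
assert 1 + 1 + 1 + 3**2 == len(elements)
```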
Parkinson Patients' Initial Trust in Avatars: Theory and Evidence Parkinson's disease (PD) is a neurodegenerative disease that affects the motor system as well as cognitive and behavioral functions. Due to these impairments, PD patients also have problems using computers. However, using computers and the Internet could help these patients to overcome social isolation and enhance information search. Specifically, avatars (defined as virtual representations of humans) are increasingly used in online environments to enhance human-computer interaction by simulating face-to-face interaction. Our laboratory experiment investigated how PD patients behave in a trust game played with human and avatar counterparts, and we compared this behavior to the behavior of age-, income-, education- and gender-matched healthy controls. The results of our study show that PD patients trust avatar faces significantly more than human faces. Moreover, there was no significant difference between the initial trust of PD patients and healthy controls in avatar faces, while PD patients trusted human faces significantly less than healthy controls. Our data suggest that PD patients' interaction with avatars may constitute an effective way of communication in situations in which trust is required (e.g., a physician recommends intake of medication). We discuss the implications of these results for several areas of human-computer interaction and neurological research. Introduction Parkinson's disease (PD) is a common, chronic, progressive disease with a median annual incidence of 14 per 100,000, increasing to 160 per 100,000 in the age group over 65 [1]. Typical symptoms include bradykinesia, resting tremor, rigidity, and impaired postural reflexes [2]. In addition to these 'cardinal motor symptoms', non-motor symptoms occur, even in the early stages of the disease [3]. Cognitive and behavioral disturbances include deficits in the following domains: attention, memory, visuospatial functions and decision-making [4], as well as impairment of face recognition (e.g. [5]), risk-taking (e.g. [6]), trust [7], and the ability to explain and predict other people's behavior (referred to as Theory-of-Mind or mentalizing, e.g., [8,9]). The underlying pathology is a dopaminergic cell loss in the substantia nigra pars compacta. In addition, neuronal loss is reported in the following brain areas: the dorsal motor nucleus (DMN) of the vagus, the olfactory bulb, the caudal raphe nuclei and the locus ceruleus [10]. Also, PD leads to the occurrence of Lewy bodies and neurochemical alterations in the brainstem, limbic system, diencephalic and basal brain structures, and, in advanced stages, the neocortex [10]. Importantly, all of the mentioned deficits and impairments may negatively affect the social life of PD patients. In fact, cognitive impairment has been shown to predict quality of life in PD patients and often troubles patients more than their motor symptoms [11]. Patients suffering from all kinds of illnesses frequently use information technology (IT) for private and disease-specific information search and communication [12]. However, interaction with computers and the Internet may provoke a perception of complexity and uncertainty [13]. Evidence shows that nearly 80% of computer users with PD report significant, severe, or highly severe difficulties using a computer [14]. Underlying factors identified in this study are inertia, muscle stiffness, tremor, and issues related to the use of input devices, among other aspects related to ergonomics.
To overcome impairments, the design of humancomputer interfaces for disease populations has become a major topic in human-computer interaction (HCI) research [15]. While the effects of motor symptoms on PD patients' interaction with computers has received considerable attention in the HCI domain (e.g. [16]), behavioral symptoms and their effects have largely been neglected. However, there is growing evidence that computer and Internet use may help to provide essential information, reduce loneliness, and increase social contact among older adults in assisted and independent living communities [17]. This finding is of paramount importance for PD patients, because they frequently suffer from social isolation [18]. Avatars and social behavior in HCI In the HCI domain, avatars are defined as user-created digital representations symbolizing the user's presence in a virtual environment [19]. In addition, artificial faces and characters might be part of a computer interface. In this case, characters are created by systems engineers and controlled by computer algorithms (and not by a human user). Characters controlled by algorithms are referred to as agents [19]. Examples for agents are virtual salespersons in an online shop or digital tutors in e-learning environments. To avoid complex sentence structures, we only use the term avatar in the present paper. However, we note that in this paper we refer to both avatars and agents. Social information processing theory [20] postulates that users engage in interpersonal processes online in order to attain interpersonal goals. A major goal of communicators is the reduction of uncertainty about others [21]. Visual appearance plays a significant role in the formation of first impressions of strangers, and this visual and nonverbal information is processed to form a first impression within seconds [22]. Thus, provision of visual information may significantly reduce uncertainty in social interaction. Importantly, first impressions are frequently very resistant to change [23]. It follows that first impressions significantly affect future behavior of communicators in the social interaction process. In online environments, visual information is often provided based on static information (e.g., by showing a communicator's face or body [24]). Importantly, avatars are able to induce trust and reduce uncertainty despite being presented statically, or as an animation [25,26]. There is vast evidence on how healthy humans interact with avatars. Several studies based on healthy subjects have demonstrated that avatars are often perceived as social agents, and thus avatars may socially influence humans. Hence, humans interacting with avatars have an experience of being with another person [27] and are willing to disclose information that is considered highly personal, even when avatars are low in visual realism [28]. Moreover, recent evidence indicates that avatars are trusted to a similar extent as humans [29]. Thus, avatars help to overcome perceptions of uncertainty in computerized environments by imitating faceto-face interaction [30]. In addition to research on healthy subjects, there have been first studies to investigate how patients suffering from psychiatric diseases, such as schizophrenia (e.g. [31]) and autism (e.g. [32]), interact with avatars. 
However, to the best of our knowledge, neither does a peer-reviewed scientific journal article focus on neurological patients' interaction with avatars, nor does peerreviewed research at the nexus of avatars and PD exist. This finding is surprising, because avatars are used in (i) computer systems that are specifically designed to evaluate PD patients [33], (ii) assistive technologies [34], and (iii) neurorehabilitation systems for PD patients [35]. To close this significant research gap, we investigate the trust behavior of PD patients during their interaction with avatar and human faces in a trust game context. In this game, originally developed in behavioral economics [36], an investor (also referred to as trustor) receives an initial monetary endowment (e.g. €10). The investor decides how much of this initial endowment he or she would like to send to another player in the game, the trustee. This amount is then multiplied by the experimenter (e.g. €10 x 6 = €60). In the next move, the trustee decides whether to send money back to the investor (and, if so, how much). The amount of the investor's transfer is a behavioral measure of initial trust [36][37][38]. In the next section, we give an overview of (1) the pathoanatomical basis of major behavioral symptoms in PD and (2) the vast literature on the neurobiology of trust in both interpersonal and human-avatar interaction. Behavioral symptoms in PD Behavioral and cognitive neurology, a field that studies behavioral symptoms and cognitive functions in patients suffering from neurological disorders, has identified several impairments in PD. Cognitive symptoms of PD primarily affect executive functions [39]. It is still a matter of research why some PD patients develop cognitive impairments in addition to the cardinal motor symptoms in early stages of the disease, while others do not, or only at a very late stage of the disease. The reason for this observation might be that motor symptoms and cognitive deficits have distinct underlying anatomical bases [40]. In the following, we review the PD literature on pathological changes in brain regions relevant to trust, and we relate these regions to cognitive and behavioral impairments in PD. First, PD pathologically affects the limbic system, particularly the amygdala and the thalamus (e.g. [41,42]). A network of brain structures including the amygdala underlies the ability of face recognition [43]. This explains why PD patients have been shown to have problems in decoding facial expressions and hence perform significantly worse in recognizing sadness, anger, and disgust in human faces when compared to healthy controls (e.g. [44]). Second, in PD, the loss of dopaminergic neurons in the substantia nigra leads to pathological functioning of striatal structures [45] and to impairments in the meso-cortico-limbic dopaminergic system, including the ventral tegmental area, ventral striatum, and medial orbitofrontal cortex [46]. Moreover, a network of brain structures that involves multiple cortical and subcortical regions (prefrontal, parietal, limbic) constitutes the neural basis of decisionmaking under risk or uncertainty [47]. This network overlaps in critical parts with the reward system (striatum and frontal cortical regions, e.g. [48]), and therefore explains altered risk processing in PD patients. Drug naïve patients are typically risk averse, while dopaminergic treatment makes these patients more prone to risk-seeking behavior (for a review, see [6]). 
Imaging data also suggests that frontal cortical regions are affected by PD (e.g. [49]), and frontal lobe functions are disturbed in PD in about 30% of the patients [3]. Impairments in mentalizing (Theory-of-Mind) have also been reported for PD patients (for a review, see [50]), and these disturbances are mainly caused by frontal cortex dysfunction, because the frontal cortex is considered a key region for the neural implementation of mentalizing (e.g. [8]). A recent review of 13 studies focusing on mentalizing in PD concludes that (i) deficits are common and (ii) those deficits probably play an important role in daily living and affects quality of life in these patients [9]. To sum up, we have briefly reviewed evidence on the pathological changes of the limbic system, the basal ganglia, and the frontal cortex in PD. Importantly, human trust relies on all of these brain regions (for a review, see [51]), and PD patients have recently been shown to have lower trust in other humans than healthy controls [7]. Next, we provide a short overview of the literature on the neurobiology of trust. The neurobiology of trust Trust is a fundamental prerequisite for human relationships, both in private and public life, and thus is essential for the functioning of society in general [52]. It is important to differentiate trust behavior from its antecedents (beliefs about trust, behavioral intentions, and trust disposition) as these concepts are frequently mixed up in the literature [53]. Following the behavioral intentions concept, Rousseau et al. [54] defined trust as a psychological state comprising the intention to accept vulnerability based on positive expectations about the actions of another party (the trustee). Formally, trust is therefore the subjective assessment of the probability that someone is trustworthy [55]. Trust behavior has been defined as trusting acts influenced by beliefs about another person's trustworthiness and risk preference [56]. Initial trust is trust a person has in a stranger before having any interaction experience with that person. This kind of trust plays an important role in online environments, as users often have little common history or may not share the same cultural background [57]. Knowledge-based trust, in contrast, is trust that emerges through engagement with the other person. While initial trust strongly depends on the characteristics of another person (e.g. face), knowledge-based trust primarily develops from another person's reciprocal behavior. One person, the trustor, is acting in a way that makes him vulnerable to the actions of another person, the trustee, in the sense that trust might not be reciprocated. Due to a social preference referred to as"betrayal aversion", humans generally take risks less willingly when the cause of uncertainty is another person, and this fact differentiates trust from non-social risk-taking (e.g. [58,59]). Trust has become a major research topic in various scientific disciplines, including economics, neuroscience, medicine and information systems research (e.g. [58,60,61]). Neuroscience has used several methodological approaches (e.g. functional brain imaging) to detect the neural correlates of trust. 
In the following, we summarize key findings from an anatomical point of view and, in order to enhance understandability from a non-neuroscience perspective, group several smaller brain areas into three main regions that were found to be significantly activated in trust situations, namely the limbic system, basal ganglia, and the frontal cortex (for more comprehensive reviews, see [51,58,62]). The limbic system is a term used for several brain regions, including the amygdala and the hippocampus that are strongly related to emotions and memory [63]. The amygdala is active during the assessment of the trustworthiness of human faces (e.g. [64][65][66]), a fact that holds particularly true for the assessment of untrustworthy faces [43,66]. A study based on the trust game found that patients with unilateral amygdala damage showed more trust than healthy controls [67]. This finding suggests that the amygdala plays a crucial role in trust behavior between humans. The basal ganglia are a subset of subcortical nuclei that are interconnected with several cortical areas (especially the frontal cortex), as well as the thalamus and the brainstem. The basal ganglia have important functions for the motor system, but also for learning, cognition, and behavior (e.g. [68]). One part of the basal ganglia is the striatum, which is active in trust situations because the striatum is associated with reward processing and reward anticipation (e.g. [64][65][66][67][68][69]70]). The caudate nucleus, which is also part of the basal ganglia, has also been shown to be active in trust game studies [65]. Theory-of-mind brain regions are needed to predict another person's trustworthiness based on anticipation of intentions and future behavior. The outcome of this process is known as 'calculative trust' [71]. Theory-of-mind regions that have been shown to be active in trust behavior are the paracingulate cortex [72] and the medial prefrontal cortex [73]; we group both regions together and refer to them as frontal cortical regions in our paper. These regions seem to have distinct functions in trust behavior, as the paracingulate cortex has a role in building a trust relationship by inferring another person's intentions to predict subsequent behavior, and the medial prefrontal cortex has implications for decisions and choices based on calculative expectations of what others will do [74]. Hypotheses We provided an overview of the most important aspects of the biological foundations of social behavior deficits in PD and the neurobiology of trust. It becomes apparent that there is a significant overlap between pathoanatomical changes in PD and the anatomical basis of trust. We visualize this overlap in Fig 1. In order to develop a theoretical basis for our hypotheses that describe how PD patients are expected to behave in a trust situation with avatars, we summarize evidence on PD patients' trust in human faces and healthy controls' trust in avatars and human faces in this conceptualization, we identify two significant gaps in literature. In the following, we develop hypotheses for these two gaps. Based on our preceding discussion of neuroscience research, we argue as follows: 1. Trust is mainly represented in three brain regions, namely the limbic system, the basal ganglia, and frontal cortical regions; PD affects all these regions. It has been shown that PD patients have lower initial trust in simulated face-to-face interactions compared to healthy controls [7]. 2. 
There is evidence that mentalizing brain regions are more active during the evaluation of the trustworthiness of human faces if compared to avatar faces [29]. Furthermore, avatar faces elicit less activation in the limbic system if compared to human faces [26]. 3. One could argue that even though PD patients show lower initial trust in simulated face-toface interactions than healthy controls [7], these patients have less or even no trust deficit when interacting with avatars, because brain regions that are regulating trust behavior (e.g. medial frontal cortex and the amygdala) are less active in the interaction with avatars compared to interaction with humans [26,29]. Therefore, impairments in these regions caused by PD should have less impact on trust behavior towards avatars. Following this line of argumentation, we formulate the following hypotheses that we tested in a laboratory experiment: H1: PD patients show higher initial trust in avatar faces than in human faces. H2: PD patients show similar initial trust in avatar faces if compared to healthy controls. Methods This study was approved by the ethics committee of Upper Austria. Informed written consent was obtained from all participants prior to the experiment. Participants Forty right-handed, Caucasian participants were recruited for the study. Twenty of the subjects were patients diagnosed with PD according to UK PDS Brain Bank criteria [75], an instrument with a diagnostic accuracy of 90% [76]. The patients were recruited at an outpatient clinic specialized in movement disorders (mean age 72.35 years, SD 9.16, equally gendered). The same patient population was studied before and more clinically relevant variables are available elsewhere (see [7]). This study is part of an extended project to explore trust in PD. All patients were clinically stable under treatment prior to the experiment. Healthy controls had a mean age of 68.4 years (SD 10, also equally gendered), no history of neurological disease, and a neurological status with no pathological findings. There was no significant difference between PD patients and healthy controls with respect to: age (two-sided unpaired t-test; t = 1.3, p = 0.20), income (classified in 6 categories; median patients: 4; median controls: 3; Mann-Whitney-Utest; z = -0.86; p = 0.39), education (two-sided unpaired t-test; mean education patients: 10.9 years, SD 4.02; mean education controls: 11.3 years, SD 5.11; t = -0.28; p = 0.79), and religion (all participants were Roman-Catholic); note that these factors might affect trust behavior [77]. We screened for psychiatric co-morbidities like dementia in all patients and controls, as well as for impulsive-compulsive disorders, and/or apathy in all patients and excluded those meeting pathological criteria. As screening tools we used the complete version of the Patient Health Questionnaire (PHQ-D, [78]), the Questionnaire for Impulsive-Compulsive Disorders (QUIP, [79]), the Mini Mental State Examination (MMSE, [80]), and the Lille Apathy Rating Scale (LARS, [81]). To quantify the severity of symptoms in PD we also used the Unified Parkinson's Disease Rating Scale (UPDRS, [82,83]), a scale commonly used in clinical practice. The mean UPDRS-III score of patients under treatment was 19.85 (SD 10.59). For a summary of the characteristics of our study population see Table 1. Experimental Procedure We designed a behavioral experiment to test our hypotheses. 
When subjects entered the laboratory for the experiment, they received written instructions explaining the rules and payoff structure for the experiment (the instructions are available from the first author on request). The experimenter subsequently checked whether the subjects understood the instructions by going through several hypothetical examples. All subjects correctly answered the control questions. We measured trust using a version of the original trust game [36]. The trust game was developed to measure trust as actual behavior of players in an economic exchange game. In this game, one player (the trustor), has an initial endowment of x monetary units. First, the trustor decides whether to keep his or her endowment, which ends the game, or to send (a part of) it to a second player (the trustee). The trustee observes the trustor's action and, if money was sent, decides whether to keep the amount or share (some of) it with the trustor. The experimenter multiplies the trustor's transfer, so that both players are better off collectively if the trustor transfers money and the trustee sends back a sufficient amount. In the trust game, the amount sent by the trustor is used as a behavioral measure for trust [36][37][38]. Our version of the game focused on the trustor's behavior, whose role was played by the participants on a computer. The participants were told that they would be playing against human beings presented in the form of face photographs or avatar faces, but actually played against an automatic and randomized computer-generated strategy that simulated a trustee's behavior (see Fig 3 for examples of human faces and avatar faces which we used as stimulus material). The initial endowment of the participants was €10, and they were told that they would be playing with real money and that the recompense for participating in the experiment would reflect their gain in the game. The participants could decide to send any amount to the trustee but had to use whole numbers between €0 and €10. This amount sent by the participants was used to operationalize initial trust. The trustee was illustrated with 16 human face images (taken from an established face database [84,85]) and 16 avatar faces (taken from the stimulus material from published research [29,86]). The selection of the human and avatar stimuli ensured the same face trustworthiness and gender in both groups. None of the human faces or avatar faces were known or previously seen by the participants. It follows that participants had no interaction experience with the stimulus material. In total, each subject made 32 decisions in the experiment, and the order of face presentation was completely random. It follows that no participant interacted with the same trustee twice. We used the monetary amount sent by each participant as a measure of trust behavior. Fig 4 illustrates our version of the trust game. We included a gambling task (Game-of-Dice task, for a detailed description of the game see [4]) in our experimental design, because non-social risky decision-making is considered a standard control condition in trust research (e.g. [87]). Statistical Analysis We used SPSS 1 (Version 20) to perform statistical analyses. Descriptive statistics were run to illustrate the demographic characteristics of the sample and to describe disease specific information about the patient group (e.g. UPDRS-II scores). 
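To make the payoff structure of the trust game described above concrete, the following minimal Python sketch simulates a single round from the trustor's perspective. The multiplication factor K and the randomized back-transfer rule are illustrative assumptions only; the study specifies the €10 endowment, whole-euro transfers between €0 and €10, and a randomized computer-generated trustee strategy, but not the exact multiplier or return distribution.

```python
# Minimal sketch of one round of the trust game described above.
# K and the trustee's return rule are illustrative assumptions; the paper only
# states that the transfer is multiplied and that a randomized computer
# strategy simulated the trustee.
import random

ENDOWMENT = 10   # trustor's endowment in euros, as in the experiment
K = 3            # assumed multiplier applied to the transfer (not specified above)

def play_round(amount_sent: int, rng: random.Random) -> dict:
    """Simulate a single trustor decision against a randomized trustee."""
    if not 0 <= amount_sent <= ENDOWMENT:
        raise ValueError("transfer must be a whole number between 0 and 10")
    received_by_trustee = amount_sent * K
    # Hypothetical randomized back-transfer between 0 and everything received.
    returned = rng.randint(0, received_by_trustee)
    return {
        "trust (amount sent)": amount_sent,
        "trustor payoff": ENDOWMENT - amount_sent + returned,
        "trustee payoff": received_by_trustee - returned,
    }

rng = random.Random(0)
print(play_round(4, rng))  # e.g. sending 4 of the 10 euro endowment
```

In the experiment itself only the amount sent is used as the trust measure; the simulated back-transfer is needed here merely to close the payoff loop.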
A Kolmogorov-Smirnov test revealed that game behavior of both experimental groups conformed to normality, and a Levene's test of equality assured that there was homogeneity of variances between groups. Unpaired sample two-sided t-tests were performed to compare game behavior between groups, and paired sample two-sided t-tests to compare within-group behavior between the human face and avatar face conditions. Results H1 states that PD patients show higher initial trust in avatar faces than in human faces. H1 is supported by our data. PD patients invested significantly more in trust games played against avatar faces versus games played against human faces (mean avatar faces: €4.91, SD 1.55; mean human faces: €3.43, SD 2.00; two-sided paired sample t-test; t = 2.62; p = 0.017, see Fig 5). In contrast, healthy controls' mean investment amounts did not differ significantly between trust games played against avatar faces and games played against human faces. (Fig 4 caption: The upper value in the square brackets indicates the investor's payoff (any amount between €0 and €60), the lower value the trustee's payoff depending on the investor's first move (investment of €0 or any amount between €1 and €10). When the investor is not sending any money to the trustee (€0), the payoff is €10 for both players. In any other case, the investor's payoff is dependent on the trustee's willingness to send some money back. Whereas the investor's payoff determined the overall gain of the participants in our study, the trustee's payoff was not paid out, as this role was not played by participants, but was part of the computerized experiment (see Experimental procedure).) H2 states that PD patients show similar initial trust in avatar faces if compared to healthy controls. H2 is also supported by our data. There was no significant difference in the mean investment amount between the PD patient group and the control group in trust games played against avatar faces (mean patients: €4.91, SD 1.81; mean controls: €4.94, SD 1.55; two-sided unpaired t-test; t = -0.07; p = 0.949; see Fig 6). Further analyses to determine the effect size of this condition revealed a Cohen's d = 0.02 and an effect size r = 0.01. PD patients, if compared to healthy controls, invested significantly less in games against human faces (mean PD patients: €3.43, SD 1.996; mean controls: €5.53, SD 1.56; two-sided unpaired t-test; t = -3.70; p = 0.001). The Game-of-Dice Task showed a significant difference in risk behavior between the patient group and the control group, with a larger number of high-risk choices (out of 18 choices) in the PD patient group (mean PD patients: 10.2, SD 3.928; mean controls: 7.05, SD 3.620; two-sided unpaired t-test; t = 2.637, p = 0.012). Discussion The results of our study confirm H1, that PD patients have significantly higher initial trust towards avatar faces when compared with human faces. Our data further support H2, which states that there is no significant difference between initial trust levels towards avatar faces between PD patients and healthy controls. Although it cannot be ruled out that a larger sample size could lead to a significant difference in this condition, we performed additional statistical analyses (Cohen's d), which indicate that there is indeed a negligible difference between mean investments of PD patients and healthy controls (see Fig 5). This is an intriguing result, because PD patients are known to invest less in a trust game, if compared to healthy controls, when playing against human faces [7].
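The reported between-group comparison for the avatar condition can be reproduced approximately from the published summary statistics alone. The sketch below assumes n = 20 per group and a pooled-variance two-sample t-test; it recovers values close to the reported t = -0.07, p = 0.949 and Cohen's d = 0.02.

```python
# Recompute the patients-vs-controls comparison for the avatar condition
# from the reported summary statistics (means, SDs, assumed n = 20 per group).
from math import sqrt
from scipy.stats import ttest_ind_from_stats

m_pd, sd_pd, n_pd = 4.91, 1.81, 20      # PD patients, avatar faces
m_hc, sd_hc, n_hc = 4.94, 1.55, 20      # healthy controls, avatar faces

t, p = ttest_ind_from_stats(m_pd, sd_pd, n_pd, m_hc, sd_hc, n_hc, equal_var=True)

pooled_sd = sqrt(((n_pd - 1) * sd_pd**2 + (n_hc - 1) * sd_hc**2) / (n_pd + n_hc - 2))
cohens_d = (m_pd - m_hc) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {cohens_d:.2f}")
# roughly t = -0.06, p = 0.96, d = -0.02, consistent with the negligible effect
# reported above (the sign only reflects the order in which the groups are entered)
```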
Our control condition (Game-of-Dice task) confirms that lower trust in human faces is not a consequence of higher risk aversion in PD. Because of our study design (emotionless human faces and avatar faces, as well as no difference in colors used in the stimulus material) we can rule out effects of emotion recognition deficits (e.g. [88]) or color discrimination problems [89], known to affect PD patients, on our data. Even though the results of our behavioral study do not make it possible to determine the underlying neurological mechanisms of the observed trust differences, our results have important implications for HCI in general and for specific HCI technologies used in medicine. Because evidence shows that the belief of interacting with either an avatar (user-controlled virtual character) or an agent (machine-controlled virtual character) does not result in significant differences with regard to the evaluation of the virtual character nor in different behavioral reactions [90], our discussion of implications is relevant for interaction of PD patients with both avatars and agents. The findings of the present study are relevant to both computer scientists and programmers with a focus on HCI research, as well as medical scientists and physicians. Therefore, we discuss our results from a multidisciplinary perspective. We start the following discussion with HCI, where we group the implications in e-commerce, e-learning, and social media. A reflection on our results from the perspective of computer use in medicine follows, grouped into diagnosis and therapy in PD and patient-physician communication. Following the implications sections, we describe limitations of our study and close the paper with final remarks. Implications for HCI Avatars are used in several domains of HCI, such as e-commerce, e-learning, and social media. In e-commerce, avatars can lead to a higher satisfaction with online retailers [91], because they can substitute a real-life salesperson (in the sense of an automated online assistant) and thereby reduce information overload [92], provide recommendations on suitable products [93], and enhance consumer decision-making (e.g. [94]). Furthermore, avatars have a significant effect on how consumer reviews are perceived by consumers [95]. This is important, because consumer reviews have a major impact on purchase decisions (e.g. [96]). Our data implies that online shops frequently used by PD patients could increase trust by using avatars as virtual sales consultants. It has been argued by Javor et al. [60] that all research involving patients or disabled persons and possibly affecting marketing should be evaluated from an ethical point of view, as disease specific weaknesses of these populations can be subject to target marketing. Our study might raise concerns whether PD patients can be targeted by online marketing measures using avatars. Importantly, the results of our study show that avatars are trusted to a similar extent by PD patients and healthy controls. Hence, the interaction of PD patients and avatars in a trust-related context such as online shopping does not seem to be affected by the disease process and therefore, in our understanding, the use of avatar salespersons in the interaction with PD patients is ethically uncritical. E-learning is another application domain of avatars. 
In recent years, this domain has grown considerably, and the market for learning management systems has also exceeded growth predictions, currently worth around $2.55 billion worldwide according to the E-Learning Market Trends & Forecast 2014-2016. The use of virtual environments and avatars in e-learning systems play a key role in creating a sense of presence for users in the shared environment [97]. Grujic et al. [98] argue for the use of a virtual tutor in an electronic educational system. The tutor's major aim is not necessarily to teach students, but to emotionally respond to their actions with gestures and facial expressions. In a randomized controlled trial to evaluate the effectiveness of a patient education and health promotion program in the treatment of Parkinson's disease, the intervention group had significantly increased exercise, decreased "time off " (i.e., time periods of increasing symptoms mostly due to medication effects) and percentage of time off, reduced side effects, and decreased summary Parkinson's scores by approximately 10% [99]. There have been efforts to use e-learning for patient education in neurological patient populations, but these have not shown comparable results to face-to-face education (e.g. [100]). Given the positive effect of trust in the teacher on student performance [101], our data might help to improve patient education through e-learning by the use of avatars as tutors. Research in the field of Alzheimer's disease has already shown that patients suffering from neurological diseases might benefit from interaction with avatars [102]. Our results suggest that this finding could also hold true in the population of PD patients. Avatars have also found their way into social media environments such as Facebook [103]. A recent review focusing on studies about the use of social media in patients and caregivers identified discussion forums as being highly popular (66.6%), and social networking sites (14.8%) and blogs/microblogs (14.1%) were the next most prevalent applications [104]. Several researchers highlight the importance of trust in social media environments (e.g. [105]). Dubois et al. [106] state that "[w]ith so much user interaction and content created, the question of whom and what to trust has become an increasingly important challenge on the web. A user is likely to encounter dozens if not hundreds of pieces of user-generated content each day, and some of it will need to be evaluated for trustworthiness. Trust information can help a user make decisions, sort and filter information, receive recommendations, and develop a context within a community with respect to whom to trust and why"(p.1). A review of public health messages through social media indicates that more and more consumers turn to the internet for health related information and health organizations have therefore begun to turn to social media as a tool for connecting with the public, but only a very limited number of studies have analyzed the efficacy of social media in this context so far. Preliminary reports point to considerable outreach of social media applications and have the potential for engaging specific target audiences [107]. We argue for the use of avatars in public health social media messages aimed at PD patients in order to increase trust towards these contents. We could not identify a scientific study on the use of social media by PD patients. Hence, we make a call for research in this domain. 
It has already been discussed that avatars could be used to facilitate access to technologies for the elderly [108]. Our data support the idea of integrating avatars in the design of social media platforms to increase PD patients' trust, thereby increasing the inclusion of these patients in online social interaction. Implications for the use of computers in medicine In the medical field, several applications of avatars have emerged (e.g., diagnosis and therapy of neurological disorders and patient-physician communication). Avatars have been used in the diagnosis and therapy of neurological disorders for almost 20 years [109]. An important part of therapy in PD is physical exercise that has been shown to positively affect physical functioning, health-related quality of life, strength, balance, and gait speed of PD patients [110]. Problems frequently occurring in intensive daily training are often motivational. Virtual environments have been used frequently in rehabilitation of PD in recent years to overcome these problems, because they contribute to motivating patients to exercise more [111]. In PD, two aspects of rehabilitation are of major concern. First, the above mentioned cardinal motor symptoms of the disease are being targeted. In this context, virtual environments often use avatars to represent the PD patient (e.g. [112]). In the light of evidence from cyberpsychology that players of online role-playing games sometimes identify themselves more strongly with the avatar compared to the real self (e.g. [113]), PD patient self-representation with an avatar seems to be an effective way to enhance rehabilitation. Second, behavioral and cognitive rehabilitation of non-motor symptoms (e.g. social interaction for behavioral symptoms or playing "Sudoku" for cognitive symptoms) [114] plays an important role, and avatars are often used in this context (e.g. [115]). Previous literature argues for the use of avatars in the treatment of mentalizing deficits of patients suffering from autism [116], and we argue that patients with PD might also benefit from such a training. Further, given the importance of trust in rehabilitation [117], our data supports the use of avatars in both motor and cognitive rehabilitation of PD patients. With respect to avatars in patient-physician communication, a recent study advocates telemedicine in the care of PD patients and is arguing that"[t]ravel distance, growing disability, and uneven distribution of doctors limit access to care for most Parkinson's disease (PD) patients worldwide. Telemedicine, the use of telecommunications technology to deliver care at a distance, can help overcome these barriers" [118]. It has previously been argued that avatars could be used in patient communication to promote behavioral change, such as life style modification [119]. Our data show that avatar faces are trusted significantly more than human faces by PD patients, and this level of trust is similar to that of healthy controls. It can be theorized that communication with PD patients through avatars in the context of telemedicine could promote the patient-physician relationship and therefore improve health care in PD. Limitations The stimulus material consisted of unknown human faces and unknown avatar faces. It follows that results cannot be generalized to interactions with known counterparts. Furthermore, we make a call for studies replicating our investigation in order to substantiate the results of the present study. 
As our study examined trust behavior of PD patients towards avatar faces (i.e., patients acted in the role of investor in our trust game), there is still missing evidence how PD patients would behave in the trustee role (i.e., the party who is given trust and subsequently has to reciprocate, or not). Thus, we make a call for future studies investigating PD patients' reciprocation of trust. Specifically, it will be rewarding to see whether differences exist in PD patients' interaction with both humans and avatars acting in the role of the investor. It might turn out that PD patients not only exhibit increased trust towards avatars when compared to humans; rather, it is possible that PD patients are also more trustworthy in their interactions with avatars than in their interaction with humans (i.e., they reciprocate more in the avatar condition than in the human condition in a trust game where they are given money by the investor). Furthermore, we acknowledge that the results of the present study should be validated with a larger sample size. However, our statistically significant results based on a sample size of n = 20 in each experimental group along with a conservative level of significance (p<0.05) imply a large effect size. Additionally, the characteristics of our subjects (e.g. age, education level, all Caucasian, all Catholic) should be considered when interpreting our findings, because trust has been shown to correlate with those factors [69]. Finally, our study design using behavioral methodology does not allow for direct inferences on the exact nature or neurological basis of the differences in trust behavior of PD patients towards human faces and avatar faces. Therefore, we encourage studies using electrophysiological (e.g. electroencephalography, EEG) or functional brain imaging tools in order to shed light on the neural basis of trust behavior in PD. Conclusion Our study is a first step to examine human-avatar interaction at the nexus of behavioral neurology (neurological disorders affecting behavior) and PD. Our results suggest that avatars can be used to overcome behavioral deficits caused by a neurological disease, in our case PD patients' reduced trust in human social interaction. We hope that the present study instigates further interdisciplinary research in the domain of behavioral neurology and HCI. Advancements in this research field are urgently needed because neurological diseases and disorders are on the rise, and hence it is important to determine the positive and negative effects of IT use (e.g. in the form of avatars) for different patient populations.
8,660.4
2016-11-07T00:00:00.000
[ "Biology", "Psychology" ]
Some results about positive solutions of a nonlinear equation with a weighted Laplacian 2* = 2n/(n − 2) appears, and it is known that if 1 < q < 2*, all bounded solutions have a first positive zero, and if q ≥ 2*, then the solutions are positive in (0, ∞). More recently, in 1993, the case of (E) with a weight in the right hand side, B(r) = 1/(1 + r^γ), γ > 0, that is the Matukuma equation, was studied by Ni-Yotsutani [10], Li-Ni [7], [8], [9], and Kawano-Yanagida-Yotsutani [5], where the problem (∗ This research was supported by FONDECYT-1030593 for the first author, Fondap Matemáticas Aplicadas for the second author and FONDECYT-1030666 for the third author.) Introduction We consider the problem of classification of bounded positive solutions to (P). Here q > 2, and A, B are weight functions, i.e., a.e. positive measurable functions. Many authors have dealt with the non-weighted case, i.e., with positive solutions to the equation where q > 2, see for instance [4]. In this case, when n > 2, the critical number 2* = 2n/(n − 2) appears, and it is known that if 1 < q < 2*, all bounded solutions have a first positive zero, and if q ≥ 2*, then the solutions are positive in (0, ∞). then there exists a unique α* > 0 such that the solution u(·, α) of (1.1) satisfies u(r, α) > 0 for all r > 0 with lim Later, in 1995, Yanagida and Yotsutani [11] considered the case of a more general weight in the right hand side, and they studied the problem for K satisfying decreasing and nonconstant in (0, ∞). They defined the critical numbers From (K1), σ > −2, and then they set and proved the following: Theorem B. Let n > 2 and assume that the weight K satisfies (K1) and (K2). Then (i) If 2 < q ≤ q, then for any α > 0, the solution u(·, α) of (1.2) has a first positive zero in (0, ∞). (iii) If q < q < q_σ, then there exists a unique α* > 0 such that the solution u(·, α) of (1.2) satisfies u(r, α) > 0 for all r > 0 with lim u(r, α) > 0 for all r > 0 with lim Clearly, the result in Theorem A is a particular case of that of Theorem B, since 1/(1 + r^γ) satisfies all the assumptions with σ = 0 and = −γ. We will deal here with the case A = B in (P) when the solutions are radially symmetric: where |x| = r and now the function b(r) := r^{N−1} B(r) is a positive function satisfying some regularity and growth conditions. We will see in section 3 that under some extra assumption on the weight K in (1.2), the problem considered in [11] is a particular case of ours. Since we are interested only in positive solutions, we will study the initial value problem (IVP). Our note is organized as follows: in section 2 we will introduce some necessary conditions to deal with our problem and we will state our main results, which are a particular case of the work in [2]. Finally, in section 3 we compare our result with the one given in Theorem B.
Putting it in another way, if 1/b ∈ L 1 (1, ∞), then u must have a first positive zero.Therefore, keeping in mind that we are interested in the positive solutions to (P r ), there is no loss of generality in assuming that 1/b ∈ L 1 (s, ∞) for all s > 0. Moreover, if u is any solution to our problem, then for r ≥ s small enough it holds that b|u and thus b ∈ L 1 (0, 1) is a necessary condition for the existence of solutions to (IV P ).Finally, it can be shown that is necessary and sufficient for the existence and uniqueness of solutions to (IV P ).Hence, our basic assumptions on the weight b will be: By a solution to (IV P ) we understand an absolutely continuous function u defined in the interval [0, ∞) such that b(r)u is also absolutely continuous in the open interval (0, ∞) and satisfies the equation in (IV P ). We will show that the behavior of function is crucial in the study of solutions to (IV P ).This function played a key role when studying the problem of existence of positive solutions to the corresponding Dirichlet problem associated to our equation, see [1].The behavior at 0 of this function is closely related to the inclusion of weighted Sobolev spaces.For a proper definition of these spaces we refer to Kufner-Opic [6].Now also the behavior at ∞ of this function will be crucial for our classification results.Let us define and put where we set ρ ∞ = ∞ if W = ∅.It can be proved that condition (H 3 ) implies that 2 ∈ U and thus We will prove in section 2 that these critical numbers can be computed as We will denote the unique solution to (IV P ) by u(r, α).As it is standard in the literature, we will say that -u(r, α) is a crossing solution if it has a zero in (0, ∞). In the case that u is a crossing solution, we will denote its (unique) zero by z(α). Our main results consist of a classification of the solutions according to the relative position of q with respect to the critical values ρ 0 and ρ ∞ .In these results, the function plays a fundamental role, the connection of this function with the critical values follows since and Also, we note that in the non weighted case, that is, b(r) = r n−1 , n > 2, we have and thus c(r) ≡ 2n n − 2 . Our first classification result generalizes the non weighted case: Theorem 2.1 Let the weight b satisfy assumptions (H 1 ), (H 2 ) and (H 3 ).Let q > 2 be fixed and assume that c(r (i) If q < ρ * , then u(r, α) a crossing solution for any α > 0. (ii) If q = ρ * , then u is the rapidly decaying solution given by where C is a positive constant. Theorem 2.2 Let the weight b satisfy assumptions (H 1 ), (H 2 ) and (H 3 ), and assume that they also satisfy the function r → c(r) is decreasing on (0, ∞). This result, as well as some very strong generalizations will appear in [2]. 
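As a reading aid, the integrability requirements on the weight b discussed above and the non-weighted benchmark can be collected in display form. This is only a summary sketch of what is stated in the surrounding text; it does not reproduce the full hypotheses (H1)-(H3) or the definition of c(r).

```latex
% Conditions on the weight b discussed above (summary sketch only).
\[
  b \in L^{1}(0,1), \qquad \frac{1}{b} \in L^{1}(s,\infty) \ \text{for every } s>0 .
\]
% Non-weighted benchmark: with b(r) = r^{n-1}, n > 2, the function c is constant
% and equals the classical critical exponent.
\[
  b(r) = r^{\,n-1},\ n>2 \quad\Longrightarrow\quad c(r) \equiv \frac{2n}{n-2} = 2^{*}.
\]
```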
Final remarks In this section, we will compare our result in Theorem 2.2 with Theorem B stated in the introduction.To this end, we will show that if in addition to (K 1 ) and (K 2 ), we assume that then the assumptions in Theorem 2.2 are satisfied.Indeed, as in [3], we make the change of variable and the problem By (3.1), r(0) = 0 and r(∞) = ∞.Next, we will see that assumptions (H 1 ), (H 2 ), and (H 3 ) are satisfied for this b.Clearly, we only need to check that the first in (H 2 ) and (H 3 ) are satisfied.We begin by showing that b ∈ L 1 (0, 1).Indeed, by making the change of variable r = t 0 K 1/2 (τ )dτ , we find that where here and in the rest of this note t 1 is defined by 1 = t1 0 K 1/2 (τ )dτ , and thus b ∈ L 1 (0, 1).Also, Finally, we will see that under (K 2 ), c is decreasing, and thus our theorem applies: Indeed, it can be seen that in the variable t, Hence, if c (t) > 0 for t ∈ (0, t 0 ), then tc (t) c(t) must decrease in (0, t 0 ).This, together with the fact that lim t→0 tc (t) c(t) = 0, implies that c (t) < 0 in (0, t 0 ), a contradiction.Hence, there are points t > 0 in every interval (0, t 0 ) where c (t) < 0, implying that if c is not always decreasing, it must have a minimum, which is not possible.
2,126.6
2009-06-28T00:00:00.000
[ "Mathematics" ]
Exotic Physics at ATLAS A number of proposed explanations to observed phenomena predict new physics that will be directly observable at the LHC. Each new theory is manifested in the experiments as an experimental signature that sets it apart from the many well understood Standard Model processes. Presented here is a summary of a selection of such searches performed using 8 TeV center of mass energy data produced by the LHC and collected with the ATLAS detector. As no significant deviations from the standard model are observed in any search channel presented here, the results are interpreted in terms of constraints on new physics in a number of scenarios including dark matter, sequential standard model extensions, and model independent interpretations depending on the given search channel. Introduction During 2012, the 20 f b −1 of proton proton collision data delivered by the LHC [1] at a center of mass energy of 8 TeV gave access to a new energy regime with higher statistics than before and allowed for ATLAS [2] to begin to more fully exploit the LHC as a discovery machine.Indeed, this was evidenced by the discovery of a new resonance [3], which has since been found to be described well in many ways by the standard model Higgs boson [4,5].But there are still many questions that remain in light of this.For instance, is the standard model as currently described just a limiting case of some larger grand unified theory?Do the fermions that we have currently found to be point-like (leptons and quarks) have underlying structure?What is dark matter and if it has a particle description, how well can we study it at the LHC? These questions, and many more, span a very large space of models, each of which produces a distinct experimental signature when produced in LHC collisions.An efficient way to search for the presence of new physics is to search for deviations from the well understood Standard Model processes in unique, final-state signatures.To this end, the searches for exotic new physics at ATLAS can be categorized based on the final state and interpreted based on the multiple, new physics scenarios that may produce such a signature. Mono-X Signatures The first category of signatures for new physics are those in which there is only a single visible object in the final state, called "Mono-X", where the single object can be representative of a quark or gluon, as in the mono-jet search [6], or vector boson, as in the mono-W/Z search [11].In both of these searches, momentum is conserved in the transverse plane of the collision and the single detectable object is balanced by a large amount of missing transverse energy (MET) that is the primary signature of new physics, which is representative of a number of undetectable final state particles.The mono-jet search is focused on single high transverse momentum (p T ) jets produced primarily from initial state radiation of a quark or a gluon and reconstructed using the anti-k T (R=0.4) jet algorithm [7].The search is then divided into four regions based on the MET and the highest p T jet to search for the contribution of new physics at different energy scales as in Figure 1(a).The data are well modelled by all known background processes, as shown in Figure 1(a), and the results are used to constrain models with large extra dimensions [8], weakly interacting massive particle (WIMP) dark matter [9], and gravitinos produced through gauge interactions [10] as in Figure 1(b). 
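The balancing of a single visible object by missing transverse energy, which underlies the mono-X searches above, can be illustrated with a short sketch: MET is the magnitude of the negative vector sum of the transverse momenta of the visible objects. The event content below is invented for illustration and is not taken from the analyses.

```python
# Illustrative computation of missing transverse energy (MET) as the magnitude
# of the negative vector sum of the visible objects' transverse momenta.
# The example event is invented; a real reconstruction sums calibrated
# calorimeter and track-based objects.
from math import cos, sin, hypot

# (pT [GeV], phi [rad]) of the visible objects in a hypothetical mono-jet event
visible_objects = [(450.0, 0.10), (35.0, 2.60), (20.0, -1.40)]

px = -sum(pt * cos(phi) for pt, phi in visible_objects)
py = -sum(pt * sin(phi) for pt, phi in visible_objects)
met = hypot(px, py)

print(f"MET = {met:.1f} GeV")  # large MET recoiling against the leading jet
```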
Figure 1. From [6], shown in (a) is the MET spectrum for one kinematic signal region comparing stacked background predictions to data with several signal models overlayed. Shown in (b) is the interpretation of the combination of all kinematic signal regions for the 95% confidence level upper limits on the cross section times acceptance times efficiency for gauge mediated production of gravitinos as a function of squark/gluino mass. On the other hand, the mono-W/Z search focuses on searching for a single hadronically decaying W/Z boson. These hadronic decays are identified as high pT jets, reconstructed with the Cambridge-Aachen (R=1.2) algorithm [12], that pass a mass drop and filtering algorithm [13] in order to identify jets whose underlying constituents are consistent with the hard decay of a massive particle and whose mass is consistent with that of a W or Z boson. This jet is required to be balanced by a large amount of MET and events are further divided into two signal regions in which the search is performed. The data are observed to be well modelled by background processes (Figure 2(a)) and the results are used to constrain the production of dark matter interpreted in terms of an effective interaction with standard model particles. These results are transformed into limits on the dark matter-nucleon cross section as a function of WIMP mass such that the results can be directly compared to those found in direct detection experiments as in Figure 2(b). Figure 2. From [11], shown in (a) is the comparison of the background prediction to data for the leading jet invariant mass spectrum for events with large MET with signal templates from the D5 dark matter effective operator overlayed. Shown in (b) is the interpretation of the search in terms of the WIMP-nucleon cross section as a function of WIMP mass. Dijet Signatures The next class of signatures focuses on new physics that would be observed decaying to pairs of hadronic jets. These searches benefit from high statistics and leverage the ability to fully reconstruct the invariant mass of the potentially new, resonantly-produced physics. The inclusive dijet resonance search [14] determines the mass of a central-jet pair with |y*| < 0.6 that are reconstructed with the anti-kT (R=0.6) algorithm. The resulting spectrum is smoothly falling and modelled by the empirical function f(m; p1, p2, p3, p4) = p1 (1 − x)^p2 x^(p3 + p4 ln x), where x is the reconstructed dijet mass m (in units of 8 TeV) and p1, p2, p3, p4 are four free parameters. This background estimation is used to initially perform a search with the BumpHunter algorithm [15] in all mass windows for the largest excess in data above the smooth background hypothesis as shown in Figure 3(a). As no significant resonant excess is observed, the results are interpreted as limits at 95% confidence level on cross section times acceptance for a benchmark excited quark model [16] in the mass range of 1 TeV to 5 TeV as well as simplified Gaussian models parametrized by the relative width, σ_G/M_G, as in Figure 3(b).
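A small sketch of the empirical background parameterization used in the inclusive dijet search follows. The parameter values are placeholders chosen only to give a smoothly falling shape; they are not the fitted values from the ATLAS analysis.

```python
# Empirical dijet background shape f(x) = p1 * (1 - x)^p2 * x^(p3 + p4*ln x),
# with x = m_jj / sqrt(s). The parameter values below are placeholders, not the
# published fit results.
import numpy as np

SQRT_S = 8000.0  # GeV, LHC centre-of-mass energy in 2012

def dijet_background(m_jj, p1, p2, p3, p4):
    """Evaluate the smoothly falling dijet parameterization at mass m_jj [GeV]."""
    x = np.asarray(m_jj, dtype=float) / SQRT_S
    return p1 * (1.0 - x) ** p2 * x ** (p3 + p4 * np.log(x))

masses = np.array([1000.0, 2000.0, 3000.0, 4000.0])
print(dijet_background(masses, p1=1e-3, p2=10.0, p3=-4.5, p4=-0.3))
```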
In addition to the search for inclusive dijet resonances, a search is performed in which the dijet system is accompanied by an additional leptonically decaying W or Z boson [17]. This W/Z-tag is required to suppress the large multi-jet background present in the inclusive case and makes this search sensitive to other new physics, namely low scale technicolor (LSTC) [18]. To increase sensitivity, the W/Z boson is required to have pT > 50 GeV and the dijet system, composed of the two highest pT jets, is required to be central and back to back in azimuth by making requirements on Δη(jj) and Δφ(jj) respectively. The dijet invariant mass spectrum (Figure 4(a)) is then inspected for deviations from the expected background. No such deviations are observed, and 95% confidence level upper limits on cross section times branching ratio are set for the benchmark LSTC processes of a techni-rho (ρT) decaying to a W/Z boson and a techni-pion (πT), as shown in Figure 4(b). Figure 4. From [17], shown in (a) is the dijet invariant mass spectrum in the signal region associated with the tag of a leptonically decaying W → ℓ±ν boson. Shown in (b) are the 95% confidence level upper limits on cross section times branching ratio as a function of M_πT assuming the relation M_ρT = 3/2 · M_πT + 55 GeV. Note that the corresponding spectrum and exclusion limits exist for the neutral current ρ±T → Z π±T channel in [17]. Dilepton Signatures The next class of experimental signatures that can be used to constrain a wide variety of new physics scenarios are those in which pairs of leptons are used to reconstruct an invariant mass spectrum. In the case of muon pairs, this provides a rich spectroscopy for resonances such as the J/Ψ, Υ, and Z boson, which, in the presence of new physics, may reveal resonant excesses in the high mass tail. The search looks for resonant excesses in the Drell-Yan spectrum of pairs of well-reconstructed and isolated muons or electrons, extending to masses of nearly 2 TeV [19]. This search benefits greatly from the reconstruction and identification of electrons and muons using the full detector information. The absence of any deviations from the background expectations makes it possible to set 95% confidence level upper limits on cross section times branching ratio for a sequential standard model Z′ boson [20], with a lower bound on the Z′ mass of 2.86 TeV. A limit is also set on a spin-2 Randall-Sundrum graviton (G*) [21], with a lower bound on the G* mass of 2.47 TeV. Figure 5. From [19], shown in (a) is the dilepton invariant mass spectrum for the Z′ → ee channel with multiple Z′ signals overlayed. Shown in (b) is the 95% confidence level upper limit derived for the cross section times branching ratio for the Z′ signal by combining the Z′ → ee and Z′ → μμ signal regions.
The second dilepton search has a very different final state as it searches for pairs of oppositely charged τ leptons that decay hadronically.The identification of hadronically decaying τ leptons is not as clean as compared to electrons and muons and requires a boosted decision tree to discriminate τ jets from quark and gluon-initiated jets.The charge of the τ is reconstructed as the sum of the charges of all tracks in the jet.The final state mass is not fully reconstructed due to the presence of neutrinos in the decay of the τ leptons and so only the reconstructed transverse mass of the visible decay products of the τ's, m tot T , can be calculated for each event.Therefore, the resolution and sensitivity to signals such as those in the previous di-electron and di-muon search is not as great.Nonetheless, by requiring the τ identified jets to have high p T allows for the Z → ττ background to be clearly seen as in Figure 6(a) and as new physics need not obey lepton universality probing all lepton flavors is critical.However, no excess above the estimated background processes is observed and a 95% confidence level upper limit on the cross section times branching ratio for a sequential standard model Z boson [23] is derived and a lower bound on the Z mass is found to be 1.9 TeV. Figure 6.From [22], shown in (a) is the τ + τ − invariant mass spectrum comparing the background expectation to data with a Z signal overlayed for comparison.Shown in (b) is the 95% confidence level upper limit on σ(pp → Z ) × BR(Z → τ + τ − ) as a function of Z mass. Photon+X Signatures The "photon+X" class of searches cover new states that decays to a photon and a jet or a photon and a lepton.The photon-jet search [24], is very similar to the inclusive dijet search but selects a with a well-reconstructed, isolated photon in addition to an anti-k T (R=0.6)jet to reconstruct the invariant mass spectrum.Both the photon and the jet are required to have high p T and be in the central region of the detector.Similar to the dijet search, a selection is made on |Δη( j, γ)| to help identify s-channel production of the pair.The photon-jet mass spectrum (Figure 7(a)) is then scanned with the BumpHunter algorithm and no signal is observed so the background is modelled by the same function as in the dijet search.This background prediction is used to set 95% confidence level upper limits on cross section times branching ratio for benchmark models of excited quarks [16], quantum black holes [25], and model independent limits on general Gaussian signals.Shown in Figure 7(b) is the interpretation of the search in terms of the quantum black hole signal.The second such "photon+X" search focuses on searching for excited leptons [26] ( * is limited to e * or μ * ) produced through an effective field theory description [27].The effective field theory production mechanism requires the emission of an additional Standard Model electron or muon with the excited lepton then decays via emission of a single photon to a Standard Model lepton of the same flavor.Thus the search is carried out by selecting events with a single photon produced in association with a pair of well-reconstructed electrons or muons, where M( ) is required to be greater that 110 GeV to suppress Drell-Yan background.However, when reconstruting the * → γ decay, it is ambiguous as to which Standard Model lepton is a result of the decay of the excited lepton and which came from the production mechanism.Thus, the search is performed by reconstructing the total invariant mass of 
the three-body (eeγ or μμγ) system, as shown in Figure 8(a). In the absence of a signal, exclusion limits at 95% confidence level are set in the two dimensional plane of the excited lepton mass and the scale of the new physics present in the effective field theory description (Λ) as in Figure 8(b). Figure 7. From [24], shown in (a) is the comparison of data and background estimate from the smooth fit of the photon-jet invariant mass spectrum with several quantum black hole signals overlayed. Shown in (b) is the 95% confidence level upper limit on cross section times branching ratio times acceptance times efficiency for the quantum black hole signal as a function of the black hole production energy threshold. Figure 8. From [26], shown in (a) is the comparison of the background prediction to data for the M(μμγ) spectrum; shown in (b) is the 95% confidence level exclusion limit in the two dimensional plane of the excited muon (μ*) mass and the scale of the new physics present in the effective field theory description (Λ). Note that the corresponding spectrum and exclusion limit exist for the excited electron (e*) channel in [26]. Multi-Lepton Signatures The next class of searches involving a more complex final state is that of the multi-lepton search [28]. In the Standard Model, the production of events with three or more real leptons is very rare. Thus, the production of multiple leptons by new physics, such as fourth generation quarks [29], supersymmetry [30], and models with doubly charged Higgs bosons [31], will be clearly visible as an excess on the small background. Therefore, the analysis selects well-reconstructed, isolated electrons or muons and leptonically and hadronically decaying τ leptons in addition to jets reconstructed with the anti-kT (R=0.4) algorithm. Events are then classified based on the number of electrons, muons and τ's. They are further subdivided into regions based on the number of b-tagged jets and whether there is an electron or muon pair present which can reconstruct an on-shell Z boson. In the case of an on-shell Z boson, the effective mass of the MET and the highest pT lepton in the event (meant as a proxy for the presence of a W boson) is computed. Events are divided into kinematic regimes based on the MET and the scalar sum of the energy of leptons or jets (H_T^leptons and H_T^jets respectively) as in Figure 9(a). Figure 9. From [28], shown in (a) is the comparison of the background estimation to data for the MET spectrum for the signal region containing ≥ 3e/μ with a pair that is off-shell from the Z boson mass. Shown in (b) is an example of the interpretation of the final results in terms of the 95% confidence level upper limits on visible cross section (σ_95^vis) for four of the final signal region categories. In these numerous signal regions, no deviations from the background estimation are observed and the results are reported as 95% confidence level upper limits on the visible cross section (σ_95^vis) in each region as in Figure 9(b). In addition to these limits on the visible cross section, detector level efficiencies (f_id) for the identification of the reconstructed objects in the event are reported to allow for new models at particle level to be constrained by transforming the limit on visible cross section to a limit on fiducial cross section as σ_95^fid = σ_95^vis / f_id.
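The conversion from a visible to a fiducial cross-section limit quoted at the end of the multi-lepton discussion is a single division; the sketch below uses invented numbers purely to illustrate the relation σ_95^fid = σ_95^vis / f_id.

```python
# Convert a visible cross-section limit into a fiducial one,
# sigma_fid = sigma_vis / f_id, as described for the multi-lepton search.
# Both input numbers are invented examples, not values from the ATLAS paper.
def fiducial_limit(sigma_vis_fb: float, f_id: float) -> float:
    """Return the 95% CL fiducial cross-section limit in fb."""
    if not 0.0 < f_id <= 1.0:
        raise ValueError("detector-level efficiency must lie in (0, 1]")
    return sigma_vis_fb / f_id

print(fiducial_limit(sigma_vis_fb=2.5, f_id=0.6))  # ~4.2 fb for this toy input
```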
Diboson Signatures The last class of experimental signatures covered is that in which new physics couples to pairs of W and Z bosons, which form an intermediate state before decaying to leptons, neutrinos, or jets in the final state. One search that focuses on the WZ intermediate state identifies the presence of three well-reconstructed, isolated leptons (electrons or muons) in addition to a neutrino (reconstructed as large MET) in the final state [32]. The four possible signature combinations (eeνe, eeνμ, μμνμ, μμνe) are all required to have an oppositely charged lepton pair whose invariant mass is near the Z boson mass. The longitudinal momentum of the neutrino is reconstructed using the W boson mass constraint with the third lepton. The resulting WZ boson pair is required to be central and back to back in the transverse plane by making selections on Δη(W, Z) and Δφ(W, Z) respectively. The total invariant mass of the four body system is then reconstructed, as shown in Figure 10(a), and in the absence of a resonant excess, 95% confidence level upper limits on cross section times branching ratio are obtained for a benchmark extended gauge model W′ [33] as in Figure 10(b). The second search of this type focuses on the semi-leptonic (ℓℓqq) final state, which can be representative of the ZZ or ZW (indistinguishable due to the resolution of the measurement of the hadronic decay of the W/Z boson) intermediate state pair [34]. The final state is identified by first reconstructing a leptonic Z boson decay to a pair of well-reconstructed and isolated electrons or muons. The hadronic side of the event, although it is dominated by quark and gluon initiated jets from Standard Model production of Z+jets, is used to identify the decay of the second boson. For the case of low signal mass, this W/Z boson is reconstructed with two high pT anti-kT (R=0.4) jets whose combined invariant mass is near the Z boson. However, in the case of very massive signals (above 1 TeV), the qq pair coming from the W/Z boson decay are so heavily boosted that they are indistinguishable in the calorimeter and thus identified as a single high-pT, massive anti-kT (R=0.4) jet. The invariant mass of the boson pair is then reconstructed as either m(ℓℓjj) (Figure 11(a)) or m(ℓℓj) (Figure 11(b)) depending on whether the selection is made in the low mass or high mass regime. By using these two distinct event topologies, maximal sensitivity can be obtained across a broad range of reconstructed signal masses. In the absence of any clear excess, 95% confidence level upper limits on cross section times branching ratio are obtained for a benchmark Randall-Sundrum graviton (G*) [21] decaying to a pair of Z bosons, and a lower limit on the G* mass is found to be 850 GeV (Figure 11(c)). Conclusion The data delivered by the LHC during Run 1, and the tremendous effort put towards understanding the performance of the ATLAS detector, have allowed direct searches for new physics to cover a wide array of experimental signatures. Although no clear signs of exotic physics have been found, the data put direct constraints on many scenarios including technicolor, grand unified theories, fermion compositeness, the production of quantum black holes, and even dark matter couplings to the Standard Model. Many of these constraints are pushing the TeV scale of new physics and, with the increased energy of the LHC and the upgraded ATLAS detector set to begin taking data during Run 2 in 2015, the prospects for such future searches are very exciting.
Figure 3. From [14], shown in (a) is the comparison of the dijet invariant mass spectrum measured in data to the background estimate obtained from a smooth fit to data. Shown in (b) are the 95% confidence level upper limits on cross section times acceptance derived from the observed spectrum for a number of generic Gaussian signal templates. Figure 10. From [32], shown in (a) is the comparison of the background estimation to data for the combination of all signal regions. Figure (b) shows the expected and observed 95% confidence level upper limits on σ(pp → W′) × BR(W′ → WZ). Figure 11. From [34], shown in (a) and (b) are the comparisons of the Monte Carlo estimated backgrounds to data for combined electron and muon channels for the low mass and high mass selections respectively, with theoretical signal predictions overlayed. Figure (c) shows the expected and observed 95% confidence level upper limits on σ(pp → G*) × BR(G* → ZZ) obtained in the absence of signal.
5,202.4
2014-01-01T00:00:00.000
[ "Physics" ]
Grammars of Classical Arabic in Judaeo-Arabic: An Overview This article presents an overview of medieval Classical Arabic grammars written in Judaeo-Arabic that are preserved in the Cairo Genizah and the Firkovich Collections. Unlike Jewish grammarians' application of the Arabic theoretical model to describing Biblical Hebrew, Arabic grammars transliterated into Hebrew characters bear clear evidence of Jewish engagement with the Arabic grammatical tradition for its own sake. In addition, such manuscripts furnish new material on the history of the Arabic grammatical tradition by preserving otherwise unknown texts. The article discusses individual grammars of Classical Arabic in Judaeo-Arabic and tries to answer more general questions on this little known area of Jewish intellectual activity. An analysis of the corpus suggests that Jews who copied and used these texts were less interested in the intricacies of abstract theory than in attaining a solid knowledge of Classical Arabic. Court scribes appear to have been among those interested in the study of Classical Arabic grammar. Introduction Medieval Jewish grammatical interests centered around the study of the language of Jewish Scripture, Biblical Hebrew. Although recent research suggests ... The consistent vocalisation in T-S NS 301.25r is significant for determining the fragment's function. Al-Zaǧǧāǧī's Kitāb al-Ǧumal was traditionally used in the classroom to teach students the basics of the Classical Arabic language and grammar. It is clearly with the same purpose that this work was transliterated into Hebrew characters. That the single currently identified part of Kitāb al-Ǧumal in Hebrew characters is the chapter on inflection, and the following chapter on verbs was not copied even though enough space remained on the page to do so, may indicate that only a portion of the book was transcribed and vocalised, possibly as a vocalisation exercise. It seems fitting that a basic text on grammatical cases, which mainly deals with vowels and ends with a summary of all case markers, should be used as a sample text to practice one's vocalising skills. The imperfect vocalisation of the fragment may indicate that this is not a teacher's work to be copied by future students but the product of a learner who has not yet attained full mastery of this subject.
3) RNL Evr Arab II 185 (25 Fols.185 5v,5r,2r,2v,4r,4v,3v,3r,1v,1r;RNL Evr Arab II 253r,RNL Evr Arab II 253v;RNL Evr Arab II 185 6v,6r,7r,7v,15v,15r,8r,8v,16r,16v,10r,10v,12r,12v,11v,11r,13v,13r,18r,18v,20r,20v,19r,19r,17v,17r,14r,14v,21r,21v,9r,9v,22v,22r,25r,25v,24v,24r,23r,23v Copied in the 12th century, the Judaeo-Arabic text is an early witness of Šarḥ Mulḥat al-iʿrāb.The following chapters fully or partially survive: on the noun, on the verb, on the particle, on the indefinite and the definite, on the division of verbs, on the inflection, on the inflection of triptote nouns, on the initial item and the predicate, on the agent, on the patient, on the sisters of ẓanantu, on the exclamatory construction, on the construction of instigation, on the construction of warning, on the sisters of inna, on the sisters of kāna, on mā of negation, on the vocative, on the apocopation (of the vocative), on the diminutive, on the appositives, on diptotes, on poetic license, on numerals.17 A comparison of the manuscripts with a printed Arabic script edition indicates that the Judaeo-Arabic version is a straightforward copy without significant changes.The only deviations are occasional omissions of short bits of text, such as examples and Islamic honorifics.The text carries relatively many transliteration mistakes conditioned by the shapes of letters and letter combinations in Arabic script.The mistakes in transliteration are particularly common in chapters dealing with finer details of Classical Arabic, which the copyist may have been less familiar with, e.g., the case endings in different vocative construction. Intellectual History of the Islamicate World 8 (2020) 284-305 five nouns ab, aḫ, fū, ḥam and ḏū, dual and sound masculine plural forms.It then goes on to discuss each case and mood separately, but breaks off in the middle of a discussion of the nominative. 5) T-S Ar.31.254 (1 Folio), T-S 24.31 (1 Folio) and T-S AS 155.132 (1 Folio),19 Late 11th-Century Handwriting The fragments are part of a rotulus that originally held a petition to a dignitary penned in large Arabic characters.20Such state documents were written on only one side of the paper and laid out with wide spaces between the lines, which made them very attractive for recycling as writing paper for other texts.21In the rotulus, a Judaeo-Arabic grammar of Classical Arabic has been copied on the blank side and between the lines, penned upside down compared to the original text.It is most likely that the Judaeo-Arabic text is in the hand of the prolific court scribe Hillel b. ʿEli, who wrote numerous Genizah documents between 1066-1107.22 The grammar is divided into short chapters dealing primarily with the correct cases and moods in different syntactic constructions, such as the predicative construction, annexation, circumstantial clauses, the passive, etc., lists of operators that require certain cases and moods, the formation of nisba adjectives, and the spelling of final weak verbs.Each chapter summarises the subject matter in one or two sentences and provides a large number of examples. 
An analysis of the fragments shows that this grammar does not belong to the mainstream of the Arabic grammatical tradition.The text can be identified as a pedagogical grammar representing the so-called Kūfan school of grammar.In the Arabic grammatical tradition two schools are distinguished, the Kūfan and the Baṣran.Although the authenticity of the schools is debated,23 distinctive terminology and grammatical theories are consistently ascribed to them in medieval Arabic sources.24The terminology, notions and theories embraced in T-S Ar.31 presented in the sources as Kūfan.25The shibboleth Kūfan terms used in the fragments are qaṭʿ for circumstantial qualifier and ṣifa for locative qualifier, for which the corresponding Baṣran terms are ḥāl and ẓarf respectively.26In the chapter on qaṭʿ, the author explicitly dissociates himself from the Baṣrans: while consistently using qaṭʿ to denote circumstantial qualifier, the author remarks that the Baṣrans' term for qaṭʿ is ḥāl.27 It is unfortunate that several words are missing in the manuscript where the author most probably alludes directly to the group that uses the term qaṭʿ, viz.his in-group.A famous Kūfan theory embraced in the grammar is that infinitives are derived from finite verbs, i.e. ḫurūǧ is derived from ḫaraǧa.In contrast, Baṣran grammarians maintained that verbal derivation occurs in the opposite direction, from infinitives to finite verbs.28A conspicuous feature of the Judaeo-Arabic version of this grammar is the occurrence of numerous mistakes in transliteration.These mistakes demonstrate that the grammar was copied into Hebrew characters from an Arabic script Vorlage rather than composed directly in Judaeo-Arabic.The mistakes in transliteration reveal that the scribe, in all probability Hillel b. ʿEli, was not a proficient reader of cursive Arabic texts.Moreover, at the time of copying he was not educated in Classical Arabic grammar, for he clearly did not understand the grammatical analysis.One of the most conspicuous demonstrations of this is found in the chapter on the past form of final weak verbs, where the unpointed tooth element in ‫ى‬ ‫ا‬ ‫ء‬ is consistently interpreted as b instead of y, which results in the chapter discussing final waw and final bāʾ verbs.The text preserved in the fragments is not a coherent treatise but an eclectic compilation of grammatical materials put together by association, with additions from other disciplines, such as orthography, philosophy, and biographical literature.The compilation is not well structured: sometimes a chapter is started, left unfinished as the compiler diverges into another subject and then resumed or even started again.The text stops abruptly in the middle of a sentence leaving most of the final page empty.32It is most likely that this is a private compilation prepared in the process of studying Classical Arabic grammatical theories.It is possible that this compilation was put together by Nathan b.Samuel Nezer ha-Ḥaverim. 
The discussed topics are: types of predicates, parts of speech, principles of inflection, the actualisation of moods and cases in words of different patterns including diptosis, and negation particles.The level of text oscillates between a basic statement of linguistic facts and a more abstract discussion of theoretical issues.Parts of The Chapter on Parts of Speech and of The Chapter on Inflection are identical with corresponding sections of a short grammar Al-Tuffāḥa fī l-naḥw "The Apple of Grammar" by Abū Ǧaʿfar al-Naḥḥās (d.338/949).33In the more theoretical sections, the fragment deals with such issues as why verbs are secondary to nouns, why nouns cannot have the apocopate form (ǧazm), why certain factors cause diptosis, etc.A section on graphic signs (šadda, waṣla, tanwīn, etc) is also included, which is common in treatises on Arabic orthography but not in Muslim grammars. The most noteworthy feature of this text are the cited authorities.The fragments give four definitions of parts of speech: by Sībawayhi (d.c. 180/796), by ʿAlī b.Abī Ṭālib (c.600-40/661), by Aristotle and by al-Dumayk (c.457/1060-510/1117).1) Sībawayhi's definition of the noun: Babylonian academies by Nathan ha-Bavli (for an identification of the hand see Gil,In the Kingdom of Ishmael,vol. 2, … Aristotle: the definition of a noun is exactly the same (i.e.ism), and the verb is "word" (kalima) and the particle is "instrument" (adāt).This definition establishes correspondences between Arabic grammatical terms for parts of speech (ism, fiʿl and ḥarf ) and Arabic translations of Aristotelian terms (ism, kalima and adāt).As is well known, Aristotle divided speech into nouns, verbs and particles, calling them in Greek onoma (lit."name"), rhema (lit."word, utterance, thing said") and sundesmos (lit."some- Īḍāḥ fī ʿIlal al-naḥw "The Explanation of Linguistic Causes" as definitions which are "taken from the technical language of the logicians" and "do not meet linguistic requirements".46 The mention of al-Dumayk merits special attention.Manṣūr b. al-Muslim b. ʿAlī b.Muḥammad b.Aḥmad b.Abī al-Ḫaraǧayn, known as al-Dumayk (c.457/ 1060-510/1117), is mentioned in biographical literature as a poet, a teacher and a grammarian.47To the best of my knowledge none of his grammatical works survive.Quotations attributed to al-Dumayk in the Judaeo-Arabic compilation constitute the only source on the grammatical teachings of this scholar known today.Al-Dumayk pursued his career in Damascus, and it may not be a coincidence that he is quoted in a work penned by a scribe who lived in that city at approximately the same time. 
In addition to the definition of a noun cited above, the following is transmitted in our fragments in the name of al-Dumayk: … composed a book on Arabic inflection according to the Greek system in two discourses (maqālatān), no copies of which have so far been identified.60

3 Concluding Remarks on the Corpus of Grammars of Classical Arabic in Judaeo-Arabic

Grammars of Classical Arabic copied in Hebrew characters are interesting from two perspectives. Firstly, they furnish new material on the history of the Arabic grammatical tradition. These manuscripts complement Muslim sources on the subject by preserving otherwise unknown texts, some of which do not belong to the mainstream of the Arabic grammatical tradition. In the corpus presented above there are three examples of this: the Kūfan pedagogical primer (item 5), quotations from al-Dumayk (item 6) and a grammar of Classical Arabic possibly associated with Ḥunayn b. Isḥāq (item 7). Secondly, grammars of Classical Arabic in Judaeo-Arabic are important because they testify to Jews' active interest in grammar other than the grammar of Biblical Hebrew. One of the most challenging questions that arises in connection with the corpus is whether Jews composed any of the treatises or simply transliterated Muslim works. In the present state of research it is impossible to give a definitive answer. On the one hand, not all fragments could be identified with Muslim grammars. On the other hand, no anonymous grammars known to me carry explicit indications of Jewish authorship. Hebrew is never mentioned, either for comparisons with Arabic or as a language that an author masters. Hebrew terminology, which is often found in Judaeo-Arabic works on Hebrew grammar, is equally absent. The Kūfan primer (item 5) was clearly copied from a Vorlage in Arabic script, and the adaptation of al-Lumaʿ fī l-ʿarabiyya (item 1) contains additional quotations from the Qurʾān not found in the original text. Although these facts do not preclude Jewish authorship, they make it less probable. Of all texts discussed here, the compilation of language-related materials in item 6 seems less likely to have been copied as a whole, and may have been put together by a Jew.

Even if simply transliterated from Muslim works, the fragments bear clear evidence of Jewish engagement with Classical Arabic grammar for its own sake … customer. The Kūfan primer in the hand of Hillel b. ʿEli is a faulty copy on scrap paper. The compilation in the hand of Nathan b. Samuel is a poorly organised and unfinished collection of materials. Both appear to be the scribes' private books prepared with the intention of studying Classical Arabic grammar.

73.1 (1 Folio), T-S Ar 5.4529 (1 Bifolio and 1 Folio), 12th-Century Handwriting

The fragments are in the hand of the well-known court scribe and poet Nathan b. Samuel Nezer ha-Ḥaverim (or he-Ḥaver).30 Nathan b. Samuel was born and started his scribal career in Damascus, moved to Egypt in 1127 and was active until his death in 1163 as a scribe of the Palestinian academy at the court of the gaon Maṣliaḥ ha-Kohen b. Solomon.31 … Below, The Chapter On Verbs is announced but is not copied, leaving a large empty space at the bottom of the page. The text is consistently vocalised with Arabic signs, which occasionally reflect non-standard pronunciation (e.g. …
… Century Handwriting

The manuscripts belong to a partially preserved Judaeo-Arabic copy of Šarḥ Mulḥat al-iʿrāb, a commentary on the didactic grammatical poem Mulḥat al-iʿrāb "Witty Poem on Inflectional Endings" by the renowned Arabic author Abū Muḥammad al-Qāsim b. ʿAlī l-Ḥarīrī (446/1054-516/1122).14 The grammatical poem was composed by al-Ḥarīrī in c. 504/1110 at the prompting of the Chris- … ), RNL Evr Arab II 253 (1 Folio), RNL Evr Arab I 4631 (1 Folio),13 12th- …

11 Al-Zaǧǧāǧī, Kitāb al-Ǧumal, pp. 3-6, esp. p. 6. 12 Carter, "Grammatical tradition," Binaghi, La postérité andalouse, pp. 155-156, 158-159. 13 I thank Dr José Martinéz Delgado (University of Granada) for drawing my attention to these manuscripts. The correct order of pages is: RNL Evr Arab II … .254, T-S 24.31 and T-S AS 155.132 correspond with those commonly … 23 An up-to-date overview of the different views of modern scholars is Baalbaki, "Introduction," esp. pp. xxxix-xlii. 24 See, e.g., Weil, Streitfragen. … and Mosseri II.214, a piyyuṭ fragment where the name Nathan b. Samuel ḥazaq is marked on verso.

Sībawayhi said: the definition of a noun is what can receive one of the particles that govern the genitive. In fact, Sībawayhi did not give a definition of the noun in the Kitāb "The Book", but simply exemplified nouns with raǧul "man" and faras "horse."34 In the sources a number of definitions are ascribed to Sībawayhi,35 but to the best of my knowledge not the one given here. Instead, the above definition strongly resembles a part of the definition given by Abū l-ʿAbbās al-Mubarrad (d. 285/898) in al-Muqtaḍab "The Epitome": …

ʿAlī b. Abī Ṭālib said: the definition of a noun is what signifies the meaning of its nominatum. The verb is the nominatum(?). The particle is what signifies the meaning … The passage is not well preserved, and at least the definition of the verb seems corrupt. ʿAlī b. Abī Ṭālib is frequently named as the initiator of the Arabic grammar in Muslim bibliographical literature.37 Whereas Arabic grammatical works do not usually cite definitions of parts of speech ascribed to ʿAlī, bibliographical treatises do, for example: The noun is what gives information about the nominatum. The verb is what informs about the movement of the nominatum. The particle is what gives information about a meaning that is neither that of a noun nor that of a verb.38 Noun is what gives information about the nominatum. Verb is that by which information is given. Particle is what comes to a meaning.39 A set of definitions similar to the one in the former quotation appears to have given rise to the now corrupt version in the Judaeo-Arabic compilation.
3,676.4
2020-07-30T00:00:00.000
[ "Computer Science" ]
Implementation of science process skills using ICT-based approach to facilitate student life skills . The purpose of this study is to describe the results of the implementation of a teaching-learning package in Plant Physiology courses to improve the student's life skills using the science process skills-based approach ICT. This research used 15 students of Biology Education of Undergraduate International Class who are in the Plant Physiology course. This study consists of two phases items, namely the development phase and implementation phase by using a one-shot case study design. Research parameters were the feasibility of lesson plans, student achievement, Including academic skills, thinking skills, and social skills. Data were descriptively Analyzed According to the characteristics of the existing data. The result shows that the feasibility of a lesson plan is very satisfied and can be improvements in student's life skills, especially with regards to student's thinking skills and scientific thinking skills. The results indicate that the science process skills using ICT-based approach can be effective methods to improve student's life skills. Introduction In 21 st Century Information and communication technologies (ICT) have become very common today in many aspects of life. ICT is also becoming very important in the social and economic life of society. The field of education is also now taking advantage of ICT in learning, but the effect is not covering other aspects. More education leads to traditional educational activities, where professors become the center of information and attention of students. The demands of 21 st -century education forced the shifting of paradigms. Students are required to master ICT effectively as one's ability to compete in the 21 st century [1,2]. Student results based on observations conducted by researchers showed that academic achievement by using traditional methods alone had unsatisfactory results. These obstacles arise because the materials studied abstract so that the student is difficult to establish the concept of knowledge. Learning by using ICT has many advantages, one of which is learning can be done independently. Students can use computer animation to understand the concepts learned. Attewell et al. believe that technology can make learning becomes easier [3], but it takes too much training, preparation and production of appropriate materials for such learning to be more effective [4]. As the education world continues to grow, various phenomena in the development of real-life demands graduates have a variety of skills to face the competition. National Council for Curriculum and Assessment (NCCA) conduct online surveys to evaluate the response of academics know the priority that should be practised in the world of education. One result of the survey is important to develop life skills [5]. MONE explaining the concept of life skills (life skills) as follows: the Tim Broad-Based Education Education Ministry argued that the purpose of life skill education is to: (1) to actualize the potential of learners that can be used to solve problems encountered, (2) provide an opportunity for the school to develop flexible learning, in accordance with the principle of broad-based education and (3) optimizing the use of resources in the school environment, by allowing the use of existing resources in the community, in accordance with the principle of school-based management [7]. 
Those goals are the main purpose of life skill education is to prepare students to be concerned is able, capable and skilled maintain the viability and growth in the future [8]. Methods This study implemented a variety of learning models are packed with the application process skillsbased approach to ICT for Plant Physiology courses with related concepts which include enzyme plant metabolism, photosynthesis, respiration, and nitrogen metabolism. This study consisted of two phases, the first phase of the development of learning tools, and the second stage is the implementation of learning tools Process Skills ICT to teach Life Skills students. Assessment skills (life skills) includes several components that are tailored to the skills of the selected process as an approach to this learning process. life skills (life skills) that are Life generic for Skills consist of personal skills(personal skills) and social skills (social skills). Personal skills include academic skills (academic skills), while the specific life skills(specific life skills) consisting of academic skills (academic skills). Learning software development using 4-D models which consists of four stages namely Define (definition), Design (Design), Develop (Development) and Disseminate (deployment). The study design when implementing the learning device is a One-Group Pretest-Posttest Design developed by Campbell and Stanley [8]. Subjects were students of International Class is being reprogrammed subjects Plant Physiology, the student of class 2012, which amounted to 15 people. The research was conducted in the Department of Biological Science Education UNESA, the even semester 2013/2014 year. Results Summary of results score of implementation learning SAP at the meeting of 1, 2, 3, and 4 are presented in Table 3.1 show that for every aspect observed for each category include: (1) introduction, (2) the core activities and (3) cover respectively show a score of 4; 3.6; and 3.5 which are in both categories. For implementation learning instrument reliability value of 97%. These results indicate that the enforceability of the skills-oriented learning processes used to improve life skills students can be done well. Student mastery learning data obtained by administration of post-test the first and second to students at the end of the lesson, Achievement test product is a test mastery of the material on the topic of water transport, translocation through the phloem, enzymes, photosynthesis and respiration, while the achievement test used to measure aspects of the process of thinking skills and aspects of academic skills. The results of the analysis of students' test results are shown in Table 3.2. From the results of student learning completeness in Table 3.2 can be revealed that the results test of students is reaching the classical completeness with an average value of 76.42 which is the range of values with the letter B +. But there are three students who have not completed to obtain the value of B such provisions considered sufficient thoroughness to this course is B, such as students with no 1, 3 and 5. In addition to learning the results of the product, in this study also uses the test results to the process of learning know the life skills (thinking skills and academic skills)students. 1234567890''"" The Consortium of Asia-Pacific Education Universities (CAPEU) IOP Publishing IOP Conf. 
Series: Materials Science and Engineering 296 (2018) 012035 doi:10.1088/1757-899X/296/1/012035 Scores of the learning outcomes process are presented in Table 3.3. Based on Table 3.3 it can be seen that the average results learning process to test the 1st and 2nd measure thinking skills(thinking skills)and Academic Skills(academic skills)are respectively 81.3 and 72.3. These results suggest that mastery learning process for all students achieved the following study. Thinking skills(thinking skills)and Academic Skills(academic skills)students, it can be seen from the values obtained after the student answer the test results that reflect aspects of the process of learning thinking skills and academic skills.Details of thinking skills and academic skills of students are shown in Table 3 Table 3.4 can be disclosed that in test 1 and 2 percentage thoroughness of thinking skills and of academic achieve mastery skills, with an average reach 83 (rated A-on to the system font). Observation of social skills(Social skills) is recorded using the observation sheet social skills(social skills)when students perform lab activities and presentation of observations. The percentage breakdown aspects of social skills emerge, are presented in Table 3.5. Description: P1 =observer 1 P2 =observer 2 According to Table 3.5 show that in general, assessment of social skills ranging from grades 59% to 88.9%. This means that if used a scoring system that applies in Unesa then the average value of the social skills of students still at a fairly wide range of values of between grades C to A. Social skills are still low since the ability to use verbal communicative language is followed by the ability to use language communicatively in writing. This study begins to develop learning tools courses Plant Physiology-based process oriented ICT skills to improve life skills student incorporating ICT-based animation media. Further validation was done by a team of experts in the field of plant physiology to get feedback and suggestions regarding the feasibility of learning tools developed for the learning process using English as the language of instruction. SAP (Lesson Plan) prepared in accordance with good process skills approach basic skills processes and integrated process skills. Based on the results of the analysis indicate that the validation and revisions have been made to the SAP (Lesson Plan) and the completeness of their supporters could feasibly be used to study skills-oriented processes and ICT-based and feasible to improve life skills (thinking skills, academic skills, and social skills)to students being programmed course Plant Physiology. MFIs are arranged oriented learning process skills approach to teaching life skills that include thinking skills, academic skills, and social skills.The analysis showed that students had no difficulty in following the process of learning the skills approach. The results of the validation of THB. THB products and processes from the material aspect also received good ratings and feasible to be implemented (data not shown). Implementation observed learning involves three aspects: (1) The preliminary activities, (2) the core activities, and (3) Event cover. Worth mentioning that when designing learning device, prepared in accordance with the theory of scaffolding and cognitive apprenticeship expressed by Vygotsky. 
Theory Scaffolding is defined as the process of assistance from other people who have more knowledge (professors or fellow students) to people who are a little more knowledge to address the problems that go beyond the current level of development [8]. It is very necessary to improve the ability of students, especially in improving the ability of life skills that are indispensable as the provision of public life in order to be a successful man and capable of being in the environment with appropriately in accordance with its function. Cognitive apprenticeship theory defined on the process whereby a person who is learning step by step acquire expertise in interaction with an expert, who serves as an expert, in this case, is an adult or an older person. In this case, can be interpreted faculty adviser subjects or peers who know more about the problem [9]. This can be proven at the time of the learning activity, students conduct discussions with peers and with faculty or co-assistant to solve the problems encountered at the time of observation or process skills activities. After completing this activity, students discuss with members of the group and then present the results of their discussion to the other groups. Implementation of the learning device oriented ICT skills-based process, also in accordance with Annex Permendiknas No 22 of 2006 on the Content Standards Subjects Biology, which forms positive attitudes towards biology to realize the regularity and beauty of nature and exalt the greatness of God Almighty. Fosters scientific attitude, honest, objective, open, critical resilient and able to cooperate with others. This becomes very necessary and important for the students of Biology Education S1 as a form of imitation how they can translate the implementation in accordance with the demands of the Content Standards Curriculum. The importance of the use of ICT in learning is also supported by research done previously, that the use of ICT can enhance students' science learning outcomes [10]. Another study also describes the use of ICT in learning can increase student engagement in learning [11]. Activities that support it can be seen from the data processing that is adjusted on the basis of observations made, not made-up and be honest. Besides the achievement of basic competency courses in Plant Physiology can be facilitated by using process skills approach to teaching life skills. Aspects thinking skills and academic skills can be seen in the learning activities with process skills approach that includes: making the formulation of the problem, identify variables, make observations, collect data and analyze the results of observations, and make inferences observations. Social skills can be seen clearly at the stage students discuss analyze and report the results of observation in the group. SAP enforceability of the instrument reliability of 97% indicates that the instrument used has a value that is reliable [12]. In this case, it can be assumed that the instruments used are reliable. After the learning process-oriented skills, students who have achieved mastery to value B as many as 12 students from 15 students of International Class S1 is being reprogrammed subjects Plant Physiology. However, there is still a value of C according to Unesa pass mark is still within the range of passing. Mastery learning students after learning of ICT-based process-oriented skills is due to the learning process practised skills in an organized process. 
While it is known that the learning process skills will give a good impact for students, such as [13]: 1) Students undergo the process to get the concept, formula or description of something so that students can understand it; 2) Students will actively participate in learning activities; 3) Allows students to develop a scientific attitude and stimulates curiosity; 4) Students will gain a sense of what is actually experienced because they find a concept or generalization from the results of their own work; 5) Students' understanding of a concept or principle is steadier, allowing students to be able to apply them to other more relevant issues; 6) Students are satisfied with the results of observations and findings as one of the factors to foster intrinsic motivation in students; 7) Through this approach, the development of science and concept changes that might occur are readily accepted; 8) Students are trained in activities required by science as practiced by scientists; 9) the skills students acquire will be useful in everyday life; 10) The possibility of maximum utilization of the environment as a learning resource; 11) Familiarize students to express their opinions in a systematic way and respect the opinions of others. Please also note that by training process skills, students will more easily absorb and appreciate the subject matter, because students are directly involved in the learning process. Provision of direct experience is emphasized through the use and development of process skills and scientific attitude with the goal of understanding the concepts and being able to solve the problem so that the results of student learning are maximized and students achieve mastery learning. In addition, skills in the learning process of this subject are also packaged based on ICT, given that many or most of the concepts contained in the subject of Plant Physiology are abstract. Therefore we need the help of media that is able to bridge and translate what is abstract into something easier to follow visually. An abstract process that is easy to follow visually is expected to be used as a bridge to improve the cognitive abilities of students on both products and processes. Achieving mastery of learning outcomes oriented to process thinking skills and academic skills indicates that ICT-based process skills learning can improve students' life skills. Learning with the process skills approach gives students the opportunity to interact directly using the tools of the experiment so that life skills can be trained and the students can use the trial properly and in accordance with the objectives expected when students have activities in the Lab. As is known, the principle of life-skills-oriented learning leans more to contextual learning, namely the relationship between the real-life environment and experiences of students, as does the process skills approach to learning. Students learning the material in class should not only be prepared as researchers alone, but they must possess multidimensional thinking like that done by a researcher [14]. Process skills are very important to achieve this and to generate a mindset to act as a science student, as found by researchers [15].
Process skills can be described as the abilities a researcher uses during their work duties and the competence demonstrated while resolving scientific problems [16]. There are a variety of skills within process skills; these skills consist of basic skills and integrated skills. Basic skills consist of six skills, namely: observing, classifying, predicting, measuring, concluding, and communicating. Integrated skills comprise: identifying variables, tabulating data, presenting data in graphical form, describing relationships between variables, collecting and processing data, analyzing research, developing hypotheses, defining variables operationally, designing studies, and carrying out experiments [17]. These skills can simultaneously facilitate students' life skills. Thinking skills are the proficiency in using reason or the mind, which includes, among other things: the prowess to identify and find information, to process information and make decisions, and to solve problems creatively. Academic skills are often called intellectual prowess or scientific thinking skills, which basically are the development of thinking skills in general but lead to activities that are scientific. These skills include the skills to identify variables, explain the relationships in a particular phenomenon, formulate hypotheses, and design and conduct research. These skills are integrated into process skills; this is in accordance with the capabilities and fundamental concepts underlying science as process skills. It is known that the abilities and fundamental concepts that underlie science as process skills can teach life skills such as: 1) Identify the questions that can be answered through scientific investigation; 2) design or conduct scientific investigations; 3) Develop descriptions, explanations, predictions and models using evidence; 4) Think logically and critically; 5) Recognize and analyze alternative explanations and predictions. Based on Table 3.5, the observed aspects of social skills are generally categorized as good, except for the aspects of using communicative language orally (59%) and communicating with other groups (63%). This is in accordance with Vygotsky's theory, which emphasizes the social aspects of learning in the belief that social interaction with others spurs the development of new ideas and enriches the intellectual development of students. Based on the data, some aspects of social skills were found to be less well categorized. This is because, in reality, oral communication was not easy to do, let alone in English. Often the student finds it difficult to accept the opinion of his interlocutor, not because of the content or ideas but because of a manner of delivery that is less pleasing. Therefore we need the ability to choose words and to convey them in a way that is easily understood by interlocutors; besides, speaking in English is also not something everyone can easily do. Students should develop the ability to be a good listener and appreciate the explanation submitted by other students. Students also should remain open and appreciate different ideas and explanations, consider alternative explanations, and strive to learn more about speaking English. Some aspects of students' social skills are in the good category.
This means learning skills-oriented ICT-based packaged process can teach social skills to students. The general condition which is a condition onset of activity on the process skills that students in the class and the social aspects of the open atmosphere that invites students to discuss. Social skills should mastery by students as prospective teachers in junior high or high in accordance with the purpose of the subjects of Biology at the junior / senior high school. Compliance with the demands of the subjects Biology / IPA in SMA / SMP can be used as modelling for prospective teachers to be able to package learning materials that will be facilitated by the students. Obstacles or problems were found during the learning process most predominantly found was the lack of English language skills of students orally, however, the understanding or the ability of students in written English is better than his verbal ability. Therefore, the evidence described above indicate that ICT-based media can help improve students understanding of abstract concepts into a visual form that is easily understood. 4. Conclusions and suggestions Based on the analysis, the conclusions that can be drawn from this study are as follows: • The learning implementation process in the course of Plant Physiology to improve life skills through ICT-based process skills good category with a value of 97% reliability of the instrument. • Cognitive learning outcomes of students average product are 76.4 (B + on the number of letters). For the ability of thinking skills(thinking skills)and scientific thinking skills(academic skills)students after applying the learning device of Plant Physiology through ICT-based process skills are respectively 85.7 and 83. That achievement is at A-value for the system font. • The ability of social skills(social skills)students were in grades B to A-except for the ability to use communicative language orally (59% = value C) and the ability to communicate with other groups (63% = C). • Obstacles encountered in the implementation of learning Plant Physiology English language skills of students orally is still inadequate as a provision in communicating effectively in the classroom.
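To make the two headline figures above easier to interpret, here is a minimal Python sketch of how such numbers are typically computed; it assumes the common percentage-of-agreement (Borich-style) definition of instrument reliability and a simple pass-mark rule for classical mastery, since the paper does not spell out its formulas, and the input values are illustrative rather than the study's data.

# Illustrative re-computation of two reported metrics (assumed formulas, made-up data).

def percentage_agreement(score_a: float, score_b: float) -> float:
    """Inter-observer reliability as percentage of agreement (Borich-style formula)."""
    return (1 - abs(score_a - score_b) / (score_a + score_b)) * 100

def classical_mastery(scores, pass_mark=70):
    """Share of students at or above the pass mark, in percent."""
    passed = sum(1 for s in scores if s >= pass_mark)
    return 100 * passed / len(scores)

if __name__ == "__main__":
    # Two observers' mean implementation scores (illustrative values)
    print(f"Lesson-plan reliability: {percentage_agreement(3.7, 3.6):.1f}%")
    # Post-test scores for a small class (illustrative values)
    print(f"Classical mastery: {classical_mastery([80, 75, 65, 90, 72]):.1f}%")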
4,872
2018-01-01T00:00:00.000
[ "Education", "Biology", "Computer Science", "Environmental Science" ]
Improving students' high-level mathematical thinking skills through generative learning models

ABSTRACT The purpose of this study was to (1) analyze the achievement of high-level mathematical thinking skills of students who received learning with generative models and learning with conventional models, (2) analyze the increase in students' high-level mathematical thinking skills between those who receive generative learning models and conventional learning models, (3) see the effect of students' KAM (Initial Mathematical Ability) levels in the high, medium, and low categories on the high-level mathematical thinking abilities of students who receive generative learning models and conventional learning models, and (4) determine the magnitude of the interaction between the generative learning model and the KAM (Initial Mathematical Ability) level of students in the high, medium, and low categories on the improvement of high-level mathematical thinking skills of students who received generative learning models and conventional learning models. This research is experimental with a pretest-posttest control group design. Based on the results of this study, it can be seen that (1) there are differences in the attainment of high-level mathematical thinking skills between students who are taught by generative learning models and conventional learning models, (2) there is a significant difference in the improvement of students' high-level mathematical thinking skills between students who are taught with generative learning models and students who are taught with conventional learning models, (3) the KAM level of students in the high, medium, and low categories does not significantly affect the improvement of high-level mathematical thinking abilities of students who obtain generative learning models and conventional learning models, and (4) there is no interaction between the KAM level (high, medium, or low) and the learning model towards increasing the high-level thinking skills of students who receive generative learning models and conventional learning models.

INTRODUCTION Education, as one of the bases for developing science and technology, certainly plays an important role in people's lives. One of them is mathematics education, which is really needed, especially in the 21st century, which is full of competition.
Teachers and lecturers need to pay attention to this when equipping students with useful knowledge and skills to answer future challenges. In connection with it, Soedjadi (1995) stated that mathematics, as one of the basic sciences, both in its applied aspect and in its reasoning aspect, has a very important role in efforts to master science and technology. This means that, to a certain extent, mathematics must be mastered by all Indonesian citizens, both in its application and in its patterns of thinking. Furthermore, Soedjadi stated that school mathematics, which is the part of mathematics selected for developing students' abilities and personalities as well as for the development of science and technology, needs to always be in line with the demands of students' interests in facing future life. This is in line with As'ari's opinion (Fadjar, 2007) that the characteristics of current mathematics learning are more focused on procedural skills, one-way communication, monotonous classroom settings, low-order thinking skills, dependence on the textbook, and the dominance of routine and low-level questions. Teachers rarely provide high-level thinking skills to students in mathematics learning, so when students are given non-routine questions and questions that require critical and creative solutions, they have difficulty solving them. Furthermore, Rofiah et al. (2013:18) stated that higher thinking skills are the ability to connect, manipulate, and transform existing knowledge and experience to think critically and creatively in an effort to determine decisions and solve problems in new situations.

Based on the description above, it can be said that the ability to think at a higher level is one of the characteristics of mathematics learning required in the curriculum: students should not only be given low-level thinking skills. It is also hoped that teachers can provide high-level thinking skills to students in mathematics learning so that they get used to thinking outside the box and to working on non-routine problems and open-ended questions.

In TIMSS 2001, it was found that Indonesian students' high-level mathematics abilities are still far behind those of ASEAN countries such as Singapore, Malaysia, and Vietnam. This is the impact of our learning at school, which still relies on textbooks: teachers are used to presenting material and giving practice only on questions with a single solution and routine questions, rarely provide non-routine and open-ended questions, and do not require students to think critically and creatively or to solve problems that require connecting, manipulating, and transforming the knowledge students already have with their new knowledge.

One learning model that is thought to be able to improve students' high-level mathematical thinking skills in mathematics learning is the generative learning model. Generative learning is a constructivism-based learning model that places more emphasis on actively integrating new knowledge with knowledge that students already have. The generative learning model requires students to be active and free to construct their knowledge. Apart from that, students are also given the freedom to express ideas and reasons for the problems given so that they will better understand the knowledge they have formed themselves, and the learning process carried out will be more optimal. According to Osborne & Wittrock (1985), the application of generative learning models is a good way to find out students' thinking patterns and how they understand and solve problems well, so that in later learning the teacher can develop strategies for learning, for example, how to create an interesting, enjoyable learning atmosphere, and so on.

Based on the description above, it can be said that generative learning can provide challenges for students to solve mathematical problems and encourage them to be more creative, motivated to learn, confident, and self-sufficient. In the mathematics learning process, teachers are required to use non-routine and open-ended problems to solve problems in mathematics learning.

RESEARCH METHOD This type of research is quasi-experimental with a pretest-posttest control group design. In this quasi-experimental research, the subjects are not randomly grouped, but the researcher accepts the subjects' condition as is (Ruseffendi, 2005: 52). In this research, two classes were used, namely the experimental class and the control class. The initial stage of this research is to determine the research sample. Then two classes were taken at random, namely one as the experimental class and one as the control class. This treatment was given to see its effect on the aspect being measured, namely students' high-level mathematical thinking abilities. The research design used in this research is as follows: O X O ....................
(Ruseffendi, 2005: 53) O O In this design, the grouping of research subjects is carried out randomly by class.The experimental group was given generative learning treatment (X), and the control group was given conventional learning.The normalized gain score calculation was carried out because this research not only looked at student improvement but also looked at the quality of that improvement.In addition, the calculation of the normalized gain score is carried out with the aim of eliminating the student guess factor and the effect of the highest score so as to avoid biased conclusions. Hake (1999).Second stage: at this stage, statistical prerequisite tests are carried out, which are used as a basis for hypothesis testing, namely normality test and homogeneity test.Third stage: determine the achievement and improvement of students' high-level mathematical thinking abilities between the experimental class and the control class, determine whether or not there is an interaction between the independent variable and the control variable on the dependent variable in accordance with the hypothesis that has been put forward.Then, to test these differences, a test is used Mann Whiteney, one-way ANOVA, two-way ANOVA, and continued with further different tests of pairs of data groups.(Post-Hoc) through General Linear Model and the entire statistical calculation uses the help of the program SPSS 22.0 for Windows. Apart from carrying out quantitative analysis, researchers will also carry out qualitative analysis of the answers to each question, observation data, and student response data.This aims to further examine the students' high-level mathematical thinking abilities and to find out whether the implementation of learning is in accordance with the learning provisions set out in the two learning models. Results In answering the problems raised and seeing the achievement of the objectives of this research, So data analysis is directed at comprehensively revealing students' achievement of high-level mathematical thinking skills after learning using generative models and conventional models.Apart from that, in this research, students' initial abilities were also identified based on their initial mathematics abilities (high, medium, and low). a. Description of Data on the Achievement of Students' High-Level Mathematical Thinking Abilities The test results show differences in the achievement of high-level mathematical thinking skills among students who receive generative learning models and conventional learning models, as shown in Table 3.Based on Table 3, it can be seen that the average post-test score for high-level mathematical thinking abilities of students who received generative learning is higher than the average post-test score of students who received conventional learning.In Table 3, it can also be seen that the ZCount value is -3.150 and the Sig.(2-Parties) value is 0.002, which is less than the 0.05 significance level set.So, it can be concluded that there is a difference in students' achievement of high-level mathematical thinking skills between the group that received generative learning (experiment) and the group that received conventional learning (control). b. Description of Data on Increasing High-Level Mathematical Thinking Abilities The test results show differences in the increase in high-level mathematical thinking abilities of students who received generative learning and conventional learning, as shown in Table 4. 
Based on Table 4, it can be seen that the average N-Gain score for high-level mathematical thinking abilities of students who received generative learning is higher than the average N-Gain score of students who received conventional learning.In Table 4, it can also be seen that the ZCount value is -3.905 and the Sig.(2-Parties) value is 0.000, which is less than the 0.05 significance level set.So, it can be concluded that there is a significant difference in increasing students' high-level mathematical thinking abilities between the group that received generative learning (experiment) and the group that received conventional learning (control). c. The interaction between learning and initial mathematics abilities to improve students' higher-level mathematical thinking abilities High-level mathematical thinking ability data was obtained from a high-level mathematical thinking ability test on the topic Systems of Linear Equations in Two Variables.The results of testing the interaction between learning and initial mathematics abilities in increasing students' higher-order thinking abilities are presented in Table 5.Based on the calculation results in Table 7, it can be seen that the F value for students' initial mathematics abilities is 2.598 and the significance value is 0.080; this value is more than the 0.05 significance level set.This means that the hypothesis, which states that differences in initial levels of mathematical ability have no effect on increasing students' high-level mathematical thinking abilities, is accepted.Thus, it can be concluded that differences in initial levels of mathematical ability do not have a significant effect on increasing students' high-level mathematical thinking abilities. In Table 5, it can also be seen that the F value for the learning group is 6.046 and the significance value is 0.016, which is less than the 0.05 significance level that has been set.So, it can be concluded that differences in the use of learning groups have a significant effect on increasing students' high-level mathematical thinking abilities.Meanwhile, the interaction between initial mathematics ability and learning has an F value of 0.021 and a significance value of 0.979, which is more than the 0.05 significance level set.Therefore, it can be concluded that the KAM level with generative learning models and conventional learning models together does not have a significant influence on increasing students' high-level mathematical thinking abilities. Discussions Based on the results of research using a generative learning model on two-variable linear equation systems material using a generative learning model in the experimental class and a conventional learning model in the control class, it can be seen that: a. 
Achievement of High-Level Mathematical Thinking Abilities To see the achievement of high-level mathematical thinking skills using the Mann-Whitney test It can be seen that the average achievement score for high-level mathematical thinking skills of students who receive the generative learning model is higher than the average achievement score of students who receive the conventional learning model.From the test results, it can also be seen that the significance value is less than the specified significance level.So, it can be concluded that there is a difference in the achievement of high-level mathematical thinking skills between students taught using generative learning models and conventional learning models.In line with the above, Suryadi (2012: 24) stated that high-level mathematical thinking abilities are essentially non-procedural thinking abilities, which include, among other things, the following: ability to search for and explore patterns to understand mathematical structures and underlying relationships; the ability to use available facts effectively and precisely to formulate and solve problems; the ability to create mathematical ideas meaningfully; the ability to think and reason flexibly through formulating conjectures, generalizations, and justifications; the ability to determine that a problem-solving result is logical.Furthermore, NCTM (2000:10) characterizes high-level thinking abilities as non-routine problem solving, namely problems involving an individual or group situation, in developing one or more solutions. b. Increasing High-Level Mathematical Thinking Abilities To see the increase in students' high-level mathematical thinking abilities, the average N-Gain score is used.The average N-Gain score for students' high-level mathematical thinking skills taught using the generative learning model is higher than the average N-Gain score for students taught using the conventional learning model.From the test results, it can also be seen that the significance value is less than the specified significance level.So it can be concluded that there is a significant difference in increasing students' high-level mathematical thinking abilities between groups that use generative (experimental) learning models and group that used conventional learning models (control).From these results, it can be seen that in the teaching and learning process, lecturers must pay attention to matters related to learning, including: The learning model used, approach, strategy, or method includes the classroom atmosphere when learning takes place, because this can also influence students' way of thinking.In line with the Ritchhart et al. (Tamalene, 2010) stated in their research that A conducive classroom atmosphere can encourage students to think effectively and broaden their conceptions of thinking.This research also provides evidence that students' thinking concepts are easy to form and achieve the expected progress using concept maps. c. 
Interaction between Learning and Initial Mathematics Ability to Improve Students' Higher-Level Mathematical Thinking Ability The interaction between learning and students' initial mathematics abilities shows that the F value for initial mathematics abilities is 2.598 and the significance value is 0.080, which is more than the specified significance level of 0.05. This means that the hypothesis, which states that differences in initial levels of mathematical ability have no effect on increasing students' high-level mathematical thinking abilities, is accepted. So it can be concluded that differences in initial levels of mathematical ability do not have a significant effect on increasing students' high-level mathematical thinking abilities. Meanwhile, the F value for the interaction between initial mathematics ability and learning is 0.021, and the significance value is 0.979, which is more than the set significance level of 0.05. This shows that there is no significant interaction between the initial level of mathematics ability and learning in increasing students' high-level mathematical thinking abilities. So it can be concluded that the initial level of mathematical ability together with the generative learning model and the conventional learning model does not have a significant influence on increasing students' high-level mathematical thinking abilities. Meanwhile, the F value for the learning group is 6.046, and the significance value is 0.016, which is less than the set significance level of 0.05. This means that differences in the learning used have a significant effect on increasing students' high-level mathematical thinking abilities. Based on the description above, it can be concluded that students who receive the generative learning model have a moderate initial level of mathematical ability. Meanwhile, if we look at the average N-Gain, the high-level mathematical thinking abilities of students who receive the generative learning model increase more than the average N-Gain of the high-level mathematical thinking abilities of students who use conventional learning models. d. The interaction between learning and the initial level of mathematical ability (high, medium, and low) in increasing students' high-level mathematical thinking abilities For the interaction between the learning model and the initial level of mathematical ability (high, medium, or low), it appears that there is no interaction in increasing students' high-level mathematical thinking abilities. This can be seen from the asymptotic significance value (sig.) of 0.979. Thus, it can be said that students with a medium initial level of mathematical ability tend to have a higher increase in their high-level mathematical thinking abilities when compared with students with high and low initial mathematics abilities. This indicates that students at a moderate initial level of mathematical ability have better increases in high-level mathematical thinking abilities, when compared with students with high and low levels of initial mathematics ability.
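As a concrete illustration of the analysis route described above (normalized gain followed by a non-parametric comparison of the two groups), the following Python sketch uses SciPy's Mann-Whitney U test; the score arrays and the maximum score of 100 are illustrative assumptions, not the study's data or its SPSS output.

# Sketch of the gain-score analysis: normalized gain (Hake, 1999) plus Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

def n_gain(pre, post, max_score=100):
    """Normalized gain g = (post - pre) / (max - pre)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / (max_score - pre)

pre_exp,  post_exp  = [40, 35, 50, 45], [80, 70, 85, 75]   # generative class (illustrative)
pre_ctrl, post_ctrl = [42, 38, 48, 44], [60, 55, 70, 58]   # conventional class (illustrative)

g_exp, g_ctrl = n_gain(pre_exp, post_exp), n_gain(pre_ctrl, post_ctrl)
u, p = mannwhitneyu(g_exp, g_ctrl, alternative="two-sided")
print(f"mean N-gain (experiment) = {g_exp.mean():.2f}, (control) = {g_ctrl.mean():.2f}")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")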
CONCLUSION Based on the results of the description and analysis of data in this study, some conclusions can be presented as follows: 1) There are differences in the achievement of high-level mathematical thinking skills between students taught using generative learning models and conventional learning models. 2) There is a significant difference in the increase in students' high-level mathematical thinking abilities between students taught with generative learning models and conventional learning models. 3) Students' initial level of mathematical ability in the high, medium, and low categories does not have a significant effect on increasing the high-level mathematical thinking ability of students who receive generative learning models and conventional learning models. 4) There was no interaction between the initial level of mathematics ability (high, medium, or low) and the learning model in increasing the high-level thinking abilities of students who received the generative learning model and the conventional learning model.

Table 1. Distribution of Research Samples

The variables in this research consist of three variables, namely: (1) independent variables, including learning; (2) the dependent variable, including students' high-level mathematical thinking abilities; and (3) control variables, including students' initial level of mathematics ability (high, medium, or low). The instrument used in this research is a test in the form of a description consisting of three questions. The test is given to measure students' high-level mathematical thinking abilities. The test was carried out twice, namely the pretest, which was carried out before the learning process, and the posttest, which was carried out after the learning process. There are two types of data in this research, namely quantitative data and qualitative data. Quantitative data was obtained through the analysis of student answers on tests of students' high-level mathematical thinking abilities, then grouped based on the learning model used, namely generative learning and conventional learning, and the level of students' initial mathematical abilities (high, medium, or low). Qualitative data was obtained through observations of lecturer and student activities in implementing learning. This data was analyzed descriptively to support the completeness of quantitative data in answering research questions. The quantitative data processing is carried out through several stages, namely: First stage: carry out descriptive analysis of the data and calculate the pretest, posttest, gain, and normalized gain values. At this stage, it can be seen how big the achievement and how big the increase in the high-level mathematical thinking abilities of students are in classes that use generative learning and classes that use conventional learning. Following this, the N-Gain formula used is: N-Gain = (posttest score − pretest score) / (ideal maximum score − pretest score). Then each research class was given a pretest and posttest (O). In this research, students' initial mathematics ability factors (high, medium, and low) were also involved. The population of this research is all 2018-2019 semester students. The selection of undergraduate mathematics education students as research subjects was based on considering the students' level of cognitive development at the formal operational stage, so it was deemed appropriate to use a generative learning model. Apart from that, first-semester students are still in the teenage stage, and at this time, students are in the process of finding their identity and building self-confidence. Meanwhile, the sample selected in
this study were students selected from the KAM level (high, medium, and low) based on quiz test data for the 2018-2019 academic year. The distribution of research samples is presented in Table 2.
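The N-Gain formula referenced in the passage above is not reproduced in the extracted text. As a minimal sketch, assuming the standard Hake normalized gain commonly used with pretest-posttest designs of this kind (the symbols S_pre, S_post and S_max are illustrative names for the pretest score, the posttest score and the ideal maximum score, not notation taken from the study):

```latex
% Hake normalized gain, assuming the conventional definition;
% S_pre = pretest score, S_post = posttest score, S_max = ideal maximum score.
\[
  \langle g \rangle = \frac{S_{\text{post}} - S_{\text{pre}}}{S_{\text{max}} - S_{\text{pre}}}
\]
```

Under this convention the gain is usually classified as high for g of at least 0.7, medium for g between 0.3 and 0.7, and low for g below 0.3; the thresholds actually used in the study are not stated in the text.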
5,131
2023-09-30T00:00:00.000
[ "Mathematics", "Education" ]
Computer-Aided Design of Microwave-Photonics-Based RF Circuits and Systems In the process of design, a developer of new microwave-photonics-based RF apparatus faces the problem of choosing appropriate software. As of today, the existing optical and optoelectronic CAD tools (OE-CAD) are not as mature as the CAD tools intended for modeling of RF circuits (E-CAD). In contrast, operating at the symbolic level, modern high-power microwave E-CAD tools solve this problem simply and with high precision, but there are no models of active photonic components in their libraries. To overcome this problem, we proposed and validated experimentally a new approach to modeling a broad class of promising analog microwave radio-electronic systems based on microwave photonics technology. This chapter reviews our known, updated, and new models and simulation results obtained with the off-the-shelf microwave-electronics computer tool NI AWRDE, pursuing advanced performance corresponding to the latest generation of key photonic structural elements and important RF devices built on them. Introduction Microwave photonics (MWP) is a relatively young scientific and technological direction that arose in the radio-electronic R&D community in the last quarter of the twentieth century as a result of combining the achievements of microwave-electronics and photonics techniques [1]. Initially, MWP was an area of interest for military platforms [2,3] such as radar and electronic warfare systems; nowadays, it is becoming an object of study and development for emerging areas of the telecom industry [4] such as 5G-class wireless networks. Today, MWP technology may be considered a promising direction of modern radio-electronics for signal generation, transmission, and processing in various radio-frequency (RF) circuits and systems. Implementation of this concept will enhance key technical and economic features and such important characteristics as electromagnetic and environmental compatibility and immunity to external interference. Figure 1 demonstrates a typical MWP circuit that starts with an RF-to-optical converter (RF/O) and concludes with an optical-to-RF converter (O/RF). Between these interfaces, there is a host of efficient photonic processing units in the optical domain. In the process of design, a developer of new MWP-based RF apparatus faces the problem of choosing appropriate software. As of today, the existing optical and optoelectronic CAD tools (OE-CAD) are not as developed as the CAD tools intended for modeling of RF circuits (E-CAD), which have been perfected over three decades. In contrast, operating at the symbolic level, a modern high-power microwave E-CAD tool solves this problem simply and with high precision, but there are no models of specific active and passive photonic components in its library. To overcome this problem, we have proposed and validated experimentally a new approach to modeling a broad class of promising analog microwave radio-electronic systems based on microwave photonics technology. In particular, the classification of active photonic components and the comparison with a modern OE-CAD tool were described in Ref. [5] and later, in a more detailed version, in Ref.
[6]. Based on them, electrical equivalent-circuit models for different types of semiconductor laser [7,8], photodetector [9,10], and optical modulator in the Mach-Zehnder interferometer configuration [6] were published. Using these components, a number of RF circuit models and successful simulation results for a microwave-band optoelectronic oscillator [11], a mixer [12], and a phased array antenna beam-former network [13] were proposed. The general concept behind the design is the following. A developer of these novel RF circuits may have no deep knowledge of the physical features of active and passive photonic devices, but does have a toolset to measure carefully their transmission characteristics in linear and nonlinear modes. Based on this, the design principles of the equivalent-circuit models considered below fully reflect the common building principle of the available E-CAD tools, including closed-form or table-specified library models of current and voltage sources, nonlinear active devices, and passive elements that, depending on the frequency band, are built on linear circuitry with lumped (for the RF band) or distributed (for the microwave and millimeter-wave bands) parameters. This chapter reviews our updated and new equivalent-circuit-based models and simulation results obtained with the off-the-shelf microwave-electronics computer tool NI AWRDE, pursuing advanced performance corresponding to the latest generation of key MWP structural elements and important devices built on them. In particular, Section 2 describes two laser models related to direct RF-to-optical conversion. Section 3 presents three optical modulator models for the case of external RF-to-optical conversion. Section 4 gives a description of two models for optical-to-RF conversion realized by equivalent-circuit models of the photodetector component. The component part concludes with a discussion of the specific models for optical passives in Section 5. Following the results of the previous sections, some advanced MWP-based RF circuits are modeled in Section 6. Finally, Section 7 concludes the chapter. Direct RF-to-optical conversion As is well known, in a digital fiber-optic communication link, an injection-current driven semiconductor laser is the requisite for simple direct conversion to the optical band at speeds up to 10 Gbit/s. In this case, the laser operates in a bi-stable mode with two transmitting positions: optical emission is switched off when the injection current is below the threshold of the laser light-current plot (LCP) and switched on when it is beyond the threshold of the LCP. The main distinguishing feature of an MWP link, which is a medium for analog RF-signal transmission, is continuous-mode operation with the DC bias preset in the middle of the LCP's linear area, which calls for a different approach to design. Below, we demonstrate two laser models usable for various microwave photonic circuits.
Single-carrier laser model Figure 2 depicts the updated nonlinear model of a semiconductor laser emitter in the form of an electrical equivalent circuit, suitable for developers of RF-subcarrier modulated analog fiber-optic systems, microwave optoelectronic devices, as well as optical interconnects in integrated circuits. In this circuit model, each element has a clear physical interpretation. Using the proposed model together with a reference photodetector, a set of circuit simulation experiments typical for radio engineering can be fulfilled, including signal transmission characteristics (S21 and S11), noise figure, nonlinearity due to harmonic or intermodulation distortions, and so on. Multi-carrier laser model As noted, the great advantage of photonic technology in comparison with its radio-electronic counterpart is the ultra-wide bandwidth of optical fiber, exceeding 10 THz. Following this, the so-called wavelength division multiplexing (WDM) method is widely used in modern MWP circuits [14], providing simultaneous transmission of information on a plurality of optical carriers. The previous model is not able to describe multi-carrier MWP circuits correctly, which has led to a new generation of laser model feasible for WDM circuit simulation [5,6]. Figure 4 depicts the updated nonlinear model of a semiconductor laser emitter suitable for MWP WDM circuits and systems. The model has the simplest configuration, including only one quasi-optical (QO) unmodulated carrier and one RF signal, but its building principle allows aggregating both optical and RF channels. In contrast to the model of Figure 2, this model has two main input ports titled "Quasi-Optical Input 1" and "RF Input"; the first one receives waveforms of the optical band and the second one is for waveforms of the RF band. Such an approach is correct for a software tool working at the symbolic level. The RF channel chain consists of a sub-circuit network (SUBCKT) including the schematic of Figure 2 and the model of a 9th-order Butterworth bandpass filter (BPFB1), which is designed to eliminate spurious output signals of the subcircuit. In line with the standard radio engineering approach, both signals are then mixed using an ideal full-wave diode multiplier. The circuit is terminated by another model of the BPFB2 with the modulated QO signal at the Quasi-Optical Output. In the model, the dependence of the QO carrier frequency on temperature is additionally introduced, which is realized by means of an additional Quasi-Optical Input 2. The main (foptic-1000 GHz, port 1) and the additional (delta_f_t + 1000 GHz, port 2) signals are fed to the diode multiplier. The value of the auxiliary frequency depends on the factor delta_f_t, which describes the experimental emitted wavelength-temperature dependence of the laser chip. The frequency band of the BPFB2 is also corrected taking this factor into account. Figure 5 exemplifies the simulation results of the output laser spectrum modulated by an RF tone of 1 GHz and power 10 dBm (a) and the wavelength versus temperature dependence in the range of 0-60 °C (b).
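As a side note on why the ideal diode multiplier can emulate the temperature-dependent quasi-optical carrier at the symbolic level: multiplying two tones produces sum and difference frequencies, so the shift tone applied at the auxiliary input is transferred onto the carrier. The short numpy sketch below only illustrates this product-to-sum behaviour with arbitrary illustrative frequencies; it is not tied to the actual port settings of the AWRDE model.

```python
import numpy as np

# Illustrative tones only; the real model mixes quasi-optical-band waveforms.
f1, f2 = 40.0, 3.0            # Hz: "carrier" and "shift" tones
fs, n = 1024.0, 1024          # sample rate and record length (1 s of data)
t = np.arange(n) / fs

product = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(n, d=1 / fs)

# An ideal multiplier yields components at f1 - f2 and f1 + f2 only.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peaks)                  # [37.0, 43.0]
```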
External RF-to-optical conversion In spite of its cost-efficiency, direct RF-to-optical conversion has a number of limiting factors, including bandwidth, dynamic range, chirping (parasitic frequency modulation), etc. To overcome them, as in radio engineering practice, external RF-to-optical conversion using a separate device titled "optical modulator" is in common use in MWP circuits. As in RF systems, there are two classes of optical modulators: phase and amplitude ones; the latter, in connection with the specifics of lightwave transmission, is called "the intensity modulator". Building principles and layouts of microwave-band optical modulators as well as initial equivalent-circuit models are described elsewhere [6]. Below, two updated models of an optical phase modulator (OPM) and a Mach-Zehnder interferometer-based optical intensity modulator (MZM), as well as a novel model for the so-called electro-absorption intensity modulator (EAM), are demonstrated and discussed. Optical phase modulator model The core element of the OPM model is the phase-shifting cell (PSC). Figure 6 depicts the equivalent circuit of the PSC, where the phase shift is simulated by the library varactor model VRCTR, whose nonlinear characteristic is extracted from the measured data. The phase-shifted quasi-optical signal is fed to the cell output via a diplexer acting as a high-pass filter. The difference from the known model [6] is the larger correctness due to the introduction of the library models of a transmission line with frequency-dependent loss TLINP, a symmetric coplanar line with table-based interpolation CPWLINX, and so on. Figure 7 shows the equivalent model of the optical phase modulator including PSCs as subcircuits. The number of PSCs is increased to 4 to provide a quasi-linear adjustment of the insertion phase shift within more than 180°, typically required for OPM operation. The resulting phase shift is formed in the OPM as the algebraic sum of the phase shifts of each PSC, since the signal at the optical frequency sequentially passes through all the cells. Mach-Zehnder interferometer-based intensity modulator model As is known [6], an optical intensity modulator of the MZM type contains a two-arm interferometer, in each arm of which an optical phase modulator is introduced. Figure 9 depicts the AWRDE model of the optical MZM with two OPMs of Figure 7 as subcircuits. Inside it, the RF signal is applied in antiphase to the inputs of both OPMs via the coplanar transmission line CPWLINX and the ideal splitter SPLIT2. The output channel of one of the OPMs includes two phase-shifter library elements PHASE2, of which the first is responsible for setting the operating point on the MZM transfer characteristic and the second adjusts a fixed phase difference in the arms of a realistic MZM. The interference of the two phase-modulated signals is carried out at the output of the splitter model SPLIT2. The output attenuator is used to calibrate the power loss of the optical signal introduced by the MZM. Figure 10 exemplifies the simulation results demonstrating the advantage in bandwidth and linearity of external RF-to-optical conversion using a MZM compared to the direct one. In particular, Figure 10(a) simulates the optical modulated spectrum using an RF tone of 15 GHz and the same input power as in Section 2.2. Comparison with Figure 5(a) shows an increase in the suppression of the second harmonic by more than 30 dB. Besides, Figure 10(b) simulates the dependence of the RF output power (after ideal optical-to-RF conversion) on the input RF power for the fundamental RF modulation tone (COM), 2nd-order intermodulation distortion (IMD2), and 3rd-order intermodulation distortion (IMD3), which shows a better linearity feature than a power microwave transistor.
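To make the role of the operating point on the MZM transfer characteristic more concrete, the following sketch evaluates the textbook raised-cosine transfer function of a two-arm intensity modulator. The half-wave voltage Vpi and the quadrature bias used here are illustrative assumptions, not parameters of the AWRDE model described above.

```python
import numpy as np

def mzm_output_power(v_rf, p_in=1.0, v_pi=5.0, bias_phase=np.pi / 2):
    """Ideal MZM intensity transfer: interference of the two arm signals.

    v_rf       : drive voltage (V), may be a numpy array
    p_in       : optical input power (arbitrary units)
    v_pi       : assumed half-wave voltage (V)
    bias_phase : static phase offset; pi/2 = quadrature (most linear point)
    """
    phase = np.pi * v_rf / v_pi + bias_phase
    return 0.5 * p_in * (1.0 + np.cos(phase))

# Small-signal check: around quadrature the response is nearly linear,
# which is why the operating point is set there by the first PHASE2 element.
v = np.linspace(-0.5, 0.5, 5)
print(np.round(mzm_output_power(v), 3))
```

Biasing at quadrature maximizes the linear term and suppresses even-order products, which is consistent with the strong second-harmonic suppression reported above.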
Electro-absorption effect-based intensity modulator model In spite of the high linearity of its RF-to-optical conversion, the main shortcoming of the MZM is its bulky size, which is a concern for a number of very important radio engineering applications. An intriguing solution to the problem is the usage of an electro-absorption intensity modulator (EAM) that can be integrated with a laser chip [15]. Figure 11 depicts the AWRDE model of the optical EAM. The nonlinear model of the EAM is implemented based on the modified Materka field effect transistor (MATRK) library model. The EAM model includes two MATRK elements, which are controlled by the RF signal and act as attenuators connected in series to the quasi-optical channel. The use of two MATRK elements provides deep intensity (amplitude) modulation of the optical carrier. The limits of the dynamic range for the input signals are determined by the selection of the parameters of the library resistor models (RES). The circuit is terminated by BPFB and closed-form amplifier (AMP) library models to eliminate higher harmonics and to calibrate the loss inserted by the EAM path (for AMP, any gain value, including less than 0 dB, can be set). In addition, there are three ideal isolator models (ISOL8R) to ensure the isolation of the inputs and outputs. Direct optical-to-RF conversion Nowadays, there is a plurality of direct optical-to-RF conversion elements (photodetectors), but only photodiodes of the so-called PIN structure are in common use for analog fiber-optic systems. Among them, long-wavelength GaInAs-based PIN photodetectors (PD) are ubiquitous in modern MWP circuits due to their inherent combination of ultra-high speed, high sensitivity, and linearity [16]. Earlier, we described and studied in detail the AWRDE nonlinear model of a microwave-band PIN PD [9,10]. Figure 13 shows the updated, more realistic PD model, where noise sources (INOISE), including the shot noise of the photodiode and the thermal noise of the equivalent resistors (RES), are taken into account. From the viewpoint of the RF circuitry, a PIN PD can be modeled as a current source with high output impedance, which is imitated by the library model of a voltage-controlled current source (VCCS). Besides, the nonlinear features are emulated by a temperature-dependent diode model (DIODE1) and the barrier capacitance of the p-n junction (PNCAP), tunable in accordance with the applied reverse voltage from the DCVS model. The linear circuitry representing the frequency distortions due to stray PD elements agrees with the small-signal PD model described elsewhere [17]. The collection of photo-detecting elements includes a number of advanced constructions. The most feasible among them is the balanced one, which has the advantage of more linear O/RF conversion [14]. Figure 14 depicts the AWRDE model of a balanced photodetector. As is well known from radio-engineering technique, the circuit consists of two arms, each of which includes the photodetector model of Figure 13 as a subcircuit. To provide antiphase excitation of the subcircuits, there is a library reciprocal model of the phase shifter PHASE2 in the upper arm. According to the simulation results exemplified in Figure 15, the balanced photodetector has a near 3-dB advantage in linearity. Both conclusions and the OIP3 values (37-40 dBm) obtained by the simulation are in close agreement with modern realistic photodetectors [9,14].
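As a back-of-the-envelope companion to the photodetector models just described, the sketch below estimates the DC photocurrent, the shot-noise current and the detected RF power for a PIN photodiode driving a 50-ohm load. The responsivity, received optical power and modulation index are assumed values for illustration only; they are not taken from the AWRDE model.

```python
import numpy as np

Q = 1.602e-19          # electron charge, C

def pin_pd_budget(p_opt_w, responsivity_a_w=0.8, mod_index=0.5,
                  bandwidth_hz=1e9, r_load_ohm=50.0):
    """Rough PIN photodiode link budget (illustrative, assumed parameters)."""
    i_dc = responsivity_a_w * p_opt_w                    # average photocurrent, A
    i_rf_rms = mod_index * i_dc / np.sqrt(2.0)           # RF photocurrent (rms), A
    p_rf_w = i_rf_rms**2 * r_load_ohm                    # RF power into the load, W
    i_shot_rms = np.sqrt(2.0 * Q * i_dc * bandwidth_hz)  # shot-noise current, A rms
    return i_dc, p_rf_w, i_shot_rms

i_dc, p_rf, i_shot = pin_pd_budget(p_opt_w=1e-3)         # 1 mW of received light
print(f"I_dc = {i_dc*1e3:.2f} mA, RF power = {10*np.log10(p_rf/1e-3):.1f} dBm, "
      f"shot noise = {i_shot*1e9:.0f} nA rms in 1 GHz")
```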
Passive optical components Low-loss, interference-insensitive transmission is the most attractive feature of an optical fiber for diverse processing in the photonic area. As a part of an MWP circuit, it may be defined as the medium connecting the RF-to-optical and optical-to-RF converters. In general, in comparison with a coaxial cable, the optical waveguide has three orders of magnitude less attenuation, a bandwidth independent of the RF signal frequency, much better weight and size characteristics, as well as a weaker phase-to-temperature dependence of the transmitted RF signal. Nevertheless, the quality of the transmitted signal may deteriorate due to a number of limiting factors, for example, dispersion, reflection, scattering, nonlinearity, etc. Another important advantage of an optical fiber is that it can be used to design extremely narrow-bandwidth pass-band and notch filters. Below, two new AWRDE behavioral models of a single-mode optical fiber and a fiber-based ultra-narrow-bandwidth filter are demonstrated and discussed. Single-mode fiber model Figure 16 depicts AWRDE reciprocal models of a single-mode optical fiber feasible for various operating regimes of a realistic fiber-optic link. The first model (Figure 16(a)) represents the transmission of a single optical carrier with multiple modulating RF signals (so-called subcarrier multiplexed mode). Here, a set of limiting factors is taken into account, such as chromatic dispersion, time delay, loss, temperature dependence of characteristics, as well as cross-interference between RF channels. The basic element of the model of Figure 16(a) is the library model of an ideal transmission line with loss (TLINP). A mode propagating along the line is specified by its effective dielectric constant and per-unit-length attenuation at a user-specified frequency. The model scales the loss with the evaluation frequency. In the model, the frequency band of one 100-GHz optical channel is divided into 16 discrete bands of equal width (more than 6 GHz). Each of them is provided by one TLINP with values of the dielectric constant and attenuation corresponding to the central frequency of the specific band. All TLINPs have the same length, equal to the length of the optical fiber, and are combined using ideal multiplexer models (MUX). The first MUX shares the spectrum of the quasi-optical signal between 16 sections, each of which exploits the corresponding TLINP. The second MUX restores the signal spectrum. Besides, the second model (Figure 16(b)) represents the transmission of multiple optical carriers (so-called wavelength division multiplexed (WDM) mode). Here, cross-interference noise between the carriers is added to the above limiting factors. The main element of each QO channel (CHL) is the subcircuit (SUBCKT "CHL") whose structure is discussed above for a single spectral channel. Sixteen SUBCKT "CHL" blocks correspond to the 16 standard channels of the WDM system, so the overall number of RF channels to be transmitted simultaneously is 256. In the model, all SUBCKT "CHL" groups are combined/divided using the same MUX library models, which provide the distribution of the input QO spectra to the corresponding subcircuits over the single fiber and the further restoration of the group spectrum. The schematic can be re-tuned to another bandwidth of QO channels by changing the internal model settings. Figure 17 exemplifies the simulation results for both types of single-mode optical fiber models under consideration at room temperature, where the QO frequency dependences of the CHL gain (a) and of the CHL relative phase shift (b) for a fiber length of 30 km are simulated. As seen, the average normalized loss (inverse to gain) is near 0.19 dB/km, which is equal in this frequency band to the same parameter of the standard SMF-28 fiber. In addition, the difference in losses for one 100-GHz optical channel does not exceed 0.07 dB. Besides, the normalized phase-to-temperature shift of an RF signal transmitted over the fiber is near 0.1°/GHz/km/°C, which corresponds to known data [14].
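The figures quoted for the fiber model (about 0.19 dB/km of loss and roughly 0.1 degrees/GHz/km/°C of phase-to-temperature sensitivity) can be turned into link-level numbers with a few lines of arithmetic. The sketch below does this for the 30 km length used in the simulation; the group index of 1.468 is an assumed typical value for silica fiber, not a parameter of the model.

```python
C_M_PER_S = 299_792_458.0

def fodl_link_numbers(length_km=30.0, loss_db_per_km=0.19,
                      f_rf_ghz=5.0, group_index=1.468,
                      phase_temp_deg_per_ghz_km_c=0.1, delta_t_c=10.0):
    """RF-level figures for an analog link over a single-mode fiber."""
    total_loss_db = loss_db_per_km * length_km
    group_delay_s = group_index * length_km * 1e3 / C_M_PER_S
    rf_phase_deg = 360.0 * f_rf_ghz * 1e9 * group_delay_s
    # Temperature-induced RF phase drift, using the quoted sensitivity.
    phase_drift_deg = phase_temp_deg_per_ghz_km_c * f_rf_ghz * length_km * delta_t_c
    return total_loss_db, group_delay_s, rf_phase_deg, phase_drift_deg

loss, delay, phase, drift = fodl_link_numbers()
print(f"loss = {loss:.1f} dB, delay = {delay*1e6:.1f} us, "
      f"RF phase = {phase:.2e} deg, drift over 10 C = {drift:.0f} deg")
```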
Narrow-band multichannel optical filter model Another important element of Figure 1's photonic area used for processing RF signals (filtration, delay) is the notch Bragg grating (NBG), whose optical bandwidth may be as narrow as some hundreds of MHz. Figure 19 exemplifies the simulation results for the 4-channel NBG filter transmission response (a) and the QO spectrum at the output of the filter model when QO signals of the same power inside the band of FBG3 are inputted (b). As seen, a rejection level of 40 dB is provided. Simulation of microwave-photonics-based RF circuits In the previous sections, the requisite active and passive AWRDE models for MWP circuit design were demonstrated and the results of the key simulation experiments were highlighted. Following them, below we describe some models and modeling results for MWP circuits as the enablers for time-delay processing in the photonic area. There are two main directions to design temperature-compensating fiber-optic delay lines (TC-FODL), including the feedback and the phase conjugation concepts. The disadvantage of the first one is the limited adjustment range of the RF device whose phase is to be regulated depending on the temperature variation. Figure 22 shows the layout explaining the principle of the second concept, which is free of the above shortcoming [18]. In the scheme, the effect of compensation of the temperature-induced change in the delay time is provided by synchronous variation of the fiber length during the triple pass of the RF-modulated optical signal along the same fiber. A detailed explanation of the operation principle of this scheme is given in [18]. Figure 23 demonstrates the proposed ultra-wideband AWRDE model of the TC-FODL simulating the operation principle of the schematic in Figure 22. Therein, according to the scheme, the frequency Fm of the input RF signal is, first of all, divided in half and multiplied by one and a half times. The converted frequencies are allocated using library models of bandpass filters BPFB, each of which is tuned to the appropriate frequency. Further, the double trip of the optical carriers at the frequencies ν1 and ν2, modulated by the RF signal of frequency 1/2 Fm, is represented by means of the semiconductor laser models (Figure 4), the optical fiber model of Figure 16(b), and the photodetector models of Figure 13. Finally, as a result of mixing the double-converted RF signal of frequency 1/2 Fm with the RF signal of frequency 3/2 Fm using the library model MIXER, the input RF signal of frequency Fm is recovered exploiting the library model BPFB; the output after one more trip over the TC-FODL on the optical carrier ν3 is imitated by separate models of the laser and photodetector and the same model of the optical fiber. Based on the graphs, the following resume can be drawn. Despite the much higher stability of the silica fiber's phase-to-temperature characteristic compared to a coaxial cable [14], the FODL under study without compensation (the model of Figure 20) introduces phase distortion increasing at higher frequencies of the RF signal, which is unacceptable in many practical cases. This distortion, regardless of the RF signal frequency, is eliminated by using a special MWP circuit, an example of which was modeled above.
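The frequency plan of the compensation scheme described above reduces to simple arithmetic: the input RF frequency Fm is split into Fm/2 (which rides on the optical carriers through the fiber) and 3Fm/2, and mixing the two at the end restores Fm. The minimal sketch below only reproduces that bookkeeping; the identification of the wanted mixer product as the difference term is an inference from the scheme, not an explicit statement of the chapter.

```python
def tc_fodl_frequency_plan(f_m_ghz):
    """Frequency bookkeeping of the phase-conjugation TC-FODL scheme."""
    f_half = 0.5 * f_m_ghz          # tone sent over the fiber on carriers v1, v2
    f_three_half = 1.5 * f_m_ghz    # auxiliary tone kept in the RF domain
    # An ideal mixer produces sum and difference products; the bandpass
    # filter at the output selects the one equal to the original Fm.
    difference = f_three_half - f_half
    total = f_three_half + f_half
    return f_half, f_three_half, difference, total

print(tc_fodl_frequency_plan(2.5))   # (1.25, 3.75, 2.5, 5.0) GHz
```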
Conclusions In the chapter, a new approach to designing the equipment for a future generation of microwave-band radar, electronic warfare, and wireless telecom systems based on microwave photonics technology using the well-known microwave-electronic software tool NI AWRDE is proposed and discussed. As the first part of it, updated and new models of key active and passive elements for microwave-photonic circuits were considered that perform direct and external RF-to-optical conversion and the processing of RF signals in the optical range, which leads to an improvement in such important characteristics as size, weight and power, electromagnetic and environmental compatibility, and immunity to external interference. As an outcome of the conducted simulation experiments, it was shown that the main parameters and characteristics of the optoelectronic and optical elements considered correspond to their real product analogs. In particular, the comparative modeling has verified that the highest level of linearity, superior to modern transistor amplifiers, is provided in the process of external RF-to-optical conversion by means of a Mach-Zehnder optical modulator and in the process of optical-to-RF conversion using a PIN photodetector. The results of the experimental comparison against the main part of the above-simulated characteristics, which validate the accuracy of the proposed models, are described elsewhere [5][6][7][8][9][10][11][12][13]. Based on the element models and the results of the simulations, in the second part of the chapter we presented two new AWRDE models and the results of model experiments for fiber-optic delay lines that realize time-delay processing of RF signals in the photonic area. In the course of the model experiments, the way of eliminating the phase distortions of the delayed RF signal caused by fluctuations of the ambient temperature under the conditions of application was confirmed.
Figure 1. A typical arrangement of MWP circuit.
Figure 2. Single-carrier model of a semiconductor laser in the form of an equivalent circuit.
Figure 3 exemplifies the simulation results of the small-signal frequency response (a) and the LCPs in the temperature range of 20-70 °C (b). Figure 3. The examples of simulation experiments: (a) small-signal frequency response, and (b) light-current plot.
Figure 5. The examples of simulation experiments.
Figure 6. The model of the phase-shifting cell.
Figure 8 exemplifies the simulation results demonstrating the linearity of the phase shift versus control voltage in the PSC (a) and the 35-dB suppression of higher harmonics in the output spectrum (b). Figure 8. The examples of simulation experiments: (a) phase-voltage dependence of the PSC and (b) quasi-optical spectrum at the OPM output.
Figure 10. The examples of simulation experiments: (a) optical modulated spectrum and (b) large-signal transmission characteristics.
Figure 12 exemplifies the simulation results demonstrating the advantage in the bandwidth and linearity of external RF-to-optical conversion using an EAM compared to the direct RF/O one, but some disadvantage compared to external RF/O conversion by a MZM. In particular, Figure 12(a) simulates the optical modulated spectrum using an RF tone of 10 GHz and the same input power as in Section 2.2. Comparison with Figure 5(a) shows an increase in the suppression of the second harmonic by more than 20 dB. Besides, Figure 12(b) simulates the dependence of the RF output power (after ideal optical-to-RF conversion) on the input RF power for the fundamental RF modulation tone (COM), 2nd-order intermodulation distortion (IMD2), and 3rd-order intermodulation distortion (IMD3), which shows linearity comparable to a middle-power microwave transistor. Figure 12. The examples of simulation experiments: (a) optical modulated spectrum and (b) large-signal transmission characteristics.
Figure 15. The examples of simulation experiments: (a) small-signal frequency response; (b) large-signal transmission characteristics of the single-ended (1) and balanced (2) photodetectors.
Figure 16. AWRDE models of single-mode optical fiber in various operating regimes: (a) with subcarrier multiplexed mode; and (b) with wavelength division multiplexed mode.
Figure 17. The examples of simulation experiments: (a) QO channel gain versus QO frequency of single-mode optical fiber; (b) QO channel relative phase shift versus QO frequency of single-mode optical fiber.
Figure 18 shows the 4-channel AWRDE NBG model. The fiber Bragg grating module of each channel (FBG1-FBG4) consists of two library models of the ideal passive frequency diplexer DIPLEXF, each of which specifies two frequency ranges (low and high) to extract the cutoff band at the output of the second DIPLEXF. In each of the channels, through output 2 of the first DIPLEXF and output 1 of the second DIPLEXF, power is allocated outside the cut-off band. This power is summed by the model SPLIT3 and fed to the next channel. The SPLIT2 unit provides the reflection of the main power in the dedicated band and the transition of some of this power to the next channel (for SPLIT2: S21 = 0 dB and S31 = -30 dB); thus incomplete reflection is modeled. The residual power from output 3 of the element SPLIT2 is fed to the element SPLIT3, where it is summed with the signal power outside the cut-off band. The closed-form models of the attenuator (ATTEN) and the ideal digital time delay element (DGDELAY) insert the attenuation and time delay of the optical signal in each of the Bragg grating channels, respectively. Figure 19. The examples of simulation experiments.
Figure 21 exemplifies the simulation results for the RF-frequency-dependent phase shift and the delay of the RF pulse. In particular, Figure 21(a) represents the relative phase shift versus frequency of the RF signal modulating the optical carrier that propagates over a fiber length of 1 m (the delay is near 4.8 ns). As follows from the figure, the phase shift increases linearly with frequency and its slope is approximately 1700° per GHz, which is consistent with the theory of RF delay lines. Besides, Figure 21(b) demonstrates the oscillogram of the input and output RF pulses for a fiber length of 3 m. As seen, due to the wide modulation band embedded in the laser and photodetector models (Figures 3(a) and 15(a), correspondingly), the delay of the radio pulse is exclusively determined by the retarding effect in the optical fiber. Figure 21. Examples of the simulation results for single-channel FODL of RF signals.
Figure 23. AWRDE model of temperature-compensated fiber-optic delay line of RF signals.
Figure 24 exemplifies the simulation results for the proposed TC-FODL model of 40 m in length, examining its phase-to-temperature characteristics at the RF frequencies of 2.5 GHz (a) and 5 GHz (b), shown by triangles. For comparison, the same plots include the simulation results for the FODL model of Figure 20, shown by squares. Figure 24. Examples of the simulation results for TC-FODL of RF signals: (a) RF frequency 2.5 GHz; and (b) RF frequency 5 GHz.
6,641.6
2018-11-05T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
Influence of Synthetic Conditions in the Hydrothermal Preparation of TiO2 Nanotubes TiO2 nanotubes have been synthesized using a 2-step strategy that involves the hydrothermal preparation of intermediate titanates followed by a subsequent thermal/hydrothermal treatment of these species to produce the oxide. In the first step, the influence of parameters such as temperature, pH or reaction time on the composition, structure and morphology of the intermediate species has been studied. Then we have examined how temperature and the presence of surfactants may affect the composition, structure, morphology and particle size of the titanium dioxide obtained in the second thermal/hydrothermal treatment. In particular, we have studied the effects that the presence of CTAB has upon the morphology of the final product. Both intermediate and final species have been studied by means of X-ray diffraction, transmission electron microscopy, IR spectroscopy and thermogravimetric analysis. In this way we have been able to identify NaTi3O6(OH).2H2O and (TiO2)x(H2O)y as the intermediate titanate species and rutile and anatase as the final TiO2 polymorphs. Finally, it is worth mentioning the preparation of spindle- or oval-shaped anatase, obtained via hydrothermal synthesis in the presence of CTAB. Introduction Titanium dioxide exists as three polymorphs: anatase, rutile and brookite, and although rutile is the thermodynamically stable phase, at the nanoscale anatase becomes very stable too [1]. As a wide-band gap semiconductor, TiO2 has been widely used in sunscreens, paints and toothpastes and, today, especially in photocatalysis [2]. For many of these applications, it is of great importance to maximize the specific surface to achieve maximum efficiency. In particular, it has been found that in photocatalytic applications, nanotubes and nanorods (that is, 1D nanostructures) may allow for much higher control of the chemical and physical behaviour [3]. Indeed, by diminishing dimensions to the nanoscale, not only does the specific surface area increase significantly but the electronic properties may also change considerably [4]. 1D titanium dioxide nanostructures may be obtained by various routes such as sol-gel, template-assisted methods, hydro/solvothermal synthesis or electrochemical processes [5,6]. Among them, hydrothermal synthesis has become a promising chemical strategy. This is usually a two-step route that involves, first, the generation of alkali titanate nanofibers by hydrothermal reaction in alkali solution followed by exchange of alkali ions with protons [7], and the subsequent thermal dehydration reactions at high temperature in air. Less frequently, hydrothermal reactions may also be performed to produce the final TiO2 nanotubes or nanorods [8]. 1D morphology is already achieved in the first hydrothermal process when alkali titanates are formed, and it is based on the exfoliation of TiO2 crystal planes in the alkali environment and their stabilization due to the insertion of Na+ cations between the exfoliated planes. The formed nanolayer sheets are then rolled into tubes during cooling or in the acid treatment [9]. Finally, thermal treatments at 350-450 °C tend to lead to anatase, which seems to favour faster electronic transport than the rutile phase [10].
Here, we present the synthesis of TiO2 1D nanostructures via a 2-step route. Initially, the influence of parameters such as temperature or reaction time on the composition, structure and morphology of the NaTi3O6(OH).2H2O and (TiO2)x(H2O)y intermediate titanates has been studied. Then, we have examined how temperature can affect the final TiO2 rutile/anatase composition in the dehydration process. Finally, we have performed an alternative hydrothermal route starting from the Na-titanate phase in the presence of the CTAB surfactant at different pH values. In this way, a spindle- or oval-shaped anatase product was obtained. Results and Discussion Hydrothermal syntheses were carried out in order to study the influence of temperature and reaction time. Thus, syntheses at 200 °C for 6, 24 and 48 h were performed. Nanotubes start to appear after 6 h of hydrothermal heating, but the reaction was not complete and the crystallinity of the product was low. The products obtained after 24 and 48 h were very similar, both in composition and degree of crystallinity. Syntheses at 170 °C were performed for 24 h but low crystallinity was obtained again. For this reason, the syntheses were carried out at 200 °C for 48 h (6 sample). XRD patterns of the most representative as-synthesized titanates, shown in Fig. 1, indicate the presence of the NaTi3O6(OH).2(H2O) [11] and (TiO2)x(H2O)y [12] phases before and after HNO3 treatment, respectively. Subsequent thermal treatment in air at 400-500 °C leads to the formation of anatase and rutile TiO2 phases (JCPDF 21-1272 and JCPDF 21-1276) together with some NaTi8O13 (JCPDF 48-0523) impurities. At the same time, after the second hydrothermal treatment the anatase phase was also obtained. Thermogravimetric analysis of the titanate samples shows mass losses of about 20 and 15%, in good agreement with the presence of H2O and some Na2CO3 (formed under the basic conditions due to absorption of atmospheric CO2 [13]) in the Na-titanate samples and only H2O in the H-titanate ones. This was confirmed by IR spectroscopy (Figure 2). TEM images of some representative samples are shown in Figure 3.
Nanotubes were already formed in the first hydrothermal step, and neither the acidic nor the thermal nor the hydrothermal processes significantly alter the morphology of the samples. Both Na- and H-titanate samples are very similar, 40-80 nm wide, although they widen to 70-170 nm after the thermal or hydrothermal treatments, when deintercalation and dehydration take place and lead to TiO2. HRTEM images of the titanates shown in Figure 3 reveal lattice fringes with plane spacings of about 11 Å and 10 Å, corresponding to the interlayer distances in the 2D crystal structures of NaTi3O6(OH).2(H2O) and (TiO2)x(H2O)y. Both of them exhibit similar crystal structures with packed planes formed by titanium oxide octahedra and Na+, OH-, H+ and H2O as inserted species between them [11,12]. Some authors suggest that the mechanism of formation of the tube shape is based on a topochemical process of exfoliation of TiO2 crystal planes in the alkali medium [14], after which nanolayer sheets start rolling into tubes during cooling. Na+ ions are then exchanged by H+ ions by washing the products in acid solutions to form layered hydrogen titanates, from Na2-xHxTi3O7.2H2O to H2Ti3O7.yH2O [15,16,17]. Finally, the H-titanate transforms into anatase through a dehydration reaction. This is accompanied by an in situ rearrangement of the structural units to give 1D TiO2 nanostructures [8]. In this work, not only traditional heat treatments of H-titanates to obtain anatase TiO2 nanotubes, but also hydrothermal treatments of Na-titanates in the presence of CTAB as surfactant at different pH values were performed. Heating at 400 °C and 500 °C in air led to the formation of anatase and anatase/rutile nanotubes, respectively, about 70-170 nm wide and without significant morphological changes. However, some impurities of NaTi8O13 oxide were also found by XRD, indicating that the exchange of Na+ ions was not completely achieved. At the same time, the hydrothermal processes led to the formation of anatase as the main phase, but the morphology of the products was found to be slightly altered depending on pH. Thus, the synthesis at pH = 2 led to anatase porous nanotubes, very similar in thickness and morphology to their precursors, but a considerable amount of intermediate titanate remained unaltered. The non-acidified synthesis (pH = 8.5) led to a purer and more crystalline product. It seems that CTAB in a slightly alkaline environment favoured the extraction of Na+ ions prior to the stacking of the TiO2 layers, as thicker and denser nanostructures of 70-170 nm were found. It is possible that the CTAB surfactant made the exfoliation and later rearrangement of oxide layers in Na-titanates easier than in H-titanates, as H+ could be more strongly bound to the TiO6 octahedra. The morphology also changed: oval- or spindle-shaped anatase was obtained, suggesting a non-uniform stacking mechanism. Both hydrothermally synthesized samples did, however, show similar microstructures formed by aggregated nanofibres, although these were more densely stacked in the oval-shaped oxides. Scheme 1.
Scheme of the syntheses of TiO2 nanotubes. 3.1. Synthesis of sodium and hydrogen titanate nanorods. 0.4 g of commercial TiO2 was dispersed in 20 mL of 10 M NaOH with magnetic stirring for 30 minutes. The white suspension was then transferred into a Teflon-lined autoclave of 25 mL capacity. After being heated at 170-200 °C for 6-48 h, the autoclave was naturally cooled to room temperature. The resulting products were filtered, washed several times with diluted HCl [18,19] and dried at 80 °C for 2 h. 3.2. Synthesis of sodium and hydrogen titanate nanorods. For the ion exchange, the sodium titanate samples were immersed in a 0.2 M HNO3 solution for 6 h and for 1-2 h in an ultrasonic bath [20]. After that, they were washed with deionized water several times, filtered and dried at 80 °C for 2 h. Finally, different hydrothermal [21] and thermal treatments were carried out. 0.3 g of previously synthesised Na-titanate was mixed with 0.5 g of CTAB in 20 mL of distilled water, transferred again into a Teflon-lined autoclave and heated at 200 °C for 24 h. In one case, before heating, the pH was acidified by adding 0.1 M HCl. On the other hand, some H-titanates were heated at 400-500 °C for 4-5 h. Thermogravimetric measurements were performed in a Netzsch STA 449C thermogravimetric analyzer. Crucibles containing 9 mg of sample were heated at 10 °C min-1 under dry argon. For microstructure analysis, powders were dispersed in acetone, drop-cast onto a copper grid and examined using a CM200 transmission electron microscope. Conclusions 1D anatase samples with different shapes have been obtained by a 2-step method that involved an initial hydrothermal treatment in an alkaline environment followed by acid exchange and a final thermal/hydrothermal heating. 1D nanostructures were already achieved in the first stage during the formation of the sodium titanates and remained without significant morphological alteration, except for a slight broadening, until the formation of anatase TiO2 nanotubes. Exceptionally, a second hydrothermal heating of the alkali intermediates in the presence of CTAB led to an oval-shaped anatase sample in which the TiO2 fibres seemed to be more densely stacked.
2,251.2
2015-12-04T00:00:00.000
[ "Materials Science" ]
Ytterbium fiber-based, 270 fs, 100 W chirped pulse amplification laser system with 1 MHz repetition rate A 100 W Yb-doped, fiber-based, femtosecond, chirped pulse amplification laser system was developed with a repetition rate of 1 MHz, corresponding to a pulse energy of 100 µJ. Large-scale, fused-silica transmission gratings were used for both the pulse stretcher and compressor, with a compression throughput efficiency of ∼85%. A pulse duration of 270 fs was measured by second harmonic generation frequency-resolved optical gating (SHG-FROG). To the best of our knowledge, this is the shortest pulse duration ever achieved by a 100-W-level fiber chirped pulse amplification laser system at a repetition rate of few megahertz, without any special post-compression manipulation. A high repetition rate (>MHz), high harmonic generation (HHG) based, vacuum ultraviolet (VUV) laser source is highly desirable for photoelectron spectroscopy to carry out a space-charge-free measurement by limiting the number of emitted electrons per pulse. 1,2) This helps obtain higher signal-to-noise ratios and achieve a high energy resolution. Such conditions cannot be met with traditional Ti:sapphire laser systems, which typically have a repetition rate on the order of a few kilohertz. To ensure a peak intensity sufficiently high to drive the HHG process efficiently at high repetition rates, a femtosecond enhancement cavity (fsEC) was used to increase the peak intensity by coherently adding multiple pulses. [3][4][5][6] In addition to longterm operation being rather laborious, enhancement cavities present technical challenges due to their sensitivity to environmental perturbations and ionization clamped intracavity power limitations. 7,8) In contrast, single pass HHG configurations at high repetition rates are more robust and have been garnering increased attention recently, 9,10) leading to the development of high power, femtosecond driving laser sources that are able to reconcile high repetition rates with high pulse energies. On top of being promising candidates for HHG-related applications, high power high repetition rate femtosecond lasers are also suitable for high precision laser material processing, ensuring rapid and cold processing. Fiber, slab and disk laser amplifiers are leading the trend in the field of high power femtosecond lasers and all three are capable of ∼1 kW of average output power. [11][12][13] Although they can find applications in laser processing, none of them are suitable for driving HHG due to the relatively low pulse energies being delivered (∼10 µJ for fiber lasers) and relatively long pulse durations (640 fs for fiber lasers, 615 fs for slab lasers, and 7.3 ps for disk lasers). Because short pulse durations are desirable for driving the HHG process, fiber lasers are the preferred choice due to the broader gain bandwidth they can support, compared to the commonly used Yb:YAG gain medium in slab and disk lasers. Actively doped fibers have the advantage of having a large surface-to-volume ratio, which results in excellent heat dissipation. However, due to the small core diameter of gain fibers, the peak powers reached inside the fiber are high and unwanted nonlinear effects, such as self-phase modulation, tend to occur. In order to avoid possible nonlinearities, the pulse duration of a seed laser needs to be stretched before it is amplified by a fiber amplifier, much like with fiber chirped pulse amplification (FCPA). 
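A quick order-of-magnitude estimate shows why stretching matters before fiber amplification: at the 100 µJ level reached in this system, a compressed pulse of roughly 270 fs would carry a peak power near 0.4 GW inside the fiber, while the same energy stretched to the nanosecond level (about 1 ns here, as described below) stays near 100 kW, more than three orders of magnitude lower in the peak power that drives self-phase modulation. The sketch below only reproduces this simple energy-over-duration arithmetic with a flat-top approximation.

```python
def peak_power_w(pulse_energy_j, pulse_duration_s):
    """Rough peak power assuming a flat-top temporal profile (E / tau)."""
    return pulse_energy_j / pulse_duration_s

energy = 100e-6                              # 100 uJ, the final pulse energy
compressed = peak_power_w(energy, 270e-15)   # ~0.37 GW if left compressed
stretched = peak_power_w(energy, 1e-9)       # ~100 kW when stretched to ~1 ns
print(f"compressed: {compressed/1e9:.2f} GW, stretched: {stretched/1e3:.0f} kW, "
      f"reduction: x{compressed/stretched:.0f}")
```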
In this way, high power operation with repetition rates ranging from 10 to 150 MHz have been reported, [14][15][16] but the pulse energies (few microjoules) proved to be insufficient to carry out single pass HHG and an enhancement cavity was required to increase the peak powers. Single pass HHG experiments using FCPA have been demonstrated, [17][18][19] however, this was for pulse repetition rates of 100 kHz, which when increased to 1 MHz would lead to lower energies and a less efficient HHG process. Therefore, upgrading the pulse energy to hundreds of microjoules at a repetition rate of 1 MHz for single pass HHG experiments could add to fsEC experiments carried out at tens of megahertz, and single pass experiments realized at tens of kilohertz. In recent years, research in FCPA laser systems at a repetition rate of few megahertz has soared. [20][21][22][23][24] In 2007, 90 W of output power at a 1 MHz repetition rate with a pulse duration of 500 fs was reported using an air-clad photonic crystal fiber, 20) where the efficiency of the compressor was 70%. In 2011, an average power of 200 W at a repetition rate of 1 MHz with a pulse duration of 700 fs was realized using a large pitch photonic crystal fiber, 21) where the efficiency of the compressor was 80%. In the same year, an average power of 212 W at a repetition rate of 2 MHz with a pulse duration of 470 fs was also achieved using a preferential gain photonic-crystal fiber 22) and the efficiency of the compressor was 70%. In 2014, PolarOnyx reported 100 W of average power at 1 MHz repetition rate with a pulse duration of 700 fs 23) and the efficiency of the compressor was 80%. The group of Morgner also demonstrated an average power of 80 W at 1 MHz repetition rate and a pulse duration of approximately 700 fs 24) and the second harmonic was used to pump an optical parametric chirped pulse amplifier (OPCPA). Nonetheless, the pulse durations were limited around ∼500 fs, which was not ideal for driving HHG. To further reduce the pulse duration, nonlinear compression techniques and OPCPAs have been used, albeit at the expense of increasing the complexity and decreasing the efficiency of the overall system. In addition to power scaling, another key device in chirped pulse amplification (CPA) technology is the compressor, which typically consists of two gratings and brings the pulse duration back in the femtosecond region after amplification. The typical reflection-type grating with a metal coating is not suitable because the beam quality of the high average power beam degrades due to thermal expansion, and the diffraction efficiency of a metal-coated grating gradually decreases over long-term use. 25) The low damage threshold of metal-coated gratings is another obstacle in the realization of a high-power system. While the dielectric-coated grating is suitable for high-power operation, it has a small diffractive bandwidth. Recently, a new kind of large scale, high efficiency transmission grating at 1 µm was shown to adequately compress pulses. 25) In this contribution, we report on a FCPA laser system, which delivered 100 W of output power after pulse compression, at a repetition rate of 1 MHz, corresponding to a pulse energy of 100 µJ. The compressor efficiency was as high as 85% resulting from the special large scale transmission gratings. The pulse duration was measured to be 270 fs using second harmonic generation frequency-resolved optical gating (SHG-FROG). 
To the best of our knowledge, this represents the shortest pulse duration achieved by a 100-W-level fiber chirped pulse amplification laser system at a repetition rate of few megahertz, without resorting to any post-compression manipulation. Excellent beam quality was guaranteed by the rod fiber itself, and by the careful alignment into this fiber to effectively excite the fundamental mode alone. Figure 1 shows the schematic diagram of the setup for the 1 MHz FCPA laser system. It consists of an oscillator, a pulse stretcher, a pulse picker, multiple stages of amplifiers, and a pulse compressor. The oscillator was a typical nonlinear polarization evolution (NPE) based, mode-locked ytterbium (Yb)-doped fiber oscillator, which was designed to have a repetition rate of 64 MHz and delivered pulses with an average output power of 5 mW and a bandwidth of 30 nm. The output of this seed laser was directed to a conventional Martinez-type pulse stretcher, which consisted of a 180-mm-wide by 40-mm-high transmission grating with a groove density of 1250 lines=mm, a lens with a focal length of 1000 mm and a diameter of 150 mm, a high reflection rectangular mirror, and a roof mirror formed by two rectangular gold mirrors. The distance from the grating to the lens was 250 mm and the applied group delay dispersion (GDD) was 1.6 × 10 7 fs 2 . The grating had a thickness of 1 mm and an anti-reflection coating on the back surface. The diffraction efficiency was measured to be ∼96% at 40°incidence angle. 25) A half-wave plate (HWP) was used to adjust the polarization state to maximize the diffraction efficiency. The stretched pulse duration was measured to be approximately 1 ns using a sampling oscilloscope and a 45-GHz-bandwidth photodiode. Due to losses from the diffraction gratings and multiple reflections on the mirrors, the average power decreased to ∼2.5 mW. A single-mode highly doped Yb fiber-based pre-amplifier, with a core diameter of 6 µm and a length of 25 cm, was placed after the stretcher to increase the power to ∼100 mW to compensate for losses incurred from an acousto-optic modulator (AOM) placed further down in the configuration. A 976 nm singlemode fiber (SMF) coupled, wavelength-stabilized laser diode was used as the pump source, and a forward pumping scheme was adopted to avoid possible damage to the pump source. In order to reduce the laser repetition rate from 64 to 1 MHz, a fiber coupled AOM was used. The driver of the AOM was electrically triggered by the signal from the oscillator and an average power of 0.5 mW at a repetition rate of 1 MHz was obtained at the output of the AOM. After the AOM, an amplifier stage consisting of a low-doping single-mode Yb fiber amplifier was used to increase the average power up to ∼50 mW. The pump source was also an SMF coupled, wavelength-stabilized laser diode at 976 nm. This amplifier stage was followed by another, which was comprised of a 2-m-long Yb-doped double-clad fiber (DCF) with a core diameter of 10 µm and a cladding diameter of 125 µm that was spliced to a combiner and pumped by a 100-µm-diameter multimode fiber coupled 10 W laser diode. After this third stage of amplification, the average pulse power was limited to 500 mW to avoid excessive gain narrowing, although several watts of average power could be obtained when the pump LD was set to its maximum. To further increase the average power, fibers with a larger core diameter were used. 
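The roughly 1 ns stretched duration reported above is consistent with the stated group delay dispersion of 1.6 x 10^7 fs^2 acting on the 30 nm oscillator bandwidth. The estimate below assumes a center wavelength of about 1030 nm (typical for Yb-doped fiber lasers, though not stated explicitly in the text) and uses the standard approximation that a strongly chirped pulse has a duration of about the GDD times the spectral width in angular frequency.

```python
import numpy as np

C = 299_792_458.0              # m/s

def stretched_duration_s(gdd_s2, bandwidth_nm, center_nm):
    """Delta_t ~ |GDD| * Delta_omega for a strongly chirped pulse."""
    delta_omega = 2 * np.pi * C * (bandwidth_nm * 1e-9) / (center_nm * 1e-9) ** 2
    return abs(gdd_s2) * delta_omega

gdd = 1.6e7 * (1e-15) ** 2     # 1.6e7 fs^2 expressed in s^2
tau = stretched_duration_s(gdd, bandwidth_nm=30.0, center_nm=1030.0)
print(f"estimated stretched duration ~ {tau*1e9:.2f} ns")   # on the order of 1 ns
```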
A 1.5-m-long photonic crystal fiber (PCF) amplifier with a core diameter of 40 µm and cladding diameter of 200 µm was employed and pumped backwards by a 100-µm-diameter multimode fiber coupled 30 W laser diode with a 976 nm wavelength. An HWP was used to adjust the polarization state of the incident seed beam. The fiber was coiled with a coiling diameter of 40 cm to prevent bend loss and to obtain a good beam quality. When the pumping power was 25 W, an average power of 14 W was obtained after the isolator. The polarization extinction ratio (PER) was measured to be as high as 95%. The final stage of amplification was carried out by a straight 55-cm-long rod fiber, which featured a low numerical aperture (NA) of 0.02 for the signal, a mode field area of ∼4500 µm 2 , and a very high absorption coefficient of ∼30 dB=m. To control the working temperature, the rod fiber was placed in a water cooled copper heat sink. The rod fiber was pumped backwards by a laser diode, centered at 976 nm, which provided up to 350 W of pumping power, coupled to a fiber with a core diameter of 400 µm and a NA of 0.22. Another HWP was used to adjust the polarization state of the incident seed beam. Figure 2 shows the power performance of the rod fiber laser. With an incident seed laser power of 13 W, more than 120 W average power was obtained from the rod fiber when it was backward pumped with ∼200 W. The optical-to-optical efficiency was in excess of 60%. The PER was measured to be 99%, so the output was almost entirely linearly polarized, which can be attributed to the rectilinear placement and the polarizationmaintaining characteristics of the fiber itself. To compress the laser pulse back down to femtosecond levels, the laser beam was directed to a pulse compressor. The compressor included a small-scale transmission grating (40 × 40 mm 2 ), a large scale transmission grating (180 × 40 mm 2 ), and a roof mirror. The distance between the two gratings was set around 1.5 m. More than 100 W of average power was obtained after the compressor, which corresponded to a compressor efficiency of ∼85%. Figure 3 shows the optical spectra recorded after different amplification stages. Only minor gain narrowing effects are visible. This could be attributed to the distribution of the amplification gain over multiple stages of amplifiers. The gain was approximately 10 for each stage. The spectrum bandwidth was approximately 10 nm after the rod fiber amplifier. The fine structures visible in the spectrum were attributed to nonlinearities occurring in the amplifiers. The pulse duration was characterized by SHG-FROG. The results, as displayed in Fig. 4, indicate the pulse intensity and phase as a function of time. From the intensity spectrum, at full-width half-maximum (FWHM), the pulse duration was ∼270 fs. This corresponds to a peak power of 0.37 GW if a pulse energy of 100 µJ is considered. Supposing a focused beam radius of 20 µm was used, a peak intensity of 3 × 10 13 W=cm 2 could be obtained, which is sufficiently high to drive the HHG process. The spectrum bandwidth of 10 nm could support pulse durations as short as ∼150 fs and the discrepancy between this value and the measured one of 270 fs was ascribed to uncompensated third order dispersions. Shorter pulse durations could be expected by adjusting the incident angle of the beam onto the grating, however, this would come at the expense of the compressor efficiency. 
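The peak-power and focused-intensity numbers quoted above follow from straightforward arithmetic, reproduced below: the peak power is the pulse energy divided by the duration, and the intensity spreads that power over a spot area of pi times the radius squared, which matches the ~0.37 GW and ~3 x 10^13 W/cm^2 values in the text (a Gaussian-beam definition of peak intensity would give a value about twice as large).

```python
import numpy as np

def peak_power_w(energy_j, duration_s):
    return energy_j / duration_s

def focused_intensity_w_cm2(peak_power, beam_radius_cm):
    # Simple estimate: peak power spread over an area of pi * w^2.
    return peak_power / (np.pi * beam_radius_cm**2)

p_peak = peak_power_w(100e-6, 270e-15)                 # ~0.37 GW
intensity = focused_intensity_w_cm2(p_peak, 20e-4)     # 20 um radius = 20e-4 cm
print(f"peak power ~ {p_peak/1e9:.2f} GW, intensity ~ {intensity:.1e} W/cm^2")
```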
When a VUV laser based on HHG is used in spectroscopy experiments, especially in angle-resolved photoemission spectroscopy (ARPES), long-term continuous operation is desirable, which in turn requires the driving laser to operate stably over extended periods of time. The long-term stability of the laser system was therefore characterized. Figure 5 shows the measured output power of the rod fiber amplifier over a period of two hours. The output power was maintained at 116 W over the 2 h period with a standard deviation of 0.3 W, corresponding to a very low fluctuation level. The inset of Fig. 5 is the beam profile taken after two hours of operation. There was almost no change between this and the one acquired at the start of the experiment. The two stress rods of the rod fiber are visible in the beam profiles, as evidenced by the two light-colored regions. The laser system was capable of producing even higher output powers. At 330 W of pump power, 200 W of output power was obtained; however, the beam profile was distorted due to the onset of the transverse mode instability, 11,26) which could be attributed to the remaining population inversion at the edge of the fiber core when the pump power exceeds the instability threshold. While this kind of phenomenon has been observed in both ordinary DCFs and PCFs, the threshold in fibers without a PCF structure was found to be higher than that of fibers with a PCF structure. 11) Large pitch PCFs and preferential gain PCFs have been created to increase this threshold to 300 W. 21,22) For the type of rod fiber used here, the threshold was measured to be approximately 160 W 26) and an output power of up to 150 W was achieved while maintaining single-transverse-mode operation. To preserve the lifetime of the fiber, the system was operated so that the output power after the rod fiber was limited to 120 W. In conclusion, a 100-W-level rod-fiber-based femtosecond CPA laser system with a repetition rate of 1 MHz, corresponding to a pulse energy of 100 µJ, was developed. Special large-scale transmission gratings were used for pulse stretching and compression, leading to an efficiency of 85% for the compression. The pulse duration was measured to be ∼270 fs using SHG-FROG. We believe this laser system to be suitable for carrying out HHG experiments and obtaining VUV radiation, which could subsequently be used for photoelectron spectroscopy experiments.
3,768.8
2015-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Influence of Si Addition on the Microstructures, Phase Assemblages and Properties in CoCrNi Medium-Entropy Alloy The effects of Si addition on the microstructures and properties of CoCrNi medium-entropy alloy (MEA) were systematically investigated. The CrCoNiSix MEA possesses a single face-centered cubic (FCC) phase when x ≤ 0.3, which promotes solution strengthening, while the crystal structure shows a transition to an FCC+σ phase structure when x = 0.4, and the volume fraction of the σ phase increases, with an accompanying microstructure evolution, as the Si content increases. The Orowan mechanism from σ precipitation effectively enhances the strength, hardness, and strain hardening of the CrCoNiSix MEA, which also exhibits superior hardness at high temperatures. Furthermore, a large amount of σ phase decreases the wear resistance because the main wear mechanism changes from abrasion wear for σ-free CrCoNiSix MEA to adhesion wear for σ-containing CrCoNiSix MEA. This work contributes to the understanding of the effect of Si addition on FCC-structured alloys and provides guidance for the development of novel Si-doped alloys. Introduction In the last two decades, medium- and high-entropy alloys (M/HEAs) have been widely developed and studied due to their unique structures and mechanical properties superior to those of traditional alloys [1-3]. It is reasonable to assume that promising hardness, strength, wear resistance, corrosion resistance, and oxidation resistance can be realized by composition design [4-6]. It is known that the proper addition of elements can change the microstructure and optimize the properties of alloys [5-8]. In particular, non-metallic elements such as Si, C, and B may promote solution strengthening through large lattice distortion, or precipitation strengthening through the formation of Si-, C-, or B-containing intermetallics, and thus play a constructive role in tailoring the microstructure and properties [8-10]. Hence, it is important to study the effect of non-metallic element additions on the microstructure and properties of these alloys. Many studies have shown that the addition of Si significantly alters the phase assemblages and enhances the hardness, wear resistance, corrosion resistance, and oxidation resistance of alloys [11-22]. The FeCoNiCrSix [11], Al0.5CoCrCuFeNiSix [12], FeCoNiAlSix [13], Al0.3CoCrFeNi [14], Fe2.5CoNiCu [15], and AlCoCuNi-based M/HEAs [16] undergo a transition from a close-packed face-centered cubic (FCC) structure to a loose-packed body-centered cubic (BCC) structure with increasing Si content, leading to better hardness and wear resistance. As the Si content increases, the FeCoCrNiMoSix [17], Co0.2CrAlNi [18], FeCrMnVSix [19], and CoCrFeNiSix [20] M/HEAs show enhanced hardness and wear resistance through the introduction of Si-rich precipitates, such as Cr3Si, Si-Ni, and the σ phase. The CoCrCuFeNiSix [21] and FeCoCrNiAl0.5Six [22] HEAs were reported to gain strength, hardness, and wear resistance through the transition from FCC to BCC and the formation of an intermetallic Cr3Si precipitate phase. Therefore, Si additions can effectively change the microstructure of alloys and thus enhance their properties, which warrants further study.
In our previous study, Si addition in CrCoNi MEA promotes single-phase FCC solid solution strengthening when x ≤ 0.3 [8]. Meanwhile, it reduces the stacking fault energy and increases the short-range order, thus effectively improving the work hardening ability of the alloy and achieving a synergistic enhancement in strength and plasticity [8]. In that work, we aimed to improve the combination of strength and ductility within the range that keeps the single FCC phase. Nevertheless, an increase in Si content usually changes the crystal structure and thereby the mechanical performance [11-22]. Therefore, we now focus on the transformation of the crystal structure and mechanical properties with increasing Si content. How do the microstructure and mechanical properties of the alloy vary with higher Si content? Will higher Si content introduce other Si-rich precipitated phases? A systematic study of the effect of Si on the microstructure and properties of CrCoNiSix MEA is required to answer these questions. Furthermore, Si generally improves the wear properties of alloys [11-22]; nevertheless, the wear properties and mechanisms of CrCoNiSix MEA have not been studied. In the present paper, the effects of Si content on the microstructures and phase assemblages of CrCoNiSix MEA are systematically investigated. The corresponding mechanical and wear properties are also studied. The increase in Si content leads to a transition of the alloy phase structure and further improves the alloy properties. The strengthening mechanisms of the alloy are discussed in detail. Material Preparation Alloy ingots with nominal compositions of CrCoNiSix (x values in molar ratio, x = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, denoted by Si0, Si0.1, Si0.2, Si0.3, Si0.4, Si0.5 and Si0.6, respectively) were prepared by arc-melting mixtures of the metals (purity > 99.9 wt.%) in a high-purity argon atmosphere. The ingots were flipped and remelted five times to ensure chemical homogeneity, followed by drop-casting in a water-cooled copper mold with dimensions of 2 × 10 × 100 mm³. The cast plates were homogenized at 1100 °C for five hours followed by water quenching, in accordance with the previous study [8] for comparison. When attempting to roll the homogenized specimens, the CrCoNiSix (x = 0.4, 0.5, 0.6) MEAs cracked at 35% thickness reduction and could not be rolled to the 70% thickness reduction used in [8]; therefore, homogenized specimens were used for all MEAs studied in the present paper. Mechanical Experiment Dog-bone-shaped tensile specimens with a gauge geometry (length × width × thickness) of 10.0 × 4.0 × 2 mm³ were cut for quasi-static tension tests, and cylindrical specimens with a diameter of φ3 mm and a height of 3 mm were adopted for compression tests. The tensile and compression tests were performed using an Instron 5969 (Instron, a Division of Illinois Tool Works Inc. ITW, Norwood, MA, USA) machine at a constant strain rate of 1 × 10⁻³ s⁻¹. Vickers hardness was measured using a THV-1DTe tester with a load of 0.5 N and a dwell time of 10 s for each measurement. The hardness values were averaged over at least ten measurements for each sample.
Wear testing was performed on a reciprocating friction testing machine (CETR-UMT 2MT, Labs Arena, San Clemente, CA, USA) using a 6 mm diameter GCr15 steel ball as the contact counterpart at room temperature. The tests were conducted under a load of 20 N at a constant velocity of 6 mm/s and a stroke length of 3 mm for a total duration of 30 min. The wear tests were repeated three times with a new steel ball each time, and the wear weights of the specimens were averaged using an analytical balance. The microstructures of the worn surface were characterized using scanning electron microscopy (SEM). Microstructural Characterization The X-ray diffraction (XRD) measurements were performed using a Rigaku Ultima IV diffractometer under Cu-Kα radiation at 40 kV and 40 mA (scanning rate = 1°/min, 2θ = 40-100°, step = 0.01°). The microstructural characterizations were performed by a JEOL JSM-7100F field emission gun scanning electron microscope (SEM) (JEOL, Peabody, MA, USA) equipped with back-scattered electron (BSE) imaging, dual energy-dispersive X-ray spectroscopy (Dual EDS) detectors, and an electron backscatter diffraction (EBSD) system at an acceleration voltage of 20 kV. More refined microstructures were obtained by a transmission electron microscope (TEM; JEM-2100F, JEOL, Peabody, MA, USA) at an acceleration voltage of 200 kV. TEM specimens were prepared by mechanical grinding down to 30 µm thickness and then thinned using double-jet electropolishing in a solution of 95% ethanol and 5% perchloric acid at −20 °C and an applied voltage of 30 V, followed by Ar-ion milling. Microstructures and Phase Assemblages The crystal structures of the recrystallized CrCoNiSix (x = 0.4, 0.5, 0.6) MEAs were characterized by XRD patterns, as displayed in Figure 1. In addition to the FCC phase, there were other peaks corresponding to a tetragonal structure with lattice parameters of a ≈ 8.8 Å and c ≈ 4.5 Å, which was indexed as the σ phase. As the Si concentration increased, the peaks corresponding to the σ phase intensified, indicating a larger σ phase content. At x = 0, 0.1, 0.2, and 0.3, only the FCC-type solid solution structure was detected [8]. When x increased to 0.4, the phase assemblage became FCC+σ, and the amount of σ phase gradually increased with further Si addition.
Figure 2 presents the BSE images and EBSD phase maps of the recrystallized CrCoNiSix (x = 0.4, 0.5, 0.6) MEAs. It can be found that all three MEAs are composed of FCC and σ phases, and the phase distributions of the three MEAs reveal the evolution of the microstructure as the Si content increases. The small σ-phase particles are uniformly distributed in the FCC matrix in CrCoNiSi0.4 MEA. In CrCoNiSi0.5 MEA, some of the small σ-phase particles grow and converge into large particles (with an average size of 10 µm), forming a network structure of large and small particles. When x reaches 0.6, all the small σ-phase particles grow and converge into a large-area σ phase whose volume fraction exceeds that of the FCC structure, so that the FCC phase is distributed within the σ-phase structure. Figure 2h exhibits the proximity histograms of the elemental concentrations across the interface between the matrix and the precipitates along the white line shown in Figure 2g. The line scan spans two large-sized and one small-sized σ particles. It can be clearly seen that the concentrations of Si and Cr increase when crossing the interface from the FCC matrix to the σ phase. The Cr and Si contents in the σ precipitates are obviously larger than those in the FCC phase, indicating that Cr and Si are enriched in the σ phase.
To further confirm the phase assemblages and element compositions, the CrCoNiSi0.5 MEA, which contains both large and small precipitates, was selected for observation using transmission electron microscopy (TEM). As shown in the bright-field image in Figure 3a, there are obvious annealing twins in the FCC matrix region, and large and small particles (with an average size of 100 nm) are distributed in the FCC phase. The corresponding selected area electron diffraction (SAED) patterns exhibited in Figure 3b-d indicate the FCC structure with annealing twins and the σ-phase precipitation. This agrees with the results of XRD, BSE, and EBSD. The detailed element compositions of the FCC region and of the small and large σ-phase particles were determined by TEM-EDS; the results are shown in Figure 3e-g. It was found that the proportions of elements in the FCC region were close to the molar ratio of the CrCoNiSi0.5 MEA, although the proportions of Si and Cr were relatively low as a result of Si and Cr segregation into the σ phase. Additionally, the element proportions of the small particles are close to those of the large ones, and both are enriched in Si and Cr. This further proves that the σ phase is Si- and Cr-rich and indicates that the large and small σ-phase precipitates have the same element composition.
In summary, CrCoNiSix shows a transition from a pure FCC solid solution to a mixture of FCC and σ precipitates as the Si content increases. When x is no more than 0.3, the MEA maintains an FCC single-phase structure, showing solid solution strengthening with increasing Si [8]. An apparent transition in phase assemblages occurs at x = 0.4, at which the σ phase appears. As the molar ratio of Si increases, the σ phase gradually increases, showing a transition from a uniform distribution of small precipitates, to a network structure of large and small particles, and eventually to an FCC phase distributed in a σ-phase matrix. In conclusion, a small Si addition does not change the phase assemblages, but a higher Si content promotes the formation of σ precipitates in CrCoNi MEA.
Mechanical Properties and Strengthening Mechanisms Figure 4a,b show the quasi-static tensile and compressive stress-strain curves of the homogenized Six (x = 0.3, 0.4, 0.5 and 0.6) MEAs, respectively, and the variations in tensile and compressive yield strength (YS) with Si content are displayed in Figure 4c. It can be seen that as the Si molar ratio increases from 0.3 to 0.5, the tensile and compressive yield strengths increase from 320 to 430 MPa and from 760 to 1820 MPa, respectively, with a gradual decrease in plastic strain, while the Si0.6 MEA fractures in a brittle manner without yielding. It is noticeable that the Si0.4 and Si0.5 MEAs also exhibit an obviously higher strain-hardening capacity than the Si0.3 MEA, which can be attributed to precipitate hardening by the σ phase. However, owing to the smaller plastic strain, the ultimate tensile and compressive strengths of the Si0.5 MEA are lower than those of the Si0.4 MEA. In conclusion, a small Si addition in CrCoNi MEA promotes solution strengthening and reduces the stacking-fault energy, thus realizing a synergistic enhancement in strength and plasticity [8], whereas a higher Si content leads to precipitation strengthening and effectively improves the strength and strain hardening; however, a large amount of σ phase causes brittle fracture when the Si content increases too much, which is not suitable for engineering structures. The hardness variations in the Six MEAs are shown in Figure 4d. The hardness of the Six MEA exhibits a continuous increase with Si content, and the Si0.6 MEA shows a hardness of 751.21 Hv, which is 282.93 Hv (an increase of 60.5%) higher than that of the single-FCC-phase Si0.3 MEA and 532.07 Hv (an increase of 242.8%) higher than that of CrCoNi MEA, indicating that Si addition effectively enhances the hardness of the CrCoNi MEA, especially for the σ-phase-containing Six MEAs. Moreover, the hardness of the Si0.3 and Si0.6 MEAs at a high temperature (1073 K) was tested, with values of 151.2 Hv and 558.9 Hv, respectively. It can be observed that the hardness of the Si0.6 MEA is 269.6% higher than that of the Si0.3 MEA, an increase much greater than that at room temperature. The hardness values of the Si0.3 and Si0.6 MEAs at high temperature are 317.08 Hv (a decrease of 85%) and 192.31 Hv (a decrease of 26%) lower than those at room temperature, respectively. This indicates that the σ phase effectively enhances the hardness of the CrCoNi MEA at high temperatures.
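As a quick arithmetic check (not from the paper), the quoted percentage changes can be reproduced from the hardness values themselves; the room-temperature hardness of Si0.3 below is inferred from the quoted 282.93 Hv difference rather than reported directly.

```python
# Hardness values quoted in the text (Vickers hardness, Hv)
hv = {
    "Si0.6_RT":    751.21,
    "Si0.3_RT":    751.21 - 282.93,  # inferred from the quoted difference, not reported directly
    "Si0.3_1073K": 151.2,
    "Si0.6_1073K": 558.9,
}

def pct_increase(low, high):
    """Relative increase of `high` over `low`, in percent."""
    return (high - low) / low * 100

# Si0.6 vs Si0.3 at 1073 K: (558.9 - 151.2) / 151.2 ~ 269.6 %, matching the text
print(f"Si0.6 vs Si0.3 at 1073 K: +{pct_increase(hv['Si0.3_1073K'], hv['Si0.6_1073K']):.1f} %")

# Drop of Si0.6 from room temperature to 1073 K: 751.21 - 558.9 ~ 192.3 Hv (~26 %)
drop = hv["Si0.6_RT"] - hv["Si0.6_1073K"]
print(f"Si0.6 drop at 1073 K: {drop:.2f} Hv ({drop / hv['Si0.6_RT'] * 100:.0f} %)")
```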
Figure 5 shows the fracture morphologies of the tensile-deformed samples of the Si0.3 and Si0.5 MEAs. There are numerous fine dimples in the Si0.3 sample, indicating excellent plastic stability and a ductile nature, as shown in Figure 5a,b. Some parallel bands (marked by the yellow arrow in Figure 5b) can be seen inside the dimples; these are related to deformation twins (DTs) and are further evidence of superior ductility and strain hardening, as previously reported [8]. The Si0.5 MEA shows an obvious cleavage fracture, in which river-like patterns and tearing ridges with several pits are clearly observed. Furthermore, many bumps of small and large sizes can be observed, indicating that the σ particles cause the reduction in plasticity.
In order to reveal the deformation-hardening mechanism of the σ-phase-containing Six MEAs, TEM characterization of the tensile-deformed samples was conducted. As exhibited in Figure 6a, abundant dislocations are distributed in the interspaces between the σ precipitates. No dislocation lines are found within the σ particles, and numerous dislocations pile up and bend around the precipitated particles, presenting an obvious Orowan hardening mechanism [23,24]. Figure 6b shows thick dislocation pile-ups near the boundary of a large σ precipitate. It is the σ precipitation that strongly impedes dislocation motion and promotes the high work hardening of the Six MEAs (see Figure 4a). Furthermore, the σ precipitates can also increase the local stress and cause a heterogeneous strain distribution [25], thereby producing back-stress strengthening and enhancing the strain-hardening ability. As reported in our previous study [24], the Orowan hardening mechanism is more effective than the shearing mechanism in enhancing the strength and strain-hardening capacity, so the Six MEAs show a higher YS and work-hardening rate as the σ-phase fraction increases. However, because the σ phase is too hard to be cut through, the plastic coordination during plastic deformation decreases, resulting in a reduction in plastic strain.
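To make the Orowan bypass argument concrete, a rough strengthening estimate can be sketched with the Ashby-Orowan form of the equation. All input values below are illustrative assumptions (only the ~100 nm particle size comes from the TEM observation above); they are not fitted to the measured yield strengths.

```python
import math

# Illustrative inputs (assumed, not reported in the paper)
G  = 80e9      # shear modulus of the FCC CoCrNi matrix, Pa
b  = 2.5e-10   # Burgers vector magnitude, m
M  = 3.06      # Taylor factor for an FCC polycrystal
nu = 0.3       # Poisson ratio
r  = 50e-9     # mean particle radius, m (the TEM image suggests ~100 nm particles)
f  = 0.10      # assumed volume fraction of sigma precipitates

# Simple edge-to-edge inter-particle spacing estimate for randomly distributed spheres
lam = 2 * r * (math.sqrt(math.pi / (4 * f)) - 1)

# Ashby-Orowan bypass stress increment (one standard form of the Orowan equation)
d_tau   = 0.4 * G * b / (math.pi * lam * math.sqrt(1 - nu)) * math.log(2 * r / b)
d_sigma = M * d_tau

print(f"spacing ~ {lam*1e9:.0f} nm, Orowan increment ~ {d_sigma/1e6:.0f} MPa")
```

With these illustrative numbers the increment is a few hundred MPa, i.e. the right order of magnitude for the yield-strength rise from Si0.3 to Si0.5 reported above; the actual contribution depends on the true σ-phase fraction and spacing.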
Wear Properties and Mechanisms In order to reveal the effect of Si addition on the wear properties and mechanisms of CrCoNi MEA, the typical Si0.3 (representing the single-FCC-phase structure) and Si0.5 (representing the FCC+σ phase structure) MEAs were selected for the wear tests. Figure 7 displays the friction coefficient (µ) evolution histories of the Si0.3 and Si0.5 MEAs during sliding under a load of 20 N. The friction coefficient of the Si0.3 MEA shows a steady continuous rise with wear time, while that of the Si0.5 MEA first decreases and then increases during the initial running-in stage. This is because the temperature at the contacting surface increases as the wear test progresses and aggravates the surface oxidation of the cladding layer, which reduces the friction and thus makes the friction coefficient decrease markedly [26]. When the oxide is worn out, the friction pair again contacts the deposited metal, and the friction coefficient rises and enters the stable friction stage. Furthermore, the friction coefficient of the Si0.5 MEA exhibits a wider fluctuation amplitude than that of the Si0.3 MEA throughout the wear process. This can mainly be attributed to the σ precipitation in the Si0.5 MEA, which leads to a markedly uneven microstructure. The average friction coefficients of the Si0.3 and Si0.5 MEAs are 0.3814 and 0.5008, respectively. The Si0.5 MEA shows an obviously higher friction coefficient than the Si0.3 MEA, indicating that the increase in Si and the presence of the σ phase decrease the wear resistance of CrCoNiSix MEA. The wear loss of the Si0.5 MEA is also a little higher than that of the Si0.3 MEA, which goes against Archard's rule [27]. According to Archard's rule, the wear loss decreases as the hardness increases. Nevertheless, Archard's rule is mainly based on a single wear mechanism, such as adhesion wear or abrasive wear [27,28]. Thus, this opposite trend is mainly related to the wear mechanism. For revealing the wear mechanisms, the worn surfaces of the Si0.3 and Si0.5 MEAs after the sliding wear tests are shown in Figure 8.
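Before turning to the worn-surface morphologies, the Archard argument invoked above can be made explicit with a small sketch. The 20 N load and the sliding parameters are taken from the test description; the wear coefficient and the Si0.5 hardness are illustrative assumptions.

```python
# Archard's wear law: V = K * W * L / H
# V: wear volume, K: dimensionless wear coefficient, W: normal load,
# L: total sliding distance, H: hardness expressed as a pressure.

W = 20.0                      # N, load used in the tests
L = 6e-3 * 30 * 60            # m, 6 mm/s for 30 min ~ 10.8 m of sliding
K = 1e-4                      # assumed wear coefficient (illustrative only)

def archard_volume_mm3(hardness_hv):
    H = hardness_hv * 9.81e6  # convert Hv (kgf/mm^2) to Pa
    return K * W * L / H * 1e9  # m^3 -> mm^3

for name, hv in [("Si0.3", 468.3), ("Si0.5", 600.0)]:  # Si0.5 hardness assumed between Si0.3 and Si0.6
    print(f"{name}: predicted wear volume ~ {archard_volume_mm3(hv):.4f} mm^3")
# Archard predicts less wear for the harder Si0.5 alloy; the measured trend is the opposite,
# which the text attributes to the change of wear mechanism rather than to hardness alone.
```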
From the worn surface morphology of the Si0.3 MEA displayed in Figure 8a, numerous parallel scratches, developed by the ploughing of micro-asperities between the two friction surfaces, are present along the wear direction, indicating a typical abrasive wear mechanism. In addition to the scratches, fine wear debris and discontinuous glaze layers are also generated on the worn surface, as exhibited in Figure 8a,b. There are also small grooves near the debris, as shown in the magnified image (Figure 8c) corresponding to the brown rectangular area in Figure 8b. The worn surface of the Si0.5 MEA is nearly fully covered by continuous compact glaze layers and a large amount of wear debris (Figure 8d), showing a typical adhesive wear mechanism. The glaze layers are believed to act as a protective interlayer that avoids direct metal-metal contact during friction [26]. Additionally, an uneven surface in the interspaces of the glaze layers can be seen, which the corresponding magnified image (Figure 8e) reveals to be grooves, indicating that there are abundant small grooves on the surface of the Si0.5 MEA. Moreover, there are many micro-cracks in the glaze layers, as marked by the black arrows in Figure 8d. These cracks are reportedly induced by the stick-slip behavior and high local stresses during repetitive sliding, and they result in the failure of the glaze layers or oxide patches after growth to a critical thickness [29-31]. Figure 8f gives a magnified image of the crack region, in which large and small σ particles are visible in the glaze layer and the crack propagates along the σ-particle boundaries. It can be concluded that the hard σ phase contributes to the formation of the cracks. These inhomogeneous distributions of wear debris and glaze layers, together with the σ particles, contribute to the significant fluctuations of the friction coefficient displayed in Figure 7.
In a word, with the increase in Si content, the main wear mechanism changes from plastic deformation and abrasion wear for the σ-free Si0.3 MEA to adhesion, delamination, and mild oxidation wear for the σ-phase-containing Si0.5 MEA. During the wear process, the σ particles may be ripped out by the GCr15 counterpart and remain on the friction surface.
Figure 2. Back-scattered SEM images (a-c) and EBSD phase maps (d-f) of the homogenized CrCoNiSix (x = 0.4, 0.5, 0.6) MEAs, respectively; (g) enlarged image of (b); (h) EDS line-scan results across the matrix/σ-particle interfaces along the white dotted line marked in (g), showing the variations in the compositional elements.
Figure 3. (a) Bright-field TEM image of the homogenized CrCoNiSi0.5 MEA; (b-d) the corresponding SAED patterns for the FCC matrix (b) and the σ phase (c,d); (e-g) TEM-EDS results reflecting the element compositions of the FCC matrix (e) and the large and small particles (f,g).
Figure 6. TEM images of the post-fracture tensile deformation microstructures of the Si0.5 MEA.
Figure 7. Friction coefficient of the Si0.3 and Si0.5 MEAs during sliding against GCr15 steel balls under a 20 N load.
Figure 8. SEM images of the worn surfaces of the Si0.3 (a-c) and Si0.5 (d-f) MEAs, respectively; (c) and (e) are magnified images taken from the brown rectangular areas, and (f) is a magnified image taken from the cyan rectangular area.
8,327.2
2024-06-01T00:00:00.000
[ "Materials Science", "Engineering" ]
DNA aggregation induced by polyamines and cobalthexamine. We have studied the precipitation of short DNA molecules by the polycations spermidine, spermine, and cobalthexamine. The addition of these cations to a DNA solution leads first to the precipitation of the DNA; further addition resolubilizes the DNA pellet. The multivalent salt concentration required for resolubilization is essentially independent of the DNA concentration (between 1 μg/ml and 1 mg/ml) and of the monovalent cation concentration present in the DNA solution (up to 100 mM). The DNA aggregates are anisotropic; those obtained in the presence of the polyamines spermidine and spermine generally contain a cholesteric liquid crystalline phase that flows spontaneously. In contrast this phase is never seen in the presence of cobalthexamine. We propose that the ability of polyamines to condense DNA in fluid structures is an essential feature of their biological functions. Multivalent cations with a charge of 3+ or greater induce the condensation of DNA in aqueous solution (reviewed in Ref. 1). In extremely dilute DNA solutions, one can observe the monomolecular collapse of long chains; with more concentrated DNA solutions (of short or long chains), aggregation sets in. Electrostatic forces appear to be predominant in DNA condensation. For highly charged polyelectrolytes there is a strong electrostatic repulsion between the chains. One expects the addition of multivalent cations to decrease this repulsion. DNA condensation by multivalent cations has been analyzed within the framework of the counterion condensation theory developed by Manning (2). This theory predicts the fraction of the DNA charges neutralized by a given cation: a saturating trivalent cation, for instance, should neutralize 92% of the DNA charges. It has been shown experimentally that approximately 90% of the DNA charges must be neutralized before DNA condensation can occur (3,4). In addition, mono- and divalent cations compete with multivalent cations in the condensation process, in agreement with the proposal that the interactions are predominantly electrostatic. It is known, however, that a purely electrostatic model is insufficient to account for the experimental data; cobalthexamine is for instance five times more efficient as a condensing agent than spermidine, although these compounds have the same charge (3+) (4). DNA condensation has been studied with the naturally occurring polyamines spermidine (3+) and spermine (4+), as well as with the inorganic cation cobalthexamine (3+).
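The Manning estimate quoted above can be reproduced with the standard counterion-condensation expression θ = 1 − 1/(Zξ); the sketch below uses the textbook charge-density parameter ξ ≈ 4.2 for B-form DNA, which is an assumed value not given in the text but consistent with the 92% figure for a trivalent cation.

```python
# Manning counterion condensation: a Z-valent counterion condenses onto a polyelectrolyte
# of reduced linear charge density xi, neutralizing a fraction theta = 1 - 1/(Z * xi)
# of the backbone charges.
xi_dna = 4.2   # standard value for B-form DNA (assumed here, not quoted in the text)

def neutralized_fraction(z, xi=xi_dna):
    return 1.0 - 1.0 / (z * xi)

for z, name in [(1, "Na+"), (2, "Mg2+"), (3, "spermidine / cobalthexamine (3+)"), (4, "spermine (4+)")]:
    print(f"{name:35s} theta = {neutralized_fraction(z):.2f}")
# A saturating trivalent cation gives theta ~ 0.92, the value quoted in the text.
```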
Experimental data indicate that condensation is usually coupled with an isotropic to an anisotropic transition. In particular, high molecular weight DNA aggregates formed by spermidine, spermine, or cobalthexamine give a strong equatorial reflection when analyzed by x-ray diffraction (5,6). Based on these data, several types of crystalline and liquid crystalline structures have been suggested for these DNA aggregates (5,7). We have recently undertaken a study of these DNA aggregates using short (about 150 base pairs long) DNA molecules (8). In the case of the trivalent cation spermidine, we have found that the aggregate is generally biphasic and contains a liquid crystalline phase. In the course of these experiments, we have observed that the addition of excess spermidine leads to the resolubilization of the DNA aggregates. In this work, we describe this phenomenon, which is also observed with the cations spermine and cobalthexamine. We compare the precipitation by multivalent cations with the classical salting out obtained at high salt concentration (9) and with the formation of coacervates (10). We analyze the structures obtained with the three cations and discuss their biological significance. MATERIALS AND METHODS DNA Precipitation-Short DNA fragments were prepared from calf thymus as described in Ref. 8. The length of the fragments obtained ranged from 130 to 600 base pairs, with 50% of 146 (±7) base pairs. DNA precipitation was induced by the addition of the following polycations: spermidine (3HCl, Fluka) (initial concentration, 392 mM; prepared in a TE buffer: 10 mM Tris HCl, pH 7.6, 1 mM EDTA), cobalthexamine (3HCl, Fluka) (250 mM in TE buffer), and spermine (4HCl, Fluka) (400 mM in TE buffer). Two types of experiment were performed. First, the DNA concentration was kept constant at 1 mg/ml, and the salt concentration was varied. The NaCl concentration was varied from 4 mM to 1.2 M; the multivalent cation concentration was increased from 1 to 225 mM. In a second series of experiments the DNA concentration was varied from 1 μg/ml to 1 mg/ml. The NaCl concentration was kept constant at 25 mM, and the multivalent cation concentration was increased from 0 to 100 mM for spermidine and spermine. Following the addition of the salts to the solutions, the sample was vortexed, incubated at room temperature for 15 min, and centrifuged at 11,000 × g for 7 min. The amount of DNA remaining in the supernatant and in the pellet was determined by two methods: measurement of the absorbance at 260 nm or of the radioactivity of 32P-labeled DNA. For the UV absorbance, an aliquot of the supernatant was recovered and diluted in a TE buffer solution containing a NaCl concentration identical to that of the sample. For the second method, 1 μg of DNA was treated with alkaline phosphatase and 5′ end-labeled with T4 DNA kinase to a specific activity of about 5 × 10⁷ cpm/μg. The labeled DNA was purified by gel filtration on a Sephadex NAP-5 column, and the eluted material (about 500 μl) was concentrated on a Centricon-10 microconcentrator (Amicon) to a final volume of about 50 μl. In a standard precipitation experiment, 10,000 cpm of this labeled DNA was added as a radioactive tracer. The radioactivity present in the pellet cannot reliably be determined by Cerenkov counting (in the 3H channel of the liquid scintillation counter). For this reason, the pellet was resuspended in 2 M NaCl, and the radioactivity present in the supernatant and in the pellet was measured in a toluene-based scintillation fluid.
The concentration of multivalent cations required to either precipitate or resolubilize DNA was taken at the midpoint of the corresponding transition (the half-point concentration between zero precipitation and the maximal value obtained in precipitation). Polarizing Microscopy-DNA was precipitated from a 1 mg/ml solution using various monovalent and multivalent salt concentrations. An aliquot of the pellet was recovered and deposited between slide and coverslip. The sample was sealed with DPX resin (Fluka) in order to prevent evaporation. The preparations were observed between linear crossed polars in a Nikon Optiphot microscope. RESULTS DNA Precipitation at a Fixed DNA Concentration-DNA precipitation from a dilute solution (1 mg/ml) by spermidine, spermine, and cobalthexamine has been studied for multivalent cation concentrations ranging from 0 to 225 mM. The corresponding precipitation curves are shown in Fig. 1. At low NaCl concentration (25 mM), a common feature is observed for the three polycations: an increasing concentration leads first to an almost complete precipitation of the DNA, and further increase leads to the resolubilization of the pellet. This soluble state observed at high multivalent cation concentrations can be reached either directly without precipitation (in which case one may speak of a suppression of the precipitation) or by resolubilization of a pellet (the term that we use here). The phenomena of precipitation and resolubilization observed here are fully reversible, as noted in the case of precipitation in our previous work (8). In the case of the trivalent cation cobalthexamine, the amount of DNA remaining in the supernatant could not be determined by the absorption at 260 nm, because of the high absorption of cobalthexamine at this wavelength. For this reason, the radioactivity method alone was used for this cation. In the case of spermidine and spermine, the two methods (spectroscopic and radioactivity) have been used and can be compared (Fig. 1, A and B). For spermidine, there is a clear quantitative difference in the amount of DNA remaining in solution at about 10 mM spermidine (maximum of precipitation) determined by UV absorption (less than 5%) and by radioactivity (about 40%). This seems to be due to fractionation: we have analyzed the length of the fragments in the pellet and in the supernatant by gel electrophoresis. The supernatant fragments are shorter than the pellet fragments, and the differences were quantified by densitometry of the gels on a PhosphorImager (data not shown). As a result, the mass amount of 5′ end-labeled DNA precipitated by spermidine is underestimated by the radioactive method. Qualitatively, however, the shapes of the two curves determined by UV absorption and radioactivity are similar. For spermine, the two curves obtained at 25 mM (Fig. 1B) are in much better agreement. We observe, however, some scattering of the experimental data in the resolubilization zone for the UV determination.
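The midpoint criterion described in the Methods above amounts to interpolating a titration curve at its half-point. The sketch below illustrates this with made-up numbers; the data array is hypothetical and is not taken from the measured curves.

```python
import numpy as np

# Hypothetical titration data: multivalent cation concentration (mM) versus the fraction
# of DNA remaining in the supernatant (illustrative numbers only, not measured values).
conc         = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
frac_soluble = np.array([1.00, 0.85, 0.45, 0.12, 0.05, 0.04])

def transition_midpoint(c, f):
    """Half-point concentration between zero precipitation (f = 1) and maximal precipitation."""
    f_mid = (1.0 + f.min()) / 2.0
    # Interpolate the concentration at which the soluble fraction crosses f_mid
    # (arrays reversed so that the x-values passed to np.interp are increasing).
    return np.interp(f_mid, f[::-1], c[::-1])

print(f"precipitation midpoint ~ {transition_midpoint(conc, frac_soluble):.2f} mM")
```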
This is apparently due to the precipitation of the aliquot of the supernatant recovered for the UV measurement when it is diluted in the TE buffer. Because of the experimental difficulties encountered with cobalthexamine and spermine, we have chosen to use the radioactivity method for the three cations. Several pieces of information can be extracted from a comparison of Fig. 1 (A, B, and C). 1) The amount of multivalent cations required to precipitate DNA varies: the tetravalent spermine is the most efficient. The trivalent cobalthexamine is about four times more efficient than the trivalent spermidine. Similar observations have been made previously in studies of the condensation of DNA by these cations (4,11,12). Spermine and cobalthexamine can lead to an almost complete precipitation. This implies that they are more efficient in the precipitation of small fragments, as previously observed for spermine at low DNA concentrations (13). 2) The amount of multivalent cations required to resolubilize DNA varies, but with a different order; the midpoint for this transition corresponds to 50 mM spermidine, 90 mM spermine, and 220 mM cobalthexamine. 3) The presence of an increased NaCl concentration prevents DNA precipitation, as observed previously for spermidine and spermine (12,14). In the case of spermidine no precipitation is observed for a NaCl concentration greater than 100 mM. This concentration does not fully prevent the precipitation by spermine or cobalthexamine. The suppressing effect can be observed at higher concentrations (500 mM for spermine, 1.2 M for cobalthexamine). The competing action of NaCl differs in the precipitation region and in the resolubilization region. In the precipitation region the presence of an increased amount of NaCl (100 mM instead of 25 mM) increases the concentration of spermine or cobalthexamine required for precipitation. This competition between mono- and multivalent cations has already been observed by several authors (3,4,15,16). In contrast, the resolubilization by excess spermine or cobalthexamine is only slightly affected by the presence of 100 mM NaCl rather than 25 mM NaCl, with the midpoint remaining unchanged. In the case of spermidine, resolubilization is also not affected by a change from 4 to 55 mM NaCl (data not shown). DNA Concentration Effects-DNA precipitation by polyamines has been studied for different DNA concentrations ranging from 1 μg/ml to 1 mg/ml (Fig. 2). In these experiments the NaCl concentration was kept constant at 25 mM. We observe again the behavior described above. The addition of polyamines first leads to precipitation; further addition resolubilizes the DNA aggregate. The amount of multivalent cations required for DNA precipitation increases with DNA concentration at low concentrations (between 1 μg/ml and 50 μg/ml). Above 50 μg/ml, this concentration dependence is no longer observed for spermidine (Fig. 2A). The efficiency of the precipitation also varies with DNA concentration; it decreases from a maximal value of 60% (for 1 μg/ml) to 30% (for 50 μg/ml) and increases back to 60% (at 1 mg/ml). Using the data of Fig. 2A, one can show that between 1 and 50 μg/ml, the concentration of spermidine at the transition midpoint increases linearly with DNA concentration according to the equation c_spermidine = α c_DNA + β, where α and β are constant parameters. This linear dependence has been previously observed for DNA condensation by polyamines (11) and cobalthexamine (4).
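As a small illustration of this linear relation (not a calculation from the paper), the reconstructed equation can be evaluated with the spermidine parameters quoted in the next paragraph (α ≈ 36, β ≈ 0.55 mM, with the DNA concentration expressed as monomer molarity, 1 mg/ml corresponding to 3 mM). It applies only in the 1-50 μg/ml regime where the fit holds.

```python
# Illustration of the aggregation relation c_spd = alpha * c_DNA + beta for spermidine,
# using the parameter values quoted in the following paragraph.
alpha = 36     # slope, mM spermidine per mM DNA monomer
beta  = 0.55   # intercept, mM

def spermidine_midpoint_mM(c_dna_ug_per_ml):
    """Precipitation midpoint; DNA expressed as monomer molarity (1 mg/ml ~ 3 mM)."""
    c_dna_mM = c_dna_ug_per_ml / 1000.0 * 3.0
    return alpha * c_dna_mM + beta

for c in (1, 10, 50):   # valid only in the 1-50 ug/ml regime where the linear fit holds
    print(f"DNA {c:3d} ug/ml -> spermidine midpoint ~ {spermidine_midpoint_mM(c):.2f} mM")
```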
In the case of spermidine, we obtain α ≈ 36 and β ≈ 0.55 mM (the DNA concentration being expressed in terms of monomer concentration; 1 mg/ml corresponds to 3 mM). These values agree with those obtained previously (11). In contrast with the linear dependence observed for precipitation, the resolubilization by an excess of polyamines is essentially independent of DNA concentration between 1 μg/ml and 1 mg/ml. This is easily seen for the resolubilization by spermine (Fig. 2B) and is illustrated in a better way for spermidine below (see Fig. 3A). The data of Fig. 2 (A and B) have been used to draw the schematic phase diagram shown in Fig. 3 (A and B). This phase diagram is composed of three regions: two monophasic regions and one region where phase separation occurs. The boundaries of this phase separation region correspond to the concentration of polycations required to precipitate (lower bound) or resolubilize (upper bound) DNA. The choice of these boundaries (midpoint of the transition) is somewhat arbitrary; however, the overall shape of the phase diagram is not modified by choosing values of the polycation concentration corresponding to from 20 to 72% of the DNA remaining in solution (data not shown). (In Fig. 3A, the two points indicated by asterisks correspond to extrapolated midpoint values obtained from Fig. 2A; the lines are guides for the eye.) The phase separation region is much wider for the tetravalent cation spermine (Fig. 3B) than for the trivalent spermidine (Fig. 3A). As observed above, the upper bound is essentially independent of the DNA concentration (Fig. 3A illustrates this fact clearly for spermidine). This contrasts with the lower boundary. Phase diagrams similar to those obtained here were described for another anionic polyelectrolyte (polystyrene sulfonate) in the presence of the trivalent cation LaCl3 and the tetravalent cation Th(NO3)4 (17). For this polymer also, the presence of increasing amounts of NaCl shrinks the phase separation region. For DNA in the presence of spermidine, the lower boundary gives a funnel-shaped phase separation region. The change in slope of the lower boundary coincides with the minimum efficiency of the precipitation process. We do not understand this correlation. Nature of the DNA Aggregates-DNA aggregates obtained in the presence of polyamines or cobalthexamine have been observed by polarizing microscopy. The nature of the aggregates obtained in the presence of polyamines depends on the NaCl and polycation concentrations, in contrast with the aggregates obtained in the presence of cobalthexamine, where a unique phase is usually observed. In the presence of spermidine, there exist two types of condensed structures: a cholesteric liquid crystalline phase and a more concentrated phase (8). The relative amounts of the two phases are modulated by the ionic conditions: one finds either a pure cholesteric phase (not illustrated), a biphasic state (Fig. 4A), or a pure concentrated phase (not illustrated). These structures have been analyzed in greater detail elsewhere (D. Durand, J. Doucet, and F. Livolant, submitted for publication). Pure concentrated phases are seen for spermidine concentrations just above the aggregation concentration, whereas pure cholesteric phases are found for NaCl or spermidine concentrations close to the resolubilization concentration. We observe the same phenomenon for spermine; in Fig. 4B a pure concentrated phase is seen in the presence of 10 mM spermine and 25 mM NaCl. The structure is that of a dense cluster of germs that cannot always be seen individually.
Between crossed polars, the transmitted light is more intense than in the homologous phase with spermidine, which suggests a denser packing of the DNA helices. A pure cholesteric phase is observed in the presence of 90 mM spermine, 25 mM NaCl (Fig. 4C). In this last figure we note the presence of spherical germs with concentric layers, a texture characteristic of a cholesteric liquid crystal (18). A common feature of the cholesteric phases obtained in the presence of spermidine and spermine is their fluidity; they both flow spontaneously. For both phases the helical pitch is much larger than the cholesteric helical pitch obtained in the presence of NaCl alone (about 20 versus 2.5 µm (19)). DNA condensed in the presence of cobalthexamine yields a single phase (Fig. 4D), which is highly birefringent. This suggests a high packing density of the molecules, in agreement with the interhelical spacing measured by x-ray diffraction: 27.5 Å (5) or 28.2 Å (6) for cobalthexamine, compared with 29.2 Å for spermine (5) and 29.4 Å for spermidine (5,6). This structure does not flow spontaneously. The fluidity of this phase is therefore highly restricted but can be established by squeezing the sample between slide and coverslip. The structure is probably that of a columnar hexagonal liquid crystal. The time required for the appearance of the structure shown in Fig. 4D is much longer than the time required for the organization of the condensed states in the presence of polyamines. Following the addition of cobalthexamine to a DNA solution, the condensed phase first presents a structure identical to the structure obtained in the presence of spermine (Fig. 4B). This organization lasts a few days. The evolution to the texture seen in Fig. 4D requires about 1 week. In contrast, this state is reached in a few hours with polyamines. The structure shown in Fig. 4D is the only one seen with cobalthexamine; cholesteric phases have never been seen, not even when an excess of cobalthexamine is present (corresponding to 55% DNA resolubilization). Precipitation of Polyelectrolytes by Salts: Salting Out versus Complex Coacervation-The precipitation of polyelectrolytes by salts is often called "salting out." It is useful to clearly distinguish between a classical salting out behavior and the phenomenon observed here. The classical salting out describes the precipitation observed in the presence of high concentrations (typically 1 M or greater) of salts (usually monovalent). The precipitation is thought to be due essentially to the reduction of the activity of water in the solution that results from the high concentration of the hydrated salt ions (9). The activity of the polyelectrolyte is therefore increased, and its solubility decreases. The classical salting out behavior can be observed for DNA in the presence of LiCl; precipitation takes place for concentrations greater than 9 M (20,21). In contrast with the high monovalent salt concentrations required in a classical salting out, the precipitation by multivalent cations occurs at low concentrations that are not expected to greatly modify the activity of water. We are therefore not dealing with a classical salting out here. Bungenberg de Jong has described the precipitation of numerous polymers and colloids under various conditions (10).
According to his nomenclature, the precipitate can be an ordered solid (a true crystal) or can be in an amorphous state, either solid (a flocculate) or liquid (a coacervate). He has specifically described the precipitation of numerous polyelectrolytes in the presence of micro-ions or polyelectrolytes and introduced the term complex coacervation to describe the phase separation occurring in such systems. In our experiments, we observe such a phase separation, but the precipitate, instead of being amorphous, is a highly ordered fluid. Nevertheless we propose that it should be considered as a complex coacervate (as already done for polylysine-DNA complexes (22)). We note here that the term "complex coacervation" is often specifically used to describe a complex between oppositely charged polyelectrolytes (23). However, the original definition of Bungenberg de Jong encompasses the condensation of DNA by 3+ and 4+ cations (p. 336 in Ref. 10). The difference between complex coacervation and a classical salting out can be illustrated in the following manner. Starting from a DNA-spermidine complex coacervate, the addition of increasing amounts of LiCl leads first to the resolubilization of the DNA aggregate (at about 0.2 M LiCl) and then to the reprecipitation corresponding to the classical salting out (which is observed at 12 M LiCl) (data not shown). Precipitation/Resolubilization of Polyelectrolytes by Multivalent Salts-Bungenberg de Jong has shown that solutions of highly charged polyelectrolyte chains precipitate upon addition of multivalent salts and resolubilize with further addition of the salts; this behavior has been observed for several types of polyelectrolytes, including nucleic acids (sodium nucleate), and for various multivalent cations, including cobalthexamine and a hexavalent cation (hexol nitrate). Similar observations have also been reported for polystyrene sulfonate (17) and DNA (this work). How can we explain precipitation and resolubilization? Counterion condensation alone is not sufficient to explain the appearance of an attractive force between similarly charged polyelectrolytes in the presence of multivalent cations (1). Net electrostatic attractive forces are obtained when charge fluctuations are taken into account. Correlated counterion fluctuations can lead to an attractive interaction, as first proposed by Oosawa (24). The effect of charge fluctuations on the interaction of highly charged polyelectrolytes in the presence of multivalent cations has also been recently studied by Olvera de la Cruz et al. (25). The authors used a model where condensed ions are considered as a random charge along a flexible polymer. They showed that the precipitation by multivalent salts can be explained by a short-range electrostatic attraction and that the resolubilization at high salt concentrations is due to the screening of the short-range electrostatic attraction. Their model accounts almost quantitatively for the experimental data obtained for the precipitation of polystyrene sulfonate. Another mechanism that could also explain the resolubilization phenomenon is that of a charge reversal. There are several reasons why such a mechanism should be considered. First, it is well known experimentally that the resolubilization in the presence of multivalent cations can correlate with a charge reversal; several examples (for instance sodium nucleate plus a hexavalent cation) are discussed by Bungenberg de Jong (10), who notes that transgression of solubility usually correlates with charge reversal.
In the case of DNA, we know that already 90% of the charge is neutralized in the DNA aggregate (3,4); because DNA resolubilization is produced by increasing the concentration of multivalent cation, resolubilization should be accompanied by an increased binding of the cation. The possibility of a charge reversal is therefore not unlikely. We note in this respect that the charge reversal of DNA in aqueous MgCl2 solution has been observed for concentrations in excess of 1 N MgCl2 (A. Papon and U. P. Strauss, unpublished results). We have tested this hypothesis with a DNA sample dissolved in a buffer containing 70 mM spermidine (in which 70% of the DNA is soluble). This sample was submitted to electrophoresis in an agarose gel equilibrated in the same buffer, which was circulated during electrophoresis. The DNA migrates toward the anode (data not shown). This indicates that the net charge of DNA remains negative and seems to rule out the charge reversal hypothesis. To summarize, a purely electrostatic description of the behavior of polyelectrolytes in the presence of multivalent cations (25) accounts qualitatively for the phenomena of precipitation and resolubilization. It is clear, however, that this description is not sufficient to account for all the phenomena observed, in particular the existence of two distinct phases in equilibrium and their liquid crystalline nature in the case of polyamines. In addition, we observe that cations carrying the same 3+ charge have different behaviors; both the nature of the aggregates and the concentrations required for precipitation and resolubilization differ. Thus there exist specific ion effects depending not only on their charge but also on their structure. Several forces are supposed to be implicated in the stabilization of the DNA aggregates: van der Waals' interactions (26), cross-links by condensing counterions (5), and hydration forces (6). We have argued previously that the existence of cross-links with spermidine is incompatible with the fluidity of the condensed phase (D. Durand, J. Doucet, and F. Livolant, submitted for publication). We can extend this reasoning to the case of spermine. Our findings support the delocalized binding expected from the counterion condensation theory (2) that is observed for spermine in NMR (27) and photoaffinity cleavage experiments (28). We conclude from these observations that the existence of localized sites and salt bridges is unlikely here. This does not rule out their existence for specific DNA sequences or in nonaqueous solvents. It is worth mentioning that the existence of the two types of aggregated phases reported here in the presence of spermidine and spermine has also been observed with short oligonucleotides (dodecamers) in the presence of spermidine, spermine, and cobalthexamine (29,30). These experiments and ours cannot be directly compared, because the solvent conditions are different. The oligonucleotides are aggregated in the presence of methyl-pentane-diol or polyethylene glycol 4000. Biological Implications-The biological implications of our results can be discussed from three points of view: physiological, biochemical, and evolutionary. From the physiological point of view, polyamines are ubiquitous compounds that are involved in numerous cellular processes (31,32). Their binding to DNA and also to RNA suggests that these compounds play an important role in nucleic acid function. The concentration of polyamines increases markedly upon stimulation of RNA synthesis; they have also been implicated in protein synthesis.
Here we have observed that spermidine and spermine are able to condense DNA into highly fluid anisotropic structures (cholesteric liquid crystal). In contrast, the inorganic cation cobalthexamine only yields an anisotropic structure that does not flow spontaneously. We propose that this ability to interact with nucleic acids in a dense phase while preserving some fluidity is an essential feature required for their biological functions. This proposal makes specific predictions that can be tested. 1) The ability to condense nucleic acids into highly fluid anisotropic structures should also be observed with other nucleic acid structures such as RNA or nucleosomal DNA. In the case of RNA, liquid crystalline structures have already been observed for transfer RNA in the absence of polyamines (33). We expect these nucleic acids to give rise to highly fluid anisotropic structures when condensed by polyamines. 2) There exist prokaryotic and eukaryotic mutants deficient in polyamine biosynthesis that are auxotrophic for polyamines (32). The addition of synthetic polyamine analogs can sometimes restore growth in these mutants. We expect the ability of these polyamines to restore growth to correlate with their ability to give rise to highly fluid anisotropic structures with nucleic acids in vitro. In particular, analogs that do not yield such structures should not be able to restore growth. We would like to make it clear that we do not propose that DNA is condensed in cellular systems by polyamines alone (a point of view already expressed by Gosule and Schellman (34)). Clearly, several factors are involved in the stabilization of compact forms of DNA, such as specific proteins (histones in eukaryotes or histone-like proteins in prokaryotes), macromolecular crowding (35), or DNA supercoiling. It appears that these different factors are able to cooperate in the stabilization of compact forms of DNA; macromolecular crowding, for instance, decreases the amount of cobalthexamine (6) or histone-like proteins (36) required for DNA condensation, and DNA supercoiling facilitates DNA condensation by divalent cations (37). Our proposal deals rather with the required fluidity of the condensed structure present in the cells; the factors that are physiologically involved in DNA compaction should preserve fluidity. The biological significance of the resolubilization observed at higher polyamine concentrations is at present unclear, because the concentrations required (typically about 50 mM) appear unphysiological (the intracellular polyamine concentrations are in the millimolar range (about 10 mM) (31, 32)). It is possible, however, that there exist locally high concentrations of polyamine that are involved in decondensing rather than condensing processes or that the concentration required for an in vivo effect is lower than in in vitro experiments. In this respect, we note that the highest concentrations of polyamines are generally found at the G1 phase of the cell cycle (32). The suggestion has been made that these high concentrations are required in the cell's preparation for DNA synthesis. From the biochemical point of view, it is well known that polyamines are often used in vitro for the study of the functional properties of nucleic acids (see Ref. 8 and references therein). They can increase the efficiency of different enzymatic systems. It is also known that they can be present in sufficient concentrations to precipitate the nucleic acids (32).
A correlation between the stimulatory effect and the aggregation of nucleic acids has been demonstrated in several cases leading to the suggestion that the aggregate should be fluid. For instance the catenation of DNA by topoisomerases requires the aggregation of the DNA molecules by spermidine (16). The inorganic cation cobalthexamine is also able to stimulate these enzymatic reactions albeit generally in a less efficient manner. We propose that this lower efficiency of cobalthexamine results in part from the lack of fluidity of the condensed DNA phase. One way to investigate the role of the fluidity of the DNA phase in such experiments is to induce DNA condensation prior to the addition of the enzyme (in contrast with most experiments where DNA condensation and the addition of the enzyme are simultaneous). In an aggregate lacking fluidity the stimulatory effect on the action of the enzyme should be very low or even absent. Finally, the experiments reported here can also be discussed from an evolutionary perspective. The ubiquity of polyamines among the cells is likely to reflect their antiquity. In addition to their compacting action and their role in the protection of DNA against shear degradation and UV irradiation, polyamines can increase the biochemical activities of DNA as noted above. All these properties led Baeza et al. (38) to propose that "compacted forms of DNA induced by polyamines may represent a primordial DNA genome." The observation that such aggregates are fluid strengthens this proposal. We have seen above that such aggregates can be considered as coacervates. It is useful to recall here Oparin's classic proposal on the role of coacervation in prebiotic chemistry (39): coacervation is considered as an essential concentrating process by which mixtures of randomly formed prebiotic polymers initially in dilute solutions are condensed into concentrated assemblies. The phase separation of the polymers into separate coacervate droplets is thought to provide the appropriate medium required for the evolution of these prebiotic systems. According to the current view, Oparin's proposal suffers from the defect that it considered polypeptides rather than nucleic acids as a model for the primeval gene. The proposal of Baeza and co-workers as well as our present results should help reactualize Oparin's coacervation model in the perspective of a DNA (or an RNA) world.
7,287
1996-03-08T00:00:00.000
[ "Chemistry", "Biology" ]
Investigation of Physico-chemical Properties and Characterization of Produced Biosurfactant by Selected Indigenous Oil-degrading Bacterium. Background Due to the amphipathic properties of biosurfactants, which act on surfaces and interfaces, interest from a variety of industries, such as the cosmetic, pharmaceutical, bioremediation, and petroleum-related industries, has recently increased. Methods Detection of a high-efficiency biosurfactant producer from soil contaminated with crude oil, using preliminary screening methods, was carried out at the Microbiology Laboratory at Shahid Beheshti University, Tehran, Iran in 2013. Then, after characterization of some physico-chemical properties of the produced biosurfactant and of the optimal production conditions, purification and complete identification were carried out. Results Pseudomonas aeruginosa sp. ZN was selected as a high-efficiency biosurfactant-producing strain from soil contaminated with oil from Ahvaz City, Khuzestan Province, southern Iran. Biosurfactant production in modified BH2 culture medium supplemented with 1% n-hexadecane occurred during the exponential phase, resulting in a reduction in surface tension from 70 to 29 mN/m. Strain ZN produced biosurfactant with properties different from those of other reported Pseudomonas strains. These characteristics included continued production over a C/N ratio range of 10-40; the produced biosurfactant could not break a stable emulsion of Span-80-kerosene:Tween-80-distilled water (30:70) within 24 h. The produced biosurfactants were able to increase the hydrophobicity of bacterial cells to 55%. Recovery of biosurfactants from the cell-free supernatant was performed with acid precipitation and ammonium sulfate precipitation. Chemical analyses, such as spraying techniques on the developed TLC plate and staining methods applied to the supernatant, indicated that the produced biosurfactants were glycolipids, characterized by ESI-MS analysis of the extracted product as di-rhamnolipids. Conclusion The ability of this strain to produce biosurfactant in the presence of cooked oil and n-hexadecane makes it a promising candidate for the biodegradation of some derivatives of crude oil and for the food industry. Biosurfactants are surface-active compounds produced by a variety of prokaryotic and eukaryotic microorganisms (1). Biosurfactants are categorized into lipoproteins, lipopolysaccharides, phospholipids, glycerides, and glycolipids according to their chemical groups and microbial origins (2,3). Interest in the use of biosurfactants in different industries, such as the bioremediation, biodegradation, petroleum, medicine, cosmetic, and food industries, has increased in recent years due to their advantages over chemical surfactants, including biodegradability, lower toxicity, better environmental compatibility, and specific activity at extreme temperatures, salinities, and pHs (1,4). The composition of culture media and chemophysical parameters strongly influence production kinetics, biosurfactant congeners, and properties (5,6). Owing to their amphiphilic nature, biosurfactants not only form uniform water-in-oil and oil-in-water emulsions but could also dehydrate emulsions, which is a promising technique in petroleum-dependent industries (7). The purpose of this study is the isolation of a high-efficiency biosurfactant-producing strain among hydrocarbon degraders and the complete identification of the produced biosurfactant based on purification, preliminary characterization, and ESI-MS results.
Bacterial Isolates Ten mg of soil contaminated with crude oil, collected from Ahvaz City, Khuzestan Province, southern Iran in 2013, was suspended in 100 ml of buffer (solution A of the modified Bushnell-Haas 2 (BH2) medium); after shaking at 130 rpm, 5 ml of this suspension was added to modified BH2 medium supplemented with 1% (v/v) crude oil. For re-inoculation, after 20 d, 1 ml of the fermentation broth was transferred to fresh medium. Finally, 100 µl of culture broth was streaked on nutrient agar and incubated at 30 °C for 24 h. Culture media conditions and isolation of a high-efficiency producing bacterium The composition of the nutrient broth for preparing the inoculum was as follows (g/L): beef extract. To detect biosurfactant producers, some preliminary screening methods were performed (8)(9)(10). 16S rDNA gene sequence analysis For partial 16S rDNA sequence analysis, following isolation of the bacterial chromosomal DNA with the High Pure PCR Template Preparation Kit (Roche Applied Science, Germany), polymerase chain reaction (PCR) was carried out using CinnaGen PCR Master Mix (Cat No. PR8252C) containing MgCl2, PCR buffer, dNTP mixtures, and enzymes. Afterward, primers and template DNA were added. In this study, one primer set, 27f (forward primer 5'-AGAGTTTGATCCTGGCTCAG) / 1510r (reverse primer 5'-TACGGYTACCTTGTTACGACTT), was used to amplify conserved sequences of the bacterial 16S rDNA. The final reaction volume was 50 μl, including 25 μl Master Mix, 1 μl of each primer, and 1 μl template DNA, with the rest of the volume made up with sterile deionized water. PCR amplification was carried out using a thermal cycler (Techne, UK) with the following program: initial denaturation at 94 °C for 5 min, then 33 cycles of denaturation at 94 °C for 30 sec, primer annealing at 56 °C for 55 sec, and primer extension at 72 °C for 1 min. The PCR product was sequenced by CinnaClone Company and sequence homologies were determined with NCBI. A consensus neighbor-joining tree was constructed using Molecular Evolutionary Genetics Analysis (MEGA) software version 5.0. The sequencing result was submitted to GenBank with accession number KX808675. Kinetics of biosurfactant production and measurement of surface tension and CMD Culture medium (modified BH2) supplemented with 1% (w/v) n-hexadecane and an initial OD600 of 0.1 was used to produce biosurfactant over a 6-day incubation. At regular intervals, 10 ml of sample was taken for determination of OD600, surface tension (ST), and critical micelle dilution (CMD) at room temperature with the du Noüy ring method using a K-8 tensiometer (Kruss, Hamburg, Germany). Critical micelle dilution (CMD) is a measure of the dilution factor required to reach the level of the critical micelle concentration (1). Effect of carbon sources and C/N ratios on biosurfactant production In order to determine the optimum conditions for biosurfactant production, modified BH2 culture medium with different carbon sources (decane, n-hexadecane, glucose, glycerol, paraffin, and cooked oil) at a concentration of 1% (w/v) was used and, after selection of the best carbon source, different C/N ratios (10, 20, 25, 30, 35, and 40) were tested. The culture media were incubated at 30 °C, pH 7, on a rotary shaker at 130 rpm for 10 d. Biosurfactant production and its activity were investigated by the oil spreading test and E24, respectively, at regular time intervals. Emulsification Index (E24) Two ml of paraffin was added to the same volume of cell-free supernatant in a glass test tube. The tube was mixed vigorously for 2 min and then left at room temperature for 24 h.
E24 was calculated as (h1/h0) × 100, where h1 is the height of the emulsified layer (mm) after 24 h and h0 is the total height of the liquid column (mm) (11). Bacterial adhesion to hydrocarbon test (BATH test) The BATH test is indicative of bacterial cell hydrophobicity in the presence of biosurfactant. One ml of fermentation broth was transferred to a microcentrifuge tube and centrifuged to separate the bacterial cells. Then, the cell pellets were washed and suspended in the buffer part of the modified BH2 medium to obtain an OD600 of approximately 1, measured as Ai. 200 µl of hydrocarbon (n-hexadecane or n-heptadecane) was added and vortexed vigorously for 2 min. After 30 min under static conditions, the OD600 (Af) of the aqueous phase was measured and cell hydrophobicity (CH %) was calculated as (1 − Af/Ai) × 100 (5,12). Antimicrobial activity Antimicrobial activity was investigated against Escherichia coli as a gram-negative bacterium, Bacillus subtilis as a gram-positive bacterium, and Candida albicans as a yeast, with the disk diffusion method. After incubation for 24 h at 37 °C, the diameters of the inhibition zones were measured (12,13). Demulsification experiment For preparing the stock solutions of kerosene and Tween-80, 0.8 g of Span-80 (Sigma, Lot No. 98k09888) and 1 g of Tween-80 were added to 1 L of kerosene and 1 L of distilled water, respectively. Before use, they were stirred for 1 min. To identify model emulsions, different ratios of Span-80-kerosene:Tween-80-distilled water (70:30, 60:40, 50:50, 40:60, and 30:70) were prepared. The total volume was 10 ml. Then, they were mixed on a vortex at maximum speed for 3 min and incubated at 30 °C under static conditions. After 24 h, the most stable emulsion was selected as the model emulsion. To carry out the demulsification assay, 1 ml of cell-free supernatant was added to 9 ml of model emulsion, mixed vigorously for 3 min to form a uniform emulsion, and incubated for 24 h (7). Recovery and purification of biosurfactants Two methods were carried out to recover biosurfactants: acid precipitation and ammonium sulfate precipitation. For the acid precipitation method, the pH of the cell-free supernatant was decreased to below 2 with HCl or H2SO4, followed by storage at 4 °C overnight. The pellet containing biosurfactants was recovered by centrifugation (10,000 × g, 20 min) and dissolved in sodium bicarbonate (pH 8.6); in the next step, acidification was carried out again (5). In order to carry out ammonium sulfate precipitation, 40% of this compound was added to the cell-free supernatant and incubated overnight at room temperature. The floating materials were collected by centrifugation (10,000 × g, 20 min) and dissolved in distilled water (14). For further extraction, the fraction containing biosurfactant was washed with the same volume of ethyl acetate three times (15). Although a large volume of impurities was removed in the former recovery steps, there were some residual hydrocarbons and bacterial metabolites that co-extracted with the biosurfactant. To address this problem, and also to separate the different biosurfactant congeners, liquid column chromatography was an advantageous technique. A 26 × 3.3 cm column was filled with a slurry of 20 g silica gel 60 (200-425 mesh) in ethyl acetate and then loaded with 1 g of biosurfactant in ethyl acetate. To remove the hydrocarbons used as carbon sources, the column was initially washed with ethyl acetate and after that washed with 40 ml of different ratios of ethyl acetate:methanol (100, 90:10, 60:40 and 100).
The oil spreading test was carried out for each fraction (16). Preliminary biosurfactant characterization Biosurfactant partially purified by acid precipitation and column chromatography was used for the whole process of biosurfactant characterization. The primary biosurfactant structures were determined with a variety of spraying techniques on thin layer chromatography (TLC) plates. In this method, the developing solvent was ethyl acetate:hexane at a ratio of 80:20. The spraying methods were as follows: ninhydrin, iodine vapor, and Molisch reagents for the detection of amino acids, lipid domains, and carbohydrate compounds, respectively (17,18). In addition to these spraying methods, the chemical characteristics of the biosurfactant were also quantitatively determined with staining assays, such as the Bradford and anthrone assays, which were performed on the cell-free supernatant. The Bradford assay is used to detect and quantify amino acids, and bovine serum albumin was used as the calibration standard (5). The anthrone reagent is used to detect and quantify the amount of glycolipid, and a pure sample of glycolipid was used for preparation of the standard curve (15). Electrospray ionization mass spectra (ESI-MS) Individual rhamnolipid congeners were identified by electrospray ionization tandem mass spectrometry with an LCQ quadrupole ion trap mass spectrometer (Finnigan MAT, San Jose, CA, USA). The sample was dissolved in methanol at a concentration between 0.01-0.5 mg/ml and negative ion mass spectra were recorded in the m/z range of 50-1200. This experiment was carried out under the following conditions: syringe 5 µl/ml, nitrogen sheath gas and auxiliary gas at 20 and 35 (arbitrary units), respectively, spray voltage of 4.5 kV, capillary temperature of 250 °C, capillary voltage of 47 V, and tune lens offset of 40 V (15). Statistical Analysis Optimization experiments were carried out in triplicate and repeated-measures ANOVA was used to determine significant differences among carbon sources and C/N ratios (SPSS, ver. 19.0, Chicago, IL, USA). The result was considered significant if P < 0.05. Screening of biosurfactant-producing microorganisms After isolation of purified bacterial colonies, they were inoculated in MSM supplemented with 1% (v/v) crude oil. Finally, the best crude oil degrader and emulsifier was selected. The appearance of this colony was wet and convex, and some morphological and biochemical properties were as follows: gram-negative, motile, rod-shaped; the results of catalase, oxidase, nitrite reduction, and denitrification tests were positive. 16S rDNA gene sequences indicated that this strain was a species of Pseudomonas aeruginosa, designated sp. ZN, with accession number KX808675 (data not shown). The effects of different carbon sources and C/N ratios on biosurfactant production The oil spreading curve showed that cooked oil and n-hexadecane were the best carbon sources for biosurfactant production, giving the maximum diameter of the clear zone and the shortest lag phases, 2 and 3 d, respectively, while strain ZN could not use paraffin for biosurfactant production (Fig. 1). Moreover, cooked oil and n-hexadecane had a major effect on biosurfactant production because there was a significant difference between them and the other hydrocarbons. The E24 test showed that the biosurfactants produced in the presence of cooking oil and n-hexadecane were able to form a stable and thick emulsion layer, of approximately 65%.
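The emulsification index and cell hydrophobicity reported here are simple ratios (defined in the Methods above); a minimal sketch of the arithmetic, with function names and example values of our own choosing:

```python
def emulsification_index_e24(emulsified_layer_mm: float, total_column_mm: float) -> float:
    """E24 (%) = (h1 / h0) * 100, with heights measured after 24 h."""
    return (emulsified_layer_mm / total_column_mm) * 100.0

def cell_hydrophobicity_percent(od600_initial: float, od600_aqueous_after: float) -> float:
    """BATH test: CH (%) = (1 - Af/Ai) * 100."""
    return (1.0 - od600_aqueous_after / od600_initial) * 100.0

# Illustrative values only, chosen to match the magnitudes reported in the text:
print(emulsification_index_e24(13.0, 20.0))    # -> 65.0 (%)
print(cell_hydrophobicity_percent(1.0, 0.45))  # -> ~55.0 (%)
```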
In the case of different C/N ratios, the pattern of biosurfactant production was the same for all tested ratios: initially slower in the first 3 days and then with higher rates for all, with no significant differences between the tested C/N ratios (Fig. 2). Kinetics of biosurfactant production In the modified BH2 medium containing 1% (w/v) n-hexadecane, P. aeruginosa sp. ZN grew to an OD600 of 3.5 within 3 d; over this period, the surface tension of the cell-free supernatant decreased from 70 to 29 mN/m, its minimum level. Cell hydrophobicity (BATH test or CH %) increased immediately after incubation to its maximum value (about 55%); then, toward the end of the incubation time, it gradually decreased to approximately 40% (Fig. 3). Although the strain reduced the surface tension to about 29 mN/m after 3 d, its CMD was higher compared with what was observed after 6 d. The produced biosurfactant showed weak antibacterial activity against B. subtilis. A mixture of Span-80-kerosene:Tween-80-distilled water with a ratio of 30:70 produced the most stable emulsion after 24 h and was selected as the model emulsion. Results for demulsifying activity showed that the biosurfactants could not break the model emulsion after 24 h at room temperature. Structural characterization of biosurfactant Results of the preliminary TLC spraying assays with ninhydrin, iodine vapor, and Molisch reagent on the developed plates revealed that the biosurfactants produced by this isolate gave three different spots (at 13, 57, and 65 mm) with a low amount of amino acids, which may be due to the presence of residual cell debris, and a large amount of carbohydrate and lipid groups. These results were confirmed by staining assays on the cell-free supernatant with Bradford and anthrone reagents, which showed a large amount of glycolipids. The exact characterization obtained with ESI-MS indicated the presence of four different congeners of rhamnolipid: Rha-Rha-C10 (479 m/z), Rha-Rha-C8-C10 (621 m/z), Rha-Rha-C10-C10 (649 m/z), and Rha-Rha-C10-C12:1 (675 m/z) (Fig. 4). Discussion Strain ZN was selected as a high-efficiency biosurfactant-producing strain through preliminary screening methods, and the 16S rDNA sequencing result showed that the isolated strain exhibited the highest similarity (99%) to P. aeruginosa. In the presence of n-hexadecane as carbon source, biosurfactant production occurred in the exponential phase. A similar result was reported when P. aeruginosa MR01 grew in the presence of glucose. Although some biosurfactants, such as surfactin, are considered primary products and are produced during the exponential phase, limitation of some essential medium ingredients, especially nitrogen sources, played a key role in biosurfactant production (5,17,19). The maximum rhamnolipid production was reported for P. aeruginosa UKMP 14T in the presence of glycerol and ammonium sulfate as carbon and nitrogen sources with a C/N ratio of 14:1 (20). Reports that the best C/N ratios for rhamnolipid production were 20:1 and 10:1, respectively, also confirm this issue (6,21). In the current study, however, the results of the oil spreading assay and statistical analysis showed that strain ZN was able to produce biosurfactants over a wide range of C/N ratios, from 10 to 40. In the culture medium containing n-hexadecane, the produced biosurfactant caused an increase in cell hydrophobicity of up to 55%. Production of biosurfactants probably released lipopolysaccharides (LPS) from the cell surface of the bacterium and increased the cell hydrophobicity (22). Owing to their amphiphilic properties, some biosurfactants are able to break stable oil-in-water or water-in-oil emulsions (23).
In addition, the use of rhamnolipid for the demulsification of waste crude oil was cited for the first time, with an efficiency of 98% (24). However, the current study showed that this biosurfactant could not destabilize the model emulsion prepared with Span-80-kerosene:Tween-80-distilled water at a 30:70 ratio. Some factors affecting demulsifying activity have been reported, such as culture age, pH, and temperature. Therefore, pH and temperature, through their effects on the ionization of emulsion ingredients or of any ionized groups on the bacterial surfaces and on the viscosity of the oil phases, respectively, could affect demulsifying activity (25)(26)(27). Many species of Pseudomonas produce rhamnolipid, but there are some reports of lipopeptides produced by P. fluorescens BD5 and P. putida (28,29). Acid precipitation is commonly used to purify glycolipids (rhamnolipids), while solvent extraction is frequently used for purifying lipopeptides, although some authors have used acid precipitation as well (30,31). Ammonium sulfate precipitation is used to purify the majority of high-molecular-weight biosurfactants with a considerable amount of protein content (16). However, very few articles have used this method to purify rhamnolipids. In this method, after centrifugation, the floating materials contained a high concentration of biosurfactants (14). The results of the preliminary characterizations showed that the produced biosurfactants contained some hydrocarbon and lipid compounds and suggested that they were glycolipids, as expected from a species of P. aeruginosa. Moreover, the result of ESI-MS confirmed rhamnolipid production by this isolate and showed four different congeners of rhamnolipids (Rha-Rha-C10, Rha-Rha-C8-C10, Rha-Rha-C10-C10, and Rha-Rha-C10-C12:1) in the partially purified biosurfactant. Conclusion P. aeruginosa sp. ZN produced rhamnolipids during the exponential phase, and a wide range of C/N ratios between 10 and 40 had no significant effect on the production process. On the other hand, the ability to produce biosurfactant in the presence of cooked oil and n-hexadecane indicates that the use of this strain for the biodegradation of light fractions of crude oil and of food industry wastes is promising. Ethical considerations Ethical issues (including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors.
4,072.6
2018-08-01T00:00:00.000
[ "Environmental Science", "Biology", "Chemistry" ]
The Use of Discriminant Analysis to Assess the Risk of Bankruptcy of Enterprises in Crisis Conditions Using the Example of the Tourism Sector in Poland : The aim of this article is to use multiple discriminant analysis (MDA) and logit models to assess the risk of bankruptcy of companies in the Polish tourism sector in the crisis conditions caused by the COVID-19 pandemic. A review of the literature is used to select models appropriate to analyze the risk of bankruptcy of tourism enterprises listed on the Warsaw Stock Exchange (WSE). The data are from half-year financial statements (the first half of 2019 and 2020, respectively). The obtained results are compared with the current values of the Altman EM-score model and selected financial ratios. An analysis allowed the estimation of the risk of bankruptcy of enterprises from the tourism sector in Poland as well as the assessment of the prognostic value of these models in the tourism sector and the risk of a collapse of this market in Poland. The article fills the research gap created by the negligible use of solvency analysis of the tourism sector and constitutes the basis for estimating the risk of collapse of the tourism sector in a crisis situation. Introduction The tourism sector is one of the industries most affected by the coronavirus pandemic. The sharp decline in global demand for tourism services is a factor; the COVID-19 pandemic reduced the turnover of the global tourism sector by more than 50% in the first half of 2020. An important consideration in the context of the second wave of the pandemic, lockdowns, and restrictions on the activities of tourism enterprises is the risk of bankruptcy of businesses in this sector and the risks for the tourism industry, the share of which in the GDP of Poland in 2019 exceeded 6.3%. In particular, the bankruptcy of tourism firms could cause serious losses for the government and businesses involved, hindering economic development (Li et al. 2013). In fact, the tourism sector is extremely vulnerable to any crises because fixed costs are usually high. In Poland, the entire tourism market is worth approximately PLN 30.9 billion and has grown at a rate of approximately 7% annually in the last three years. The growth was driven by, among other factors, increasing consumption, a rise in household income, and social benefits such as Family 500+, a program of monthly child benefits implemented in Poland from April 2016 (PMR 2019). The SARS-CoV-2 pandemic, however, disturbed these conditions. The three-month lockdown in the first half of 2020, travel restrictions, prohibitions on the organization of large events, fairs, and conferences, and the total paralysis of tourism have left many companies struggling to maintain liquidity. Another economic lockdown in early January 2021 may cause many of them to collapse. Financial distress is one of the most important threats facing firms, regardless of their size and operations (Charitou et al. 2004). Fitzpatrick (1932) and Beaver (1966) were the first to use single variable analysis in the assessment of the possibility of firm bankruptcy. Fitzpatrick pointed out that the development of selected corporate indicators differs for a long time in groups of insolvent and solvent companies before financial distress occurs (Kliestik et al. 2018). Altman (1968), in his Z-score model, used multiple variable analyses in evaluating bankruptcy risk. 
Using financial data from 33 prosperous and 33 nonprosperous companies, 22 variables were considered in the construction of the model. The model correctly classified 70% of the companies. In 1977, the Z-score model was expanded by Altman et al. (1977) to improve its accuracy (Zeta model). Both the Z-score and Zeta models are specific forms of multiple discriminant analysis (MDA), with all its assumptions and limitations (Li et al. 2013). Default risk prediction of restaurants was first explored by Olsen et al. (1983) using ratio analysis. Gu (2000) and Gu and Gao (1999) indicated that MDA can be used successfully in forecasting the default risk in tourism. Other authors have used logit to predict the default risk for hotels and restaurants, e.g., Cho (1994). It has also been shown that MDA and logit models have the same effectiveness in predicting the bankruptcy of restaurants. The use of MDA and logit models to assess the bankruptcy of companies in the tourism sector in Poland is rare. Gołębiowski and Pląsek (2018) investigated 20 MDA and logit models forecasting default risk on a sample of 30 companies (18 solvent and 12 insolvent) from the tourism industry in Poland. The highest t−1 and t−2 accuracies were found in domestic models: the Wędzki model (t−1 accuracy = 91.67%), the Prusak model (t−2 accuracy = 83.33%), and the Gajdka and Stos model (t−2 accuracy = 81.94%). The most accurate foreign model for predicting bankruptcy was the Altman model for emerging markets (Altman EM-score). In addition to MDA, artificial neural networks (ANNs) are also used (Atiya 2001). ANNs do not have the statistical constraints of discriminant analysis. In addition, their ability to represent nonlinear relationships makes them well-suited to modeling the frequently nonlinear relationship between the likelihood of bankruptcy and commonly used variables (i.e., financial ratios) (Laitinen and Laitinen 2000). ANNs allow us to determine the significance of variables in the model and to use big data (Agosto and Ahelegbey 2020; Cerchiello et al. 2020). The efficiency of classification using ANNs is often compared with the effectiveness of other methods (discriminant analysis, logit models), and the ANN method is becoming extremely popular. However, the limitation of these methods is the necessity to choose the right tool (Alaka et al. 2018; Chung et al. 2008). The aim of this study is to assess the risk of bankruptcy of companies in the tourism sector in Poland in the crisis conditions caused by the COVID-19 pandemic using discriminant analysis. As we will prove, the COVID-19 pandemic has significantly influenced the risk of bankruptcy of enterprises from the tourism service sector in Poland. This article fills the research gap created by the negligible use of discriminant analysis on the tourism sector in Poland and constitutes the basis for estimating the risk of collapse of the tourism sector in a crisis. The problem is new and important because the impact of the pandemic on the tourism sector is extremely significant, and there are no such studies on the Polish tourism sector yet. It is obvious that there are likely to be some comprehensive studies in the future, but our article already signals some problems that the crisis caused by the pandemic will surely aggravate. Results We estimated the value of the Z function for the surveyed companies using three models: Prusak, Gajdka and Stos, and Altman's EM-score. Table 1 presents the results for the first half of 2019.
The value of the Z function for the Prusak model indicates that in the first half of 2019, six out of nine of the companies that were analyzed were at risk of bankruptcy, two were in the grey zone, and only Mex Polska SA was in a good financial situation. In the case of the Gajdka and Stos model as well as Altman's EM-score model, seven companies were in the grey zone or were at risk of bankruptcy. On the other hand, both Gajdka and Stos and Altman's EM-score indicate no risk of bankruptcy for CFI Holdings SA and Interferie SA. Figure 1 presents the number of enterprises in each classification, according to the different models. Table 2 shows the results obtained for the same companies based on the data for the first half of 2020. The value of the Z function for the Prusak model for the first half of 2020 indicates a risk of bankruptcy (distress zone) for all companies. According to the Gajdka and Stos model, only CFI Holdings was in a good financial situation. Using Altman's EM-score, CFI Holdings can also be considered safe. It also indicates that Interferie SA was in a good financial situation. Figure 2 presents the number of enterprises in each classification, according to the different models. As shown by the discriminant analysis, the risk of bankruptcy of the surveyed enterprises increased significantly in the first half of 2020, compared to the same period in 2019. According to the Prusak model, all nine of the companies that were analyzed were at risk of bankruptcy. The other two models indicate a lower number of companies at risk of bankruptcy.
However, it is worth emphasizing that the value of the Z function for all companies decreased. This proves the deterioration of the financial situation of the enterprises that were analyzed, compared to the same period in 2019. This is also confirmed by the analysis of the dynamics of operating profit and fixed assets (Table 3). The value of fixed assets over the period studied rose for six companies, which means that these companies increased their fixed assets. The largest decrease in the value of fixed assets was recorded by Sfinks Polska SA (a decrease of 32.8%). Although there was a decline in the value of fixed assets at Mex Polska SA and Benefit Systems SA, it was small (1-2%). Only Tatry Mountain Resorts obtained operating profit, achieving an EBIT increase of 24.32% from the first half of 2019 to the first half of 2020. A similar situation occurred in the case of CFI Holdings SA-the company generated operating profit, but in 2020, compared to 2019, there was a 23.6% decrease in EBIT. In the case of Benefit Systems SA, the company achieved operating profit in both periods, but in 2020, EBIT decreased by 87%. The remaining companies, analyzed in the first half of 2019, generated an operating profit, but for the same period of 2020, they suffered an operating loss. The greatest decrease in operating profit, by 260.57%, was recorded by Novaturas (see Table 3). Table 4 presents the following indicators: current liquidity, debt ratio, coverage ratio II, and the sales cash performance index for the surveyed companies in 2019-2020 (first half of the year). The analysis of the current liquidity ratio shows that all companies, except CFI Holdings SA, had problems with financial liquidity in the first half of 2020 as the value of the ratio does not fall within the range of 1.2-2.0. It is also worth emphasizing that in the first half of 2020, compared to the corresponding period in 2019, the value of the ratio was reduced, which proves a decrease in the financial liquidity of these enterprises. The exception is Benefit Systems SA, as the current liquidity ratio slightly increased (but not enough for the company to regain financial liquidity). The analysis of the debt ratio in 2019-2020 indicates a very high level of indebtedness for most of the companies (ratios above 0.67). The analyzed ratio was below 0.57 only in the cases of CFI Holdings SA and Interferie SA, which proves the low indebtedness of these enterprises. Moreover, the value of the debt ratio for Sfinks SA, in both 2019 and 2020, was at 1.02 and 1.22, respectively, which was influenced by the negative value of equity (net loss and loss from previous years). In the first half of 2019, the value of coverage ratio II in the cases of Novaturas AB, AmRest Holdings, Mex Polska, Benefit Systems SA, and Sfinks Polska was below 1.0, which proves that some parts of the fixed assets of the companies were financed with short-term liabilities. For the remaining companies, the value of the indicator was over 1.0. The situation changed in the first half of 2020 because only two companies-CFI Holdings and Tatry Mountain Resorts-had a ratio above 1.0, which means that their fixed assets were financed by fixed capital. The remaining companies did not meet this rule, which may indicate long-term financial instability. The value of the sales cash efficiency index for all the surveyed companies decreased, which proves a decrease in the amount of cash generated by sales revenues. This decrease indicates a growing risk of losing financial liquidity. 
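The liquidity and debt measures discussed above are simple quotients of balance-sheet and cash-flow items; the following minimal sketch (the data-class and function names are ours, not taken from the paper) shows how they might be computed from half-year statement data:

```python
from dataclasses import dataclass

@dataclass
class HalfYearFigures:
    total_assets: float
    total_liabilities: float
    current_assets: float
    current_liabilities: float
    non_current_assets: float
    non_current_liabilities: float
    equity: float
    operating_cash_flow: float
    total_revenue: float

def debt_ratio(f: HalfYearFigures) -> float:
    """Total liabilities / total assets; roughly 0.57-0.67 is considered acceptable."""
    return f.total_liabilities / f.total_assets

def current_liquidity_ratio(f: HalfYearFigures) -> float:
    """Current assets / current liabilities; roughly 1.2-2.0 is considered acceptable."""
    return f.current_assets / f.current_liabilities

def coverage_ratio_ii(f: HalfYearFigures) -> float:
    """(Equity + non-current liabilities) / non-current assets; below 1, fixed capital does not cover fixed assets."""
    return (f.equity + f.non_current_liabilities) / f.non_current_assets

def sales_cash_performance(f: HalfYearFigures) -> float:
    """Net cash from operating activities / total revenue."""
    return f.operating_cash_flow / f.total_revenue
```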
It should also be emphasized that three companies-Novaturas AB, Rainbow Tours SA, and Interferie SA-recorded a negative balance in cash flows from operating activities in the first half of 2020. We also estimated the value of the Z function for the Wędzki logit model. Table 5 presents the results for the first half of 2019 and 2020. Based on the analysis of the logit model, we find that in the first half of 2019, 55% of the surveyed companies were at risk of bankruptcy, and another 45% were in a good financial situation. This changed in the first half of 2020 when only two companies-CFI Holdings SA and Tatry Mountain Resorts-showed a good financial situation and were not threatened by bankruptcy. The remaining companies were at risk of bankruptcy. Discussion It should be noted that MDA models have some limitations (Altman and Narayanan 1997). They assume that financial ratios are normally distributed and that the variance-covariance structures of insolvent and solvent firms are equivalent. In practice, both of these assumptions rarely hold up (Ezzamel et al. 1987). Logit regression models do not have these assumptions but can produce biased estimates, especially in small-sample studies (Firth 1993). Wu et al. (2010), Grice and Dugan (2001), and Pitrova (2011) have shown that the accuracy of the prediction of MDA models is significantly reduced when the model is used in another industry, at another time, or in a different trading environment than the data used to derive the model. Therefore, it is essential to develop a model for each country, accepting its economic, political, and entrepreneurial uniqueness. On the other hand, according to Mandru (2010, "The diagnosis of bankruptcy risk using score function"), Altman's model is still solid and durable, despite being formed more than 30 years ago. This view has been confirmed by other studies (Li and Ragozar 2012; Satish and Janakiram 2011). When it comes to debt ratios, the financial structure of a firm is assumed to be a significant failure-related factor in the hospitality business (Geng et al. 2015; Gu 2000; Sun et al. 2017; Zhou 2013). Nevertheless, it should be noted that financial structure was found to be significantly correlated with Spanish hotel failures (Lado-Sestayo et al. 2016) but not with failures of large Spanish hotels (three-star or higher; Gemar et al. 2016). Although early research tended to ignore cash-based ratios, these ratios also demonstrated predictive capacity in a number of studies (Gilbert et al. 1990; Sung et al. 1999; Ravisankar et al. 2010). It has also been shown that a hospitality firm is more likely to fail when its operating cash flows are low and total liabilities are high. All of the models that we used showed a visible deterioration in the financial situation of the enterprises that were analyzed. The number of companies at risk of bankruptcy increased significantly (an average of three companies for the first half of 2019 and five for the first half of 2020). Selected financial ratios also deteriorated. As the situation of almost all of the companies in the sector has deteriorated dramatically, the Polish government should consider the default risk of tourism companies before making important decisions, such as creating anticrisis solutions for the tourism sector. It is necessary to monitor the economic stability of the industry as well as to invest and grant loans. As the crisis persists for an extended period, the industry will require fiscal support to avoid mass defaults.
Regulatory forbearance on debt can ease the solvency of the tourism industry; on the other hand, it may create long-term risks as it is not helpful in improving structural issues. Lockdowns will strain the tightening economic conditions due to rising healthcare costs and unemployment. Tax deferrals will reduce the amount collected by the exchequer, and providing subordinated loans and equity support will be a significant drag on public funds. However, if there is no intervention, bankruptcies on an unprecedented scale may occur in this sector (Jamal and Budke 2020; Hoque et al. 2020; Rodríguez-Antón 2020). We do recognize the limitations of our research. Understandably, the risk of internal and external factors, other than the pandemic, that may affect the risk of bankruptcy cannot be excluded. On the other hand, external factors can have a synergic effect on bankruptcy; usually, external factors enhance the possibility of internal factors manifesting. Mackevičius et al. (2018) have shown that even in the case of successfully operating enterprises, negative external factors can have a huge negative impact. Finally, as indicated by Narkunienė and Ulbinaitė (2018), some comparative analysis with nonfinancial performance indicators that complement financial indicators should be used. The future direction of the research is its continuation based on the results for the entire year of 2020, with an analysis of the effectiveness of the presented predictions. The research should be extended to include other enterprises from the sector and not only companies listed on the WSE. Future research should also measure the impact of government initiatives to support the recovery of tourism. Materials and Methods As seen in the results from the research in the literature, in the case of enterprises from the tourism industry, the most effective Polish discriminant models are those by B. Prusak (Prusak 2005) and J. Gajdka and D. Stos (Gajdka and Stos 2003), alongside the logit model by D. Wędzki (Wędzki 2005). In the case of foreign models, the Altman model for emerging markets (Altman EM-score) is of the highest quality (Altman and Hotchkiss 2005; Gołębiowski and Pląsek 2018). Thus, these three models were used to assess the financial condition of companies listed on the WSE. In order to standardize and transparently interpret the results of the study, the same nomenclature for the classification rules was adopted: safe zone (financially sound), grey zone (uncertain status), and distress zone (at risk of bankruptcy). In addition to discriminant models, we also used a single-branch, non-collinear logit model by D. Wędzki. The form of the models used, with the interpretation of the Z function, is presented in Appendix A (Table A1). All of the companies we examined were from the WSE sectors: travel agencies, hotels and restaurants, and recreation and leisure. There were six companies from the Hotel, Restaurant, Catering/Café (HoReCa) sector, two companies from the travel agency group, and one company from the recreation and leisure sector. The analysis was based on data from financial statements for the first half of 2019 and the first half of 2020. In the case of Tatry Mountain Resorts SA, the financial statements were prepared for the period 1 January 2018-30 April 2019 and 1 November 2019-30 April 2020, and an analysis was made for this time range.
In order to deepen the analysis of the bankruptcy risk of the surveyed enterprises, apart from the discriminant models and the logit model, we used the analysis of selected financial indicators. For this purpose, we used the debt ratio, the coverage ratio II, the current liquidity ratio, the sales cash efficiency index as a measure of dynamic liquidity, and the dynamics of operating profit (Jagiełło 2013; Podstawka 2017; Sierpińska 2020; Sierpińska and Wędzki 2010). Calculation formulas and interpretation of the selected indicators are included in Appendix A (Table A2). Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest. Appendix A. Table A1. The mathematical form of the models and the interpretation of the Z function. Prusak model: Z_P = 1.4383 x_1 + 0.1878 x_2 + 5.0229 x_3 − 1.8713, where x_1 = (net profit + depreciation and amortization) / total liabilities, x_2 = operating costs / current liabilities, and x_3 = gross margin / total assets. Interpretation of the Z function: Z_P ≥ −0.295 safe zone (SZ); −0.7 ≤ Z_P ≤ 0.2 grey zone (GZ); Z_P < −0.295 distress zone (DZ). Table A2. Calculation formulas and interpretation of selected financial ratios. Debt ratio (DR): DR = total liabilities / total assets; the indicator should be in the range of 0.57-0.67, a value above 0.67 means a high credit risk, and a low value indicates a high share of equity in liabilities. Coverage ratio II: (equity + non-current liabilities) / non-current assets; a coverage ratio II < 1 means that fixed capital (equity + long-term liabilities) does not cover fixed assets. Current liquidity ratio: current assets / current liabilities; the correct value of the indicator should be in the range of 1.2-2.0. Sales cash performance index: net cash from operating activities / total revenue; an increase in the value of the ratio over time means more cash from sales and higher security of maintaining financial liquidity.
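For readers who want to reproduce the screening step, the sketch below applies the Prusak Z_P formula and the zone thresholds from Table A1, together with two of the ratios from Table A2, to illustrative balance-sheet figures. The input numbers and helper names are hypothetical and are not data from the surveyed companies; the grey-zone check is ordered first because the extracted thresholds overlap.

```python
# Minimal sketch (hypothetical inputs): Prusak Z_P score and ratios from Tables A1-A2.

def prusak_z(net_profit, depreciation, total_liabilities,
             operating_costs, current_liabilities,
             gross_margin, total_assets):
    x1 = (net_profit + depreciation) / total_liabilities
    x2 = operating_costs / current_liabilities
    x3 = gross_margin / total_assets
    return 1.4383 * x1 + 0.1878 * x2 + 5.0229 * x3 - 1.8713

def classify(z, cutoff=-0.295, grey=(-0.7, 0.2)):
    # Thresholds as reported in Table A1; grey zone tested first since the ranges overlap.
    if grey[0] <= z <= grey[1]:
        return "grey zone (GZ)"
    return "safe zone (SZ)" if z >= cutoff else "distress zone (DZ)"

# Illustrative (made-up) half-year figures, in thousand PLN.
firm = dict(net_profit=-1200.0, depreciation=800.0, total_liabilities=15000.0,
            operating_costs=9000.0, current_liabilities=7000.0,
            gross_margin=-500.0, total_assets=20000.0)

z = prusak_z(**firm)
debt_ratio = firm["total_liabilities"] / firm["total_assets"]   # Table A2: total liabilities / total assets
current_ratio = 6000.0 / firm["current_liabilities"]            # hypothetical current assets of 6000
print(f"Z_P = {z:.3f} -> {classify(z)}; debt ratio = {debt_ratio:.2f}, current ratio = {current_ratio:.2f}")
```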
5,227.6
2021-04-16T00:00:00.000
[ "Business", "Economics" ]
Remission for Loss of Odontogenic Potential in a New Micromilieu In Vitro During embryonic organogenesis, the odontogenic potential resides in dental mesenchyme from the bud stage until birth. Mouse dental mesenchymal cells (mDMCs) isolated from the inductive dental mesenchyme of developing molars are frequently used in the context of tooth development and regeneration. We wondered if and how the odontogenic potential could be retained when mDMCs were cultured in vitro. In the present study, we undertook to test the odontogenic potential of cultured mDMCs and attempted to maintain the potential during culturing. We found that cultured mDMCs could retain the odontogenic potential for 24 h with a ratio of 60% for tooth formation, but mDMCs were incapable of supporting tooth formation after more than 24 h in culture. This loss of odontogenic potential was accompanied by widespread transcriptomic alteration and, specifically, the downregulation of some dental mesenchyme-specific genes, such as Pax9, Msx1, and Pdgfrα. To prolong the odontogenic potential of mDMCs in vitro, we then cultured mDMCs in a serum-free medium with Knockout Serum Replacement (KSR) and growth factors (fibroblastic growth factor 2 and epidermal growth factor). In this new micromilieu, mDMCs could maintain the odontogenic potential for 48 h with tooth formation ratio of 50%. Moreover, mDMCs cultured in KSR-supplemented medium gave rise to tooth-like structures when recombined with non-dental second-arch epithelium. Among the supplements, KSR is essential for the survival and adhesion of mDMCs, and both Egf and Fgf2 induced the expression of certain dental mesenchyme-related genes. Taken together, our results demonstrated that the transcriptomic changes responded to the alteration of odontogenic potential in cultured mDMCs and a new micromilieu partly retained this potential in vitro, providing insight into the long-term maintenance of odontogenic potential in mDMCs. Introduction Vertebrate organs, including tooth, develop upon interactions typically between epithelial and mesenchymal tissues, with one tissue component producing inductive stimuli and another one responding to the induction [1][2]. Odontogenic potential represents an instructive induction capability of a tissue to induce gene expression in an adjacent tissue and to initiate tooth development [3]. In mice, the odontogenic potential shifts from the epithelial compartment to dental mesenchyme at the early bud stage [4][5]. The inductive dental mesenchyme is able to determine the odontogenic fate of dental and non-dental epithelium [4,[6][7][8][9]. Dental mesenchymal cells isolated from prenatal or postnatal tooth germs participate in whole-tooth regeneration in mice, pigs, and rats [10][11][12]. Mouse dental mesenchymal cells (mDMCs) give rise to the whole dental pulp mesenchyme, including the odontoblasts. However, preparation of embryonic cells is time-consuming and acquisition of embryos at the right stage is laborious. In addition, embryonic tooth germ cells are inaccessible in the adult, and xenogenic embryonic tooth germ cells suffer from immune rejection, especially in humans. Thus, several easily available cell sources potentially could be employed to regenerate a whole tooth, including ecto-mesenchymal cells prepared from postnatal teeth, immortalized cell lines, and induced pluripotent stem cells. Postnatal dental pulp contains stem cells that are capable of generating a dentin-like structure lined with odontoblast-like cells [13]. 
Although dental pulp stem cells potentially could be used for dentinal repair of teeth [14], they do not induce tooth formation when recombined with dental epithelium [3]. Several immortalized cell lines have been established from mouse and human dental mesenchymal cells and display characteristics similar to the primary cells [15][16][17][18][19]. These immortalized cells express tooth-specific genes and can differentiate towards an odontoblast fate. However, immortalized cell lines derived from dental mesenchyme at the bell stage fail to induce tooth morphogenesis [20]. Recently, mouse induced pluripotent stem cells showed the potential to differentiate into mDMC-like cells through neural crest-like cells (NCLCs) [21]. Although recombinants of mDMC-like cells and incisor dental epithelium demonstrated calcified tooth germ-like structures with bone [22], whether the mDMC-like cells possess odontogenic potential was not established. Collectively, a cell source with odontogenic potential other than mDMCs has not been reported. Efficient strategies for the culture of odontogenic mDMCs are essential for the study of tooth development and would provide opportunities in regenerative medicine. However, in vitro expansion of mDMCs without impairing the odontogenic potential remains a great challenge. The odontogenic potential of mDMCs of embryonic day 14 is lost in the course of culturing [20]. Similarly, the loss of potential has also been reported for hematopoietic stem cells (HSCs). The potential of HSCs expanded in vitro is impaired in subsequent in vivo regenerative assays [23]. Various cytokine cocktails have been used to support HSC growth in vitro, and many factors are found to promote the survival and regenerative potential of HSCs [24][25]. Thus, we wondered whether supplementation with growth factors would facilitate the maintenance of odontogenic potential in vitro. In the present study, we have examined the odontogenic potential of cultured mDMCs and found a new approach to maintain the potential during culture based on the transcriptomic data of mDMCs (Fig 1). Our results showed that cultured mDMCs rapidly lost odontogenic potential. RNA-seq analysis revealed a rapid loss of the dental mesenchymal signature in cultured mDMCs and a deviation away from the neural crest. To avoid cell apoptosis and cell differentiation towards fibroblasts, Knockout Serum Replacement (KSR) was used to culture mDMCs instead of fetal bovine serum (FBS). Fibroblast growth factor 2 (Fgf2) and epidermal growth factor (Egf), which are essential for the development of the neural crest and tooth, were also used. The new culture micromilieu with KSR/Fgf2/Egf retained the expression of some dental mesenchyme-specific genes and delayed the loss of odontogenic potential by 24 h. Our work revealed the characteristics and behavior of mDMCs in culture and suggested routes for tooth regeneration from cultured mDMCs. Cell culture All animal procedures were approved by the Animal Care Committee of Peking University and the Guangzhou Institutes of Biomedicine and Health Ethical Committee (permit number: CMU-B20100106). Tooth germs of the mandibular first molar in embryonic day 14.5 (E14.5) mouse embryos were dissected using fine needles and treated with dispase to separate the dental mesenchyme from the epithelium. The isolated dental mesenchyme was digested with trypsin and filtered through a 40-μm cell sieve to obtain single cells.
mDMCs were cultured at a density of 1 × 10^4/cm^2 in Dulbecco's modified Eagle's medium (DMEM; Gibco, Grand Island, NY) supplemented with 10% FBS (Gibco), 100 U/ml penicillin, and 100 μg/ml streptomycin. To prolong the odontogenic potential of mDMCs, freshly isolated cells were cultured on gelatin-coated plates in a new medium with 10% KSR (Invitrogen, Carlsbad, CA), 20 ng/ml FGF2 (R&D Systems), and 20 ng/ml EGF (R&D Systems). Tissue recombination and subrenal culture Freshly isolated and cultured mDMCs were harvested at the indicated time points. About 1 × 10^5 mDMCs were spun down to make a cell pellet and left in the centrifuge tube to aggregate for 2 h in DMEM + 10% FBS. The cell pellet was then recombined with freshly isolated E14.5 dental epithelium as previously described [9]. All recombinants were further cultured for 24 h prior to subrenal culture in adult male ICR mice. The host mice were sacrificed 3 weeks later and the grafted tissues were harvested. Grafts were fixed in 4% PFA/PBS, embedded in paraffin, and sectioned at 7 μm. Sections were stained with H&E for histological analysis. RNA isolation and sequencing Total RNA from freshly isolated and cultured mDMCs was extracted using the RNeasy mini kit and RNase-Free DNase set (Qiagen GmbH, Hilden, Germany). The RNA library from each sample was prepared according to the instructions of the Illumina TruSeq RNA kit. (Fig 1. Design of the study: dental mesenchyme tissues from embryonic day 14.5 (E14.5) mice were digested with trypsin; freshly isolated mouse dental mesenchymal cells (mDMCs) were divided into three groups: one group was recombined with embryonic dental epithelium and cultured under the kidney capsule, the second was submitted for RNA-seq, and the third was cultured in vitro and harvested at the indicated time points.) Quantitative reverse transcription PCR (qRT-PCR) Total RNA was extracted with Trizol and complementary DNA was synthesized using an RT-PCR kit (TaKaRa Bio, Otsu, Japan). Real-time PCR was performed in triplicate in a Thermal Cycler Dice™ Real Time System with SYBR Green Premix Ex Taq™ (TaKaRa Bio). The primers are listed in Table 1 (e.g., Dlx1: forward 5'-GGC TAC CCC TAC GTC AAC TC-3', reverse 5'-TTT TTC CCT TTG CCG TTA AAG C-3'). RNA expression was normalized to Actin and to the freshly isolated samples using the 2^−ΔΔCt method. Cell proliferation analysis Cell Counting Kit-8 (CCK-8; Dojindo, Tokyo, Japan) was used to measure cell viability according to the protocol. Briefly, freshly isolated mDMCs were seeded in a 96-well plate and cultured in the incubator for 6 h to allow cell attachment. Subsequently, the medium was replaced with FBS- or KSR-supplemented medium. After the indicated time, 10 μl of CCK-8 solution was added to each well and incubated for 3 h at 37°C. The optical density was measured at an absorbance of 450 nm using a microplate reader (ELx800; BioTek Instruments, Inc., Winooski, VT, USA). Bioinformatics analysis RNA-sequencing reads were trimmed for adaptor sequence, mapped to the mouse transcriptome (mm10, Ensembl v73), and then aligned using bowtie (v1.0.1) and RSEM (v1.2.12). Differentially expressed genes were identified using DESeq2 (v1.12.0); a p-value <0.05 and a fold change >2 were used as the threshold to define significant differences in gene expression. The Database for Annotation, Visualization and Integrated Discovery was used to determine the GO categories and KEGG pathways, using the entire mouse transcriptome as the background gene set.
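Looping back to the qRT-PCR normalization described above, the short sketch below computes relative expression with the 2^−ΔΔCt method, using Actin as the reference gene and the freshly isolated sample as the calibrator. The Ct values are invented for illustration only and are not the study's measurements.

```python
# Minimal sketch of the 2^-ΔΔCt calculation (illustrative Ct values, not study data).
import numpy as np

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of a target gene vs. the calibrator sample, normalized to a reference gene."""
    delta_ct_sample = ct_target - ct_reference          # ΔCt of the cultured sample
    delta_ct_cal = ct_target_cal - ct_reference_cal     # ΔCt of freshly isolated cells (calibrator)
    delta_delta_ct = delta_ct_sample - delta_ct_cal
    return 2.0 ** (-delta_delta_ct)

# Hypothetical triplicate Ct values for Pax9 and Actin at 0 h (fresh) and 48 h in culture.
pax9_fresh, actin_fresh = np.array([24.1, 24.3, 24.0]), np.array([17.2, 17.1, 17.3])
pax9_48h, actin_48h = np.array([27.8, 27.6, 27.9]), np.array([17.0, 17.2, 17.1])

fold = relative_expression(pax9_48h.mean(), actin_48h.mean(),
                           pax9_fresh.mean(), actin_fresh.mean())
print(f"Pax9 expression at 48 h relative to fresh mDMCs: {fold:.2f}-fold")
```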
The appropriate modules in glbase were used for hierarchical clustering and principal components analysis (PCA) [26]. Other RNA-seq was reanalyzed from GSE39918 [27], GSE55966 [28] and GSE29278 [29]. Statistical analysis Statistical analyses were performed using the SPSS for Windows software package (v18; SPSS Inc., Chicago, IL). Data from at least three independent experiments were used for analysis. The data were shown as means ± standard deviations (SD) and differences among groups were analyzed using one-way ANOVA. A two-tailed p value <0.05 was considered to be statistically significant. Loss of odontogenic potential in mouse dental mesenchymal cells Among the isolated mDMCs, some displayed a spindle-shaped, fibroblast-like morphology and others an elliptic morphology when adhering to the plates (Fig 2A). The cells continued to proliferate in culture and cell quantity doubled in 48 h (Fig 2B). When recombined with E14.5 dental epithelium, both freshly isolated mDMCs and molar mesenchyme tissues developed into teeth with well-differentiated odontoblasts after 3 weeks of subrenal culture (Fig 2C). The tooth-formation ratio for freshly isolated cells was 21/28 and for molar mesenchyme tissue was 11/11. The first-(120 h) and second-passage (192 h) mDMCs showed no tooth formation when recombined with E14.5 dental epithelium, suggesting that the odontogenic potential was lost during in vitro culture (Table 2). Given the possible influence of culture duration, the odontogenic potential of cells cultured for shorter periods was then examined. Recombinants with cells cultured for 48 h failed to develop into dental tissue, forming cysts with an amorphous matrix. In contrast, recombinants with cells cultured for 24 h gave rise to well-organized, tooth-like structures with a ratio of 17/28 (Fig 2C, Table 2). Collectively, these data demonstrated the culture-induced impairment of odontogenic potential in mDMCs, and mDMCs lost their odontogenic potential after 24 h in culture. Changes in the transcriptome profiles induced in vitro To reveal the underlying mechanism of the impairment of odontogenic potential, we generated gene expression profiles from mDMCs in culture using RNA-seq. Correlation analysis performed at the cell population level showed that there were 2 main clusters (Fig 3A). Cluster I comprised freshly isolated mDMCs and dental mesenchymal tissues, indicating comparable odontogenic potential between the two components. Cell populations in cluster II exhibited a high mutual positive correlation and were those associated with cultured cells. However, the link between cluster I and mDMCs cultured for 12 h or 24 h was relatively strong compared with that of cells cultured for 36 h or 48 h. PCA was used to map cell populations, and a dominant component (PC3) matching the sequence of progressive loss of odontogenic potential was identified (Fig 3B). In addition, the magnitude of transcriptional changes during culture was reflected by the number of transcripts induced or repressed at a given time point. A remarkable number of genes were differentially expressed between freshly isolated and cultured mDMCs. However, culturing mDMCs for >24 h had a comparatively minor effect on transcription (Fig 3C). To group the transcripts with similar behavior during culture, genes differentially expressed between freshly isolated and cultured mDMCs were divided further into eight clusters according to their expression patterns. 
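The grouping of differentially expressed genes by expression pattern was done with glbase in the original analysis; as a rough stand-in, the sketch below shows how profiles could be split into eight clusters with scikit-learn on a hypothetical genes-by-timepoints matrix. The data and cluster count here are illustrative, not the authors' pipeline or results.

```python
# Stand-in sketch: grouping differentially expressed genes into eight clusters by their
# temporal expression pattern (the paper's exact clustering was done with glbase).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
timepoints = ["0h", "12h", "24h", "36h", "48h"]
log_expr = rng.normal(size=(500, len(timepoints)))  # hypothetical DE-gene expression matrix

# z-score each gene across time so clusters reflect the *shape* of the profile.
z = (log_expr - log_expr.mean(axis=1, keepdims=True)) / log_expr.std(axis=1, keepdims=True)

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(z)
for c in range(8):
    members = z[labels == c]
    print(f"cluster {c}: {len(members)} genes, mean profile {np.round(members.mean(axis=0), 2)}")
```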
Each cluster included genes exclusively upregulated or downregulated during the interval in culture (Fig 3D). Functional annotation analysis was performed to gain insight into the function of genes in each cluster. Much of the transcriptional change affected genes controlling proliferation and various 'housekeeping' activities such as transcription, cell motion, cell adhesion, and cytoskeletal organization. Interestingly, genes encoding skeleton-and ossification-related categories underwent downregulation during the intervals 0-12 h and 24-36 h, when the odontogenic potential of mDMCs evidently was degenerating. Throughout the culture duration, expression of genes controlling the negative regulation of cell proliferation, positive regulation of apoptosis, and cell-cycle arrest were enhanced. Collectively, mDMCs underwent a major perturbation in their biological processes that coincided with the loss of odontogenic potential. Phenotype alterations in cultured mouse dental mesenchymal cells The expression pattern of some dental mesenchyme-specific genes (Msx1, Pax9, Lhx6, Dlx1, and Dlx2) was examined using qRT-PCR and immunofluorescence analysis. In agreement with RNA-seq results (Fig 4A), these genes were downregulated in a temporal pattern approximately concordant with the loss of odontogenic potential (Fig 4B). Freshly isolated mDMCs were stained positive for Pax9 (~60%) and Msx1 (~73%). However, the fraction of cells positive for Pax9 (~25%) and Msx1 (~11%) declined remarkably during culture and was hardly detectable at 48 h (~1.5%; Fig 4C). mDMCs contained only a small fraction of cells positive for p75 (Ngfr,~30%), which had almost completely disappeared at 48 h. In addition, expression of Pdgfrα was also decreased under in vitro conditions. However, the expression of Bmp4, Wnt5a and fgf10 did not show a trend that is concordant with the loss of odontogenic potential ( Fig 4B). Thus, the temporal pattern of some dental mesenchyme-specific genes like Msx1 and Pax9 corresponds to the change in odontogenic potential, and the decreased expression of these genes may contribute to the loss of odontogenic potential. Maintenance of odontogenic potential in mouse dental mesenchymal cells in culture Since dental mesenchyme is derived from the neural crest, the RNA-seq data of tooth and neural crest obtained by others was then integrated with our data to understand the relationship of mDMCs to normal neural crest cell types. A relational network was built based on the strength of the correlation between pairs of samples ( Fig 5A). Freshly isolated mDMCs were associated closely with the neural crest cells, but particularly with the upper and lower molars. When mDMCs were cultured, they came to resemble fetal fibroblasts more closely as they lost their odontogenic potential. We then investigated whether supplementation with growth factors essential for the development of neural crest and tooth could prolong the odontogenic potential. Egf and Fgf2 were incorporated into the culture medium to inhibit cell apoptosis and promote cell proliferation. Moreover, KSR was used to skip undefined factors in FBS and avoid cell differentiation towards fibroblasts. mDMCs in the new medium proliferated robustly, with a significantly higher rate than those in the FBS-supplemented medium (Fig 5B). 
The KSR-supplemented medium increased the mRNA expression of Lhx6 and Fgf10, and maintained the expression of Pax9, Msx1, Wnt5a, Bmp4, and Dlx1, but failed to maintain the expression of Dlx2 and Alp after 48 h in culture (Fig 5C). In addition, protein levels of Msx1, Pax9, and Pdgfrα in mDMCs cultured in KSR-supplemented medium were comparable to freshly isolated mDMCs but significantly higher than those in FBS-supplemented medium (Fig 5D). The protein level of Bmp4 was slightly increased in KSR-supplemented medium and slightly decreased in FBS-supplemented medium (S1 Fig). Surprisingly, the KSR-supplemented medium was capable of extending the odontogenic potential of mDMCs, and tooth formation could be derived at 48 h (Fig 5E; 15/30; Table 2). The potential was lost gradually thereafter, and recombinants with mDMCs cultured for 72 h developed into only thin and tiny tooth-like structures, with a ratio of 7/19. Since E14.5 dental mesenchyme also possesses the capability to instruct tooth formation when recombined with epithelia of non-dental origin, mDMCs cultured in KSR-supplemented medium were recombined with E10.5 mouse second-arch epithelium. Tooth-like structures were found in the recombinants of cultured mDMCs and second-arch epithelium (Fig 5F, Table 2), confirming the retained odontogenic potential in cultured mDMCs. Thus, it is possible to maintain the odontogenic potential of mDMCs in vitro by modulating the composition of the culture medium. The effects of KSR, Fgf2, and Egf on cultured mDMCs To identify the differential effect of each individual supplement, one or two supplements were used to culture mDMCs. When Egf/Fgf2 were used and KSR was removed from the medium, mDMCs attached poorly to the culture flasks. When Egf or Fgf2 was removed, the cells displayed normal morphology but proliferated relatively slowly (Fig 6A). KSR/Fgf2 increased the expression of Fgf10, Wnt5a, and Bmp4, but decreased the expression of Dlx2. KSR/Egf increased the expression of Msx1, Dlx1, and Bmp4, while KSR increased the expression of Fgf10 (Fig 6B). Moreover, the downstream effects of Egf and Fgf2 were predicted using bioinformatic approaches. Egf was predicted to restore the expression of Egr1, Egr2, Jund, and Fos (S2 Fig). Fgf2 was predicted to restore the expression of Runx2 and Twist2 (S3 Fig). Thus, KSR is essential for the survival of mDMCs, and both Egf and Fgf2 induced the expression of some dental mesenchyme-related genes. Discussion mDMCs possess rigorous odontogenic potential; nevertheless, in vitro expansion of odontogenic mDMCs has proven to be a great challenge. In the present study, cultured mDMCs rapidly lost their odontogenic potential. The loss of epithelial-mesenchymal interaction and cell-cell communication may account for the loss of odontogenic potential when cells are grown as a monolayer. A previous study demonstrated that the odontogenic potential of mDMCs was lost rapidly in the course of culture and that mDMCs cultured for 24 h did not support tooth formation [20]. However, we successfully maintained the odontogenic potential of mDMCs for 48 h in a KSR-supplemented medium, suggesting a new route to long-term maintenance of odontogenic potential. Tooth development involves epithelial-mesenchymal interactions mediated by conserved signals in the signal families BMP, FGF, Hh, and Wnt [1][2]. Odontogenic potential shifts to the dental mesenchyme at the bud stage [4][5].
FGF and BMP are potential signals transmitting odontogenic potential from the epithelial to mesenchymal compartment [30][31]. Unexpectedly, the temporal pattern of Bmp4 and Fgf10 did not correspond to the change in odontogenic potential in our study. Expression of Fgf3 decreased dramatically in cultured mDMCs. However, restored expression of Fgf3 does not rescue the odontogenic potential of cultured mDMCs [20]. Hence, other genes could also contribute to the loss of odontogenic potential. BMPs and FGFs induce expression of several mesenchymal transcription factors, including Msx1, Msx2, Pax9, Dlx1, Dlx2, Lef1, and Lhx6 [32][33]. Several of these genes perform essential functions during tooth development, because deletion of their functions in transgenic mice results in arrest in tooth development [34][35][36][37]. In our study, Msx1, Pax9, Dlx1, Lef1, and Lhx6 were significantly reduced in cultured mDMCs, leading to impairment of the odontogenic potential and, ultimately, the failure of tooth formation. Although Bmp4 was shown to activate the expression of Msx1 [38], Bmp4 was maintained and Msx1 was downregulated in FBS-supplemented medium. The deregulation of other growth factors like Fgf8 [38] and transcription factors like Crebbp and Sp1 may lead to the decreased expression of Msx1 [39]. Recently, transfection of Pax9 and Bmp4 into iPS-derived NCLCs has been shown to promote differentiation into odontoblast-like cells [40]. Our results indicated that overexpression of genes other that Pax9 and Bmp4 are needed to confer odontogenic potential on iPS-derived NCLCs. Although the molecular mechanisms regulating the odontogenic potential are not fully understood, our data demonstrated that medium supplemented with KSR, Fgf2, and Egf prolonged the odontogenic potential of mDMCs. KSR is a defined and serum-free formulation used to support the growth of stem cells since it prevents spontaneous differentiation. Transferrin, which is contained in KSR, is necessary for development of cap-staged tooth in organ culture and stimulates cell proliferation in tooth germs [41]. In addition, insulin in KSR facilitates the proliferation of stem cells. In the present study, KSR is essential for the survival of mDMCs and cells proliferated robustly in KSR-supplemented medium. Fgf2 and Egf were also employed to maintain the odontogenic potential of mDMCs. Fgf2 facilitates the maintenance of the specific properties of various progenitor cell types, including bone marrow and dental pulp stem cells [42][43]. Egf induces the proliferation of dental tissue, and Egf down-regulation during early mouse development results in impaired tooth formation. We found that both Egf and Fgf2 contributed to the expression of certain genes that are important for tooth development. Coincidently, the combination of Egf and Fgf2 has been used to derive neural crest cells from embryonic stem cells [44][45]. Neural crest cells contribute to a variety of derivatives including teeth and are found mainly in the condensed dental mesenchyme under the enamel organ [46][47]. In the present study, the transcriptomic link between mDMCs and the neural crest was abrogated when mDMCs were cultured in FBS-supplemented medium, but Fgf2 and Egf which trigger mitosis in neural crest cells facilitated the maintenance of odontogenic potential. mDMCs cultured in KSR-supplemented medium for 72 h also expressed the dental mesenchyme-specific markers, but the morphology of regenerated tooth-like structure was abnormal. 
Thus, other regulatory signals should be considered to maintain the odontogenic potential of mDMCs. Gene expression profiling was used to explore the possible mechanism in the present study. The relationships among the transcriptome profiles of mDMCs recapitulated the progressive impairment of odontogenic potential. Genes controlling skeleton- and ossification-related activities were downregulated, providing insights into the optimization of the culture medium in the future. Besides, considering the cell heterogeneity in the dental mesenchyme [48], the inability to sustain a certain cell population may also lead to the culture-induced impairment of odontogenic potential and the altered transcriptomic profiles. However, further studies are still needed to identify the transcriptomic profiles of different cell populations in dental mesenchyme before we can determine the transition of cell heterogeneity. Moreover, three-dimensional (3D) culture scaffolds mimic the in vivo microenvironment that is essential to the function of cells. Thus, 3D culture scaffolds, instead of traditional two-dimensional (2D) culture methods, may help to prolong the properties of cultured mDMCs and therefore hold promising potential for tooth regeneration. In summary, our results showed the culture-induced impairment of odontogenic potential and provided a panoramic view of the transcriptomic landscape of cultured mDMCs. Notably, a new culture micromilieu ameliorated the impairment of odontogenic potential within 48 h, providing a clue for the long-term culture of mDMCs without compromising their odontogenic potential.
5,349.4
2016-04-06T00:00:00.000
[ "Biology", "Medicine" ]
Reflected entropy and Markov gap in Lifshitz theories We study the reflected entropy in (1+1)-dimensional Lifshitz field theory, whose groundstate is described by a quantum mechanical model. Starting from tripartite Lifshitz groundstates, both critical and gapped, we derive explicit formulas for the Rényi reflected entropies reduced to two adjacent or disjoint intervals, directly in the continuum. We show that the reflected entropy in Lifshitz theory does not satisfy monotonicity, in contrast to what is observed for free relativistic fields. We analytically compute the full reflected entanglement spectrum for two disjoint intervals, finding a discrete set of eigenvalues which is that of a thermal density matrix. Furthermore, we investigate the Markov gap, defined as the difference between reflected entropy and mutual information, and find it to be universal and nonvanishing, signaling irreducible tripartite entanglement in Lifshitz groundstates. We also obtain analytical results for the reflected entropies and the Markov gap in 2+1 dimensions. Finally, as a byproduct of our results on reflected entropy, we provide exact formulas for two other entanglement-related quantities, namely the computable cross-norm negativity and the operator entanglement entropy. 1 Introduction Quantum information has revolutionized the study of quantum many-body systems [1][2][3], offering valuable insights into their universal properties [4][5][6][7][8][9][10][11][12] and revealing hidden phases that are otherwise undetectable using local probes [13][14][15]. Central to these developments is the concept of entanglement between two parties in a pure state. Much of the focus has thus revolved around the entanglement entropy, the measure that captures the amount of EPR entanglement present asymptotically in a pure state, which is the sole form of bipartite pure-state entanglement. In contrast, in systems shared by three or more parties, several inequivalent classes of entanglement can be identified and no single measure captures all of their aspects. Although the entanglement entropy has provided crucial understanding in condensed-matter physics, quantum field theory, and holography, it cannot distinguish among the various forms of multipartite entanglement. Furthermore, the entanglement entropy of subregions is ill-defined in continuum theories, thus requiring careful UV regularization. To overcome these challenges, we aim to investigate other information-theoretic quantities that can give a more complete picture of the structure of entanglement, and which remain well-defined in the continuum limit. One such quantity is the reflected entropy, defined for bipartite mixed states [16]. In the present work, we compute the reflected entropy and Markov gap for Lifshitz groundstates composed of three parties, with the goal of obtaining a better understanding of tripartite entanglement in such theories. Note that the reflected entropy is related to two other information-theoretic quantities, namely the operator entanglement entropy [79][80][81] and the computable cross-norm or realignment (CCNR) negativity [82], both of which shall be investigated as well. In what follows, we review the definition of (Rényi) reflected entropy and we present in more detail the Lifshitz theories we shall focus on.
Definition of reflected entropy and replica formulation Consider a general mixed bipartite quantum state ρ_AB on a finite Hilbert space H_A ⊗ H_B. The state can be canonically purified as the pure state |√ρ_AB⟩ in a doubled Hilbert space (H_A ⊗ H_B) ⊗ (H_A* ⊗ H_B*). The reflected entropy S^R is defined as the von Neumann (entanglement) entropy associated to the reflected density matrix ρ_{AA*} = Tr_{BB*} |√ρ_AB⟩⟨√ρ_AB|. It is expected that, working on a lattice and taking the continuum limit, this definition coincides with that using von Neumann algebras [46]. A replica formulation of reflected entropy provides a convenient tool for computations. It involves two replica indices, m and n. The latter is the standard Rényi index, while the former is related to the (generalized) purification |ρ_AB^{m/2}⟩. The reflected entropy S^R_m(A : B) is obtained by analytic continuation n → 1, and the original reflected entropy S^R(A : B) is recovered by further taking the limit m → 1. Note that if ρ_AB is pure, the reflected entropy is twice the entanglement entropy associated to the bipartition A|B. The reflected entropy enjoys many interesting properties, see [16]. Lifshitz scalar field theory and Rokhsar-Kivelson groundstates In this paper, we are interested in a certain class of nonrelativistic quantum field theories, called Lifshitz theories, which possess the remarkable feature that their groundstate wavefunctional takes a local form, given in terms of the action of a classical model, a Rokhsar-Kivelson wavefunctional [60,75]. We shall consider both the (noncompact) z = 2 Lifshitz critical boson and its massive deformation [77] in 1+1 dimensions, described by the Hamiltonian (1.3), with canonical commutation relations [ϕ(x), Π(x′)] = iδ(x − x′), and Π(x) = −iδ/δϕ(x) in the Schrödinger picture. The groundstate can be expressed in terms of a path integral of a one-dimensional Euclidean theory, with normalization factor Z = ∫ Dϕ e^{−S_cl[ϕ]}. In one dimension, Z corresponds to the partition function of a single particle with classical action S_cl[ϕ], i.e. an (Euclidean) harmonic oscillator of "mass" M = 2 and "frequency" ω. In the critical massless case, ω = 0, the theory is invariant under Lifshitz scaling (1.1) with z = 2, and the classical action is that of a free nonrelativistic particle. The propagator of the Euclidean harmonic oscillator is given in (1.5); in the massless limit ω = 0 it reduces to the free-particle propagator (1.6). Vacuum expectation values, associated to |Ψ_0⟩, of local operators can then be expressed in terms of the above propagator. Tripartition and reduced density matrix The density matrix operator corresponding to the Lifshitz groundstate is ρ = |Ψ_0⟩⟨Ψ_0|, with implicit boundary conditions (BCs) on the full system (e.g. Dirichlet, Neumann, or periodic). In this work, we consider a class of bipartite mixed states obtained by tracing out the degrees of freedom of one of the subsystems in a tripartite pure state, which we choose to be the groundstate |Ψ_0⟩ in (1.4). More precisely, beginning with |Ψ_0⟩, we divide our system into three non-overlapping subsystems, A, B and C, and consider the reduced density matrix on A ∪ B, which is in general that of a mixed state. The reduced density matrix on A ∪ B is expressed as a path integral over the fields on C, with junction conditions at each interface between A, B, and C depending on the geometry.
Here, ϕ_A denotes fields defined on A, and similarly for ϕ_B and ϕ_C. For example, for A, B disjoint as in Figure 3a, the junction conditions at the interfaces Γ_X between X = {A, B} and C identify the field values on either side of each interface. We may leave ρ_AB unnormalized, such that Tr ρ_AB = Z is the partition function of the underlying classical model on a graph that encodes the boundary and junction conditions. Organization of the paper In Section 2, we compute the reflected entropy reduced to two adjacent or disjoint intervals from tripartite Lifshitz groundstates, both critical and massive. For two disjoint regions, the reflected entropy is finite, and the associated spectrum is discrete. For two adjacent regions, it is nonmonotonic under partial trace. We study the Markov gap in Section 3, which is found to be positive, signaling nontrivial tripartite entanglement in Lifshitz groundstates. We discuss the CCNR negativity and the operator entanglement entropy in Section 4. An incursion is made in 2+1 dimensions in Section 5, where we obtain analytic results for the reflected entropy in certain limits. We conclude in Section 6 with a summary of our main results, and give an outlook on future study. Finally, some technical details and further calculations can be found in the three appendices that complete this work. Reflected entropies In this section, we study the Rényi reflected entropies for non-compact Lifshitz theories in 1+1 dimensions. We consider the critical (massless) Lifshitz scalar and its massive deformation introduced in [77]. Extending the method developed in [69,77,83], we construct the reflected density matrix for bipartite mixed states corresponding to two adjacent or disjoint subsystems, and compute the reflected entropies for arbitrary Rényi indices m, n. As a byproduct of our results, we find that the reflected entropy for Lifshitz theories is not monotonically nonincreasing under partial trace, contrary to what is observed for free relativistic fields [33]. We also compute the spectrum of the reflected density matrix for two disjoint subsystems, finding a discrete set of eigenvalues. Critical (massless) Lifshitz scalar We study the reflected entropies for the critical Lifshitz scalar defined by the Hamiltonian (1.3), setting the mass ω = 0. The corresponding groundstate (1.4) with ω = 0 encodes the partition function of a free nonrelativistic particle whose propagator is given in (1.6). To illustrate the purification process and the replica method for the reflected entropy outlined in the introduction, let us first consider the bipartite case for which ρ_AB = |Ψ_0⟩⟨Ψ_0| is pure. We work on a finite one-dimensional system with Dirichlet BCs at both ends, as depicted in Figure 1. The partition function Z_{m,n} ≡ Tr (ρ^{(m)}_{AA*})^n of the free particle on the (disconnected) replica graph M_{m,n} can be calculated using the procedure shown in Figure 2.
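The building blocks of this graph calculus are the single-particle propagators of the previous section. For concreteness, the sketch below records the standard free-particle and Euclidean harmonic-oscillator (Mehler) kernels with "mass" M = 2; these are textbook expressions, which we assume match Eqs. (1.5)-(1.6) up to normalization conventions, and the check verifies that the massive kernel reduces to the massless one as ω → 0.

```python
# Sketch: Euclidean propagators entering the Lifshitz groundstate path integral.
# Standard free-particle and harmonic-oscillator (Mehler) kernels with "mass" M = 2;
# assumed to match Eqs. (1.5)-(1.6) up to conventions, not transcribed from the paper.
import numpy as np

M = 2.0  # "mass" of the effective 1d particle, as stated in the text

def k_free(phi1, phi2, ell):
    """Free-particle kernel: the massless (omega = 0) Lifshitz case."""
    return np.sqrt(M / (2 * np.pi * ell)) * np.exp(-M * (phi1 - phi2) ** 2 / (2 * ell))

def k_harmonic(phi1, phi2, ell, omega):
    """Euclidean harmonic-oscillator (Mehler) kernel: the massive deformation."""
    s, c = np.sinh(omega * ell), np.cosh(omega * ell)
    norm = np.sqrt(M * omega / (2 * np.pi * s))
    return norm * np.exp(-M * omega * ((phi1**2 + phi2**2) * c - 2 * phi1 * phi2) / (2 * s))

# The omega -> 0 limit of the harmonic kernel should reproduce the free kernel.
phi1, phi2, ell = 0.3, -0.1, 2.0
print(k_harmonic(phi1, phi2, ell, omega=1e-6), "vs", k_free(phi1, phi2, ell))
```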
In Figure 2, we use dashed lines for bras and kets, and solid lines to represent the traces taken, which are simply the propagators. For ρ_AB pure, we find a simple closed-form expression for Z_{m,n}, where ℓ_AB = ℓ_A + ℓ_B. The (m, n)-Rényi reflected entropy follows using (1.10), with nonuniversal constant term γ_n = (1/(n−1)) log n + log π. Taking the limit n → 1, we find the reflected entropy, with γ ≡ γ_1. Note that we have reinstated a UV cutoff ϵ; whenever there is an unequal number of ℓ_i in the numerator and denominator of a logarithm, it is understood that we should add the corresponding power of the cutoff length ϵ. This is due to the fact that the field ϕ has mass dimension −1/2, which we have not taken into account in the path integral. As expected, the reflected entropy is equal to twice the Rényi entanglement entropy [69,73], which could already be anticipated from Figure 2. (Figure 3: Two disjoint regions A and B on a finite system with Dirichlet BCs.) Disjoint regions on an interval We now turn to the more interesting case of two disjoint subsystems on an interval with Dirichlet BCs. Two such configurations are shown in Figure 3. As mentioned in the Introduction, the reduced density matrix of two disjoint subsystems is separable, implying that they are not entangled, with correlations being classical and quantum non-entangling. We first consider the configuration depicted in Figure 3a, where A and B are adjacent to the boundaries of the system. The replica graph M_{m,n} is drawn in Figure 4, and the corresponding partition function is (2.4), where M_{m,n} is a 2n × 2n matrix given in (A.2). Let us define the cross-ratio η = ℓ_A ℓ_B / (ℓ_AC ℓ_BC), which provides a nice analytic continuation in n. Plugging (2.4) together with (2.6) into (1.10), the Rényi reflected entropies follow. For two disjoint subsystems, we find that the reflected entropies are finite and, interestingly, independent of m. In the limit n → 1, we obtain the reflected entropy itself. For the more general configurations where the endpoints of A and B are not boundaries of the system (see Figure 3b), the reflected entropies are given by (2.7) and (2.8), with cross-ratio (2.9). Taking limits: As a function of η ∈ [0, 1], S^R_m(A : B) is monotonically increasing; it vanishes for η = 0 and diverges for η = 1. For two disjoint intervals adjacent to the boundaries of the system, as in Figure 3a, in the limit ℓ_C ≫ ℓ_A, ℓ_B (η → 0) of two widely separated regions, the reflected entropies behave as in (2.10). In the opposite limit of small separation ℓ_C → 0 (η → 1), the reflected entropies have the asymptotics (2.11). Identifying ℓ_C ∼ ϵ, we find that expression (2.11) does not reproduce the bipartite result (2.2b), but only half of it. This suggests that the reflected entropy is not a good regularization of the entanglement entropy, similarly to the mutual information (see Section 3). In the configuration of Figure 3b, letting ℓ_C1, ℓ_C3 → ∞ while keeping ℓ_A, ℓ_B and ℓ_C2 finite, i.e. η → 1, the reflected entropies diverge. We show in Appendix B that letting ℓ_C1, ℓ_C3 → ∞ is equivalent to imposing Neumann BCs on the endpoints. The divergence is thus due to the presence of a zero mode.
Adjacent regions on an interval We now consider the reflected entropy of two adjacent regions on an interval, as depicted in Figure 5. We draw the replica graph M_{m,3} in Figure 6, which can easily be generalized to M_{m,n}. The corresponding partition function takes the same form as before, where M_{m,n} is now a (2n + 2) × (2n + 2) matrix given in (A.3). Up to an overall factor involving m/2 − 1 powers of ℓ_A + ℓ_B, the matrix M_{m,n} is of the form (A.12). Defining the quantity u in (2.13), the reflected entropies of two adjacent subsystems are found to be (2.14a) and (2.14b), where ℓ is the total length of the system. (Figure 6: Replica graph M_{m,3} for two adjacent regions as in Figure 5. The vertices of degree two (between blue and green edges) can be neglected due to the composition property of propagators. Apart from the boundary vertices (hollow circles), we have 2n + 2 vertices: 2n of them are on a circle-like shape, where adjacent ones are connected by m/2 − 1 edges of length ℓ_A + ℓ_B; the other two vertices connect to each of these (for clarity, we color the edges of one of the two only). When m = 2, the circle vanishes. When ℓ_C1 = 0 or ℓ_C3 = 0, the circular shape breaks down, due to Dirichlet BCs. When ℓ_C1 = ℓ_C3 = 0, we recover Figure 2.) For m = 2, nice simplifications occur. Taking limits: In the limit ℓ_C1, ℓ_C3 → 0, which corresponds to the bipartite case, we recover the reflected entropy of pure states (2.2a). Taking the limit ℓ_C1, ℓ_C3 → ∞, the reflected entropies diverge logarithmically due to the zero mode. For A adjacent to the boundary of the system, that is, taking ℓ_C1 → 0 (u → 0), the reflected entropies simplify considerably. Curiously, for large systems ℓ → ∞, we get the same result as in the bipartite case ℓ_C3 → 0. Massive Lifshitz scalar In this section, we study the reflected entropies for the massive deformation of the Lifshitz scalar introduced in [77]. The calculation of reflected entropies follows exactly as before. The only changes, due to the different propagator (1.5), are in the matrices encoding adjacency and lengths, whose diagonal elements change from 1/ℓ_i to ω coth(ωℓ_i) and nondiagonal elements from 1/ℓ_i to ω csch(ωℓ_i). Quite remarkably, for all the cases considered in the previous sections, the reflected entropies for the massive theory are obtained from the massless results by the simple replacement ℓ_i → sinh(ωℓ_i), together with ϵ → ωϵ. The novel cases come from the ones with periodic BCs, which we did not consider for the massless Lifshitz scalar due to the presence of a zero mode. Disjoint regions on an interval The reflected entropies of two disjoint subsystems (see Figure 3b) in the massive case take the same forms as (2.7) and (2.8), with the "effective" cross-ratio obtained from the above replacement. Again, the reflected entropies do not depend on m. In the limit ωℓ_i → 0 for all ℓ_i, we recover the massless result. Taking limits: Consider the configuration in Figure 3a. In the highly gapped regime ωℓ_i → ∞ for all ℓ_i, we have η ∼ e^{−2ωℓ_C} and the reflected entropies vanish exponentially. In the near and far-apart limits, the reflected entropies have the same asymptotics as (2.10) and (2.11), with effective cross-ratio η = sinh(ωℓ_A) sinh(ωℓ_B) / (sinh(ωℓ_AC) sinh(ωℓ_BC)). Now consider the general configuration in Figure 3b. Since we have a mass gap, there is no divergent zero mode when taking the infinite-system limit ωℓ_C1, ωℓ_C3 → ∞. In that case, we have η = e^{−2ωℓ_C2}, such that the reflected entropies are finite. Interestingly, for infinite systems the reflected entropies do not depend on the lengths ℓ_A, ℓ_B of the two regions A, B, but only on the distance ℓ_C2 between them. In the highly gapped regime, which coincides with the large-separation regime, we have (2.19). In the opposite limit ωℓ_C2 → 0, i.e. small mass or very near A and B, we find an expression written in terms of the correlation length ξ = ω^{−1}.
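To make the role of the (effective) cross-ratio concrete, the short numerical sketch below evaluates the massless cross-ratio η = ℓ_A ℓ_B/(ℓ_AC ℓ_BC) and its massive counterpart with ℓ_i → sinh(ωℓ_i), for a Figure 3a-type configuration. The identification ℓ_AC = ℓ_A + ℓ_C and ℓ_BC = ℓ_B + ℓ_C is our reading of the notation; the sketch checks that the massive expression reduces to the massless one as ω → 0 and decays as e^{−2ωℓ_C} in the highly gapped regime, as stated in the text.

```python
# Sketch: massless vs. massive ("effective") cross-ratio for two disjoint intervals.
# Assumes l_AC = l_A + l_C and l_BC = l_B + l_C (Figure 3a-type configuration).
import numpy as np

def eta_massless(lA, lB, lC):
    lAC, lBC = lA + lC, lB + lC
    return (lA * lB) / (lAC * lBC)

def eta_massive(lA, lB, lC, omega):
    lAC, lBC = lA + lC, lB + lC
    return (np.sinh(omega * lA) * np.sinh(omega * lB)) / (np.sinh(omega * lAC) * np.sinh(omega * lBC))

lA, lB, lC = 1.0, 2.0, 3.0
print(eta_massless(lA, lB, lC))                 # 0.1 for these lengths
print(eta_massive(lA, lB, lC, omega=1e-4))      # -> massless value as omega -> 0
omega = 5.0
print(eta_massive(lA, lB, lC, omega), np.exp(-2 * omega * lC))  # ~ e^{-2 omega l_C} when omega*l_i >> 1
```

Through Eqs. (2.7)-(2.8), the Rényi reflected entropies of the two disjoint intervals depend on the geometry only through this quantity.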
Taking limits: Consider the configuration in Figure 3a.In the highly gapped regime ωℓ i → ∞ for all ℓ i , we have η ∼ e −2ωℓ C and the reflected entropies vanish exponentially In the near or far-apart limit, the reflected entropies have the same asymptotics as (2.10) and (2.11), with effective cross-ratio η = sinh ωℓ A sinh ωℓ B sinh ωℓ AC sinh ωℓ BC .Now consider the general configuration in Figure 3b.Since we have a mass gap, there is no divergent zero mode while taking the infinite system limit ωℓ C 1 , ωℓ C 3 → ∞.In that case, we have η = e −2ωℓ C 2 such that the reflected entropies are finite and read Interestingly, for infinite systems the reflected entropies do not depend on the lengths ℓ A , ℓ B of the two regions A, B, but depend only on the distance ℓ C 2 between them.In the highly gapped regime, which coincides with large separation regime, we have (2.19).In the opposite limit ωℓ C 2 → 0, i.e. small mass or very near A and B, we find where we have introduced the correlation length ξ = ω −1 . Adjacent regions on an interval The reflected entropies of two adjacent subsystems (see Figure 5) in the massive case are obtained from the massless results (2.13), (2.14a), (2.14b) by replacing The massless results are recovered in the limit ωℓ i → 0 . Taking limits: In the highly gapped regime ωℓ i → ∞, writing the correlation length as ξ = ω −1 , we have The reflected entropy for the gapped theory thus satisfies an area law for adjacent regions.It is also independent of ℓ A , ℓ B in this limit, as expected for massive excitations.For infinite systems, i.e. ωℓ C 1 , ωℓ C 3 → ∞, the reflected entropy becomes Figure 7: Two disjoint regions A and B on finite system with periodic BCs. Disjoint regions on a circle Let us now work on a system with periodic BC, i.e. a circle.We consider two disjoint regions A and B, separated by two intervals of same size ℓ C , as can be seen in Figure 7.We draw the replica graph M m,3 in Figure 8, which is straightforwardly generalized to arbitrary n.The corresponding partition function reads where M m,n is a 4n × 4n matrix of the form where Figure 8: Replica graph M m,3 for two disjoint regions on a circle as in Figure 7.We use a simplified notation here: each gray edge denotes m/2 copies of length ℓ C edges connecting the two vertices, while each green (blue) edge denotes m copies of edges of length ℓ A (ℓ B ). The reflected entropies for two disjoint subsystems on a circle hence follow as (2.29b) In the limit of small mass ωℓ i → 0 (η 1 → 1), the reflected entropies diverge.This is due to the zero mode of the massless theory on a circle. Taking limits: In the highly gapped limit, we have which is twice (2.19).We observe a logarithmic divergence at vanishing separations, and an exponential decay at large separations, Adjacent regions on a circle Finally, let us consider A and B to be adjacent intervals on a circle, as in Figure 9.The replica manifold is similar as that in Figure 6, only the boundary vertices, which represent Dirichlet BCs, must pair together.Hence, adjacent vertices on the center "circle" are connected by m/2 − 1 copies of edge of length ℓ A + ℓ B , as well as m/2 copies of edge of length ℓ C .The corresponding partition function reads where M m,n can be found in (A.4).Defining we obtain the reflected entropies Taking limits: In the highly gapped regime, we have which is identical to (2.22).In the bipartite limit ωℓ C → 0 , we find For infinite systems, i.e. ωℓ C → ∞ , S R m=1 (A : B) has the same asymptotics as (2.23). 
Reflected entanglement spectrum With the knowledge of all Rényi entropies, one can reconstruct the entanglement spectrum. The same applies to the Rényi reflected entropies and the reflected entanglement spectrum. The latter is defined as the spectrum of the normalized reflected density matrix ρ_{AA*}. Since the reflected entropy of disjoint subsystems is finite, we expect the reflected spectrum to be discrete and cutoff independent (see also [34]). Entanglement (entropy) spectrum Let us start with the entanglement spectrum, corresponding to the spectrum of the reduced density matrix ρ_A of some subregion A. Consider a region A composed of several intervals, with k boundaries in common with its complement. For Lifshitz theories (both critical and massive) in one spatial dimension, the Rényi entropies S_n(A) depend on n through a constant term only [77], where S is the UV-divergent, n-independent part. As alluded to above, the entanglement spectrum P(λ) = Σ_i δ(λ − λ_i) can be related to the moments of the reduced density matrix, R_n ≡ Tr ρ_A^n = Σ_i λ_i^n, where λ_i are the eigenvalues of ρ_A. Following [84], we define a resolvent-like function f(z) such that λP(λ) = lim_{ϵ→0} Im f(λ − iϵ). Using f(z) = e^S Li_{k/2}(e^{−S}/z) and Im Li_{k/2}(y + iϵ) = π (ln y)^{k/2−1}/Γ(k/2) for y ≥ 1 and small positive real ϵ, we find (2.41). The entanglement spectrum is thus continuous, and UV-dependent (through S). Reflected entanglement spectrum We now compute the reflected entanglement spectrum of the massless Lifshitz scalar field for disjoint regions A, B on an interval (see Figure 3a). Using (2.7), and defining q = η (1 + √(1 − η))^{−2}, we obtain the reflected entanglement spectrum (2.43). It is discrete, as expected. We show the first few eigenvalues in Figure 10. (Figure 10: Reflected entanglement spectrum for two disjoint intervals as in Figure 3a. Left: the largest four eigenvalues λ_i in the reflected spectrum as functions of the cross-ratio η. Right: the eigenvalues λ_i as functions of i, for fixed η. Note that i only takes values in the non-negative integers. The spectrum becomes flatter as η → 1.) The reflected entropy is the thermal entropy of a quantum harmonic oscillator with energies given in (2.44). The same conclusions hold for the massive Lifshitz scalar, with η = sinh(ωℓ_A) sinh(ωℓ_B) / (sinh(ωℓ_AC) sinh(ωℓ_BC)). For the massive Lifshitz scalar and two disjoint regions on a circle as in Figure 7, using (2.29a) we obtain the reflected entanglement spectrum in terms of q_a = η_a (1 + √(1 − η_a))^{−2}, with η_a defined in (2.28). The spectrum is, again, discrete.
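As a numerical illustration of the thermal interpretation, the sketch below assumes, for simplicity, that the eigenvalues form a single normalized geometric tower λ_i = (1 − q) q^i with q = η (1 + √(1 − η))^{−2}; this is our reading of the oscillator picture, not a transcription of Eq. (2.43). It checks normalization and that the von Neumann entropy of the tower coincides with the thermal entropy of a single oscillator mode.

```python
# Illustrative check (assumption: a single geometric/thermal tower of eigenvalues,
# lambda_i = (1 - q) q**i, with q = eta * (1 + sqrt(1 - eta))**(-2)).
import numpy as np

def spectrum(eta, imax=100):
    q = eta * (1.0 + np.sqrt(1.0 - eta)) ** -2
    lam = (1.0 - q) * q ** np.arange(imax)
    return q, lam

eta = 0.6
q, lam = spectrum(eta)
S_vn = -np.sum(lam * np.log(lam))                         # von Neumann entropy of the tower
S_thermal = -np.log(1.0 - q) - q * np.log(q) / (1.0 - q)  # thermal entropy of one oscillator mode
print(np.sum(lam), S_vn, S_thermal)  # normalization close to 1; the two entropies agree
```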
Violation of monotonicity A fundamental property that a correlation measure C should satisfy is that it must not increase under the local discarding of information, i.e. the following inequality must hold: C(A : BC) ≥ C(A : B). A quantity satisfying this inequality is said to be monotonically nonincreasing under partial trace, or simply 'monotonic'. It has recently been shown that the reflected entropy is not a measure of physical correlations since it is not monotonic [48]. However, for holographic theories and free fields (scalars and Dirac fermions) [33], the reflected entropy still seems to satisfy monotonicity. For Lifshitz theories, using our exact results for two adjacent intervals (2.14a) and (2.14b), we show in Figure 11 that monotonicity is violated. This provides a counterexample to the monotonicity of reflected entropy for a free theory. For the critical theory, it can be rigorously shown that monotonicity is violated for Rényi indices 0 < n < 4/3. Consider the large-system limit ℓ_C1, ℓ_C2 → ∞ of (2.14a) and (2.14b), and let v = ℓ_B/ℓ_A. The limit of the derivative of the reflected entropy with respect to v is found to be negative for large v in the range 0 < n < 4/3. It can be shown to be positive for n ⩾ 4/3 for any v > 0. This implies that the reflected entropy is not monotonic. Comparison with mutual information: the Markov gap In this section, we study a tripartite entanglement measure recently proposed [49][50][51], the Markov gap M, defined in (3.1) as the difference between the reflected entropy and the mutual information I. The reflected entropy is lower bounded by mutual information [16], hence the Markov gap is a nonnegative quantity, M(A : B) ≥ 0. As shown in [51], a nonvanishing M for contiguous subsystems A, B, C signals irreducible tripartite entanglement, beyond bipartite entanglement and GHZ-type entanglement. Here, we show that for Lifshitz groundstates the Markov gap is strictly positive, M > 0, both for critical and gapped states (except for vanishing correlation length, where it is zero), implying nontrivial tripartite entanglement. It is interesting that, despite the fact that two disjoint regions are not entangled [77], a form of nonbipartite entanglement is still shared between the three complementary subsystems. Before proceeding, let us recall certain results about mutual information in Lifshitz groundstates. Mutual information For the sake of being self-contained, we give here the (Rényi) mutual information for the various cases considered in this paper, most of which can be found in [77]. The Rényi mutual information I_n(A : B) between two subsystems A and B is defined as I_n(A : B) = S_n(A) + S_n(B) − S_n(A ∪ B), where S_n(X) is the Rényi entropy of the subsystem X. Mutual information is obtained in terms of entanglement entropies by taking the n → 1 limit, I(A : B) = lim_{n→1} I_n(A : B), and is a measure of the total correlations between A and B. The (Rényi) mutual information of two disjoint intervals in the bulk of a finite system with Dirichlet BCs (see Figure 3b) takes a simple closed form, and an analogous expression holds for two adjacent subsystems. On the circle with periodic BCs, the Rényi mutual information of two disjoint regions (as in Figure 7) is given in terms of η_{1,2} defined in (2.28), and likewise for two adjacent subsystems. In all cases, mutual information for the critical Lifshitz scalar is obtained by setting ω → 0. We note that, interestingly, the m = 2 reflected entropies equal the mutual information. The Markov gap We are now in a position to compute the Markov gap, as defined in (3.1), using the results of Section 2 on reflected entropy and the mutual information given just above.
Disjoint regions For two disjoint regions in the bulk of a finite system with Dirichlet BCs, as in Figure 3b, we find a closed-form result for the Markov gap. In this case, M(A : B) is a positive-definite, monotonically increasing function of η. For infinite systems, the expression simplifies, and the Markov gap reaches its maximum value when the distance between the two regions goes to zero, i.e. ℓ_C2 → 0 or η → 1. In the opposite η → 0 limit, which corresponds to either large separation or large mass, the Markov gap vanishes. Note that in this η → 0 limit, we have I(A : B) ≃ η/2 and S^R_m(A : B) ≃ −(1/4) η log η, which matches the conjecture of [33] (see also [85]) for CFTs relating these small-η behaviors. It is interesting that for Lifshitz theory, although not a CFT, this relation is also satisfied. For two disjoint regions on a circle (see Figure 7), the Markov gap is given by an analogous expression with η_{1,2} defined in (2.28). In the critical massless case, i.e. η_1 = 1, there is no divergence due to the zero mode, unlike for the reflected entropy and the mutual information. Furthermore, at large separation ℓ_C → ∞ the Markov gap does not vanish, but goes to M(A : B) = 1 − log 2. Note that the massless limit and the large-separation limit do not commute. In the small-separation limit ℓ_C → 0, the Markov gap also goes to a constant, M(A : B) = 2(1 − log 2). This suggests that for two disjoint regions with k pairs of very close boundaries, the gap is k(1 − log 2). Finally, at large separation ωℓ_C → ∞ or large mass, we find twice (3.10) and (3.11) for the asymptotics of M and S^R, respectively. Adjacent regions Let us now turn to adjacent subsystems. For two adjacent regions in the bulk of a finite system with Dirichlet BCs, as depicted in Figure 5, we obtain an expression in which u is given in (2.13) with the replacement ℓ_i → sinh(ωℓ_i) and m = 1. In the critical massless case and for infinite systems, the Markov gap takes values between (1/2) log 2, attained for v → 0, ∞, and a maximum attained at intermediate v. It is interesting to compare this result for Lifshitz theories with the CFT one, which is universal [51]: M(A : B) = (c/3) log 2 (with c the central charge). For CFTs, the Markov gap of two adjacent subsystems is independent of their relative size in the scaling limit, contrary to Lifshitz theory. Moreover, a nonvanishing M indicates irreducible tripartite entanglement between the three subsystems A, B, C, beyond GHZ-like entanglement. In the highly gapped case, where the size of each subsystem is much larger than the correlation length ξ = ω^{−1}, the Markov gap vanishes exponentially, which is expected for gapped systems with vanishing correlation length [51]. On a circle with periodic BCs (Figure 9), the Markov gap is given by the analogous expression with u defined in (2.34), setting m = 1. In the two limits ℓ_C → ∞ and ω → ∞, it displays the same asymptotics as for Dirichlet BCs. Computable cross-norm negativity and operator entanglement entropy The reflected entropies are related to two other information-theoretic quantities (see, e.g., [43]), namely the computable cross-norm or realignment (CCNR) negativity and the operator entanglement entropy (OEE). As a byproduct of our results on reflected entropy, we provide exact formulas for the CCNR negativity and the OEE for Lifshitz theories. Computable cross-norm negativity The CCNR negativity is based on the CCNR separability criterion [86,87]. It has recently been discussed in a variety of contexts [43,82,88,89], and it is expressed in terms of the realignment matrix R of a bipartite density matrix ρ_AB, i.e. the matrix obtained by rearranging (realigning) the indices of ρ_AB. The Rényi CCNR negativities are defined as E_n(A : B) = log Tr(RR†)^n, and the limit n → 1/2 yields the CCNR negativity E.
For separable states, E ≤ 0. A generalization of the CCNR negativity has been introduced in [43], using the realignment matrix of ρ_AB^{m/2}. The (m, n)-Rényi CCNR negativity can then alternatively be expressed in terms of the (m, n)-Rényi reflected entropy [43], such that setting m = 2 we get the Rényi CCNR negativity E_n. The (m, n)-Rényi CCNR negativity can be viewed as the unnormalized (m, n)-Rényi reflected entropy. Disjoint regions. Using our results on reflected entropy, we obtain the CCNR negativities for disjoint regions on an interval (Figure 3b) in closed form, in terms of η = ℓ_A ℓ_B / (ℓ_AC ℓ_BC). The CCNR negativity E is negative, as long as the cutoff is much smaller than all other length scales. This is consistent with the separability of the density matrix ρ_AB for disjoint subsystems [77]. When adding a mass, the CCNR negativity becomes an analogous expression with η = sinh(ωℓ_A) sinh(ωℓ_B) / (sinh(ωℓ_AC) sinh(ωℓ_BC)). It has qualitatively the same behavior as in the massless case. Adjacent regions. For adjacent regions (Figure 5), we find a closed-form result; for infinite systems, i.e. ℓ_C1, ℓ_C3 → ∞, the CCNR negativity simplifies, and adding a mass modifies it accordingly. Quite surprisingly, the CCNR negativity for two adjacent regions is cutoff independent. In fact, in this case the (m, n)-Rényi CCNR negativity is UV-finite for mn = 1. For finite systems, expressions (4.5) and (4.7) can be negative or positive. However, for infinite systems in the critical massless case, the CCNR negativity (4.5) is nonpositive and thus fails to detect entanglement. Finally, the CCNR negativity does not satisfy monotonicity, as is evident from (4.6). The nonmonotonicity of the CCNR negativity was proven in [43]. Operator entanglement entropy The Rényi OEE [79][80][81][90], which we denote as E_n, is the (Rényi) Shannon entropy of the probability distribution formed by the squared coefficients of the operator Schmidt decomposition of a (normalized) bipartite density matrix ρ_AB. Hence, the OEE is the m = 2 reflected entropy of ρ_AB. Our results for the m = 2 reflected entropy thus straightforwardly apply to the OEE. In particular, we have shown that for disjoint intervals, the reflected entropies for Lifshitz theories are independent of m, see (2.7) and (2.29a). For two adjacent intervals on a finite system with Dirichlet BCs, the m = 2 reflected entropy exactly matches the mutual information, and thus so does the OEE. Towards 2 + 1 dimensions In this section, we compute the reflected entropies for 2+1D Lifshitz groundstates on a cylinder. The idea is that, given the symmetry of the system in two spatial dimensions, one can perform a dimensional reduction along the compact direction to get a tower of 1d free massive theories. This allows us to compute 2d entropies by summing up (massive) 1d entropies. We first reproduce the 2d entanglement entropies from the 1d results, and then move on to reflected entropies in 2d in certain limits. Entanglement entropy on a cylinder Consider the (2 + 1)-dimensional non-compact free real Lifshitz scalar on a cylinder of total length ℓ = ℓ_A + ℓ_B with Dirichlet BCs at both ends, which corresponds to Figure 1 times a circle S¹ of length ℓ_y. The entanglement entropy of region A can be calculated using the replica trick and field redefinitions [62]. The universal (constant) term in the entanglement entropy is given in (5.2) [67], in terms of w = ℓ_A/ℓ and τ = iℓ/ℓ_y. Let us now reproduce this 2d result using that of 1d. The partition function on the replica manifold for the 2d theory reads (5.3): Z[M_n] = ∫_{M_n} Dϕ e^{−S_cl[ϕ]}, with S_cl[ϕ] = ∫ dx dy (∇ϕ)².
(5.3) On a cylinder, any replica manifold M n is a graph times S 1 .This S 1 symmetry allows us to perform a dimensional reduction (in the transverse y-direction) and to write (5.4) We have expanded ϕ(x, y) = ϕ 0 (x) + k̸ =0 e iky ϕ k (x), and the frequencies are ω k = 2πk ℓy .Each mode k is a complex scalar field, which can be viewed as two real scalars.Hence where Z 0 and Z k are quantum mechanical partition function of free particle and harmonic oscillators.For the entropies we have where S 0 (A) and S k (A) are the 1d entanglement entropies of massless and massive Lifshitz scalar with mass ω k , respectively, which are given by (5.7) Here we have discarded nonuniversal constant terms.Using η(τ ) = q 1 24 ∞ n=1 (1 − q n ), with q = e 2πiτ and the zeta-function regularization, i.e. 1 + 2 + 3 + • • • = −1/12, we can rewrite the sums of log sinh-terms as η-functions.Further, making use of ∞ n=1 (an) = 2π/a, the sum of log ω k can be computed in closed form, and we reproduce (5.2) exactly. The entanglement entropy of a bulk cylinder whose dimensional reduction corresponds to region C in Figure 3a can be calculated in the same fashion.Using the 1d results [77] we obtain where τ = iℓ/ℓ y and w i = ℓ i /ℓ . Reflected entropy on a cylinder Let us first consider A, B disjoint on a cylinder, whose dimensional reduction leads to Figure 3a, and the limiting regimes where the reflected entropies get simplified to allow for an analytic treatment.In the limit the 1d massive reflected entropies vanish.Using (5.6), the 2d reflected entropy is then entirely given by the 1d massless contribution with η = ℓ A ℓ B ℓ AC ℓ BC .In all these three limits, mutual information of the massive theory also vanishes, and the 2d mutual information is given by the massless contribution as well.Thus, we expect relation (3.11) to also hold for 2 + 1D Lifshitz theory, at least in the above three limits. For A, B adjacent on a cylinder, whose dimensional reduction leads to Figure 5, taking the limits ℓ A ≫ ℓ y and ℓ B ≫ ℓ y , (5.12) the reflected entropies for the 1d massive theory behaves as − log ω k (see (2.22)), whose sum leads to −(1/2) log ℓ y .Note that any constant terms in the 1d entropies will drop out after regularization.Using (5.6), the (universal term in) the 2d reflected entropy reads There are several future avenues worth exploring.An important direction would be to investigate further the reflected entropy in higher-dimensional Lifshitz theories.We took here a decisive step in that direction, though much remains to be done.For example, the study of skeletal regions [91], i.e. subregions A, B that have no volume, could lead to new analytical results in two or more spatial dimensions.Reflected entropy bears importance in the holographic context, where it is dual to the area of entanglement wedge cross-section.It would be interesting to relate our results to holography, see, e.g., [92,93].Finally, it is worth studying reflected entropy for the more general class of Rokhsar-Kivelson states-to which belong Lifshitz groundstates-to further explore the nature of tripartite entanglement in such states (see [77,78] for recent developments on their entanglement structure). A Matrices and determinants In this appendix, we list the matrices and their determinants used in the main text. 
A.1 Matrices of replica graphs In our calculations, partition functions Z m,n corresponding to replica graphs of geometric configurations are given in terms of the determinants of certain matrices M m,n as We list here such matrices for the different cases we consider. Disjoint regions on an interval.The geometry and replica graph M m,n are drawn in Figure 4, and the corresponding matrix M m,n is a 2n × 2n matrix of the form Adjacent regions on an interval.The geometry and replica graph M m,3 (which can easily be generalized to arbitrary n) are drawn in Figures 5 and 6, and the corresponding M m,n is the (2n + 2) × (2n + 2) matrix: Adjacent regions on a circle.The geometry is shown in Figure 9, and the replica manifold is similar to that in Figure 6.The corresponding matrix M m,n is given by (A.13)This can be shown by calculating the minors of M 1 , which are expressed in terms of Chebyshev polynomials of the second kind. B Neumann boundary conditions Here we consider Neumann BCs at the boundaries of a finite system.We study the effect of changing one of the BCs from Dirichlet to Neumann, see, e.g., Figure 12.We shall show that imposing Neumann BC in the massless theory is equivalent to imposing Dirichlet BC and then letting the boundary go to infinity. If we impose Neumann BC on a boundary, the boundary value ϕ N of the field is an additional anchor to be integrated on in the calculation of partition functions.In fact this integral can be done easily.Suppose that in a replica graph, this boundary vertex is connected to another inner vertex with ϕ = ϕ i with an edge of length ℓ N , then the integral over ϕ N gives dϕ b K(ϕ N , ϕ i ; ℓ N ) = 1 . (B.1) Thus, the final result does not depend on ℓ N .Now, a propagator that connects an inner vertex to a boundary vertex with Dirichlet BC contributes an exponential factor to the Gaussian integral, whose dependence on the length between the vertices disappears when sent to infinity, similarly as (B.1).Hence, imposing Neumann BCs is equivalent to imposing Dirichlet BCs and letting the boundary go to infinity.As a concrete example, let us compute the reflected entropy on a finite interval with Dirichlet-Neumann BCs with configuration shown in Figure 12.The replica manifold is similar to that in Figure 4, only with the BCs changed.Labelling the field values at these boundaries as ϕ 2,i , • • • , ϕ 2n,i where i = 1, • • • , m, we now have where One clearly sees that the final result boils down to setting ℓ B → ∞ in Section 2.1.1. For the massive case, the equivalence of Neumann BC with Dirchlet at infinity does not carry through, since (B.1) does not hold.However, direct calculation with Neumann BC can be done as well. 
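The explicit matrices of Appendix A.1 are not reproduced here, but the Chebyshev structure quoted there for the minors of M_1 is easy to check on the generic tridiagonal building block that such replica-graph determinants reduce to. The snippet below is only a toy verification of two standard identities, under the assumption of a tridiagonal block with constant diagonal 2x and off-diagonal −1; it is not the paper's exact matrix.

```python
import numpy as np

def cheb_U(n, x):
    """Chebyshev polynomial of the second kind U_n(x), via the recurrence U_k = 2x U_{k-1} - U_{k-2}."""
    if n == 0:
        return 1.0
    u_prev, u = 1.0, 2.0 * x
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

def tridiag_det(n, x):
    """Determinant of the n x n tridiagonal matrix with 2x on the diagonal and -1 on the off-diagonals."""
    M = 2.0 * x * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.linalg.det(M)

x, t = 1.7, 0.9
for n in range(1, 6):
    print(n,
          np.isclose(tridiag_det(n, x), cheb_U(n, x)),                          # det = U_n(x)
          np.isclose(cheb_U(n - 1, np.cosh(t)), np.sinh(n * t) / np.sinh(t)))   # U_{n-1}(cosh t) = sinh(nt)/sinh t
```

The second identity, U_{n−1}(cosh t) = sinh(nt)/sinh(t), is what turns Chebyshev polynomials into the sinh(n arccosh η) factors appearing in the determinants above.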
C Deflected entropy

Recently introduced in [34], the deflected entropy is defined as a "modular flowed" version of reflected entropy. It is the entanglement entropy of the following deflected density matrix. Similarly to the reflected entropy, the deflected entropy can also be calculated using a replica trick [34] and then performing the analytic continuation ρ = iσ. The minus sign comes from Hermitian conjugation. We now compute the deflected entropy of two disjoint regions, as depicted in Figure 3a, for massless Lifshitz theory. The replica graph is almost the same as Figure 4, with the difference that now we have alternating numbers (m/2 − ρ and m/2 + ρ) of length ℓ_C edges connecting two adjacent vertices. The matrix (A.2) for the deflected entropies reads

ρ^(m)_{AA*} can be obtained by first computing ρ^{m/2}_{AB} with m ∈ 2Z_+. A canonical doubling of the Hilbert space provides the simplest purification |ρ^{m/2}_{AB}⟩, that is, we turn bras of ρ^{m/2}_{AB} into kets in H_{A*} ⊗ H_{B*}. The moments Tr(ρ^(m)_{AA*})^n can then be calculated as the partition function Z_{m,n} on a replica graph M_{m,n},

Z_{m,n} = ∫_{M_{m,n}} Dϕ e^{−S_cl[ϕ]} = Tr(ρ^(m)_{AA*})^n.

(2.26) Here X is the "coefficient matrix" of the vertices on the inner and outer circle of the replica graph M_{m,n}, while Y is a diagonal matrix connecting the two circles. We have that det M_{m,n} = det(X + Y) det(X − Y), and both X + Y and X − Y are of the form (A.5) up to a total factor. Using (A.6), we find det M_{m,n} = [mω/(2 sinh ωℓ_C)]^{4n} 16 sinh²(n arccosh η)

Figure 1: Bipartite configuration of a finite system with Dirichlet BCs: two complementary regions A, B of respective lengths ℓ_A, ℓ_B. Small hollow circles denote Dirichlet BCs at the ends of the system, while a black solid dot indicates that the value of ϕ is free.

Figure 4: Reduced density matrix ρ_AB (left) and replica graph M_{m,n} (right) for two disjoint regions as in Figure 3a. There are m/2 copies of length ℓ_C edges between two vertices labeled by ϕ_i and ϕ_{i+1}.

Figure 5: Two adjacent regions A and B on a finite system with Dirichlet BCs.

Figure 9: Two adjacent regions A and B on a finite system with periodic BCs.

Figure 10: Reflected spectrum for two disjoint regions as depicted in Figure 3a. Left: the largest four eigenvalues λ_i in the reflected spectrum as functions of the cross-ratio η. Right: the eigenvalues λ_i as functions of i, for fixed η. Note that i only takes values in the non-negative integers. The spectrum becomes flatter as η → 1.

Figure 12: Configuration with both Dirichlet and Neumann BCs. As before, we use hollow circles to denote Dirichlet BCs and solid dots to denote free (Neumann) BCs.
9,959.6
2023-07-23T00:00:00.000
[ "Physics", "Computer Science" ]
Optogenetic Monitoring of Synaptic Activity with Genetically Encoded Voltage Indicators The age of genetically encoded voltage indicators (GEVIs) has matured to the point that changes in membrane potential can now be observed optically in vivo. Improving the signal size and speed of these voltage sensors has been the primary driving forces during this maturation process. As a result, there is a wide range of probes using different voltage detecting mechanisms and fluorescent reporters. As the use of these probes transitions from optically reporting membrane potential in single, cultured cells to imaging populations of cells in slice and/or in vivo, a new challenge emerges—optically resolving the different types of neuronal activity. While improvements in speed and signal size are still needed, optimizing the voltage range and the subcellular expression (i.e., soma only) of the probe are becoming more important. In this review, we will examine the ability of recently developed probes to report synaptic activity in slice and in vivo. The voltage-sensing fluorescent protein (VSFP) family of voltage sensors, ArcLight, ASAP-1, and the rhodopsin family of probes are all good at reporting changes in membrane potential, but all have difficulty distinguishing subthreshold depolarizations from action potentials and detecting neuronal inhibition when imaging populations of cells. Finally, we will offer a few possible ways to improve the optical resolution of the various types of neuronal activities. INTRODUCTION As developers of genetically-encoded voltage indicators (GEVIs) we are often asked for our best probe. Until recently, a good GEVI would have been any that gave a voltage-dependent, optical signal in mammalian cells (Dimitrov et al., 2007;Lundby et al., 2008Lundby et al., , 2010Perron et al., 2009a,b). Now the experimenter has several probes to choose from that differ in their voltage-dependencies, speed, signal size, and brightness Jin et al., 2012;Kralj et al., 2012;Han et al., 2013;St-Pierre et al., 2014;Zou et al., 2014;Gong et al., 2015;Piao et al., 2015;Abdelfattah et al., 2016). The combinations of these varying characteristics result in strengths and weaknesses of every GEVI available. There is no perfect probe that can optically resolve action potentials, synaptic activity, and neuronal inhibition in vivo. Some GEVIs will give large, voltage-dependent optical signals but are very dim limiting their usefulness in vivo. Others will give large optical signals but are very slow reducing their ability to resolve fast firing action potentials. So now, when asked which is the best probe, the answer is simply another question. What do you want to measure? To fit with the theme of this edition, we will assume that the answer to that question is synaptic activity. Several reviews have been published comparing the signal size, speed, and brightness of the GEVIs currently available at the time of publication (Wachowiak and Knöpfel, 2009;Akemann et al., 2012Akemann et al., , 2015Knöpfel, 2012;Mutoh et al., 2012;Perron et al., 2012;Mutoh and Knöpfel, 2013;Emiliani et al., 2015;Knöpfel et al., 2015;St-Pierre et al., 2015;Storace et al., 2015bStorace et al., , 2016Antic et al., 2016). In this review, we will shift the focus to one of the lesser considered characteristics of a GEVI, the voltage-sensitivity of the probe. 
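As a point of reference for the discussion that follows, the fluorescence-voltage relation of a GEVI is commonly summarized by a Boltzmann (sigmoid) fit characterized by a half-activation voltage V1/2, a slope factor, and a maximal ∆F/F. The sketch below is a generic illustration of how these parameters determine what a probe reports for subthreshold versus spiking voltages; the numbers are arbitrary and do not correspond to any specific published probe.

```python
import numpy as np

def boltzmann_dff(v, dff_max, v_half, slope):
    """Fractional fluorescence change dF/F modeled as a Boltzmann function of voltage (mV)."""
    return dff_max / (1.0 + np.exp(-(v - v_half) / slope))

# Two hypothetical probes with the same maximal response but different V_1/2 (all values arbitrary).
probes = {
    "probe A (V1/2 = -30 mV)": (-0.35, -30.0, 12.0),
    "probe B (V1/2 =   0 mV)": (-0.35, 0.0, 10.0),
}
for name, params in probes.items():
    epsp  = boltzmann_dff(-60.0, *params) - boltzmann_dff(-70.0, *params)   # 10 mV EPSP from rest
    spike = boltzmann_dff(+30.0, *params) - boltzmann_dff(-70.0, *params)   # full action potential
    print(f"{name}: dF/F for 10 mV EPSP = {epsp:+.3f}, for an action potential = {spike:+.3f}")
```

A probe whose V1/2 sits near resting potential devotes much of its dynamic range to subthreshold events, whereas shifting V1/2 toward 0 mV trades synaptic sensitivity for action-potential contrast.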
Of course, the other characteristics, especially signal size and brightness, are still important, but the range and steepness of the voltage sensitivity of the optical response have extremely important consequences on which type of neuronal activity a GEVI reports well. For instance, a GEVI with a voltage range from −20 mV to +30 mV would be perfect for monitoring action potentials but not ideal for observing synaptic potentials. Even the shape of the slope of the optical response over the voltage range of the probe will affect its performance. The consequences of the slope and voltage range should also be considered when choosing probes for monitoring neuronal activity. A BRIEF DESCRIPTION OF CURRENTLY AVAILABLE GEVIs There now exist several GEVIs with multiple mechanisms of converting membrane potential changes into an optical signal. These GEVIs fall into two main classes. One class utilizes bacterial rhodopsin to detect alterations in voltage, while the other class relies on a voltage-sensing domain (VSD) from voltage-sensing proteins. Another viable alternative for optical, neuronal recordings is hybrid voltage sensor (hVOS) which consists of a genetically encoded component, a farnesylated fluorescent protein (FP), and a quenching compound, dipicrylamine (DPA; Chanda et al., 2005;Wang et al., 2010Wang et al., , 2012Ghitani et al., 2015). The requirement for the treatment with an exogenous chemical limits hVOS use in vivo but still has value for imaging voltage in slice preparations. The molecular schematics of representative probes from these classes and their corresponding voltage ranges are shown in Figure 1. The voltage range of a generic mammalian neuron is color coded to represent different neuronal activities. The inhibitory postsynaptic potential (IPSP) voltage range is shown in blue. The excitatory postsynaptic potential (EPSP) voltage range is shown in yellow. Voltages corresponding to action potentials are color coded red. As can be seen from Figure 1, the slopes of these optical voltage responses are significantly different. This is an important consideration when measuring synaptic potentials. For instance, Butterfly 1.2 has nearly reached its maximal fluorescent change at −40 mV which would imply that differentiating subthreshold potentials from action potentials will be very difficult. Class I-The Rhodopsin-Based Probes Channel rhodopsin has revolutionized neuroscience. The rhodopsin-based voltage sensors are promising to do the same FIGURE 1 | Various types of genetically-encoded voltage indicators (GEVIs) and their voltage sensitivities. The voltage sensitivities of different GEVIs are compared. Typical voltage ranges of inhibitory postsynaptic potential (IPSP), excitatory postsynaptic potential (EPSP) and action potential are indicated as blue, yellow and red, respectively. The vertical scale bar with minus ∆F/F indicates that the fluorescence dims upon depolarization of the plasma membrane. The voltage-sensitivity curves were as reported in: Arch (D95N; modified with permision from Kralj et al., 2012, Figures 3B, 5); Ace2N-mNeon (modified with permission from Gong et al., 2015, Figure 1D Figure 1D). thing for imaging membrane potential. First developed in Adam Cohen's lab, the intrinsic fluorescence of rhodopsin as a Schiff base being protonated or deprotonated in response to voltage was used to image changes in membrane potential (Maclaurin et al., 2013). This probe, Arch, was extremely fast having a tau under 1 ms. 
The fast optical response is due in part to the fact that the chromophore resides in the voltage field enabling a nearly instantaneous response. The signal size was also large giving roughly a 70% ∆F/F optical signal per 100 mV membrane depolarization (Kralj et al., 2012; Figure 1, Arch (D95N)). Arch excelled in speed and signal size but suffered from some serious weaknesses. The first weakness was that the original version had an associated, light-induced current. The D95N mutation drastically reduced this current but also resulted in a slower probe (Kralj et al., 2012). The second weakness was that it does not traffic well to the plasma membrane. Even with the addition of endoplasmic reticulum and Golgi network release motifs, every image of a rhodopsin probe in the literature exhibits high intracellular fluorescence (Kralj et al., 2012;Flytzanis et al., 2014;Gong et al., 2014Gong et al., , 2015Hochbaum et al., 2014;Hou et al., 2014). The third and most devastating weakness was that Arch is very dim. The best versions of Arch and related probes are still at least 5× dimmer than the green fluorescent protein (GFP) requiring exceptionally strong illumination, at least 700× the light intensity required for ASAP-1 to visualize the probe activity (Flytzanis et al., 2014;St-Pierre et al., 2014). The weak fluorescence of Arch limits its use to single cell in culture studies or to C. elegans (Kralj et al., 2012;Flytzanis et al., 2014) for two main reasons. The first is that the intrinsic fluorescence of higher order neuro-systems will mask the fluorescence of Arch-type probes. The second is that ∆F in addition to ∆F/F is an important characteristic of the GEVI when it comes to the signal to noise ratio. An example of this is shown in Figure 2. The HEK cell in Figure 2 is expressing a GEVI from which the ∆F and the ∆F/F traces from three different light levels are shown (Lee et al., 2016). As can be seen from this comparison, a high ∆F/F value can be achieved by a large change in fluorescence or a small change in fluorescence when the probe is dim. Notice the increased noise in trace 3, a telltale sign of poor expression/dim fluorescence. An ingenious solution to compensate for the poor fluorescence of the rhodopsin voltage probes was developed simultaneously by the Adam Cohen and Mark Schnitzer laboratories. By fusing an FP to the rhodopsin protein, förster resonance energy transfer (FRET) enabled the rhodopsin chromophore to affect the fluorescence of the fused FP. This design reduced the excitation light intensity needed to visualize the GEVI while maintaining the speed of the optical response since the voltage-sensing chromophore was still in the voltage field. These probes could also cover different wavelengths since many different FPs could be fused to rhodopsin and give a signal Zou et al., 2014). While this made the rhodopsin probes better, the optical signal sometimes could only indirectly report neuronal activity by determining the frequency of the noise in the optical recording (See Supplementary Figure 5 in Gong et al., 2014). Now, an exciting new version using the FP, mNeonGreen, has recently been reported (Gong et al., 2015). mNeonGreen is a very bright FP (Shaner et al., 2013) enabling Ace2N-mNeon to resolve action potentials in vivo in both flies and mice. Class II-VSD Containing GEVIs The second class of GEVIs is also the oldest. 
The original GEVI, Flash (Siegel and Isacoff, 1997), was the result of inserting GFP downstream of the pore domain of the voltage-gated potassium channel, Shaker. Like the rhodopsin-based probes, the first generation of VSD-based probes had significant drawbacks making them useless in mammalian cells . The main problem was that the GEVIs did not traffic to the plasma membrane. In 2007, one of the biggest advancements in GEVI development was achieved by the Knöpfel laboratory when they fused FPs to the VSD of the voltage-sensing phosphatase gene from Ciona intestinalis (Murata et al., 2005). This probe, voltage-sensing fluorescent protein (VSFP) 2.1 trafficked well to the plasma membrane which resulted in the first voltagedependent optical signals from cultured neurons (Dimitrov et al., 2007). Another issue with VSD-based GEVIs is that the chromophore resides outside of the voltage field so the optical signal relies on the conformational change of the VSD. These probes are therefore generally slower than the rhodopsin-based probes, but a recently developed red-shifted GEVI is extremely fast having taus under 1 ms (Abdelfattah et al., 2016). There are three different designs for GEVIs that utilize a VSD. The first design uses a FRET pair flanking the VSD. An example is Butterfly 1.2 . This probe is somewhat slow and gives a very small optical signal, less than 3% ∆F/F per 100 mV depolarization. A butterfly style probe that gives a faster and larger optical signal was developed last year called Nabi (Sung et al., 2015). An advantage of FRET-based probes is that the ratiometric imaging can remove movement artifacts due to respiration and blood flow in vivo. Theoretically, a ratiometric measurement could also be used to determine the absolute value of the membrane potential since the ratio is concentration independent. In practice, however, the relative fluorescence of the two chromophores differ substantially resulting in a potential increase in the noise for the analysis of the optical signal. Often the experimenter should only analyze the brighter signal (Wilt et al., 2013). It is also difficult to only excite the donor chromophore and not the acceptor as well. These factors combined with the relatively low signal size of FRET-based probes prohibit any reliable absolute measurement of membrane potential. The second design involves a circularly-permuted fluorescent protein (cpFP) attached to the VSD. Initial designs fused the cpFP downstream of the VSD so that the chromophore was in the cytoplasm (Gautam et al., 2009;Barnett et al., 2012). Electrik PK gave very small signals less than 1% ∆F/F per 100 mV depolarization but were very fast having a tau under 2 ms. A substantial increase in signal size was achieved when the cpFP was placed between the S3 transmembrane segment and the S4 transmembrane segment of the VSD putting the chromophore outside of the cell (St-Pierre et al., 2014). This probe, ASAP-1, is one of the better GEVIs giving a fast and robust optical signal (tau = 1-2 ms and about 20% ∆F/F per 100 mV depolarization in HEK cells). ASAP-1 has a very broad voltage range which is virtually linear over much of the physiologically relevant potentials of neurons. The third design of GEVIs that utilize a VSD simply fuses the FP at the carboxy-terminus which puts the chromophore in the cytoplasm. 
During a systematic test of different FPs fused at different linker lengths from the VSD done in collaboration by Vincent Pieribone's lab and Larry Cohen's lab, a point mutation on the outside of the FP, Super Ecliptic pHlorin (Miesenbock et al., 1998;Ng et al., 2002) converted an alanine to an aspartic acid improving the optical signal 15 fold from 1% ∆F/F to 15% per 100 mV depolarization of the plasma membrane (Jin et al., 2012). This negative charge on the outside of the β-can seems to affect the fluorescence of a neighboring chromophore when S4 moves since mutations that favor the monomeric form of the FP reduce the voltage-dependent optical signal substantially . Further development of ArcLight has gotten signals as high as 40% ∆F/F per 100 mV depolarization step (Han et al., 2013). While ArcLight has the drawback of being slow, its brightness and signal size make it one of the better probes for imaging in vivo and in slice. In 2015, two publications improving the speed of this sensor were published. One dramatically improved the off rate called Arclightening but reduced the signal size to under 10% ∆F/F per 100 mV depolarization (Treger et al., 2015). The other, Bongwoori, improved the speed of the sensor and shifted the voltage response to more positive potentials which improved the resolution of action potentials but decreased the signal size for synaptic potentials . The reduced optical signal response for sub-threshold potentials gives Bongwoori a better ''contrast'' for optically resolving action potentials. A final design for researchers to consider when choosing a GEVI is the genetically encoded, hVOS (Chanda et al., 2005;Wang et al., 2010Wang et al., , 2012Ghitani et al., 2015). First developed in the Bezanilla lab, hVOS consists of an FP anchored to the plasma membrane with the addition of a small charged molecule, DPA, that binds to the plasma membrane effectively acting as a fluorescent quencher. Since DPA is a lipophilic anion, the quenching agent will move from the outer surface of the plasma membrane to the inner surface upon membrane depolarizations generating a voltage-responsive fluorescent signal. Like the other sensors, hVOS also has drawbacks which are primarily due to the fact that an exogenous chemical must be administered to the sample to be imaged. This is not a trivial process since too much DPA will significantly increase the capacitance of the plasma membrane and alter the neuronal activity of the cell. However, once the appropriate conditions are determined, hVOS gives optical signals for subthreshold potentials as well as action potentials in slice from populations of cells (Wang et al., 2012) or individual cells when expression of the FP is sparser (Ghitani et al., 2015). SYNAPTIC ACTIVITY MONITORING IN SLICES WITH GEVIs Brain slices are invaluable for studying in detail the cellular, molecular, and circuitry activity of neuronal functions (Ting et al., 2014). GEVIs can expand this information since every pixel potentially becomes an electrode. There are not many examples of synaptic potential recordings from GEVIs in slice. Most examples are proof-of-principle type of recordings in the original publication of a new sensor to demonstrate its potential. The VSFP family of GEVIs are the most published recordings in brain slice (Akemann et al., 2013;Scott et al., 2014;Carandini et al., 2015;Empson et al., 2015;Mutoh et al., 2015). Here, we compare optical synaptic recordings in brain slices from VSFP Butterfly 1.2 and hVOS. 
Figure 3A shows the population imaging in coronal cortical slices prepared from a mouse brain electroporated in utero with VSFP-Butterfly 1.2 . To explore voltage imaging from populations of cells, cortical slices were imaged at low magnification while delivering a single electrical stimulus ( Figure 3A, left panel). The amplitude of the evoked optical signal ranged from 1 to 1.5% ∆R/R 0 ( Figure 3A). Disinhibition with 25 mM gabazine increased the signal to 11% ∆R/R 0 . Since VSFP-Butterfly 1.2 is a FRET probe, the ratio of the fluorescent change can be reported, but in slice the advantage of a ratiometric recording is of lesser value since movement artifacts due to respiration and blood flow do not exist. Despite this advantage, the voltage-dependent change in fluorescence is quite small, less than 0.5% ∆F/F which requires multiple trials to improve the signal to noise ratio. could be seen throughout the field of view (Wang et al., 2012). hVOS Signal in the Hippocampal Slice VSFP-Butterfly 1.2, and hVOS can all generate an optical signal corresponding to synaptic responses in acute brain slices. hVOS has the larger ∆F/F. VSFP-Butterfly 1.2 does not require additional drug application to detect voltage changes in the neuron. Other sensors can also give optical signals in slice but those recordings have focused on action potentials in individual cells and are not shown here. The brightness, signal size, and voltage range of ASAP-1 make it a potentially useful sensor for imaging synaptic potentials in slice. While there are no reports in the literature of ArcLight being used to analyze neuronal activity in brain slices, the brightness, signal size and voltage-sensitivity are also ideal for optically recording synaptic potentials. SYNAPTIC ACTIVITY MONITORING IN VIVO WITH GEVIs While slice recordings are extremely valuable for deciphering neuronal circuitry, the ultimate goal of voltage imaging is to detect neuronal activity in a behaving animal. This is an ambitious endeavor with very few examples, but some GEVIs are now capable of giving a robust signal that allows in vivo imaging. Proof of principle for in vivo voltage imaging was established by the Knöpfel lab using the VSFP family of probes (Akemann et al., , 2013. Figure 4A shows single trial responses in the barrel cortex during whisker stimulation. Clearly, a stimulus evoked voltage signal could be detected in single trials even though the signal size is very small. Asterisks denote potential spontaneous voltage transients. However, unlike the stimulus evoked optical response, these potential transients exhibit different start times and kinetics. Having a low signal to noise ratio undermines the confidence in reliably detecting neuronal activity trial to trial (Carandini et al., 2015). Another drawback with VSFP Butterfly 1.2 is that the V 1/2 is roughly −70 mV with maximal fluorescent change occurring at −40 mV (Figure 1) making it virtually impossible to distinguish synaptic activity from action potentials based solely on signal size. ArcLight has also been tested in vivo in mice and flies (Cao et al., 2013;Storace et al., 2015a). The V 1/2 for ArcLight is around −30 mV making it ideal to detect neuronal activity in flies whose action potentials range from a resting potential of −40 mV to a final excitation of −10 mV. As a side note this is why our probe, Bongwoori, should not be used for imaging neuronal activity in flies since the V 1/2 has been shifted to around 0 mV . 
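The fly example can be made quantitative with the same kind of Boltzmann parameterization. Assuming, purely for illustration, a common slope factor of 10 mV, the fraction of a probe's dynamic range spanned by a fly action potential (roughly −40 mV to −10 mV, as stated above) differs strongly between a probe with V1/2 near −30 mV and one with V1/2 near 0 mV.

```python
import numpy as np

def boltzmann(v, v_half, slope):
    """Normalized Boltzmann fluorescence-voltage curve (0 to 1)."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / slope))

v_rest, v_peak = -40.0, -10.0            # approximate fly membrane-potential excursion (mV)
for name, v_half in (("V1/2 ~ -30 mV (ArcLight-like)", -30.0),
                     ("V1/2 ~   0 mV (Bongwoori-like)", 0.0)):
    used = boltzmann(v_peak, v_half, 10.0) - boltzmann(v_rest, v_half, 10.0)
    print(f"{name}: fraction of dynamic range spanned by a fly spike = {used:.2f}")
```

With these illustrative numbers, the ArcLight-like curve uses roughly 60% of its range for the fly spike, while the Bongwoori-like curve uses only about 25%.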
Figure 4B shows a recording from a mouse expressing ArcLight in the olfactory bulb. As can be seen, respiration causes an optical artifact, but since ArcLight gives a large signal, identifying regions of the olfactory bulb responding to an odor is still possible. Again, though, it is not possible to resolve synaptic activity from action potentials. The rhodopsin-based GEVIs have also been shown to elicit an optical signal in vivo. An example is shown in Figure 5A from the Gradinaru lab (Flytzanis et al., 2014). With the dim fluorescence of the GEVI, Archer, C. elegans is one of the few multicellular organisms one would be able to record from. After odorant stimulation, there is a slight variation in the ∆F compared to control. While there appears to be a slight signal, the low signal to noise ratio again undermines one's confidence in being able to reliably detect neuronal activity from trial to trial. The last example of in vivo recordings is the best example of resolving action potentials. Ace2N-4AA-mNeon has been imaged in flies and mice (Gong et al., 2015). This probe is extremely fast, showing the best fit of optical data to voltage yet (Figure 5B). The red arrow shows an optical response to a 5 mV depolarization. However, comparing the optical signal at the spike to the subthreshold potential, one can see that the optical response is skewed towards action potential activity. This gives a fantastic response when imaging the visual cortex in response to visual stimuli. Action potentials are easily discernible. The ability to optically report synaptic activity is less clear but still promising. CONCLUSION GEVIs come in many flavors. As demonstrated, the signal size, speed, and voltage sensitivity affect the neuronal activity a GEVI can resolve. Many probes will give an optical signal in slice and in vivo but some signals will be more informative. If the experimenter wants to image any neuronal activity from a population of cells in brain slice, the recommendations would be hVOS, ArcLight, and ASAP-1. All have a broad voltage range, traffic to the membrane well and give relatively large signals. ArcLight and ASAP-1 will have some difficulty in separating synaptic activity from action potentials due to their voltage sensitivities, but this could theoretically be overcome by coexpression of a red calcium sensor to verify action potential activity if the neuron tested has an action potential-induced calcium transient. If one wants to measure neuronal activity of individual cells in slice, then one should also consider Ace2N-4AA-mNeon. Imaging single cells vs. a population of cells will also affect the choice of GEVI to be used. When imaging single cells, probes with broad voltage ranges will enable the optical detection of inhibition, synaptic potentials, and action potentials. FIGURE 6 | (A) The cell is projected onto multiple pixels, enabling the experimenter to choose pixels with optical activity. In this example the red highlighted pixels labeled 1-4 have a large ∆F/F while pixels highlighted in blue, labeled 5-8, exhibit low or no change in fluorescence upon depolarization of the plasma membrane. (B) The same cell under low magnification now projects onto only a few pixels. The pixel highlighted in black, labeled 9, is a summation of pixels 1-8 in (A). The experimenter is no longer able to avoid the non-responsive, internal fluorescence.
However, these same probes when imaging large populations of cells are potentially less informative since the depolarization of a subgroup of neurons could swamp the small, hyperpolarizing signals from inhibited neurons. Inefficient trafficking or high intracellular expression will affect the voltage imaging of a population of cells more so than when imaging individual cells. The reason for this is that the spatial representation of the cell under high magnification onto the pixels of the camera has changed. Under high magnification, a researcher can choose only pixels that correspond to regions of the cell that exhibited a fluorescent response. When imaging a population of cells, a pixel will be less likely to capture only the responsive fluorescence. This situation is depicted in Figure 6. When imaging a single cell, it is much easier to avoid the internal, non-responsive fluorescence and maximize the signal to noise ratio. While the GEVIs currently available have shown significant improvement in their ability to optically detect neuronal activity, there is still much room for improvement. Refining the voltagesensitivity will enable maximizing the optical signal. For instance, a probe that only responded to hyperpolarization of the plasma membrane would make identifying the inhibited parts of a neuronal circuit much easier. Improving the membrane expression of the GEVI will decrease the nonresponsive fluorescence in a population of cells, thereby improving the signal to noise ratio. Most efforts to improve trafficking involve the addition of endoplasmic reticulum and Golgi release motifs. Codon optimization is another approach which for membrane proteins may be a misnomer. The idea of codon optimization is to use only the most abundant codons for rapid translation of the protein. This has been shown to be effective for cytoplasmic proteins. However, for membrane proteins slowing the translation to allow proper folding and insertion into the translocon may also be important (Norholm et al., 2012;Yu et al., 2015). Finally, limiting the expression of the GEVI to subcellular components (i.e., the soma, dendrites, etc.) could also focus the optical signal to the desired region of the neuron again improving the signal to noise ratio. AUTHOR CONTRIBUTIONS RN and AJ wrote the manuscript and contributed figures. B-JY and BJB helped to write the manuscript.
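As a closing quantitative note on the pixel-pooling argument illustrated in Figure 6, the dilution of ∆F/F and of the shot-noise-limited signal-to-noise ratio by non-responsive fluorescence can be made explicit with a toy calculation. All photon counts below are invented for illustration.

```python
import numpy as np

# Hypothetical baseline counts (photons/frame) for one cell spread over 8 pixels:
# pixels 1-4 carry responsive membrane fluorescence, pixels 5-8 carry non-responsive
# internal fluorescence (numbers are illustrative only).
f0 = np.array([100.0] * 4 + [300.0] * 4)
dff_true = np.array([-0.10] * 4 + [0.0] * 4)     # 10% dimming on responsive pixels only
f1 = f0 * (1.0 + dff_true)

# High magnification: analyze only the responsive pixels.
hi_dff = (f1[:4].sum() - f0[:4].sum()) / f0[:4].sum()
# Low magnification: the whole cell falls on one pixel (sum of all 8).
lo_dff = (f1.sum() - f0.sum()) / f0.sum()

# Shot-noise-limited SNR scales as |dF| / sqrt(F0): brighter but unresponsive background
# adds photon noise without adding signal.
hi_snr = abs(f1[:4].sum() - f0[:4].sum()) / np.sqrt(f0[:4].sum())
lo_snr = abs(f1.sum() - f0.sum()) / np.sqrt(f0.sum())
print(f"selected pixels: dF/F = {hi_dff:.3f}, shot-noise SNR ~ {hi_snr:.1f}")
print(f"pooled pixel   : dF/F = {lo_dff:.3f}, shot-noise SNR ~ {lo_snr:.1f}")
```

Pooling the non-responsive internal fluorescence into the analyzed pixel leaves ∆F unchanged but inflates F, cutting ∆F/F fourfold and halving the shot-noise SNR in this example.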
5,828.2
2016-08-05T00:00:00.000
[ "Biology", "Physics" ]
Application of Blockchain Technology in e-LoA Technopreneurship Journal LoA or Letter of Acceptance is a statement received. LoA is usually used as a statement of scholarship acceptance or even journal acceptance. In addition, the current issuance of LoA is still conventional, using paper as the access medium. This is certainly inefficient. In addition, the issuance of LoAs that can be accessed through the Platform also has certain problems. Many parties doubt the security of the LoA issued through the Platform, whether confidentiality is really maintained or not. To solve this problem, e-LoA (electronic LoA) was implemented using blockchain technology on the Technopreneurship Journal Platform. With the use of blockchain technology on this Platform, LoA data cannot be changed, duplicated or deleted. So that it can improve security and support transparency in the issuance and access of LoA. e-LoA issued through the Platform also makes it easier for parties to access and download a verified LoA. Introduction Letter of Acceptance or commonly referred to as LoA is a statement that has been received, this LoA is applied to the Technopreneurship Journal as a statement of receipt of a journal that has been submitted by the author. At present the LoA for managing and issuing LoA is still conventional, which requires paper as an access medium. Of course this is very ineffective and inefficient. So the e-LoA (electronic LoA) that owners can access online through the Platform is implemented, but there are many security problems when testing the system. Based on these problems, it is necessary to use a decentralized publishing and access system for LoA. One of the technologies in implementing the verification system in a decentralized and transparent manner is blockcert. Blockcert was built using blockchain technology that provides transparency and accountability [1]. The first blockchain was introduced in 2008 by a person or group of people known as Satoshi Nakamoto. Then the following year was implemented by Nakamoto as a core component of bitcoin (bitcoin core), which is used as a public ledger blockchain for all transactions that occur in open networks [2]. In addition, the application of blockchain technology that is equipped with cryptography can guarantee the coherence of stored records as well as maintaining the confidentiality of LoA owners [3]. Based on this, e-LoA (Electronic Letter of Acceptance) was applied using blockchain technology. BlockChain Blockchain technology in Indonesia is currently included in the early stage (early stage) because its application is still relatively small. Even so there have been several software businesses that have collaborated with related institutions and agencies to develop this technology, they have also joined the Indonesian Blockchain Association (ABI). Great potential for industry, education, economy, health, agriculture, trade and other sectors is still very wide open [4]. But in the fields of government, health, science, literacy, culture and art. Benefits and Advantages of Blockchain According to Alexander Grech and Anthony in his book entitled "Blockchain in Education" the benefits and benefits obtained when implementing blockchain technology, namely [6]: 1. Self-sovereignty, for users to identify and control simultaneously the storage and management of personal data. 2. Trust (Trust), for infrastructure can provide confidence in operational transactions such as payments or certificates. 3. 
Transparency & Provenance, for users to conduct transactions with the knowledge that each party has the capacity to carry out such transactions. 4. Immutability, as an archive that is written and stored permanently, without the possibility of being changed. 5. Disintermediation (elimination of intermediaries), the elimination of the need for a central control authority over transaction management and archival storage. 6. Collaboration, the ability of parties to deal directly without requiring third-party mediation. Distributed Database and Decentralized System A distributed database is a storage method where storage devices are not installed on a single computer but rather on several computers. Some of these computers can be connected to one storage device or to several different storage devices, which are connected through a network. This method makes the database more transparent [7]. A decentralized system is a system design that consists of several computers, called nodes, in a network, each with the responsibility to set its own tasks to achieve the targets of the central system. Each node can have its own system and perform certain functions to contribute to the universal (main) system [8]. Blockcert is an open standard used to publish and verify official documents, designed based on blockchain technology. This digital record is registered in the blockchain, cryptographically signed, tamper-proof, and shareable. The purpose of creating a blockcert is to invite each individual to innovate in owning and sharing official documents of their own [9]. The second stage is the verification of the security of documents downloaded and stored by interested parties. This is to ensure information security [10], [11], because security and privacy are important in guaranteeing and maintaining the author's trust to submit a journal to the Technopreneurship Journal [12], [13], [14], [15], [16], [17]. The process is as follows: 1. Only the names and emails of the authors listed in the Technopreneurship Journal article can download and store the e-LoA. 2. The downloader will be asked to verify via email; the verification email is only sent to the emails listed in the article on the Technopreneurship Journal Platform. 3. The system verifies against the blockchain to confirm the validity and accuracy of e-LoA recipients. The use of blockchain technology [19], [20] in e-LoA protects stored data and prevents it from being changed or duplicated. c. Autonomy: data records are distributed in a decentralized way so that the blockchain nodes each hold the data records. Decentralized data storage reduces the risk of server downtime and data loss.
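The paper does not spell out how the Platform binds an issued e-LoA to the blockchain, so the following is only a minimal sketch under the assumption, in line with the Blockcerts approach described above, that the platform anchors a cryptographic hash of the issued document and its metadata on-chain and that verification recomputes and compares that hash. All names, values, and the in-memory ledger below are hypothetical stand-ins.

```python
import hashlib
import json
import time

def loa_fingerprint(pdf_bytes: bytes, metadata: dict) -> str:
    """SHA-256 digest of the e-LoA file plus its issuance metadata (author, email, article ID)."""
    payload = pdf_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# --- issuance (done once by the journal platform) --------------------------------
metadata = {"author": "A. Author", "email": "author@example.org", "article_id": "TJ-2020-001"}
issued_pdf = b"%PDF-1.4 ... letter of acceptance ..."          # placeholder contents
anchored_hash = loa_fingerprint(issued_pdf, metadata)           # this digest is written to the blockchain
ledger = [{"hash": anchored_hash, "timestamp": time.time()}]    # stand-in for the on-chain record

# --- verification (done by anyone who downloads the e-LoA) -----------------------
downloaded_pdf = issued_pdf                                      # or a tampered copy
ok = loa_fingerprint(downloaded_pdf, metadata) in {entry["hash"] for entry in ledger}
print("e-LoA verified against the ledger:", ok)
```

Because the anchored digest is immutable, any modification, duplication under different metadata, or deletion of the e-LoA is detectable by this comparison, which is the tamper-evidence property the platform relies on.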
1,318.8
2020-03-26T00:00:00.000
[ "Computer Science" ]
Dynamics of two-level atom interaction with single-mode field In this paper, the dynamics of fluorescent light emitted by a two-level atom interacts with squeezed vacuum reservoir is studied wisely using two-time correlation function fundamentals. The mathematical analysis shows the fluorescent spectrum of light emitted by the atom is turned out to be a single peak at a Lorentz's frequency. On the other hand, the squeezed vacuum reservoir input is responsible to the stimulated emission of photon from the atom. Moreover, it is identified that thermal reservoir is more efficient than squeezed vacuum reservoir to have valuable power spectrum. Introduction The Different quantum optical investigations have been made by various authors for several years. The power spectrum of a fluorescent light emitted by a two-level has been studied by enormous authors [1][2][3][4][5][6][7][8] . In connection with this, there is an author who investigates the dynamics of the power spectrum of the light scattered by a two-level atom 8 . The investigation clearly shows that the power spectrum of the scattered field is directly obtained from the dipole moment correlation function. In addition, Eyob and Fesseha considered that two-level atom interaction with squeezed light. According to their work, the fluorescent light is in squeezed state and the power spectrum consists of a single peak only 5 . Another examination on the interaction of a two-level atom with a single mode of radiation has been made by Cirac. Accordingly, with the aid of the master equation for the atomic density operator in a bad-cavity limit the light emitted by the atom into the background mode has been studied 7 . The interaction a system in which initially unexcited two-level atom with a weak cavity field has been another area regarding with two-level atomic interaction dynamic with single light mode 10 . Taking this as motivation, in this paper it is necessary to see how would be the dynamics of a fluorescent light emitted by a two-level atom but coupled to squeezed vacuum reservoir. Basic fundamental effects of squeezed vacuum reservoir are clearly stated in this paper. Here we derive the time evolution for atomic variables rather than light mode variable using the pertinent master equation. It contributes a very valuable scientific merit for the development of instrument empowerment which uses two-level atom interaction with single mode radiation principles. Operator dynamics It is very important to an expression for probability of the atom to be in the upper or lower electronic energy state. To this end, let's introduce the probability amplitudes denoted by a  (the probability of an atom to be in the upper electronic energy level) and b  (the probability an atom to be in the lower electronic energy level) which can be expressed as At a time the atom can occupy whether the upper energy state or the lower energy state. Here we use many energy state approximation into only two-levels in which the atomic transition is resonant with the cavity mode, the sum of the probabilities given in Equations (5) and (6) turn out to be 1 ab   . This indicates that the sum of the probabilities of an atom yields one. This condition is agreed with the fact of probability theory. To study the dynamics of the considered system, we have developed the master equation which encompasses all the property of it. 
By taking consideration of the system interaction with the squeezed vacuum reservoir, we get the formal mathematical expression to the system as where  is the atomic decay constant and the effects of the squeezed vacuum reservoir incorporated through the symbols N (the reservoir mean photon number of the reservoir) and M . Power Spectrum of a single-mode field We now define the power spectrum of fluorescent light emitted by a two-level atom in an open space by In which 0  is the central transition frequency of the atom from the upper level to the lower energy state and ss stands for steady state conditions. It proves convenient that Equation (9) can be rewritten as It is better to understand the two-time correlation function is not depending on the time t rather it is depending only on the time difference . Then let's replace t by t   in the first integral of Equation (10) and performing the change of variables to get the power spectrum in the form where Re denotes the real part of the integral. We next seek to determine the two-time correlation function involved in Equation (11) at steady state. To do this, we fist write the time evolution of atomic operators by employing the master equation given in Equation (8) and with the aid of trace properties, we have generated the following two Equations. and  ˆ( According to Equation (4), we can express equations (12) and (13) as and ˆˆ( ) 4 According to the quantum regression theorem for two quantum operators  andB , the two-time correlation is written as 11 (21) Employing this equation into Equation (20), one gets Furthermore, using Equations (5), (6), (7) and (8) In view of Equations (23) and (24), the power spectrum described in Equation (11) can be put in the form of so that carrying out the integration, one readily finds This equation describes the power spectrum of the fluorescent light emitted by a two-level atom interact with squeezed vacuum reservoir. If we plot the spectrum we will generate the graph depicted in Figure 1. Results and discussions From the figure 1a, we note that the power spectrum of a fluorescent light emitted from a two-level atom has a peak point at the frequency 0   . On the other hand, we see that the sharpness of the spectrum decreases as we go far from the point at which the transition frequency equals to the frequency of a light mode. This must be due to the decrement of the correlation between the two-time correlations. Moreover, the graph indicates that the mean number of photon in the reservoir mode has an effect on the sharpness of the power spectrum. One effect of the squeezed vacuum reservoir is to deform the sharpness of the spectrum. This is explained as when the mean photon number from the squeezed vacuum reservoir increases the power spectrum of the light emitted by a two-level atom deforms. But it is important to see from Equation (26) that the power spectrum is highly depending on the mean photon number of squeezed vacuum reservoir. The existence of these photons in the reservoir is very important for stimulated emission by the atom. This situation takes place when the atom is initially in the upper electronic energy state and the photon from the reservoir hits the atom which has resonant frequency with atomic transition frequency. Here we have done the mathematical equation behind such conditions which has to be true for the selected model. 
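The closed-form spectrum of Equation (26) is not reproduced here, but its qualitative behavior, a single Lorentzian peak centered at ω = ω₀ whose sharpness degrades as the reservoir mean photon number N grows, can be visualized with a simple sketch. The specific line shape below, a Lorentzian of half-width γ(2N + 1)/2, is a standard two-level-atom form used only for illustration and is not claimed to be the exact expression derived above.

```python
import numpy as np
import matplotlib.pyplot as plt

def lorentzian_spectrum(omega, omega0, gamma, N):
    """Single-peaked Lorentzian centered at omega0; the reservoir photon number N broadens the line.
    The half-width gamma*(2N+1)/2 is an illustrative two-level-atom form, not Eq. (26) itself."""
    hw = gamma * (2.0 * N + 1.0) / 2.0
    return (hw / np.pi) / ((omega - omega0) ** 2 + hw ** 2)

omega0, gamma = 0.0, 1.0                      # work in units of the decay constant
omega = np.linspace(-10.0, 10.0, 1000)
for N in (0.0, 0.5, 2.0):
    plt.plot(omega, lorentzian_spectrum(omega, omega0, gamma, N), label=f"N = {N}")
plt.xlabel(r"$(\omega-\omega_0)/\gamma$")
plt.ylabel("power spectrum (arb. units)")
plt.legend()
plt.show()
```

Increasing N broadens the line and lowers the peak, which is the deformation of the spectrum attributed above to the reservoir photons.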
As we observe, the scale of the power spectrum in Figure 1(a) is of the order of 10⁻³, while that in Figure 1(b) is of the order of 10⁻², for the same parameters used in both figures. Thus, the power spectrum of the fluorescent light emitted by the atom coupled to a thermal reservoir is greater than that for the squeezed vacuum reservoir. Conclusions This paper addresses the dynamics of the power spectrum of light emitted by a two-level atom interacting with the reservoir submodes, with the aid of the evolution of the atomic variables. By deriving the master equation that governs the system under consideration, we have obtained the time evolution of the atomic variables. Finally, we have obtained the two-time correlation function for the raising and lowering atomic operators, which is the core problem of this paper, and using this result (Equation (22)) the power spectrum of the fluorescent light is obtained. It is identified that the presence of the reservoir modes is responsible for the stimulated emission of photons from the atom. Moreover, the power spectrum of the fluorescent light is greater when the atom interacts with a thermal reservoir than with a squeezed vacuum reservoir. The other outstanding point in this paper is that the power spectrum of the fluorescent light turns out to be a single peak whether the reservoir is squeezed vacuum or thermal. The result shows that there will be only a single peak at the point at which the frequency of the light equals the central Lorentzian frequency 11 . This must be due to the high correlation induced between the raising and lowering atomic operators expressed in the two-time correlation function.
1,978.2
2019-11-05T00:00:00.000
[ "Physics" ]
Controlling phonons and photons at the wavelength scale: integrated photonics meets integrated phononics AMIR H. SAFAVI-NAEINI, DRIES VAN THOURHOUT, ROEL BAETS, AND RAPHAËL VAN LAER* Department of Applied Physics and Ginzton Laboratory, Stanford University, Stanford, California 94305, USA Photonics Research Group (INTEC), Department of Information Technology, Ghent University–imec, Belgium Center for Nanoand Biophotonics, Ghent University, Ghent, Belgium *Corresponding author<EMAIL_ADDRESS> INTRODUCTION Microwave-frequency acoustic or mechanical wave devices have found numerous applications in radio-signal processing and sensing. They already form mature technologies with large markets, typically exploited for their high quality compared to electrical devices [1]. The vast majority of these devices are made of piezoelectric materials that are driven by electrical circuits [2][3][4][5][6][7][8]. A major technical challenge in such systems is obtaining the suitable matching conditions for efficient conversion between electrical and mechanical energy. Typically, this entails reducing the effective electrical impedance of the electromechanical component by increasing the capacitance of the driving element. This has generally led to devices with large capacitors that drive mechanical modes with large mode volumes. Here, we describe a recent shift in research toward structures that are only about a wavelength, i.e., roughly 1 μm at gigahertz frequencies, across in two or more dimensions. microwave circuits becomes significantly more difficult due to vanishing capacitances. We can classify confinement in terms of its dimensionality (Fig. 1). The dimensionality refers to the number of dimensions where confinement is on the scale of the wavelength of the excitation in bulk. For example, surface acoustic wave (SAW) resonators [2], much like thin-film bulk acoustic wave (BAW) resonators [3], have wavelength-scale confinement in only one dimension-perpendicular to the chip surfaceand are therefore 1D-confined. Until a few years ago, wavelengthscale phononic confinement at gigahertz frequencies beyond 1D remained out of reach. Intriguingly, both near-infrared optical photons and gigahertz phonons have a wavelength of about 1 μm. This results from the 5 orders of magnitude difference in the speed of light relative to the speed of sound. The fortuitous matching of length scales was used to demonstrate the first 2D-and 3D-confined systems, in which both photons and phonons are confined to the same area or volume (Fig. 1). These measurements have been enabled by advances in low-loss photonic circuits that couple light to material deformations through boundary and photoelastic perturbations. Direct capacitive or piezoelectric coupling to these types of resonances has been harder, since the relatively low speed of sound in solid-state materials means that gigahertz-frequency phonons have very small volume, leading to minuscule electrically induced forces at reasonable voltages, or, in other words, large motional resistances that are difficult to match to standard microwave circuits [1]. Here, we primarily consider recent advances in gigahertzfrequency phononic devices. These devices have been demonstrated mainly in the context of photonic circuits and share many commonalities with integrated photonic structures in terms of their design and physics. They also have the potential to realize important new functionalities in photonic circuits. 
Despite recent demonstrations of confined mechanical devices operating at gigahertz frequencies and coupled to optical fields, phononic circuits are still in their infancy, and applications beyond those of interest in integrated photonics remain largely unexplored. Several attractive aspects of mechanical elements remain unrealized in chip-scale systems, especially in those based on nonpiezoelectric materials. In this review, we first describe the basic physics underpinning this field, with specific attention to the mechanical aspects of optomechanical devices. We discuss common approaches used to guide and confine mechanical waves in nanoscale structures in Section 2. Next, we describe the key mechanisms behind interactions between phonons and both optical and microwave photons in Section 3. These interactions allow us to efficiently generate and read out mechanical waves on a chip. Section 4 briefly summarizes the state of the art in opto- and electromechanical devices. It also describes a few commonly used figures of merit in this field. Finally, we give our perspectives on the field in Section 5. In analogy to integrated photonics [10-16,18,19,22,24,25], the field may be termed "integrated phononics." While not limited to the material silicon, its goal is to develop a platform whose fabrication is in principle scalable to many densely integrated and coherent phononic devices.

Fig. 1. Photonic and phononic systems can be classified according to the number of dimensions in which they confine excitations to a wavelength. Most previous systems contain several wavelengths in more than one dimension: they are 0D- or 1D-confined. New structures have emerged in which both photons and phonons are confined to the wavelength scale in two or three dimensions. Here we focus on such 2D- or 3D-confined wavelength-scale systems at gigahertz frequencies. Related reviews on integrated opto- and electromechanical systems are [9,17,31,33,45,113,183,205,206]. The table gives as examples of 2D- and 3D-confined devices a sub-μm² silicon photonic-phononic waveguide [34] and a sub-μm³ silicon optomechanical crystal [114]. The depicted 0D- and 1D-confined structures are a long Fabry-Perot cavity and a vertical-cavity surface-emitting laser [20] for the optical case, and a thick quartz [21] and a thin aluminum nitride BAW resonator [177] for the mechanical case. The study of heat flow [23,45] is beyond the scope of this review.

GUIDING AND CONFINING PHONONS
Phonons obey broadly similar physics as photons, so they can be guided and confined by comparable mechanisms, as detailed in the following subsections.

A. Total Internal Reflection
In a system with continuous translational symmetry, waves incident on a medium totally reflect when they are not phase-matched to any excitations in that medium. This is called total internal reflection. The waves can be confined inside a slow medium sandwiched between two faster media by this mechanism [Fig. 2(a)]. This ensures that at fixed frequency Ω the guided wave is not phase-matched to any leaky waves, since its wave vector K(Ω) = Ω/v_ϕ, with v_ϕ its phase velocity, exceeds the largest wave vector among waves in the surrounding media at that frequency. In other words, the confined waves must have maximal slowness 1/v_ϕ. This principle applies to both optical and mechanical fields [26-28]. Still, there are important differences between the optical and mechanical cases. For instance, a bulk material has only two transverse optical polarizations, while it sustains two transverse mechanical polarizations with speed v_t and a longitudinally polarized mechanical wave with speed v_l. Unlike in the optical case, these polarizations generally mix in a complex way at interfaces [26]. In addition, a boundary between a material and air leads to geometric softening (see next section), a situation in which interfaces reduce the speed of certain mechanical polarizations. This generates slow SAW modes that are absent in the optical case. So achieving mechanical confinement requires care in looking for the slowest waves in the surrounding structures.
Related reviews on integrated opto-and electromechanical systems are [9,17,31,33,45,113,183,205,206]. The table gives as examples of 2D-and 3D-confined devices a sub-μm 2 silicon photonic-phononic waveguide [34] and a subμm 3 silicon optomechanical crystal [114]. The depicted 0D-and 1Dconfined structures are a long Fabry-Perot cavity, a vertical-cavity surface-emitting laser [20] for the optical case and a thick quartz [21] and a thin aluminum nitride BAW resonator [177] for the mechanical case. The study of heat flow [23,45] is beyond the scope of this review. for the slowest waves in the surrounding structures. These are often surface instead of bulk excitations. Among the bulk excitations, transversely polarized phonons are slower than longitudinally polarized phonons (v t < v l ). Conflicting demands often arise when designing waveguides or cavities to confine photons and phonons in the same region: photons can be confined easily in dense media with a high refractive index and thus small speed of light, but phonons are naturally trapped in soft and light materials with a small speed of sound. In particular, the mechanical phase velocities scale as v ϕ ffiffiffiffiffiffiffiffi ffi E∕ρ p , with E the stiffness or Young's modulus and ρ the mass density. For instance, a waveguide core made of silicon (refractive index n Si 3.5) and embedded in silica (n SiO 2 1.45) strongly confines photons by total internal reflection but cannot easily trap phonons (for exceptions see next sections). On the other hand, a waveguide core made of silica (v t 5500 m∕s) embedded in silicon (v t 5843 m∕s) can certainly trap mechanical [29] but not optical fields. Still, some structures find a sweet spot in this trade-off: the principle of total internal reflection is currently exploited to guide phonons in Ge-doped optical fibers [30] and chalcogenide waveguides [31]. Since silicon is "slower" than silicon dioxide optically, but "faster" acoustically, simple index guiding for co-confined optical and mechanical fields is not an option in the canonical platform of silicon photonics, silicon-on-insulator. Below we consider techniques that circumvent this limitation and enable strongly colocalized optomechanical waves and interactions. B. Impedance Mismatch The generally conflicting demands between photonic and phononic confinement (see above) can be reconciled through impedance mismatch [ Fig. 2(b)]. The characteristic acoustic impedance of a medium is Z m ρv ϕ , with ρ the mass density [26]. Interfaces between media with widely different impedances Z m , such as between solids and gases, strongly reflect phonons. In addition, gases have an acoustic cutoff frequency-set by the molecular mean-free path-above which they do not support acoustic excitations [32]. At atmospheric pressure, this frequency is roughly Ω c ∕2π ≈ 0.5 GHz. Above this frequency Ω c acoustic leakage and damping because of air are typically negligible. The cutoff frequency Ω c can be drastically reduced with vacuum chambers, an approach that has been pursued widely to confine low-frequency phonons [33]. These ideas were harnessed in silicon-on-insulator waveguides to confine both photons and phonons to silicon waveguide cores [34][35][36] over milli-to centimeter propagation lengths. The acoustic impedances of silicon and silica are quite similar, so in these systems the silica needs to be removed to realize low phonon leakage from the silicon core. 
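To make the impedance-mismatch argument concrete, the sketch below evaluates Z_m = ρ v_ϕ for silicon, silica, and air together with the normal-incidence reflection coefficient r = (Z_1 − Z_2)/(Z_1 + Z_2). The transverse sound speeds are the values quoted above; the mass densities and the air parameters are textbook numbers assumed for illustration, not values taken from this review.

```python
# Back-of-the-envelope acoustic impedances and reflection coefficients.
# Sound speeds for Si and SiO2 are the transverse values quoted in the text;
# densities and the air parameters are textbook values (assumptions).

materials = {
    # name: (mass density [kg/m^3], sound speed [m/s])
    "silicon": (2330.0, 5843.0),
    "silica":  (2200.0, 5500.0),
    "air":     (1.2,     343.0),
}

def impedance(rho, v):
    """Characteristic acoustic impedance Z_m = rho * v_phi."""
    return rho * v

def reflection(Z1, Z2):
    """Normal-incidence amplitude reflection coefficient between two media."""
    return (Z1 - Z2) / (Z1 + Z2)

Z = {name: impedance(*props) for name, props in materials.items()}
for name, z in Z.items():
    print(f"{name:8s} Z_m = {z:.3e} Pa*s/m")

# Silicon core against silica cladding: impedances are similar -> weak reflection,
# which is why the silica must be removed (or under-etched) for low phonon leakage.
print("r(Si/SiO2) = %.3f" % abs(reflection(Z["silicon"], Z["silica"])))

# Silicon against air: enormous mismatch -> near-total reflection of phonons.
print("r(Si/air)  = %.6f" % abs(reflection(Z["silicon"], Z["air"])))
```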
In one approach [34], the silicon waveguide was partially underetched to leave a small silica pillar that supports the waveguide [ Fig. 2(b)]. In another, the silicon waveguide was fully suspended while leaving periodic silica or silicon anchors [35,36]. C. Geometric Softening The guided wave structures considered above utilize full or partial underetching of the oxide layer to prevent leakage of acoustic energy from the silicon into the oxide. Geometric softening is a technique that allows us to achieve simultaneous guiding of light and sound in a material system without underetching and regardless of the bulk wave velocities. Although phonons and photons behave similarly in bulk media, their interactions with boundaries are markedly different. In particular, a solid-vacuum boundary geometrically softens the structural response of the material below and thus lowers the effective mechanical phase velocity [Fig. 2(c)]. This is the principle underpinning the 1D confinement of Rayleigh SAWs [26,37]. This mechanism was used in the 1970s in the megahertz range [37][38][39] to achieve 2D confinement and was recently rediscovered for gigahertz phonons, where it was found that both light and motion can be guided in unreleased silicon-on-insulator structures [40]. More recently, fully 3D-confined acoustic waves have been demonstrated [41] with this approach on silicon-on-insulator where a narrow silicon fin, clamped to a silicon dioxide substrate, supports both localized photons and phonons. D. Phononic Bandgaps Structures patterned periodically, such as a silicon slab with a grid of holes, with a period a close to half the phonons' wavelength Λ 2π∕K result in strong mechanical reflections, as in the optical case. At this X -point-where K π∕a-in the dispersion diagram forward-and backward-traveling phonons are strongly coupled, resulting in the formation of a phononic bandgap [ Fig. 2(d)] whose size scales with the strength of the periodic Mechanisms for phononic confinement in micro-and nanostructures. We illustrate the main approaches with phononic dispersion diagrams ΩK and mark the operating point in black. (a) A waveguide core whose mechanical excitations propagate more slowly than the slowest waves in the surrounding materials supports acoustic total internal reflection. Examples include chalcogenide rib waveguides on silica [31], silica waveguides cladded by silicon [29], and Ge-doped fibers [30]. (b) Even when phonons are phase-matched to surface or bulk excitations, their leakage can be limited by impedance mismatch such as in suspended silicon waveguides and disks [34][35][36]42,44] and silica microtoroids [115]. (c) In contrast to optical fields, mechanical waves can be trapped by surface perturbations that soften the elastic response such as in case of Rayleigh surface waves [2], silicon fins on silica [40,41], and allsilicon surface perturbations [37][38][39]51]. (d) Finally, phonons can be trapped to line or point defects in periodic structures with a phononic bandgap such as in silicon optomechanical crystals [52,114], line defects [57], and bulls-eye disks [54]. Many structures harness a combination of these mechanisms. perturbation. The states just below and above the bandgap can be tuned by locally and smoothly modifying geometric properties of the lattice, resulting in the formation of line or point defects. This technique is pervasive in photonic crystals [43] and was adapted to the mechanical case in the last decade [45][46][47][48][49][50]. 
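As a rough numerical illustration of the half-wavelength condition described above, the snippet below estimates the lattice period a ≈ Λ/2 = v/(2f) that places a phonon of a chosen frequency at the X-point. The 5 GHz target and the ~5000 m/s effective sound speed are illustrative assumptions rather than parameters of a specific demonstrated device.

```python
# Estimate the lattice period that folds a phonon of frequency f to the X-point,
# where K = pi/a, i.e. the period is roughly half the acoustic wavelength.
# The frequency and effective sound speed below are illustrative assumptions.

def xpoint_period(frequency_hz, sound_speed_m_s):
    """Period a = Lambda/2 = v / (2 f) that places the phonon at K = pi/a."""
    return sound_speed_m_s / (2.0 * frequency_hz)

f_m = 5e9       # target phononic frequency [Hz] (assumed)
v_m = 5000.0    # effective sound speed in the patterned slab [m/s] (assumed)

a = xpoint_period(f_m, v_m)
print(f"period a ~ {a*1e9:.0f} nm for a {f_m/1e9:.0f} GHz phonon")
# ~500 nm: consistent with the sub-micron periods of gigahertz phononic
# and optomechanical crystals.
```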
This led to the demonstration of optomechanical crystals that 3D-confine both photons and gigahertz phonons to wavelength-scale suspended silicon nanobeams [52,53,55]. In these experiments, confinement in one or two dimensions was obtained by periodic patterning of a bandgap structure, while in the remaining dimension, confinement is due to the material being removed to obtain a suspended beam or film. Conflicting demands similar to those discussed in Section 2.B complicate the design of simultaneous photonic-phononic bandgap structures [50]. For example, a hexagonal lattice of circular holes in a silicon slab, as is often used in photonic bandgap cavities and waveguides, does not lead to a full phononic bandgap. Conversely, a rectangular array of cross-shaped holes in a slab, as has been used to demonstrate full phononic bandgaps in silicon and other materials, does not support a photonic bandgap. Nonetheless, both one-dimensional [55] and two-dimensional crystals [52] with simultaneous photonic and phononic gaps have been proposed and demonstrated in technologically relevant material systems. In addition, full bandgaps are ideal [56] but not strictly necessary for good confinement as long as there is strong reflectivity within the momentum-distribution associated with the confined excitations and the disorder [43]. Beyond enabling 3D-confined wavelength-scale phononic cavities, phononic bandgaps also support waveguides or wires, which are 2D-confined defect states. These have been realized in silicon slabs with a pattern of cross-shaped holes supporting a full phononic bandgap, with an incorporated line defect within the bandgap material [57][58][59][60]. Robustness to scattering is particularly important to consider in such nano-confined guided wave structures, since as in photonics, intermodal scattering due to fabrication imperfections increases with decreasing crosssectional area of the guided modes [61]. Single-mode phononic wires are intrinsically more robust, as they remove all intermodal scattering except backscattering. They have been demonstrated to allow robust and low-loss phonon propagation over millimeterlength scales [60]. Multi-and single-mode phononic waveguides are currently considered as a means of generating connectivity and functionality in chip-scale solid-state quantum emitter systems using defects in diamond [62,63]. E. Other Confinement Mechanisms The above mechanisms for confinement cover many, if not most, current systems. However, there are alternative mechanisms for photonic and phononic confinement, including but not limited to: bound states in the continuum [64][65][66], Anderson localization [67,68], and topological edge states [69,70]. We do not cover these approaches here. F. Phononic Dissipation Phononic confinement, propagation losses, and lifetimes are limited by various imperfections such as geometric disorder [35,52,60,[71][72][73], thermo-elastic and Akhiezer damping [26,74,75], two-level systems [76][77][78], and clamping losses [34,79,80]. Losses in 2D-confined waveguides are typically quantified by a propagation length L m α −1 m with α m the propagation loss. In 3D-confined cavities, one usually quotes linewidths γ or quality factors Q m ω m ∕γ. A cavity's internal loss rate can be computed from the decay length L m through γ v m α m in high-finesse cavities with negligible bending losses [81] with v m the mechanical group velocity. 
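The conversions between propagation loss, decay length, linewidth, and quality factor quoted above (L_m = 1/α_m, γ = v_m α_m, Q_m = ω_m/γ) are easy to mix up. The sketch below walks through them once for an illustrative 5 GHz guided phonon with an assumed 3 dB/cm propagation loss and 5000 m/s group velocity; both numbers are placeholders, not measured values, and the power-decay convention for α_m is also an assumption.

```python
import math

# Convert a phononic propagation loss into a decay length, linewidth, and Q,
# using L_m = 1/alpha_m, gamma = v_m * alpha_m and Q_m = omega_m / gamma.
# The loss, frequency and group velocity below are illustrative assumptions.

loss_dB_per_cm = 3.0          # assumed propagation loss
v_m = 5000.0                  # assumed mechanical group velocity [m/s]
f_m = 5e9                     # assumed mechanical frequency [Hz]

alpha_m = loss_dB_per_cm / (10 * math.log10(math.e)) * 100.0  # [1/m], power-decay convention
L_m = 1.0 / alpha_m                                           # decay length [m]
gamma = v_m * alpha_m                                         # energy decay rate [1/s]
Q_m = 2 * math.pi * f_m / gamma                               # quality factor

print(f"alpha_m = {alpha_m:.1f} 1/m, L_m = {L_m*100:.1f} cm")
print(f"gamma/2pi = {gamma/(2*math.pi)/1e3:.0f} kHz, Q_m = {Q_m:.2e}")
```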
Mechanical propagation lengths in bulk crystalline silicon are limited to L m ≈ 1 cm at room temperature and at a frequency of ω m ∕2π 1 GHz by thermo-elastic and Akhiezer damping. Equivalently, taking v m ≈ 5000 m∕s, one can expect material-limited minimum linewidths of γ∕2π ≈ 0.1 MHz and maximum quality factors of Q m ≈ 10 4 [26,74] at ω m ∕2π 1 GHz. Generally, crystalline materials have better intrinsic loss limits than polycrystalline and amorphous materials, while insulators have lower loss than semiconductors and metals [26]. These limits deteriorate rapidly at higher frequencies, typically scaling as L m ∝ ω −2 m and Q m ∝ ω −1 m [26,74,78] or worse. This makes the f m · Q m product a natural figure of merit for mechanical systems. For gigahertz-frequency resonators at room temperature, the highest demonstrated values of f m · Q m are on the order of 10 13 in several materials [82]. Intriguingly, the maximum length of time that a quantum state can persist inside a mechanical resonator with quality factor Q m at temperature T is given by t decoherence ℏQ m kT , and so requiring that the information survive for more than a mechanical cycle is equivalent to the condition t decoherence > ω −1 m , or f m · Q m > 6 × 10 12 Hz at room temperature [33]. This is usually seen as a necessary condition for optomechanics in the quantum regime, although pulsed measurements can relax this in some situations [83,84]. Recently, new loss-mitigation mechanisms called "dissipation dilution," "strain engineering," and "soft clamping" have been invented for megahertz mechanical resonators under tension that enable mechanical quality factors and f m · Q m products beyond 10 8 and 10 15 Hz, respectively, under high vacuum but without refrigeration [85][86][87]. This unlocks exciting new possibilities for quantum-coherent operations at room temperature. These approaches are challenging to extend to stiff gigahertz mechanical modes as they require the elastic energy to be dominantly stored in the tension [88]. Finally, many material loss processes, with the possible exception of two-level systems [76,77,89], vanish rapidly at low temperatures (Section 4). Despite impressive progress, the ultimate limits to phononic confinement are unknown and under active study (Section 4). Sidewall roughness and disorder pose a major roadblock in exploring these limits in the context of integrated phononics (Section 5.E). PHOTON-PHONON INTERACTIONS In this section, we describe the key mechanisms underpinning the coupling between photons and phonons. Photon-phonon interactions occur via two main mechanisms: • Parametric coupling [ Fig. 3(a)]: two photons and one phonon interact with each other in a three-wave mixing process as in Brillouin and Raman scattering and optomechanics, where the latter includes capacitive electromechanics. • Direct coupling [ Fig. 3(b)]: one photon and one phonon interact with each other directly as in piezoelectrics. This requires photons with a small frequency, as in the case of interactions between microwave photons and phonons. The parametric three-wave mixing takes place via two routes: • Difference-frequency driving (DFD): two photons with frequencies ω and ω 0 drive the mechanical system through a beat note at frequency ω − ω 0 Ω ≈ ω m in the forces. • Sum-frequency driving (SFD): two photons with frequencies ω and ω 0 drive the mechanical system through a beat note at frequency ω ω 0 Ω ≈ ω m in the forces. 
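The room-temperature coherence threshold quoted above follows directly from t_decoherence = ℏQ_m/(k_B T); a minimal check using only fundamental constants is sketched below, with the 1 GHz mode and Q_m ≈ 10⁴ taken as the representative bulk-silicon values mentioned in this section.

```python
import math

# Check the room-temperature quantum-coherence condition
#   f_m * Q_m > k_B T / (2 pi hbar),
# which follows from requiring t_decoherence = hbar*Q_m/(k_B*T) > 1/omega_m.

hbar = 1.054_571_8e-34   # [J s]
kB = 1.380_649e-23       # [J/K]
T = 300.0                # room temperature [K]

fQ_threshold = kB * T / (2 * math.pi * hbar)
print(f"f_m * Q_m threshold at 300 K: {fQ_threshold:.2e} Hz")   # ~6e12 Hz

# Example: a 1 GHz mode with the material-limited Q_m ~ 1e4 quoted above.
f_m, Q_m = 1e9, 1e4
t_dec = hbar * Q_m / (kB * T)
print(f"t_decoherence = {t_dec*1e9:.2f} ns, 1/omega_m = {1e9/(2*math.pi*f_m):.2f} ns")
# t_decoherence barely exceeds 1/omega_m here, consistent with
# f_m * Q_m = 1e13 sitting just above the ~6e12 Hz threshold.
```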
Three-wave DFD is the only possible mechanism when the photons and phonons have a large energy gap, as in interactions between phonons and optical photons. In contrast, microwave photons can interact with phonons through any of the three-wave and direct processes. A. Interactions Between Phonons and Optical Photons Parametric DFD in a cavity is generally described by an interaction Hamiltonian of the form (see Supplement 1): with ∂ x ω o the sensitivity of the optical cavity frequency ω o to mechanical motion x and a the photonic annihilation operator. The terminology "parametric" refers to the parameter ω o , essentially the photonic energy, being modulated by the mechanical motion [90][91][92][93], whereas the term "three-wave mixing" points out that there are three operators in the Hamiltonian given by Eq. (1). This does not restrict the interaction to only three waves, as discussed further on. Describing the Hamiltonian H int in this manner is a concise way of capturing all the consequences of the interaction between the electromagnetic field a and the mechanical motion x. The detailed dynamics can be studied via the Heisenberg equations of motion defined by _ a − i ℏ a, H int when making use of the harmonic oscillator commutator a, a † 1 [33]. Since by definition x x zp δb δb † with x zp the mechanical zero-point fluctuations and δb the phonon annihilation operator, this is equivalent to with the zero-point optomechanical coupling rate, which quantifies the shift in the optical cavity frequency ω o induced by the zero-point fluctuations x zp of the mechanical oscillator. Here we neglect the static mechanical motion [33,94,95]. Achieving large g 0 thus generally requires small structures with large sensitivity ∂ x ω o and zero-point motion x zp ℏ∕2ω m m eff 1∕2 , where m eff is the effective mass of the mechanical mode. This is brought about by ensuring a good overlap between the phononic field and the photonic forces acting on the mechanical system [34,96,97] and by focusing the photonic and phononic energy into a small volume to reduce m eff . There are typically separate bulk and boundary contributions to the overlap integral. The bulk contribution is associated with photoelasticity, while the boundary contribution results from deformation of the interfaces between materials [96][97][98][99]. Achieving strong interactions requires careful engineering of a constructive interference between these contributions [34,[96][97][98]100]. Optimized nanoscale silicon structures with mechanical modes at gigahertz frequencies typically have x zp ≈ 1 fm and g 0 ∕2π ≈ 1 MHz (Section 4). The zero-point fluctuation amplitude increases with lower frequency, leading to an increase in g 0 : megahertz-frequency mechanical systems with g 0 ∕2π ≈ 10 MHz have been demonstrated [101]. The dynamics generated by the Hamiltonian of Eq. (2) can lead to a feedback loop. The beat note between two photons with slightly different frequencies ω and ω 0 generates a force that drives phonons at frequency ω − ω 0 Ω. Conversely, phonons modulate, at frequency Ω, the optical field, scattering photons into upand downconverted sidebands. This feedback loop can amplify light or sound, lead to electromagnetically induced transparency, or cooling of mechanical modes. In principle, this interaction can cause nonlinear interactions at the few-photon or phonon limit if g 0 ∕κ > 1 [102], though current solid-state systems are more than 2 orders of magnitude away from this regime (see Fig. 5 and Section 5.A). 
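To put numbers on the scaling just described, the sketch below evaluates x_zp = (ℏ/(2ω_m m_eff))^(1/2) and g_0 = ∂_x ω_o · x_zp for an illustrative wavelength-scale silicon cavity. The effective mass and the frequency-pull parameter ∂_x ω_o ≈ ω_o/L are assumed round numbers chosen only to reproduce the orders of magnitude quoted in the text, not parameters of a specific device.

```python
import math

# Order-of-magnitude estimate of the zero-point motion and coupling rate
#   x_zp = sqrt(hbar / (2 * omega_m * m_eff)),   g_0 = d(omega_o)/dx * x_zp.
# m_eff and the frequency-pull parameter below are illustrative assumptions.

hbar = 1.054_571_8e-34          # [J s]
f_m = 5e9                       # mechanical frequency [Hz] (assumed)
omega_m = 2 * math.pi * f_m
m_eff = 1e-16                   # assumed effective mass, ~100 fg

x_zp = math.sqrt(hbar / (2 * omega_m * m_eff))
print(f"x_zp = {x_zp*1e15:.1f} fm")          # femtometer scale, as quoted

# Assume the optical resonance (~193 THz) shifts by roughly its own frequency
# per cavity length of displacement: d(omega_o)/dx ~ omega_o / L_cav.
omega_o = 2 * math.pi * 193e12
L_cav = 1e-6                    # assumed ~1 um cavity scale
g0 = (omega_o / L_cav) * x_zp
print(f"g_0/2pi = {g0/(2*math.pi)/1e6:.1f} MHz")   # ~MHz scale, as quoted
```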
Assuming g 0 ∕κ ≪ 1, valid in nearly all systems, we linearize the Hamiltonian of Eq. (2) by setting a α δa with α a classical, coherent pump amplitude, yielding with g g 0 α the enhanced interaction rate-taking α real-and δa and δb the annihilation operators representing photonic and phononic signals, respectively. Often there are experimental conditions that suppress a subset of interactions present in Interactions between phonons and high-frequency photons occur through parametric threewave mixing: two high-frequency photons couple to one phonon via third-order nonlinearities such as photoelasticity and the movingboundary effect [96,97]. Interactions between low-frequency photons and a phonon can also occur through these mechanisms. Depending on which of the three waves is pumped, the interaction results in either down/upconversion (δaδb h:c:) or state-swapping (δaδb † h:c:) events. The frequency difference between the two high-frequency photons ω − ω 0 must approximately equal the phononic frequency Ω for efficient parametric interactions to occur (left). In addition, in structures with translational symmetry, the wave vector difference between the two high-frequency photons β − β 0 must also approximately equal the phononic wave vector K for efficient coupling (right). Here we depict only DFD; SFD proceeds analogously but with minus signs replaced by plus signs. (b) Direct conversion via second-order nonlinearities such as piezoelectricity is possible when the photonic energy is sufficiently low to match the phononic energy. Stronger mechanical waves can typically be generated by direct conversion than by indirect mixing of two optical waves (Section 5.B). A microwave photon can be converted into a phonon and subsequently into an optical photon by cascading two of these processes: either with one direct and one indirect process or with two indirect processes. Review Article Hamiltonian (4). For instance, in sideband-resolved optomechanical cavities (ω m > κ), a blue-detuned pump α sets up an entangling interaction, that creates or annihilates photon-phonon pairs. Similarly, a reddetuned pump α sets up a beam-splitter interaction: that converts photons into phonons or vice versa. This beam-splitter Hamiltonian can also be realized by pumping the phononic instead of the photonic mode. In that case, α represents the phononic pump amplitude, whereas both δa and δb are then photonic signals. In multimode systems, such as in 3D-confined cavities with several modes or in 2D-confined continuum systems, the interaction Hamiltonian is a summation or integration over each of the possible interactions between the individual photonic and phononic modes. For instance, linearized photon-phonon interactions in a 2D-confined waveguide with continuous translational symmetry are described by [103][104][105]: In this case, the three-wave mixing interaction rate g βK g 0jβK α ⋆ βK is proportional to the amplitude α βK of the mode with wave vector β K , which is usually considered to be pumped strongly. In contrast to the single-mode cavity described by Eq. (4), in the waveguide case, the symmetry between the two-mode-squeezing δaδb and the beam-splitter δaδb † terms is broken by momentum selection from the onset as generally g βK ≠ g β−K . The Hamiltonian of Eq. (7) assumes an infinitely long waveguide where phase-matching is strictly enforced. 
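As a concrete illustration of what strict phase-matching implies, the snippet below evaluates the textbook backward-Brillouin case, where the phonon must carry wave vector K = β − β′ ≈ 2β, pinning its frequency at Ω/2π = 2 n v_a/λ. The refractive index and acoustic velocity used are standard silica-fiber values at 1550 nm assumed for illustration, not numbers from this review.

```python
import math

# Phase-matching in backward Brillouin scattering: the phonon wave vector must
# equal beta - beta' ~ 2*beta, which pins the phonon frequency at
#   Omega/2pi = 2 * n * v_a / lambda.
# The silica-fiber parameters below are textbook values (assumptions).

n = 1.45            # effective refractive index
v_a = 5960.0        # longitudinal acoustic velocity in silica [m/s]
lam = 1550e-9       # optical wavelength [m]

f_B = 2 * n * v_a / lam
K = 2 * (2 * math.pi * n / lam)          # phonon wave vector ~ 2*beta [1/m]

print(f"Brillouin shift f_B = {f_B/1e9:.1f} GHz")        # ~11 GHz, the familiar fiber value
print(f"phonon wavelength = {2*math.pi/K*1e9:.0f} nm")   # ~ lambda/(2n), sub-micron
```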
In contrast, a finite-length waveguide allows for interactions between a wider set of modes, although it suppresses those with a large phase mismatch (see Supplement 1). In essence, shorter waveguides permit larger violations of momentum conservation. The momentum selectivity enables nonreciprocal transport of both photons [106][107][108][109] and phonons [45,110]-in a continuum version of interference-based synthetic magnetism schemes using discrete optomechanical elements [59,111]. It also allows for sideband resolution even when the optical linewidth far exceeds the mechanical frequency [112]. Cavities can be realized by coiling up or terminating a 2Dconfined waveguide with mirrors. Then the cavity's optomechanical coupling rate g 0 is connected to the waveguide's coupling rate g 0jβK by with L the roundtrip length of the cavity (see Supplement 1). The parameters g 0jβK and g 0 are directly related to the so-called Brillouin gain coefficient G B that is often used to quantify photon-phonon interactions in waveguides [34,36,113]. In particular [81], with v p and v s the group velocities of the interacting photons, ℏω the photon energy, and γ the phononic decay rate. Equations (8) and (9) enable comparison of the photon-phonon interaction strengths of waveguides and cavities. Since this gain coefficient depends on the mechanical quality factor Q m via γ ω m ∕Q m , it is occasionally worth comparing waveguides in terms of the ratio G B ∕Q m . The g 0 ∕2π ≈ 1 MHz measured in silicon optomechanical crystals [114] is via Eq. (9), in correspondence with the G B ∕Q m ≈ 10 W −1 m −1 measured in silicon nanowires at slightly higher frequencies [34,35]. Both g 0 and G B ∕Q m have an important dependence on mechanical frequency ω m : lower-frequency structures are generally more flexible and thus generate larger interaction rates. The Hamiltonians given in Eqs. (5), (6), and (7) describe a wide variety of effects. The detailed consequences of the threewave mixing depend on the damping, intensity, dispersion, and momentum of the interacting fields. Next, we describe some of the potential dynamics. We quantify the dissipation experienced by the photons and phonons with decay rates κ and γ, respectively. The following regimes appear: • Weak coupling: g ≪ κ γ. The phonons and photons can be seen as independent entities that interact weakly. A common figure of merit for the interaction is the cooperativity C 4g 2 ∕κγ, which quantifies the strength of the feedback loop discussed above. In particular, for C ≫ 1, the optomechanical back action dominates the dynamics. The pair-generation Hamiltonian (5) generates amplification, whereas the beamsplitter interaction (6) generates cooling and loss. Whether the phonons or the photons dominantly experience this amplification and loss depends on the ratio κ∕γ of their decay rates. The linewidth of the phonons is effectively 1 Cγ when κ ≪ γ, where the minus-sign in holds for the amplification case (Hamiltonian 5). In contrast, the linewidth of the photons is effectively 1 Cκ when γ ≪ κ. A lasing threshold is reached for the phonons or the photons when C 1. In waveguide systems described by Eq. (7), C 1 is equivalent to the transparency point G B P p ∕α 1, with P p the pump power and α the waveguide propagation loss per meter. In fact, interactions between photons and phonons in a waveguide can also be captured in terms of a cooperativity that is identical to C under only weakly restrictive conditions [81]. • Strong coupling: g ≫ κ γ. 
The phonons and photons interact so strongly that they can no longer be considered independent entities. Instead, they form a photon-phonon polariton with an effective decay rate κ γ∕2. The beam-splitter interaction (6) sets up Rabi oscillations between photons and phonons with a period of 2π∕g [33,115,116]. This is a necessary requirement for broadband intracavity state swapping, but is not strictly required for narrowband itinerant state conversion [117,118]. Strong coupling has been demonstrated in several systems [115,116,[119][120][121], but not yet in the single-photon regime (Section 5.A). Neglecting dynamics and when the detuning from the mechanical resonance is large (ΔΩ ≫ γ), the phonon ladder operator is δb g 0 ∕ΔΩa † a such that Hamiltonian (2) generates an effective dispersive Kerr nonlinearity described by This effective Kerr nonlinearity [122][123][124][125][126] is often much stronger than the intrinsic material nonlinearities. Thus, a single optomechanical system can mediate efficient and tunable interactions between up to four photons in a four-wave mixing process that annihilates and creates two photons. The mechanics enhances the intrinsic optical material nonlinearities for applications such Review Article as wavelength conversion [34,127,128]. Generally, such enhancements come at the cost of reduced bandwidth compared to intrinsic material nonlinearities. However, multimode effects can enable bandwidths far exceeding the intrinsic mechanical linewidth. For instance, a 2D-confined structure has a nearcontinuum set of mechanical modes. In a properly engineered structure, each of these modes might provide strong photonphonon interactions. Additional dynamical effects exist in the multimode case. For instance, in a waveguide described by Eq. (7), there is a spatial variation of the photonic and phononic fields that is absent in the optomechanical systems described by Eq. (4). This includes: • The steady-state spatial Brillouin amplification of an optical sideband. This has been the topic of recent research in chip-scale photonic platforms. One can show that an optical Stokes sideband experiences a modified propagation loss 1 − Cα, with C G B P p ∕α the waveguide's cooperativity [81]. This Brillouin gain or loss is accompanied by slow or fast light [129,130]. Here we assumed an optical decay length exceeding the mechanical decay length, which is valid in nearly all systems. In the reverse case, the mechanical wave experiences a modified propagation loss 1 − Cα m , and there is slow and fast sound [81,131,132]. • Traveling photonic pulses can be converted into traveling phononic pulses in a bandwidth surpassing the mechanical linewidth. This is often called Brillouin light storage [133][134][135][136]. The traveling optical pump and signal pulses may counterpropagate or occupy different optical modes. Several of these and other multimode effects have received little attention so far. This may change with the advent of new nanoscale systems realizing multimode and continuum Hamiltonians such as given in Eq. (7) with strong coupling rates [34,36,103,104,137]. B. Interactions Between Phonons and Microwave Photons The above Section 3.A on parametric three-wave DFD also applies to interactions between phonons and microwave photons. However, microwave photons may interact with phonons via two additional routes: (1) three-wave SFD and (2) direct coupling. 
In three-wave SFD, two microwave photons with a frequency below the phonon frequency ω m excite mechanical motion at the sum-frequency ω ω 0 Ω ≈ ω m [4]. Such interactions can be realized in capacitive electromechanics, where the capacitance of an electrical circuit depends on mechanical motion. In particular, the capacitive coupling sets up an interaction, with ∂ x C the sensitivity of the capacitance Cx to the mechanical motion x and V the voltage across the capacitor. In terms of ladder operators, we have V V zp a a † and x x zp δb δb † such that Here the zero-point voltage is V zp ℏω μ ∕2C 1∕2 , with ω μ the microwave frequency and C the total capacitance. This interaction contains three-wave DFD (Hamiltonian 2) as a subset via the a † a term with an interaction rate g 0 given by In addition to three-wave DFD, it also contains three-wave SFD via the aa and a † a † terms. These little-explored terms enable electromechanical interactions beyond the canonical three-wave DFD optomechanical and Brillouin interactions. Similar reasonings can be developed for inductively coupled mechanical resonators [138]. Further, by applying a strong bias voltage V b , the capacitive interaction gets linearized: using V V b δV and keeping only the 2V b δV term in V 2 yields With δV V zp δa δa † , this generates an interaction, which is identical to the linearized optomechanics Hamiltonian in expression (4) with an interaction rate set by that is enhanced with respect to g 0 by g g 0 α and α V b ∕V zp the enhancement factor. The linearized Hamiltonian (15) realizes a tunable, effective piezoelectric interaction that can directly convert microwave photons into phonons and vice versa. Piezoelectric structures are described by Eq. (15) as well with an intrinsically fixed bias V b determined by material properties. The electromechanical coupling rate can be written as or, alternatively, as g 0 ∂ x ω μ x zp -precisely as in Section 3.A but with the optical frequency ω o replaced by the microwave frequency ω μ with ω μ 1∕ ffiffiffiffiffiffiffiffiffi ffi L in C p and L in the circuit's inductance. Typically the capacitance C C m x C p consists of a part that responds to mechanical motion C m x and a part C p that is fixed and usually considered parasitic. This leads to with η p C m ∕C the participation ratio that measures the fraction of the capacitance responding to mechanical motion. For the canonical parallel-plate capacitor with electrode separation s, we have ∂ x C m C m ∕s such that g 0 −η p x zp ω μ ∕2s. This often drives research towards small structures with large zero-point motion x zp and small electrode separation s. Contrary to the optical case, however, increasing the participation ratio η p motivates increasing the size and thus the motional capacitance C m of the structures until η p ≈ 1. Finally, 3D-confined gigahertz mechanical modes have small mode volumes and motional capacitances C m . They are difficult to match to common microwave circuits. This can be addressed by developing circuits with a small parasitic capacitance C p and a large inductance L in [119,[139][140][141][142]. In gigahertz-range microwave circuits with unity participation and electrode separations on the order of s ≈ 100 nm, we have g 0 ∕2π ≈ −10 Hz, about a factor ω o ∕ω μ ≈ 10 5 smaller than the optomechanical g 0 ∕2π ≈ 1 MHz (Section 3.A). 
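A quick sanity check of the parallel-plate estimate above, g_0 = −η_p x_zp ω_μ/(2s), is sketched below. The 100 nm gap and femtometer-scale zero-point motion follow the text, while the unity participation ratio and 5 GHz operating frequency are assumptions.

```python
import math

# Parallel-plate electromechanical coupling estimate g_0 = -eta_p * x_zp * omega_mu / (2 s).
# Gap and zero-point motion follow the text; participation and frequency are assumptions.

eta_p = 1.0                  # participation ratio (assumed unity)
x_zp = 1e-15                 # zero-point motion ~1 fm, as quoted for GHz modes
s = 100e-9                   # electrode separation [m]
f_mu = 5e9                   # microwave frequency [Hz] (assumed, gigahertz range)
omega_mu = 2 * math.pi * f_mu

g0 = -eta_p * x_zp * omega_mu / (2 * s)
print(f"g_0/2pi = {g0/(2*math.pi):.1f} Hz")
# Tens of hertz in magnitude, roughly a factor omega_o/omega_mu ~ 1e5 below
# the ~MHz optomechanical coupling rates quoted in Section 3.A.
```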
Despite the much smaller g 0 , it is still possible to achieve large cooperativity C 4g 2 ∕κγ in electromechanics, as the typical microwave linewidths are much smaller and the enhancement factors α can be larger than in the optical case [139,141,142]. STATE OF THE ART Here we give a concise overview of the current state of the art in opto-and electromechanical systems by summarizing the parameters obtained in about 50 opto-and electromechanical cavities and waveguides. The figures are not exhaustive. They are meant to give a feel for the variety of systems in the field. First, we plot the mechanical quality factors as a function of mechanical frequency (Fig. 4), including room temperature (red) and cold (blue) systems. As discussed in Section 2.F, cold systems usually reach much higher quality factors. The current record is held by a 5 GHz silicon optomechanical crystal with Q m > 10 10 , yielding a lifetime longer than a second [143] at millikelvin temperatures. Measuring these quality factors requires careful optically pulsed readout techniques, as the intrinsic dissipation of continuouswave optical photons easily heats up the mechanics, thus destroying its coherence [144]. Comparably high quality factors are measured electrically in quartz and sapphire at lower frequencies [145,148]. It is an open question whether these extreme lifetimes have reached intrinsic material limits. The long lifetimes make mechanical systems attractive for delay lines and qubit storage [150] (Section 5). Figure 4 displays a rather weak link between mechanical frequency and quality factor. It also shows that 2Dconfined systems have relatively low mechanical coherence so far, likely related to inhomogeneous effects [71]. Next, we look at the coupling strengths in these systems (Fig. 5). As discussed in Section 3, a few different figures of merit are commonly used, depending on the type of system. We believe the dimensionless ratios g 0 ∕κ and the cooperativity C are two of the most powerful figures of merit (Section 5). The ratio g 0 ∕κ determines the single-photon nonlinearity, the energy-per-bit in optical modulators, as well as the energy-per-qubit in microwave-to-optical photon converters. The cooperativity C must be unity for efficient state conversion as well as for phonon and photon lasing. In the context of waveguides, it measures the maximum Brillouin gain as C G B P p ∕α [81]. Thus we compute g 0 ∕κ for about 50 opto-and electromechanical cavities and waveguides [ Fig. 5(a)]. We convert the waveguide Brillouin coefficients G B to g 0 via expressions (8) and (9) by estimating the minimum roundtrip length L a cavity made from the waveguide would have. In addition, we convert the waveguide propagation loss α to the intrinsic loss rate κ in αv g with v g the group velocity. This brings a diverse set of systems together in a single figure. No systems exceed g 0 ∕κ ≈ 0.01, with the highest values obtained in silicon optomechanical crystals [55,114], Brillouin-active waveguides [34,35,158], and Raman cavities [164]. There is no strong relation between g 0 ∕κ and C: systems with low interactions rates g 0 often have low decay rates κ and γ as well, since they do not have quite as stringent fabrication requirements on the surface quality. The absolute zero-point coupling rates g 0 illustrate the power of moving to the nanoscale. We plot them as a function of the maximum quantum cooperativity C q C∕n th withn th the thermal phonon occupation [ Fig. 5(b)]. 
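The cooperativity and quantum cooperativity just introduced are easy to evaluate for a hypothetical waveguide: below, C = G_B P_p/α is computed for assumed gain, pump power, and optical loss, and then converted to a room-temperature quantum cooperativity C_q = C/n_th. All parameter values are illustrative placeholders rather than data from the figures.

```python
import math

# Waveguide cooperativity C = G_B * P_p / alpha and quantum cooperativity C_q = C / n_th.
# All parameter values below are illustrative assumptions, not measured data.

hbar = 1.054_571_8e-34
kB = 1.380_649e-23

G_B = 3000.0                                # Brillouin gain coefficient [1/(W m)] (assumed)
P_p = 0.03                                  # pump power [W] (assumed)
loss_dB_per_cm = 4.0                        # optical propagation loss (assumed)
alpha = loss_dB_per_cm / (10 * math.log10(math.e)) * 100.0   # [1/m]

C = G_B * P_p / alpha
print(f"C = {C:.2f}  (C = 1 is the transparency point)")

# Thermal phonon occupation of a 10 GHz mode at room temperature.
f_m, T = 10e9, 300.0
n_th = 1.0 / math.expm1(hbar * 2 * math.pi * f_m / (kB * T))
print(f"n_th = {n_th:.0f},  C_q = C/n_th = {C/n_th:.1e}")
# C can approach unity at room temperature while C_q stays orders of magnitude below it.
```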
When C q > 1, the state transfer between photons and phonons takes place more rapidly than the mechanical thermal decoherence [33]. This is a requirement for hybrid quantum systems such as efficient microwave-tooptical photon converters (Section 5). There are several chip-scale electro-and optomechanical systems with C q > 1, with promising values demonstrated in silicon photonic crystals. An important impediment to large quantum cooperativities in optomechanics is the heating of the mechanics caused by optical absorption [144]. Further, we give an overview of the Brillouin coefficients G B found in 2D-confined waveguides [ Fig. 5(c)]. The current record G B 10 4 W −1 m −1 in the gigahertz range was measured in a suspended series of silicon nanowires [35]. However, larger Brillouin amplification was obtained with silicon and chalcogenide rib waveguides, which have disproportionately lower optical propagation losses α and can handle larger optical pump powers P p [158,165]. We stress that the maximum Brillouin gain is identical to the cooperativity [81]. They are both limited by the maximum power and electromagnetic energy density the system in question can withstand. At room temperature in silicon, the upper limit is usually set by two-photon and free-carrier absorption [34,36,99]. Moving beyond the two-photon bandgap of 2200 nm in silicon or switching to materials such as silicon nitride, lithium niobate, or chalcogenides can drastically improve the power handling [99,166,167,171,172]. In cold systems, it is instead set by the cooling power of the refrigerator and the heating of the mechanical system [144]. Another challenge for 2D-confined waveguides is the inhomogeneous broadening of the mechanical resonance. This arises from atomic-scale fluctuations in the waveguide geometry along its length, effectively smearing out the mechanical response [34][35][36]71]. In 2D-confined systems consisting of a series mechanically active sections [34,36], one must ensure that each section is sufficiently long to let the mechanical mode build up [174]. Review Article Finally, compared to gigahertz systems, flexible megahertz mechanical systems give much higher efficiencies of G B ≈ 10 6 W −1 m −1 as measured in dual-nanoweb [176] fibers and of G B ≈ 10 9 W −1 m −1 , as predicted in silicon double-slot waveguides [112]. In contrast to photonics, phononic systems have operating frequencies varying over many orders of magnitude: from kilohertz to gigahertz with acoustic phonons, and even terahertz with optical phonons. This is accompanied by great diversity in the mechanical structures. The choice of mechanical operating frequency can be influenced by many factors, including but not limited to the ability to passively freeze out thermal motion [177], to achieve spectral sideband resolution [178], large f m · Q m products [85,179], large zero-point motion x zp [101,181,182], fast response [123], or better sensitivity [183]. The balance between the various trade-offs must be found case by case. Tightly confined gigahertz modes have attractive properties for low-energy communications, as discussed further on. PERSPECTIVES A. Single-Photon Nonlinear Optics The three-wave mixing interactions discussed in Section 3 in principle enable single-photon nonlinear optics in opto-and electromechanical systems [102,184,185]. 
For instance, in the photon blockade effect, a single incoming photon excites the motion of a mechanical system in a cavity, which then shifts the cavity resonance and thus blocks the entrance of another photon. Realizing such quantum nonlinearities sets stringent requirements on the interaction strengths and decay rates. For instance, in an optomechanical cavity, the force exerted by a single photon is ⟨F⟩ = −⟨∂_x H_int⟩ = −(ℏg_0/x_zp)⟨a†a⟩ = −ℏg_0/x_zp. To greatly affect the optical response seen by another photon impinging on the cavity, this force must drive a mechanical displacement that shifts the optical resonance by about a linewidth κ, or x_π = κ/∂_x ω_o = (κ/g_0) x_zp. In other words, we require F/(m_eff ω_m²) ≈ x_π, which leads to ϑ_cav ≡ 4g_0²/(κω_m) ≈ π, where ϑ_cav is the mechanically mediated cross-phase shift experienced by the other photon, assuming critical coupling to the cavity. This extremely challenging condition is relaxed when two photonic modes with a frequency difference Δω roughly resonant with the mechanical frequency are used. In this case, the mechanical frequency can be replaced by the detuning from the mechanical resonance in the above expressions: ω_m → 2ΔΩ with the detuning ΔΩ = Δω − ω_m. This enhances the shift per photon so that quantum nonlinearities are realized at ϑ ≡ 2g_0²/(κΔΩ) ≈ π [185,186] with ΔΩ ≪ ω_m. The photon blockade effect also requires sideband resolution (ΔΩ > κ), which is thus generally a necessary condition for single-photon nonlinear optics with opto- and electromechanical cavities [33,187].

Fig. 5. (a) Cooperativities are typically highest in cold systems (blue) and can be as high in less tightly confined systems, since they often have lower photonic and phononic decay rates than the smallest systems. Cavities so far achieve higher cooperativities than waveguides. This may change if the 2D-confined waveguide systems can be studied at low temperatures and if they can overcome inhomogeneous broadening. (b) Zero-point coupling rate g_0 versus the maximum quantum cooperativity C_q = C/n_th, with n_th the thermal phonon occupation, for a selection of cold 3D-confined systems. It is significantly harder to reach C_q > 1 than C > 1. In particular, optical absorption easily heats up mechanical systems, effectively increasing n_th [144]. (c) Brillouin gain coefficient G_B versus maximum net Brillouin gain G_B P_p/α − 1 for a selection of 2D-confined waveguides. Up to extrinsic cavity losses, C − 1 = G_B P_p/α − 1. Achieving this maximum Brillouin gain requires the waveguide to have a length L = 1/α, with α the optical propagation loss per meter. However, this is challenging as longer waveguides can, but do not always, suffer from inhomogeneous broadening of the mechanical resonance due to atomic-scale disorder in the waveguide geometry. Inhomogeneous effects are typically weaker in less confined systems. The data points correspond to Refs. [30,34–36,41,42,52,55,72,86,87,97,101,114–116,119,139–141,143–147,149,151–164,166,168–170,173,175,180,232,251,270,298,299,311,313]. There was no requirement for sideband resolution in these figures.

In the case of 2D-confined waveguides, it can similarly be shown [137] that a single photon drives a mechanically mediated cross-Kerr phase shift ϑ_wg on another photon, which depends on the optical group velocity v_g (see Supplement 1).
The cross-Kerr phase shift ϑ wg can be enhanced drastically by reducing the group velocity v g via Brillouin slow light [130,137,188]. If sufficiently large, the phase shifts ϑ cav and ϑ wg can be used to realize controlled-phase gates between photonic qubits-an elementary building block for quantum information processors [137,[189][190][191]. Using Eq. (8), we have with F 2π∕κT rt the cavity finesse and T rt the cavity roundtrip time. Therefore, cavities generally yield larger single-photon cross-Kerr phase shifts than their corresponding optomechanical waveguides. Currently state-of-the-art solid-state and sideband-resolved (ω m > κ) opto-and electromechanical systems yield at best g 0 ∕κ ≈ 0.01 in any material (Fig. 5). Significant advances in g 0 may be made in, e.g., nanoscale-slotted structures [101,192,193], but it remains an open challenge to not only increase g 0 but also g 0 ∕κ by a few orders of magnitude [194]. Beyond exploring novel structures, other potential approaches include effectively boosting g 0 ∕κ by parametrically amplifying the mechanical motion [195], by employing delayed quantum feedback [196], or via collectively enhanced interactions in optomechanical arrays [197,198]. Although singlephoton nonlinear optics may be out of reach for now, many-photon nonlinear optics can be enhanced very effectively with mechanics. Specifically, mechanics realizes Kerr nonlinearities orders of magnitude beyond those of typical intrinsic material effects. This is especially so for highly flexible, low-frequency mechanical systems [122,123,199,200], but has been shown in gigahertz silicon optomechanical cavities and waveguides as well [34,127]. B. Efficient Optical Modulation Phonons provide a natural means for the spatiotemporal modulation of optical photons via electro-and optomechanical interactions. Hybrid circuits that marry photonic and phononic excitations give us access to novel opto-electromechanical systems. Two aspects of the physics make phononic circuits very attractive for the modulation of optical fields. First, there is excellent spatial matching between light and sound. As touched upon above, the wavelengths of microwave phonons and telecom photons are both about a micron in technologically relevant materials such as silicon. The matching follows from the 4 to 5 orders of magnitude difference between the speed of sound and the speed of light. Momentum conservation, i.e., phase-matching, between phonons and optical photons (as discussed in Section 3) is key for nonreciprocal nonlinear processes and modulation schemes with traveling phonons [59,106,108,201]. Second, the optomechanical nonlinearity is strong and essentially lossless. Small deformations can induce major changes on the optical response of a system. For instance, in an optomechanical cavity [Eq. (1)], the mechanical motion required to encode a bit onto a light field has an amplitude of approximately x π κ∕g 0 x zp . Generating this motion requires energy, and this corresponds to an energy-per-bit E bit m eff ω 2 m x 2 π ∕2, which we rewrite as Thus the energy-per-bit also depends on the dimensionless quantity g 0 ∕κ: a single phonon can switch a photon when this quantity reaches unity, in agreement with Section 5.A. For silicon optomechanical crystals with g 0 ∕κ ≈ 10 −3 , this yields E bit ≈ 1aJ∕bit: orders of magnitude more efficient than commonly deployed electro-optic technologies [11]. 
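The energy-per-bit estimate above can be reproduced in a few lines: with x_π = (κ/g_0) x_zp and m_eff ω_m² x_zp² = ℏω_m/2, the switching energy reduces to E_bit = (ℏω_m/4)(κ/g_0)². The sketch below evaluates this for the g_0/κ ≈ 10⁻³ quoted for silicon optomechanical crystals; the 5 GHz mechanical frequency is an assumed representative value.

```python
import math

# Energy needed to switch an optical cavity mechanically:
#   x_pi = (kappa/g_0) * x_zp  and  m_eff * omega_m^2 * x_zp^2 = hbar*omega_m/2
#   =>  E_bit = (hbar * omega_m / 4) * (kappa / g_0)**2.
# The frequency and the ratio g_0/kappa are representative values from the text.

hbar = 1.054_571_8e-34
f_m = 5e9                       # mechanical frequency [Hz] (assumed)
g0_over_kappa = 1e-3            # as quoted for silicon optomechanical crystals

E_bit = hbar * 2 * math.pi * f_m / 4 * (1.0 / g0_over_kappa) ** 2
print(f"E_bit = {E_bit*1e18:.1f} aJ/bit")   # ~1 aJ/bit, as stated in the text
```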
The similarity between the fundamental interactions in optomechanics [33] and electro-optics [202,203] allows one to compare the two types of modulation head-to-head. In particular, in an optical cavity made of an electro-optic material the voltage drop across the electrodes required to encode a bit is V π κ∕∂ V ω o κ∕g 0 V zp , with g 0 ∂ V ω o V zp the electro-optic interaction rate [202,203], which is defined analogously to the optomechanical interaction rate. It is the parameter appearing in the interaction Hamiltonian H int ℏg 0 a † ab b † , with b b † now proportional to the voltage across the capacitor of a microwave cavity [202,203]. The required V π corresponds to an energy-per-bit E bit CV 2 π ∕2, which again can be rewritten as expression (23). Electro-optic materials such as lithium niobate [204] may yield up to g 0 ∕2π ≈ 10 kHz, corresponding to an energy-per-bit E bit ≈ 10 fJ∕bit, keeping the optical linewidth κ constant-on the order of today's world records [11]. Although full system demonstrations using mechanics for electro-optic modulation are lacking, based on estimates like these we believe that mechanics will unlock highly efficient electrooptic systems. The expected much lower energy-per-bit implies that future electro-optomechanical modulators could achieve much higher bit rates at fixed power, or alternatively, much lower dissipated power at fixed bit rate than current direct electro-optic modulators. Although the mechanical linewidth does not enter expression (23), bandwidths of a single device are usually limited by the phononic quality factor or transit time across the device. Interestingly, the mechanical displacement corresponding to the estimated 1 aJ∕bit is only x π ≈ 10 pm. Here we highlighted the potential for optical modulation based on mechanical motion at gigahertz frequencies. However, similar arguments can be made for optical switching networks based on lower frequency mechanical structures. In particular, voltage-driven capacitive or piezoelectric optical phase-shifters exploiting mechanical motion do not draw static power and can generate large optical phase shifts in small devices [205][206][207][208][209][210][211][212]. These photonic microelectromechanical systems (MEMS) are thus an attractive elementary building block in reconfigurable and densely integrated photonic networks used for high-dimensional classical [11,[213][214][215][216] and quantum [217][218][219][220] photonic information processors. They may meet the challenging power and space constraints involved in running a complex programmable network. Demonstrating fully integrated acousto-optic systems requires that we properly confine, excite, and route phonons on a chip. Among the currently proposed and demonstrated systems are acousto-optic modulators [108], as well as optomechanical beam-steering systems [73,221]. Besides showing the power of sound to process light with minuscule amounts of energy, these phononic systems have features that are absent in competing approaches. For instance, gigahertz traveling mechanical waves with large momentum naturally enable nonreciprocal features in both modulators [108] and beam-steering systems [73]. In order to realize these and other acousto-optic systems, it is crucial to efficiently excite mechanical excitations on the surface of a chip. In this context, electrical excitation is especially promising, as it allows for stronger mechanical waves than optical excitation. 
With optical excitation of mechanical waves, the flux of phonons is upper-bounded by the flux of optical photons injected into the structure. The ratio of photon to phonon energy limits the mechanical power to less than a microwatt, corresponding to 10-100 mW of optical power. Nevertheless, proof-of-concept demonstrations [107,109,228] have successfully generated nonreciprocity on a chip using optically generated phonons. In contrast, microwave photons have a factor 10 5 larger fluxes than optical photons for the same power. Therefore, microwave photons can drive milliwatt-level mechanical waves in nanoscale cavities and waveguides. Such mechanical waves can have displacements up to a nanometer and strains of a few percent-close to material yield strengths. Electrical generation of gigahertz phonons in nanoscale structures has received little attention so far, especially in nonpiezoelectric materials such as silicon and silicon nitride. As discussed in Section 3.B, this can be realized either via capacitive or via piezoelectric electromechanics. Capacitive approaches work in any material [229][230][231] and have recently been demonstrated in a silicon photonic waveguide [232]. They require small capacitor gaps and large bias voltages to generate effects of magnitude comparable to piezoelectric approaches. More commonly, piezoelectrics such as gallium arsenide [58,233], lithium niobate, aluminum nitride [234,235], and lead-zirconate titanate can be used as the photonic platform, or be integrated with existing photonic platforms such as silicon and silicon nitride in order to combine the best of both worlds [73,[236][237][238][239]. Such hybrid integration typically comes with challenging incompatibilities in material properties [240], especially when more than one material needs to be integrated on a single chip. Efficient electrically driven acoustic waves in photonic structures have the potential to enable isolation and circulation with an optical bandwidth beyond 1 THz-limited only by optical walk-off [106,227,[241][242][243][244]. C. Hybrid Quantum Systems Strain and displacement alter the properties of many different systems and therefore provide excellent opportunities for connecting dissimilar degrees of freedom. In addition, mechanical systems can possess very long coherence times and can be used to store quantum information. In the field of hybrid quantum systems, researchers find ways to couple different degrees of freedom over which quantum control is possible to scale up and extend the power of quantum systems. Realizing hybrid systems by combining mechanical elements with other excitations is a widely pursued research goal. Studies on both static tuning of quantum systems using nanomechanical forces [245][246][247][248] as well as on quantum dynamics mediated by mechanical resonances and waveguides [62,177,246,249,250] are being pursued. Among the emerging hybrid quantum systems, microwave-tooptical photon converters utilizing mechanical degrees of freedom have attracted particular interest recently [249,[251][252][253][254][255]. In particular, one of the leading platforms to realize scalable, error-corrected quantum processors [256,257] is superconducting microwave circuits in which qubits are realized using Josephson junctions [258,259] in a platform compatible with silicon photonics [260]. To suppress decoherence, these microwave circuits are operated at millikelvin temperatures inside dilution refrigerators. 
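A short calculation makes clear why millikelvin operation matters: the thermal occupation n_th = 1/(exp(ℏω/k_B T) − 1) of a gigahertz microwave or mechanical mode drops from thousands of quanta at room temperature to essentially zero in a dilution refrigerator. The 5 GHz frequency below is an assumed representative value.

```python
import math

# Thermal occupation n_th = 1 / (exp(hbar*omega / (kB*T)) - 1) of a gigahertz mode.
# The 5 GHz frequency is a representative assumption for qubit/phonon frequencies.

hbar = 1.054_571_8e-34
kB = 1.380_649e-23

def n_th(f_hz, T_kelvin):
    x = hbar * 2 * math.pi * f_hz / (kB * T_kelvin)
    return 1.0 / math.expm1(x)

f = 5e9
for T in (300.0, 4.0, 0.1, 0.01):
    print(f"T = {T:6.2f} K  ->  n_th = {n_th(f, T):.3g}")
# Roughly 1e3 quanta at 300 K, ~0.1 at 100 mK, and ~4e-11 at 10 mK: gigahertz modes
# are effectively in their ground state only at millikelvin temperatures.
```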
Heat generation must be restricted in these cold environments [261]. The most advanced prototypes currently consist of on the order of 50 qubits on which gates with at best 0.1% error rates can be applied [261,262]. Scaling up these systems to millions of qubits, as required for a fully error-corrected quantum computer, is a formidable unresolved challenge [257]. Also, the flow of microwave quantum information is hindered outside of the dilution refrigerators by the microwave thermal noise present at room temperature [263,264]. Optical photons travel for kilometers at room temperature along today's optical fiber networks. Thus quantum interfaces that convert microwave to optical photons with high efficiency and low noise should help address the scaling and communication barriers hindering microwave quantum processors. They may pave the way for distributed and modular quantum computing systems or a "quantum Internet" [265,266]. Besides, such interfaces would give optical systems access to the large nonlinearities generated by Josephson junctions, which enables a new approach for nonlinear optics. The envisioned microwave-to-optical photon converters are in essence electro-optic modulators that operate on single photons and preserve entanglement [202]. They exploit the beam-splitter Hamiltonian discussed in Section 3 to swap quantum states from the microwave to the optical domain and vice versa. To realize a microwave-to-optical photon converter, one can start from a classical electro-optic modulator and modify it to protect quantum coherence. Several proposals aim to achieve this by coupling a superconducting microwave cavity to an optical cavity made of an electro-optic material. For instance, the beam-splitter Hamiltonian can be engineered by injecting a strong optical pump red-detuned from the cavity resonance in an electro-optic cavity. In order to suppress undesired Stokes scattering events, the frequency of the microwave cavity needs to exceed the optical cavity linewidth, i.e., sideband resolution is necessary. In this scenario, continuous-wave state conversion with high fidelity requires an electro-optic cooperativity C eo close to unity: with g 0 the electro-optic interaction rate as defined in the previous section, jαj 2 the number of optical pump photons in the cavity, and γ μ the microwave cavity linewidth. The quantum conversion is accompanied by an optical power dissipation P diss ℏω o jαj 2 κ in , with κ in the intrinsic decay rate of the optical cavity. Operating the converter in a bandwidth of γ μ and inserting condition (24), this leads to an energy-per-qubit of which is the quantum version of the energy-per-bit (23). This yields an interesting relation between the efficiency of classical and quantum modulators: We stress that E qbit is the optical dissipated energy in a quantum converter, whereas E bit is the microwave or mechanical energy necessary to switch an optical field in a classical modulator [267]. The quantum electro-optic modulator dissipates roughly 5 orders of Review Article magnitude more energy per converted qubit, as it requires an optical pump field to drive the conversion process. Strategies developed to minimize E bit , as pursued for decades by academic groups and the optical communications industry, also tend to minimize E qbit . Recently, a coupling rate of g 0 ∕2π 310 Hz was demonstrated in an integrated aluminum nitride electro-optic resonator [268]. 
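A rough sense of scale follows from combining the cooperativity condition with the pump dissipation: taking C_eo = 4g_0²|α|²/(κγ_μ), a standard cavity electro-optics form assumed here, and E_qbit = P_diss/γ_μ gives E_qbit = ℏω_o κ_in κ/(4g_0²) at C_eo = 1. The sketch below evaluates this for the 310 Hz coupling rate just quoted; the optical linewidths κ = κ_in = 2π × 1 GHz and ω_o/2π = 193 THz are assumed placeholders, not values from the cited experiment.

```python
import math

# Energy dissipated per converted qubit, assuming the standard cavity electro-optics
# forms C_eo = 4 g0^2 |alpha|^2 / (kappa * gamma_mu) and E_qbit = P_diss / gamma_mu,
# which at C_eo = 1 give E_qbit = hbar * omega_o * kappa_in * kappa / (4 g0^2).
# kappa, kappa_in and omega_o below are assumed placeholders, not measured values.

hbar = 1.054_571_8e-34

g0 = 2 * math.pi * 310.0               # electro-optic coupling rate [rad/s] (AlN value quoted)
kappa = kappa_in = 2 * math.pi * 1e9   # optical linewidths [rad/s] (assumed)
omega_o = 2 * math.pi * 193e12         # optical frequency [rad/s] (telecom, assumed)

E_qbit = hbar * omega_o * kappa_in * kappa / (4 * g0**2)
print(f"E_qbit ~ {E_qbit*1e9:.0f} nJ per qubit")

# With ~10 uW of cooling power available at the cold stage, this bounds the rate:
P_cooling = 10e-6
print(f"max conversion rate ~ {P_cooling/E_qbit:.0f} qubits/s")
```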
Switching to lithium niobate and harnessing improvements in the electro-optic modal overlap may increase this to g 0 ∕2π ≈ 10 kHz, corresponding to E qbit ≈ 1 nJ∕qbit. Electro-optic polymers [269] may yield higher interaction rates g 0 but bring along challenges in optical and microwave losses κ and γ μ . Cooling powers of roughly 10 μW at the low-temperature stage of current dilution refrigerators [261] imply that conversion rates with common electro-optic materials will likely not exceed about 10 kqbits∕s. Considering that the g 0 ∕κ demonstrated optomechanical devices are much larger than those found in electro-optic systems, and following a reasoning similar to that presented in Section 5.B for classical modulators, it is likely that microwave-to-optical photon converters based on mechanical elements as intermediaries will be able to achieve large efficiencies. It has been theoretically shown that electro-optomechanical cavities with dynamics described in Section 3 allow for efficient state transduction between microwave and optical fields when with C em and C om the electro-and optomechanical cooperativities. Noiseless conversion additionally requires negligible thermal microwave and mechanical occupations [117,118,252]. Since the dominant dissipation still arises from the optical pump, the energy-per-qubit can still be expressed as in Eq. (25) for a electro-optomechanical cavity. Given the large nonlinearity g 0 ∕κ enabled by nanoscale mechanical systems (Fig. 5), we expect conversion rates up to 100 Mqbits∕s are feasible by operating multiple electro-optomechanical photon converters in parallel inside the refrigerator. State-of-the-art integrated electro-and optomechanical cavities have achieved C em > 1 and C om > 1 in separate systems (Fig. 5). It is an open challenge to achieve condition (27) in a single integrated electro-optomechanical device. Finally, the long lifetimes and compact nature of mechanical systems also makes them attractive for the storage of classical and quantum information [140,150,183,250,[270][271][272][273][274][275]. Mechanical memories are currently pursued both with purely electromechanical [140,177,270] and purely optomechanical [133,134] systems. Interfaces between mechanical systems and superconducting qubits may lead to the generation of nonclassical states of mesoscopic mechanical systems [33,271,[276][277][278][279][280][281], probing the boundary between quantum and classical behavior. D. Microwave Signal Processing In particular, in the context of wireless communications, compact and cost-effective solutions for radio-frequency (RF) signal processing are rapidly gaining importance. Compared to purely electronic and MEMS-based approaches, RF processing in the photonics domain-microwave photonics-promises compactness and light weight, rapid tunability, and integration density [282][283][284]. Currently demonstrated optical solutions, however, still suffer from high RF-insertion loss and an unfavorable trade-off between achieving sufficiently narrow bandwidth, high rejection ratio, and linearity. Solutions mediated by phonons might overcome this limit, as they offer a narrow linewidth without suffering from the power limits experienced in high-quality optical cavities [165,285]. Given the high power requirements, 2D-confined waveguides lend themselves more naturally to many RF applications. As such, stimulated Brillouin scattering (SBS) has been extensively exploited. 
Original work focused on phonon-photon interactions in optical fibers, which allow for high SBS gain and high optical power but lack compactness and integrability. Following the demonstration of SBS gain in integrated waveguide platforms [34,36,166], several groups have now also demonstrated RF signal processing using integrated photonic chips. In the most straightforward approach, the RF signal is modulated onto a sideband of an optical carrier, which is then overlaid with the narrowband SBS loss spectrum generated by a strong pump [286,287]. Tuning the carrier frequency allows rapid and straightforward tuning of the notch filter over several gigahertz, and a bandwidth below 130 MHz was demonstrated. The suppression was only 20 dB, however, limited by the SBS gain achievable in the waveguide platform used, in this case a chalcogenide waveguide. This issue is further exacerbated in more complementary metal-oxide-semiconductor (CMOS)-compatible platforms, where the SBS gain is typically limited to a few decibels. This can be overcome by using interferometric approaches, which enable over 45 dB suppression with only 1 dB of SBS gain [288,289]. While this approach outperforms existing photonic and nonphotonic approaches on almost all specifications (see Table 1 in [288]), a remaining issue is the high RF insertion loss of about 30 dB. Integration might be key in bringing the latter to a competitive level, as excessive fiber-to-chip losses and the high modulator drive voltages associated with the discrete photonic devices currently being used are the main origin of the low system efficiency. Also, the photonic-phononic emit-receive scheme proposed in [227,290] results in a lower RF insertion loss. Although it gives up tunability, additional advantages of this approach are its engineerable filter response [290,291] and its cascadability [227]. Phase control of RF signals has also been demonstrated by exploiting the phase response of the SBS resonance [285]. Again, interferometric approaches allow one to amplify the intrinsic phase delay of the system, which is limited by the available SBS gain. In the examples above, the filter is driven by a single-frequency pump, resulting in a Lorentzian filter response (a toy model of this response is sketched below). More complex filter responses can be obtained by combining multiple pumps [130]. However, this comes at the cost of the overall system response, since the total power handling capacity of the system is typically limited. As such, there is still a need for waveguide platforms that can handle large optical powers and at the same time provide high SBS gain. Further, low-noise oscillators are also a key building block in RF systems. In [292], an SBS-based narrowband tunable filter is integrated in the fiber loop of a hybrid opto-electronic oscillator (OEO), allowing single-mode operation and wide frequency tunability. Also, oscillators fully relying on optomechanical interactions have been demonstrated. Two approaches, equivalent to the two dissipation hierarchies (γ ≫ κ and γ ≪ κ) identified in Section 3, have been studied. In the first case, where the photon lifetime exceeds the phonon lifetime (γ ≫ κ), optical line narrowing and eventually self-oscillation is obtained at the transparency condition C = 1, resulting in substantial narrowing of the Stokes wave and thus a purified laser beam [293-295]. Cascading this process leads to higher-order Stokes waves with increasingly narrowed linewidths.
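As a toy illustration of the single-pump case referred to above, the sketch below models the RF transfer function of an SBS-based notch filter as a Lorentzian dip whose depth is set by the available SBS gain. The 20 dB depth and 130 MHz linewidth echo the figures quoted in the text; the 8 GHz notch frequency is an arbitrary assumption.

```python
import numpy as np

def sbs_notch_response(f_rf_hz, f_notch_hz, gain_db, linewidth_hz):
    """Toy RF transfer function of an SBS notch filter: the RF sideband sees a
    Lorentzian SBS loss of peak depth `gain_db` (dB), centred at `f_notch_hz`,
    with full width `linewidth_hz`. Returns the response in dB."""
    lorentzian = 1.0 / (1.0 + (2.0 * (f_rf_hz - f_notch_hz) / linewidth_hz) ** 2)
    return -gain_db * lorentzian

f = np.linspace(0, 20e9, 2001)                                   # RF frequencies, 0-20 GHz
resp = sbs_notch_response(f, f_notch_hz=8e9, gain_db=20.0, linewidth_hz=130e6)
idx = int(np.argmin(np.abs(f - 8e9)))
print(f"rejection at the notch centre: {resp[idx]:.1f} dB")
```

Retuning the pump (or, equivalently, the optical carrier) simply shifts f_notch_hz, which is the rapid-tunability argument made in the text; interferometric schemes effectively deepen the dip far beyond the raw SBS gain.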
Photomixing a pair of cascaded Brillouin lines gives an RF carrier with phase noise determined by the lowest-order Stokes wave. Using this approach in a very low-loss silica disk resonator, a phase noise of -110 dBc/Hz at 100 kHz offset from a 21.7 GHz carrier was demonstrated [296]. In the alternate case, with the phonon lifetime exceeding the photon lifetime (γ ≪ κ), the Stokes wave is a frequency-shifted copy of the pump wave apart from the phase noise added by the mechanical oscillator. At the transparency condition C = 1, the phonon noise goes down, eventually reaching the mechanical Schawlow-Townes limit [297]. Several such "phonon lasers" have been demonstrated already, relying on very different integration platforms [298-302]. Further work is needed to determine whether these devices can deliver the performance required to compete with existing microwave oscillators. In the examples above, the mechanical mode is excited all-optically via a strong pump beam. Both in terms of efficiency and in terms of preventing the pump beam from propagating further through the optical circuit, this may not be the most appropriate method. Recently, several authors have demonstrated electrical actuation of optomechanical circuits [58,108,232,251,303-305]. While this provides a more direct way to drive the acousto-optic circuit, considerable efforts are still needed to improve the overall efficiency of these systems and to develop a platform where all relevant building blocks, including, e.g., actuators and detectors, optomechanical oscillators, and acoustic delay lines, can be co-integrated without loss in performance. E. General Challenges Each of the perspectives discussed above potentially benefits enormously from miniaturizing photonic and phononic systems in order to maximize interaction rates and pack more functionality into a constrained space. Current nanoscale electro- and optomechanical devices indeed demonstrate some of the highest interaction rates (Section 4). However, the fabrication of high-quality nanoscale systems requires exquisite process control. Even atomic-scale disorder in the geometric properties can hamper device performance, especially when extended structures or many elements are required [35,306,307]. This can be considered the curse of moving to the nanoscale. It manifests itself as photonic and phononic propagation loss [61,72], backscattering [61], intermodal scattering, as well as inhomogeneous broadening [71], dephasing [73], and resonance splitting [35,52]. To give a feel for the sensitivity of these systems, a 10 GHz mechanical breathing mode undergoes a frequency shift of about 10 MHz per added monolayer of silicon atoms [34]. Therefore, nanometer-level disorder is easily resolvable in current devices with room-temperature quality factors on the order of 10^3. Developing better process control and local tuning [308] methods is thus a major task for decades to come. In addition, shrinking systems to the nanoscale leads to large surface-to-volume ratios, which imply that generally ill-understood surface physics determines key device properties, even in heavily studied materials such as silicon [97,309,310]. This is a particular impediment for emerging material platforms such as thin-film aluminum nitride [311], lithium niobate [312], and diamond [313-315]. The flip side of these large sensitivities is that opto- and electromechanical systems may make exquisite sensors of various perturbations.
Among others, current sensor research takes aim at inertial and mass sensing [179,[316][317][318], as well as local temperature [319] and geometry mapping [320][321][322][323]. Finally, tight confinement restricts the number of photons that can be loaded into the system. This influences not merely the maximum cooperativity, but also the thermal phonon occupation. A key target of high confinement systems is increasing the unitless nonlinearity g 0 ∕κ and the cooperativity C, enhancing the interaction rates faster than various parasitic effects. Continued investments in nanotechnology give hope for further progress on this front. CONCLUSION New hybrid electro-and optomechanical nanoscale systems have emerged in the last decade. These systems confine both photons and phonons in structures about one wavelength across to set up large interaction rates in a compact space. Similar to integrated photonics more than a decade ago, nanoscale phononic circuitry is in its infancy and severe challenges such as geometric disorder hinder its development. Still, we expect much to come in the years ahead. We believe mechanical systems are particularly interesting as low-energy electro-optic interfaces with potential use in coherent classical and quantum information processors and sensors. Phonons are a gateway for photons to a world with 5 orders of magnitude slower time scales. Linking the two excitations has the potential for major impact on our information infrastructure in ways we have yet to fully explore.
15,747.2
2019-04-20T00:00:00.000
[ "Physics" ]
Formation and Evolution of Galaxies: Starlight Synthesis Algorithm This study is part of the resurgence of interest in precise velocity dispersion measurements, for the study of both galactic kinematics and active nuclei. As several works suggest, an excellent tactic for measuring σ is to use the absorption lines of the calcium triplet, as it lies in a spectral region relatively free from complications. The discovery of an empirical relationship between the mass of the central black hole (M•) and σ was the leading guide of my detailed study of the calcium triplet region. This motivated the search for more accurate methods to calculate the velocity dispersion, together with a careful study of the uncertainties. After investing so much time in the development and improvement of the method and its application to so many galaxies, it is time to reap the rewards of this effort, using my results to address a series of questions concerning the physics of galaxies. Introduction A significant fraction of the galaxies in the universe is distributed in groups and clusters, making these large structures ideal environments for studying the evolution of their member galaxies. The transition between groups and clusters is quite subtle, given by the number of galaxies present in each system. Although clusters are richer, groups are more abundant in the universe. While only 10% of the matter in the universe is found in clusters, around 50% is found in groups [1]. Groups have about 5 - 100 bright galaxies within a radius of 1 h^-1 Mpc and velocity dispersions around 500 km/s. Clusters, in turn, can have hundreds of galaxies distributed within a radius of 1 to 2 h^-1 Mpc and with velocity dispersions as high as ~1000 km/s. The observed galactic spectrum can be modeled as the convolution of a stellar spectrum with the distribution of line-of-sight velocities, where λ is the wavelength, I(λ) is the observed galactic spectrum, F(λ_0) is the stellar spectrum, c is the speed of light, and f(v) is the velocity distribution function [6]. Assuming a Gaussian distribution with a stellar velocity dispersion σ, the Doppler broadening can be quantified by ∆λ ~ λ_0 σ/c, where λ_0 is the central wavelength of the line. In this way, if we can measure the Doppler broadening of the spectral lines of a galaxy, we can infer its stellar velocity dispersion. In Figure 1, we compare the spectrum of a star and that of a galaxy in the calcium triplet region; note how the lines of the galaxy are much wider than those of the star [7]. In a dynamically relaxed system, the virial theorem guarantees that the potential energy V of the system is directly linked to the kinetic energy K, such that V = −2K. Therefore, the stellar velocity dispersion, as it is a measure of kinetic energy, is directly linked to the mass of the system, which is a measure of potential energy. In the central regions of a galaxy (r ~ 10^2-10^3 pc), this mass essentially represents the mass in stars, M. If we have a measure of the luminosity and size of the system, we can calculate the mass-to-light ratio, M/L. The M/L ratio, in turn, can be used to diagnose stellar populations. Just remember that for main-sequence stars, L ∝ M^4; this implies that younger populations have a lower M/L ratio than older populations [7]. The mass-to-light ratio is not the only quantity, however, that can be inferred from galactic kinematics and the velocity dispersion [8]. The studies by Kirby et al. revealed a good empirical correlation between σ and the mass M• of the supermassive black hole present at the center of most galaxies with a spheroidal component (i.e., ellipticals or early-type spirals) [9].
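As a quick numerical illustration of the ∆λ ~ λ_0 σ/c relation above, the sketch below converts a few representative velocity dispersions into the expected Gaussian broadening of the Ca ii 8542 Å line; the σ values are illustrative, not measurements from this work.

```python
C_KMS = 299_792.458  # speed of light in km/s

def doppler_broadening(lambda0_angstrom, sigma_kms):
    """Gaussian Doppler broadening of an absorption line, sigma_lambda ~ lambda0*sigma/c.
    Returns (sigma_lambda, FWHM) in Angstrom, with FWHM = 2*sqrt(2*ln 2)*sigma_lambda."""
    sigma_lambda = lambda0_angstrom * sigma_kms / C_KMS
    return sigma_lambda, 2.3548 * sigma_lambda

for sigma in (100, 200, 300):  # typical bulge velocity dispersions in km/s
    s, fwhm = doppler_broadening(8542.09, sigma)
    print(f"sigma = {sigma} km/s -> sigma_lambda = {s:.2f} A, FWHM = {fwhm:.2f} A")
```

A dispersion of 200 km/s broadens the 8542 Å line by several angstroms, which is why the galaxy lines in Figure 1 look so much wider than the stellar template lines.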
Although σ is a good tracer of the central potential, it is not entirely intuitive that there should be a relationship between the black holes in active galactic nuclei (AGNs) and the kinematic properties of the host galaxy, since the mass of the central black hole is much smaller than the mass of stars in the bulge. It is speculated that this relationship, in fact, indicates a connection between the processes of formation and/or evolution of black holes and their host galaxies. Therefore, the kinematics of a galaxy, measured mainly through σ, carries very useful information about the central potential well, the stellar populations, and even about the activity of AGNs. In the next section, we review the most relevant studies done to date on stellar kinematics in AGNs [10]. Previous Work The work by Terlevich et al. pioneered the systematic study of the absorption lines of the calcium triplet, 8498.02, 8542.09, 8662.14 Å, in AGNs. This spectral region stands out for being relatively clean, free from strong emission lines and other spurious effects. These authors emphasized the equivalent width of the calcium triplet (WCaT) and showed that it is a great tool for the diagnosis of stellar populations and of the presence of a non-stellar (featureless) continuum [11]. They came to the conclusion that WCaT for Seyfert-type galaxies does not suffer dilution and that, therefore, the featureless continuum disappears from the optical to the near-infrared. This is interpreted as a sign that the Ca ii lines are produced by red supergiants in bursts of star formation (starbursts). This result was one of the first indications that nuclear activity and star formation can coexist in AGNs, a theme that, at the time, caused great controversy due to the clash between defenders of the starburst model for AGNs and advocates of the more orthodox (and nowadays hegemonic) school that links the energy source in AGNs to black holes [12]. Another way to study stellar populations is via the mass-to-light ratio. Oliva et al. estimated M/L for AGN host galaxies by analyzing the CO (1.62, 2.29 µm) and Si (1.59 µm) bands in the near-infrared. It was observed, then, that the M/L ratio in this spectral range is an excellent diagnostic to distinguish between the presence of red supergiants (evidence of recent starbursts) and that of red giants (old population) [13]. Another result of this study is that about 40% of Seyfert 2 galaxies have features of young populations, in agreement with studies in other spectral ranges, such as Cid Fernandes et al. [14]. Interestingly, none of the eight Seyfert 1 galaxies studied by Oliva et al. [15] present evidence of recent star formation, contradicting the expectation that Seyferts 1 and 2 differ only in orientation. Unfortunately, to date, this study has not been repeated for a significant sample of objects [16]. It was from the work of Nelson and Whittle, however, that velocity dispersions measured in the calcium triplet region gained notoriety. In this study, velocity dispersions were measured for 85 objects with the cross-correlation method, using the absorption lines of the calcium triplet and of Mg b λλ5167.5, 5172.7, 5183.6 Å [17]. The main contribution of this work was the investigation of the relationships between 1) the stellar kinematics of the bulge and the properties of the active-core host galaxies, and 2) the stellar kinematics and the gaseous kinematics in these galaxies.
1) The relationship between the stellar kinematics of the bulge and the properties of the active-core host galaxies was explored through the Faber-Jackson relation, L ∝ σ^n, for Seyfert-type galaxies. For normal galaxies, it was known that n ~ 3-4; for Seyfert galaxies, these authors found n ~ 2.7. That is, there is no significant difference between the Faber-Jackson relation for normal galaxies and for Seyfert galaxies and, therefore, the bulge of Seyfert galaxies is kinematically normal. The authors [18] [19] [20] [21] found, however, an offset in the Faber-Jackson relation for Seyfert galaxies (Figure 2), such that the velocity dispersions are on average 20% smaller than for normal galaxies of the same luminosity. That is, given the same brightness, the Seyfert galaxies are less massive; or, given the same mass, the Seyfert galaxies are more luminous. This implies that the M/L ratio for Seyfert galaxies is smaller than for normal galaxies. This result is consistent with starbursts and a young stellar population. 2) Another important result of this work is the relationship between stellar and gaseous kinematics [22]. A strong correlation was found between the width σgas of the [O iii]λ5007 emission lines and the stellar velocity dispersion σ. This implies that the narrow-line region (NLR) has a strong dependence on the central gravitational potential. From these conclusions, σgas started to be used as an alternative measure of σ. Greene & Ho (2005) [22] investigated in which situations σgas can replace σ. They also evaluated the effect of the fiber aperture used in the sdss observations on σ. For the studied sample, the result is that the rotation effect on σ is negligible. It was the works of Ferrarese and Merritt [10] and Gebhardt et al. [12], however, that revived the interest in velocity dispersions. Both studies showed that there is a good correlation between σ and the mass of the central supermassive black hole [13]. As this relationship is the most direct and probably the only relatively reliable way to measure M• for distant galaxies, it is not surprising that there has been a resurgence of works aiming to obtain more accurate measurements of σ in AGNs. In the next section, I present my contribution to this search. An additional advantage of working with this spectral range is the diagnostics of stellar populations that can, in principle, be derived from the equivalent width of the Ca ii triplet. Therefore, the importance of deriving reliable methods for measuring the velocity dispersion is indisputable. In studies initiated by Vega and Garcia-Rissmann et al., we presented an atlas of calcium triplet spectra for 78 objects, most of which are active-core host galaxies [23]. We measured nuclear stellar velocity dispersions for 72 objects with the cross-correlation and direct fit methods, and we verified that there is no significant difference between the results of the two methods [24]. Measurements were also taken of the equivalent width of the calcium triplet. Details of the methods used to measure velocity dispersions were published in Garcia-Rissmann et al.; the particularities of each method adopted are presented here, in addition to the extensive tests involved in the uncertainty calculations [25]. The main objective is to promote a deeper understanding of the merits and limits of the methods used.
The present level of detail will contribute to this deepening by clarifying and adding some points from the studies already published. It should be noted that, for the direct fit method, we use the starlight stellar population synthesis code. Although this is not a synthesis application and we are only interested in σ, our code proved to be more than adequate for this purpose. For a sub-sample of 34 of our objects, I also present an unprecedented study with spatially resolved spectra. We observe, with these data, the variation of σ and of the equivalent width of Ca ii with the spatial position. From the behavior of these parameters, we obtained some diagnoses about stellar populations in Seyfert 1 and 2 galaxies [26]. Sloan Digital Sky Survey The Great Databases Heaven has always enchanted the human mind with its tireless cycle of days and nights. Nights, in particular, have always been conducive to speculation about the seemingly endless bright spots. Since the admiration and cataloging of stars made by our most remote ancestors, humanity has sought, with the help of these vast data, to find cohesion and patterns that would explain the phenomena of the universe. Furthermore, when treading these paths, he came across a universe so immense that he had no choice but to subjugate himself and continue cataloging [27]. The efforts to catalog the celestial objects were not small; they came from the Babylonians, passed through Ptolemy, and expanded with photographic plates from the beginning of the 20th century. With the increasing storage capacity and the new possibilities of the so-called digital age, grandiose catalogs are being produced. The Two Degree Field Galaxy Redshift Survey (2dfgrs) and the Sloan Digital Sky Survey (sdss) are some of the boldest projects that have emerged in recent times [28]. Large databases with good quality data allow much more robust statistical tests, essential for corroboration, refutation, or even theory creation. Large databases are also revolutionizing the N. Barua International Journal of Astronomy and Astrophysics way astronomers work. What used to be done in an almost artisanal way-certain types of reduction and data analysis, for example-nowadays needs innovative and automated approaches [29]. New computational and statistical solutions are increasingly in vogue. sdss, in particular, intends to map an area of π stereo-radians (i.e., a quarter of the sky) and catalog more than 100 million objects. Spectral information will be obtained for about a million galaxies in the local neighborhood selected among these objects [30]. The spectra cover the region of 3800 -9200 Å, with resolution λ/∆λ ~ 1800, and are observed through 3-arcse- Previous Work The abundance of sdss objects is reflected in the immense amount of studies carried out with this database. It is an almost superhuman task to review all studies derived from this base, despite their relative youth-the first wave of data was released in 2001. We will therefore focus on the Work of three groups: 1) Kauffmann et al. [23], 2) Heavens et al. [27]; Panter et al. [28], and 3) Cid Fernandes et al. [29]; Matthew et al. [30]; Stasi'nska et al. [31]. These groups work with the main stellar population synthesis methods applied to sdss objects to recover the history of stellar formation and galactic properties. The studies by Kauffmann et al. 
[23] retrieve the history of stellar formation, stellar masses, and dust extinction of sdss objects from spectral indices such as the break at 4000 Å and the Balmer's Hδ absorption line, quantified by the indices Dn(4000) and HδA, respectively [29]. In a nutshell, the method of this group consists of comparing the measured indices with a library of star formation histories. The first step of the analysis is to find the spectrum that best models the stellar contribution to the galactic spectrum. Thus, one can separate the galactic spectrum into two: a purely stellar spectrum, in which stellar indices are measured, and a "purely" nebular spectrum, resulting from the subtraction between the observed spectrum and the stellar modeled, in which nebular emission lines are measured [30]. After calculating the spectral indices, we then use the indices Dn(4000) and HδA as diagnostics for starbursts and to determine the stellar age; and, from the broadband photometry, the extinction by dust and the mass in stars are recovered [31]. These parameters are obtained by comparing the measurements of indices and colors with a library of Monte Carlo realizations of different histories of star formation, which take into account, various metallicities, both continuous star formation and starbursts. These authors present some important results regarding the mass distribution in galaxies and the host galaxies of AGNs. The authors [32] found that most of the mass of the local universe resides in galaxies of M? ~ 5 × 1010M. Furthermore, there is a clear division between high N. Barua International Journal of Astronomy and Astrophysics and low mass galaxies Figure 3. Low-mass galaxies generally have a young population, a typical disk concentration index, recent starbursts, and evidence that the rate of star formation is larger than the galactic halo, perhaps as a result of supernova feedback processes. High-mass galaxies have older populations, a typical bulge concentration index, evidence that the rate of star formation decreases the larger the halo, and an indication that there is little star formation oncemassive galaxies form. For the studies of narrow-line AGNs (i.e., Seyferts 2 and LINERs), this group showed that their hosts have properties very similar to those of early-type galaxies [32]. For low luminosity AGNs (calculated from the emission line [O iii]λ5007), the stellar populations are very similar to those of early-type galaxies; on the other hand, for high luminosity AGNs (L[O iii] > 107 L), populations appear to be much younger, and there is evidence of recent starbursts. Based on the study of broad-line AGNs, these authors claim that there is no difference [32] between the stellar composition of galaxies of the Seyfert 2 type and quasars with the same luminosity and redshift. This means that young stellar populations are characteristic of AGNs with high luminosity. Previous studies, such as the one by Cid Fernandes et al., already pointed to this conclusion, but only with the extraordinary statistics of sdss can it be confirmed beyond any doubt. Multiple Optimized Parameter Estimation and Data compression (moped) is another stellar population synthesis method that has been applied to the sdss dataset. According to Heavens et al. and Panter et al., the technique consists of modeling the entire spectrum of a galaxy to retrieve a certain set of parameters about the history of stellar formation, the evolution of metallicity, and extinction. 
The first step in the analysis of the moped is the removal of the emission lines and the degradation of the spectra to ∆λ ~ 20 Å, that is, the same resolution as the spectra of simple stellar populations used in the base. The compression algorithm then reduces the spectra to just ~ 25 data points which, in principle, contain as much information as possible about the parameters of interest. These ~ 25 points are chosen as the dot product between ~ 25 different weight vectors and the vector constructed from the spectral data. The secret is, therefore, in finding weight vectors in order to privilege the wavelengths most sensitive to the parameters of interest (i.e., age, metallicity, and extinction) [32]. According to the authors, this process manages to conserve all the information on the spectrum, or at least all the information of interest; therefore, certain degeneracies, such as age-metallicity, are much milder than for studies with spectral indices. Furthermore, it is worth remembering that no a priori hypothesis is made about the history of star formation. The authors of studies with the moped obtain some results with implications for the evolution of galaxies. One of the main results is that more massive galaxies form stars sooner compared to less massive galaxies, and therefore the history of star formation is not the same for galaxies of different masses Figure 4. This points to downsizing behavior. They also find that the rate of star formation has been declining for about 6 billion years; for redshifts of order z > 2, the N. Barua International Journal of Astronomy and Astrophysics . The fraction of total stellar mass in galaxies in the local universe as a function of stellar mass (left panel) and Dn(4000) (right panel). Dn(4000) is used as a diagnosis for stellar populations, so the higher Dn(4000), the younger the populations. Note how this distribution is bimodal-extracted from Kauffmann et al. [23]. rate of star formation calculated by the authors is in agreement with observations and independent studies on high redshifts, which is expected by the Copernican principle. Furthermore, the authors state that the distribution of metallicities for stellar-forming gas is inconsistent with closed-box models but consistent with the infall model, by the galaxy. They then calculate the fraction of the mass of the gas in relation to the total mass from infall models. The studies by Cid Fernandes et al. [29]; Matthew et al. [30]; Stasi'nska et al. [31]; Gomes [32] presented some results obtained with the semi-empirical synthesis of stellar populations, using the starlight code, for samples of about 20,000 -50,000 galaxies of [32] carried out an extensive study on the problem of synthesis for elliptical galaxies. It was concluded that these galaxies must have a population with a different α/Fe ratio from the solar one and that it was not well represented by the stellar population base. They were simply used in the synthesis. This shows that we must be very careful with our results, always checking that they are not being distorted by some effect of incompletion in the base. Matthew et al. [30] highlight the study of the bimodal distribution of stellar populations in galaxies. This bimodality is seen mainly in the color of the galaxies, in the Dn(4000) index, and in the middle star age. It is found that the average age weighted by luminosity is the main responsible for this effect. Furthermore, evidence of downsizing is also shown. Stasi'nska et al. 
[29] used the nebular results to study ways to distinguish AGNs from NSFGs. The emphasis of this Work is to understand why diagnostic diagrams take the form they do, comparing data with photoionization models. In addition, a new diagram, allows the classification of galaxies at high redshifts with optical spectra. There are, therefore, considerable differences between the methods applied by our group, such as starlight, and the techniques of Kauffmann et al. [23] and Heavens et al. [27]. The group involved in the study by Kauffmann et al. [23], on the one hand, uses spectral indices to create libraries of stellar formation histories, which limits the level of detail in studies of the evolution of galaxies. On the other hand, as they obtain a pure nebular spectrum by subtracting a stellar continuum, they can investigate in-depth the emission lines coming from the gas in galaxies (and from there, for example, study nebular abundances and classify galaxies in AGNs and NSFGs). The group working with the moped [27], on the other hand, obtains much more robust information about the contribution of stellar populations to galaxies. Therefore, they get a reasonably detailed history of star formation and galactic evolution. However, as their technique consists of completely eliminating the N. Barua International Journal of Astronomy and Astrophysics emission lines, the entire nebular study carried out by this group is based on chemical evolution models and, therefore, does not reach the degree of detail achieved by Kauffmann et al. (2003a). Starlight, in turn, brings together the best of both worlds and with a greater degree of detail. Our studies of the history of star formation in galaxies, for example, are based on a much more detailed adjustment of the galactic spectrum than the one used by the moped. And, in the study of emission lines, we have at our disposal, for comparison, much more robust stellar parameters than those obtained only with spectral indices, as the Work of Kauffmann et al. We, therefore, have much more complete tools for dealing with various astrophysical problems. In the next section, we present some extensions of the application of the starlight synthesis code to the sdss database, continuing the studies by Cid Fernandes et al. [29]; Matthew et al. [27]; Stasi'nska et al. [30]; Gomes [29]. Current Work In this Work, we apply the starlight synthesis algorithm to 354,992 galaxies from the sdss database [27] [32] [33]. We present here two studies made from this sample of galaxies. The first study is dedicated to the technical improvement of the synthesis. Our main aim was to solve the problem of automatic mask creation (that is, the elimination of unwanted regions in the fit) for the spectra of the database objects. Good masks are fundamental to improve the reliability of the fit and obtain an excellent residual spectrum, in which we can more clearly observe nebular emission lines. The second part of this Work represents the beginning of exploring the extensive database built with the synthesis results for 354,992 galaxies. Such a database makes it possible to address numerous astrophysical questions about the nature and evolution of galaxies. To illustrate this potential, we present two preliminary results. First, we use diagnostic diagrams to investigate the differences between the sequences of host galaxies of active cores and normal galaxies with stellar formation. 
In the second, we obtain the star formation history of galaxies as a function of their stellar mass, for both AGNs and NSFGs. Although many details have not been considered at this time, we already see some fascinating results, which we intend to deepen in further studies. Synthesis of Stellar Populations A hundred years ago, there was still no clear distinction between the Milky Way and the rest of the universe. It was only in the 1920s, mainly from the studies of Edwin Hubble, that it was accepted that some of the "spiral nebulae" observed in the sky were, in fact, other galaxies. The fact that we have somehow discovered galaxies so recently speaks volumes about the difficulty of observing them. Of course, our understanding of extragalactic objects has advanced enormously in the last century, going through the Hubble diagram and into large-scale structure studies. Still, one difficulty remains: for most galaxies, except for the very close ones, we cannot resolve individual stars. They still appear, even to the eyes of the most modern telescopes, as interesting and curious fuzzy patches of light [26]. Despite the observational difficulty, astronomers have developed some tricks to obtain information about the composition of galaxies, even without being able to resolve their constituents. If we can obtain information about the ions that make up a star even without observing the ions, why shouldn't we be able to deduce which stars make up a galaxy even without being able to directly observe its stars? The strategy found was to reproduce or model the integrated spectrum of a galaxy and, in this process, find the population of stars that inhabit it. Thus, two schools emerged that developed the so-called synthesis of stellar populations: the evolutionary and the semi-empirical. A detailed review of these approaches is given by Gomes [32]. In general terms, evolutionary stellar population synthesis produces a library of spectra from certain initial hypotheses, such as the star formation history, the chemical evolution, and the initial mass function. An attempt is then made to find, within this library, the spectrum that most resembles the observed galactic spectrum. Semi-empirical stellar population synthesis, on the other hand, attempts to model the spectrum of a galaxy as a linear combination of stars or stellar clusters. Whichever method is adopted, the vast majority of studies do not work with the entire spectrum, only with spectral indices (like equivalent widths and colors, for example). Obviously, both approaches have their advantages and disadvantages. The main drawback of evolutionary synthesis is ensuring that the initial hypotheses lead to real physical models and that the object library is complete. The difficulty of the semi-empirical synthesis, on the other hand, is to start with a base of stars or clusters comprehensive enough to reproduce the range of conditions found in galaxies. And regardless of the approach to the synthesis problem, it is convenient to use the observed spectrum as a whole, which certainly contains more information than isolated spectral indices. Our group decided to tackle the synthesis problem in a way that makes the best of both schools. We thus try to model the entire spectrum of a galaxy from a linear combination of base elements. This is, at first glance, just a step forward for the semi-empirical synthesis, which until recently had been done only with spectral indices.
We can, however, insert a dash of evolutionary synthesis if we wish. The base need not necessarily consist of observed stars; theoretical stars or simple stellar populations can be used. All the hypotheses of chemical evolution, atmospheric models, and star formation history will be embedded in this theoretical basis. Despite the great variety of applications already carried out with our synthesis program using bases of observed stars, it was only with the library of simple stellar populations developed by Bruzual & Charlot [34] that the relevant galactic properties, such as age, metallicity, and mass, could be retrieved in a more robust way. This, added to the large databases with quality spectra, such as the Sloan Digital Sky Survey (sdss), encouraged the improvement of the starlight synthesis algorithm [29]. In this work, we apply the starlight synthesis code to two studies. One of these is the study of stellar kinematics in galaxies in the calcium triplet region, at 8498.02, 8542.09, 8662.14 Å. The other is a demonstration of a small fraction of the information that can be derived from the sdss database. In the next section, we discuss in general terms the mathematical implementation of our synthesis code. We analyze its idiosyncrasies and the care that should be taken when analyzing the results inferred by our algorithm. The Starlight Synthesis Algorithm As explained in the previous section, the mathematical approach of the starlight population synthesis code follows, in a way, the semi-empirical school. Our method consists of modeling an observed spectrum O_λ using a convex combination (i.e., a linear combination with positive coefficients) of the elements of a base. We have also included a Gaussian to account for kinematics and a term for extinction by dust, according to the following equation: M_λ = M_λ0 ( Σ_j x_j T_j,λ r_λ ) ⊗ G(v, σ), with r_λ = 10^(−0.4(A_λ − A_λ0)) the dust extinction term, and where: • M_λ is the synthetic spectrum. • M_λ0 is a normalization factor, defined as the total flux of the synthetic spectrum at wavelength λ_0. • T_j,λ is the spectrum of the jth (j = 1, ..., N) base component, normalized to λ_0. The basis can be constituted either by observed stars or combinations of stars or by theoretical stars or stellar populations. One can even include, depending on the problem studied, other components in the base, such as quasars or power laws. • x_j is the fraction that each element T_j,λ of the base contributes to the flux of M_λ. • G(v, σ) is a Gaussian distribution of line-of-sight velocities, centered on v and broadened by σ. • ⊗ expresses a convolution. The best fit is defined as the one that minimizes the χ² between the observed spectrum and the model, χ² = Σ_λ [ (O_λ − M_λ) w_λ ]², where w_λ is the inverse of the noise in O_λ. We use the Metropolis algorithm in conjunction with simulated annealing (see, for example, MacKay 2003) to try to prevent the fit from being trapped in local minima of χ². Spectral features that are not desirable in the fit, either because they are too noisy or because they are not included in the base, can be masked by simply setting w_λ = 0. As a general rule, we verify that masks based on the general characteristics of the studied sample produce a good first approximation for the modeled spectra. However, for a refinement of the fits, it is essential to make an individual mask for each studied object, taking into account the peculiarities of each spectrum to be modeled. The reason that "general" masks, that is, masks applied to all objects in a sample, produce good results in a first approximation lies in the control mechanisms built into the program. One of these mechanisms, for example, consists of automatically masking parts of the spectrum that are two to three sigma (this number is adjustable by the user) above the noise. Thus, it is expected to reduce the interference in the fit from stronger emission lines or bad pixels that were eventually not masked a priori by the user. Simulations and empirical tests to verify the reliability of this method in the recovery of galactic properties were performed by Gomes [32]. In an analysis of the results of any application of the synthesis code, one must take into account, for example, the effect of the multiplicity of solutions. This is mainly linked to intrinsic base degeneracies (such as the age-metallicity effect or the use of very similar stars) or even statistical degeneracies, that is, the impossibility of differentiating two distinct solutions due to the noise level in the studied spectrum. Applications of the Stellar Population Synthesis Although the starlight synthesis code was developed mainly with the objective of recovering the star formation history of a galaxy, it proved to be much more versatile. The algorithm can be used, for example, to remove all the stellar contributions to a spectrum in order to study the pure nebular emission spectrum of a galaxy. Another possibility is to use it to model only a small spectral band that contains absorption lines and obtain good measurements of the stellar velocity dispersion from these lines. This study presents the application of starlight to measuring the stellar velocity dispersion in the calcium triplet range. We use bases of velocity-standard stars observed in the same instrumental configuration as our objects. The bases are mainly constituted by stars of spectral type K, most frequently of type K0III, and occasionally some stars of types G and F. In these cases, we clearly have the problem of the intrinsic linear dependence of the base. We analyzed the extent to which this affects our measures of velocity dispersion and the corresponding uncertainties. In this study, we used an extensive and extremely detailed theoretical base of 150 simple stellar populations from the work of Bruzual & Charlot [34], with 25 different ages and six metallicities. Depending on the degree of precision desired, we use either reduced bases, which save a lot of computation time for the hundreds of thousands of sdss objects, or more complete bases, including a finer grid of populations of different ages and metallicities. This will actually be the first application of this giant sample, whose modeling was recently completed. Figure 5 shows an example of the application of our synthesis code to the study of the calcium triplet. The following Figure 6 and Figure 7 provide examples of spectral synthesis for sdss database objects. Study of the Calcium Triplet This chapter presents a study of the stellar dynamics of galaxies by analyzing the absorption lines of the calcium triplet, 8498.02, 8542.09, 8662.14 Å. Section 3.1 deals with sampling, observations, and data reduction. Although I followed this process, this part of the work was carried out almost entirely by collaborators [13]. This last section also details a work conducted by me and supervised by Dr. R.
Cid Fernandes, taking advantage of the algorithm for calculating the equivalent width of the Ca ii triplet coded by Vega [20]. Observations and Data Reduction The observations presented in this work were made in six observing runs at three different telescopes. We obtained a total of 80 spectra from 78 galaxies - the Mrk 1210 and NGC 7130 galaxies were each observed twice, at different telescopes. Our sample consists of 43 Seyfert type 2 galaxies, Seyfert type 1 galaxies, and nine non-active galaxies, as shown in Figure 8. The spectra of the respective galaxies are shown in Figures 2-6. In addition to the sample galaxies, we also obtained spectra of stars to use as templates for velocity calibration. The stars were observed with the same instrumental configurations as the galaxies in all runs. This eliminates the need to make further corrections to the instrumental resolution when measuring the velocity dispersion [15]. Data reduction was conducted by Dr. A. Garcia-Rissmann [8] and L. R. Vega [20]. Corrections were made for read noise (bias), pixel-by-pixel differences (flatfield), and dark current (dark) effects. As in the standard procedure, flux and wavelength calibration were performed with the iraf package. (Figure 8: Examples of spectra modeled with a direct fit. The observed spectra are plotted as solid lines and the modeled spectra as dotted lines; the regions considered in the fits are drawn with a thicker line. The galaxies NGC 7410, NGC 1068, Mrk 1, NGC 1241, NGC 2997, and NGC 3115 are of "a" quality; Mrk 516, Mrk 3, and Mrk 461 of "b" quality; and IRAS 04502-0317, Mrk 273, and Mrk 705 of "c" quality.) The fact that our spectra are in the near-infrared region caused us to have problems with fringing, that is, internal reflections in the CCD camera that create interference patterns in the image. In these cases, a careful procedure was adopted, following the recipe of Plait and Bohlin, to minimize the effect of the fringes. More details can be found in Garcia-Rissmann et al. [8]. To test to what extent fringes could interfere with our measurements of the velocity dispersion σ, one of the main parameters of our analysis, we compared measurements in spectra with and without fringe correction. The difference between the derived σ values proved to be within the uncertainty estimates. All spectra were placed in the rest frame, using the mean redshift measured in the Ca ii lines [31]. The Galactic reddening effect was also corrected for each galaxy, using NED data. All spectra presented in this chapter are nuclear. For all program objects, a three-pixel region centered on the luminosity peak of the galaxy was defined as nuclear. Defining the aperture radius r_ab as the radius of a circle whose area matches the area covered by our nuclear spectrum, we have, for our sample, r_ab = 50 - 700 pc, with a median of 286 pc. Some spectra, after data reduction, present certain characteristics or artifacts that can hinder the analysis of the Ca ii region. Such characteristics are, for example, emission lines in the Ca ii region, excess noise, atmospheric spectrum remnants, and spurious effects [26]. For this reason, I, Dr. R. Cid Fernandes, and L. R. Vega classified the spectra, by visual inspection, according to the quality of the Ca ii region. "a" quality spectra are relatively clean and little affected by the aforementioned effects.
Those of "b" quality are those in which one of the Ca ii lines is contaminated. Quality "c" is the most problematic, with at least two lines heavily compromised. The "d" spectra are those that have complexities beyond the analysis capacity of our study. Examples of the latter are the narrow-line Seyfert 1 galaxies with broad Ca ii lines in emission. Direct-Fit Method A tool increasingly used [33] to measure velocity dispersions is the direct fitting method (dfm). This method consists of modeling the observed spectrum using a linear combination of elements of a base of velocity-standard stars, whose Ca ii lines are broadened and shifted by convolution with a Gaussian, according to the same formula used in the synthesis of stellar populations, with the following particularities: • M_λ is the synthetic spectrum. • M_λ0 is a normalization factor, defined as the total flux of the synthetic spectrum at wavelength λ_0 = 8564 Å. • T_j,λ is the spectrum of the jth (j = 1, ..., N) base component, normalized to λ_0. Each galaxy was modeled with a base that included only the velocity-standard stars observed with the same instrumental configuration, so we do not need to make corrections for the instrumental resolution. We also include, to take into account the effects of stellar populations not represented in the base, a continuum C_λ, which is a combination of power laws of the type λ^β. • x_j is the fraction that each element T_j,λ of the base contributes to the flux of M_λ. • r_λ ≡ 10^(−0.4(A_λ − A_λ0)) takes into account the effects of extinction by dust. In the case of the Ca ii triplet analysis, the measured value of r_λ should not be taken into account, as we are working with a very narrow spectral range in which the effects of dust cannot be measured. • G(v, σ) is a Gaussian distribution of line-of-sight velocities, centered on v and broadened by σ. • ⊗ expresses a convolution. The best fit is defined as the one that minimizes the χ² between the observed spectrum (O_λ) and the model, as in the equation above. Simply by setting the weight w_λ (the inverse of the noise) to zero, individual masks are built for each galaxy in order to avoid spectral features that interfere with the fit, such as emission lines, bad pixels, and sky subtraction residuals. Furthermore, due to the construction of the algorithm's output files, the visualization and comparison between the observed spectrum and the obtained model are quite straightforward [17]. Figure 8 shows examples of such fits. The Effect of Masks Depending on the quality of the spectrum and the contamination of the Ca ii lines, the mask used in the direct fit method can have a considerable influence on the velocity dispersion measurement. In the preliminary tests with the masks, we were guided by the work of Barth et al. (2002). In principle, we use a window from 8480 to 8690 Å, which completely includes the absorption lines of Ca ii. The region from 8560 to 8640 Å was excluded from the analysis as it resulted in a poor fit. We tested this mask for all our objects. After a visual inspection, we created another half-dozen "general masks", that is, masks used by all galaxies indiscriminately. After this first screening, we started to look more carefully at the spectra galaxy by galaxy. For each of the spectra, we verified the suitability of the "gm" and "gm5" masks.
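The sketch below is a minimal, self-contained illustration of the direct-fit idea described in this section: a synthetic spectrum is built from a template convolved with a Gaussian in velocity, masked pixels receive weight w_λ = 0, and σ is chosen by minimizing χ² on a brute-force grid. It is a simplified stand-in, not the actual starlight implementation; the template, noise, and masked window are all synthetic assumptions.

```python
import numpy as np

C_KMS = 299_792.458

def gaussian_broaden(flux, wave, sigma_kms):
    """Convolve a template with a Gaussian line-of-sight velocity distribution.
    Assumes a uniform wavelength grid; the kernel width in pixels follows from
    sigma_lambda = lambda0 * sigma / c evaluated at the grid centre."""
    dlam = wave[1] - wave[0]
    sigma_pix = np.mean(wave) * sigma_kms / C_KMS / dlam
    half = int(4 * sigma_pix) + 1
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()
    return np.convolve(flux, kernel, mode="same")

def chi2_of_sigma(obs, wave, templates, weights, sigma_kms):
    """chi^2 after optimising the template fractions x_j for a fixed sigma
    (a crude weighted least squares, clipped to non-negative coefficients)."""
    broadened = np.array([gaussian_broaden(t, wave, sigma_kms) for t in templates])
    A = (broadened * weights).T
    b = obs * weights
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    x = np.clip(x, 0, None)
    model = x @ broadened
    return np.sum(((obs - model) * weights) ** 2)

# --- synthetic example -------------------------------------------------------
wave = np.linspace(8480.0, 8690.0, 600)                                # Ca II window (A)
template = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 8542.09) / 1.5) ** 2)    # one toy stellar line
obs = gaussian_broaden(template, wave, 150.0)                          # "galaxy" with sigma = 150 km/s
obs += np.random.default_rng(1).normal(0, 0.005, wave.size)            # observational noise
w = np.ones_like(wave)                                                 # weights = 1/noise
w[(wave > 8560) & (8640 > wave)] = 0.0                                 # masked region, w_lambda = 0

sigmas = np.arange(50.0, 300.0, 5.0)
chi2 = [chi2_of_sigma(obs, wave, [template], w, s) for s in sigmas]
print("best sigma ~", sigmas[int(np.argmin(chi2))], "km/s")
```

Because the broadening, the mask, and the weighting enter exactly as in the equations above, swapping the masked window or the template set makes it easy to see how much the recovered σ moves, which is the test performed with the "gm"-type masks in the text.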
Direct-Fit Method The same comparative test between individual masks and general masks was not so promising, on the other hand. Excluding two complex cases, the variations in σ due to the differences between general and individual masks were of the order of 17 km/s, still comparable to the uncertainties ∆σ. For "a" quality spectra, the variations are of the order of 9 km/s; for those of "b" quality, 30 km/s; for type "c", 27 km/s. The construction of individual masks, therefore, brings notable benefits for the reliability of the σ measurements, despite a certain amount of subjectivity involved. On the other hand, general masks, made according to the characteristics of the sample as a whole, are a good approximation for preliminary studies or those that do not need a very high degree of reliability. For large samples, where it is not possible to visually inspect all spectra, general masks can be used, taking certain precautions [12]. Calculation of Uncertainties In this section, I detail the procedure for estimating the uncertainties in σ derived from the direct fit method. Although, at first glance, this seems like an already well-defined subject in the literature, there are several details that can be complex and call into question the validity of a measure of ∆σ [21] [13]. After obtaining the fit with the individual stars, we take the mean and the root-mean-square deviation of σ for each of the sub-samples. Figure 2 compares the results of σ obtained by this procedure with the results measured with the base that includes all the stars of the run. We use exactly the same individual masks for fits with different bases. For the ESO sub-sample, the uncertainty ∆σ_tm due to template mismatch is of the order of 5 km/s, about 30% larger than the uncertainty ∆σ adopted in this work, which is measured from the χ² distribution function. If the template star already shows a Paschen absorption line blended with a Ca ii line, the fit needs to broaden it by a smaller amount and thus underestimates σ. Therefore, it is these underestimated measures, using individual stars, that cause the offset in Figure 2. When the modeling is done with a base of K, F, and G stars, on the other hand, the program gives preference to those that do not have Paschen lines and, therefore, estimates a correct value for the σ of Ca ii. The question is, therefore, how to quantify the uncertainties from the direct fits with the complete star base. According to Press et al., one way to estimate the error bar is from the χ² distribution function. Because we are interested in the uncertainty in just one parameter, σ, this method provides a simple and straightforward procedure. In this case, a variation of ∆χ² = 1 is statistically equivalent to the interval that covers 68.3% of a data set in a normal distribution. The method is computationally expensive but not too complicated. For each galaxy in our sample, we need to obtain a curve of χ² as a function of σ. We start from σ_best, the value obtained with the model that minimizes χ²; let us call this minimum value χ²_min. We scan an interval of 40 km/s around σ_best. In this window, we fix the value of σ, adjust the other parameters of the model, and recalculate χ² [33]. Thus, we have a distribution of χ² as a function of the parameter σ, as shown in Figure 9 and Figure 10 (a short numerical sketch of this procedure is given below).
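Continuing the toy example above, the uncertainty recipe just described (scan σ, refit the remaining parameters, and read off where χ² rises by ∆χ² = 1 above its minimum) can be sketched as follows; the χ²(σ) curve used here is synthetic, assumed only for illustration.

```python
import numpy as np

def sigma_error_from_chi2(sigmas, chi2):
    """1-sigma uncertainty on sigma from a chi^2(sigma) curve: locate the points
    where chi^2 exceeds chi2_min + 1 on each side of the minimum."""
    sigmas, chi2 = np.asarray(sigmas), np.asarray(chi2)
    i_min = int(np.argmin(chi2))
    above = chi2 > chi2[i_min] + 1.0
    lower = sigmas[:i_min][above[:i_min]].max() if above[:i_min].any() else sigmas[0]
    upper = sigmas[i_min:][above[i_min:]].min() if above[i_min:].any() else sigmas[-1]
    return sigmas[i_min], sigmas[i_min] - lower, upper - sigmas[i_min]

# Assumed parabolic chi^2 curve around sigma_best = 150 km/s, scanned over +/- 20 km/s
sig = np.arange(130.0, 171.0, 1.0)
chi2 = 240.0 + ((sig - 150.0) / 8.0) ** 2   # chi2_min = 240, ~8 km/s curvature
best, err_lo, err_hi = sigma_error_from_chi2(sig, chi2)
print(f"sigma = {best:.0f} -{err_lo:.0f}/+{err_hi:.0f} km/s")
```

In the real application, each χ² value in the scan comes from rerunning the direct fit with σ held fixed, exactly as described above, so the cost is one fit per grid point.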
Discussion and Conclusion The discovery of an empirical relationship between the mass of the central black hole (M•) and σ was the main guide of our detailed study of the calcium triplet region. This motivated the search for more accurate methods for calculating the velocity dispersion, together with a careful study of the uncertainties. One of the next steps, therefore, is to calculate M• for our objects and relate it to other properties. It has recently been questioned whether there are "intermediate" black holes, that is, black holes not as massive as those usually found in the centers of galaxies. We suspect that this possibility is worth investigating for at least one object (NGC 4748) in our sample. However, it should be remembered that our instrumental dispersion is not well suited to detecting small σ. Another important study to be done with the calcium triplet is calculating the mass-to-light (M/L) ratio from σ and photometric information. Thus, one can further investigate the stellar populations of the active galaxies. This possibility is exciting for type 1 Seyferts, for which traditional stellar population diagnostics are not feasible. For the study of the sdss, we have as much to improve in our preliminary results as we have further analyses to investigate. The mask detection program, for example, can receive an even more complete list of emission lines to be detected, including recombination lines important in the study of chemical abundances, like those of C and O. Furthermore, the program can be improved to fit the lines more precisely by calculating other parameters of the Gaussian with which we try to model the lines - at the moment, we just adjust their amplitude. Also, we can use it for weak line detection [29]. As we have seen, mean galaxy spectra allow one to see certain spectral features that are hidden by noise in individual spectra - both fitting problems and weak emission lines. That is, we can use the line-masking program, with slight modifications (or perhaps none at all), to detect these faint lines in mean spectra automatically. We also fit new models for our sample of ~355,000 objects, this time with a simple stellar population base that excludes the populations with very low metallicity (Z = 1/200 and 1/50 Z⊙) included in the analysis presented here. These populations have important effects on some parameters, such as M/L, mass, and average age. However, their spectra in the Bruzual and Charlot [34] models are computed relatively coarsely due to the lack of a complete library of stellar spectra with such metallicities. Therefore, we found it convenient to investigate their effect on our fits. This same group will soon release new versions of metal-poor populations. As for the preliminary results presented here, there are several details that we have not yet included in our calculations for the BPT diagnostic diagram and the star formation history. We need, for example, to further investigate the completeness of the sample and to convert our timescale to a cosmological timescale. We hope, therefore, to obtain SFRs that show in more detail the evolution of galaxies in the local universe. The immediately subsequent steps are thus to develop our studies of the sdss more rigorously. Furthermore, as we have already mentioned, there are many other synthesis products that we can investigate, and it would be impossible to list all these branches here. Conflicts of Interest The author declares no conflicts of interest regarding the publication of this paper.
11,236.4
2022-01-01T00:00:00.000
[ "Physics" ]
Search for standard model production of four top quarks in the lepton + jets channel in pp collisions at sqrt(s) = 8 TeV A search is presented for standard model (SM) production of four top quarks (t t-bar t t-bar) in pp collisions in the lepton + jets channel. The data correspond to an integrated luminosity of 19.6 inverse femtobarns recorded at a centre-of-mass energy of 8 TeV with the CMS detector at the CERN LHC. The expected cross section for SM four top quark production is approximately 1 fb. A combination of kinematic reconstruction and multivariate techniques is used to distinguish between the small signal and large background. The data are consistent with expectations of the SM, and an upper limit of 32 fb is set at a 95% confidence level on the cross section for producing four top quarks in the SM, where a limit of 32 +/- 17 fb is expected. Introduction Since its discovery in 1995 at the Fermilab Tevatron [1,2], the top quark has been studied primarily using events containing top quark-antiquark pairs (tt) and events containing a single top quark. With the larger centre-of-mass energy and luminosity of the CERN LHC, the study of more rare processes involving top quarks becomes possible. One such process is the production of four top quarks (tttt). In the standard model (SM), tttt production proceeds via gluon-gluon fusion or quark-antiquark annihilation. Feynman diagrams contributing to this process at leading order (LO) are shown in Fig. 1. The cross section for SM tttt production at the LHC is predicted, at LO, to be extremely small: σ_tttt^SM ≈ 1 fb at √s = 8 TeV [3]. Next-to-leading-order (NLO) corrections increase the cross section by as much as 30% [4]. The main background is due to tt production, a process that has a cross section more than five orders of magnitude larger [5] and is one of the reasons that a tttt signal has not yet been observed. As the data used in this paper correspond to an integrated luminosity of 19.6 fb−1, there are ≈20 tttt events expected in the data. Because of the very large tt background, the direct observation of events leading to a measurement of σ_tttt is unlikely. However, in many models beyond the SM (BSM) involving massive coloured bosons, Higgs boson or top quark compositeness, or extra dimensions, σ_tttt is enhanced [4, 6-11]. In some supersymmetric extensions of the SM, tttt final states can also be produced via cascade decays of coloured supersymmetric particles such as squarks and gluinos [12]. In certain regions of BSM parameter space, these final states have kinematics similar to those of SM tttt production. In such cases, reinterpretation of an upper limit on SM production of tttt has the potential to constrain BSM theories. Moreover, in direct searches for these BSM signatures, SM production of tttt can be a background. Hence, experimental constraints on σ_tttt have the potential to enhance the discovery reach of such searches. This paper presents a search for SM production of tttt in events that contain a single lepton (ℓ) and multiple jets. Signal events are sought in final states with a single muon (µ) or a single electron (e), in what are termed µ + jets and e + jets channels. The muon or electron originates either from the direct decay of a W boson or from the leptonic decay of a τ lepton in t → bW, W → τν_τ, τ → ℓν_ℓν_τ. The chosen final state has a larger branching ratio (≈41%) than the zero-lepton (≈30%), two-lepton (≈22%) or three- and four-lepton (≈6%) final states, when only muons and electrons are considered as final-state leptons.
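As a rough cross-check of the branching fractions quoted above, the short sketch below (Python; the W and τ branching fractions are approximate, assumed values rather than numbers taken from the paper) computes the probability that exactly k of the four W bosons in a tttt event yield an electron or muon, either directly or through a leptonic τ decay.

```python
from math import comb

# Approximate branching fractions (assumed, PDG-like values)
BR_W_TO_E_OR_MU = 2 * 0.108        # W -> e nu or W -> mu nu
BR_W_TO_TAU = 0.113                # W -> tau nu
BR_TAU_TO_E_OR_MU = 0.35           # tau -> e/mu + neutrinos

# Probability that a single W boson gives an e or mu in the final state
p_lep = BR_W_TO_E_OR_MU + BR_W_TO_TAU * BR_TAU_TO_E_OR_MU

def frac_with_k_leptons(k, n_w=4, p=p_lep):
    """Binomial probability that exactly k of the n_w W bosons give an e or mu."""
    return comb(n_w, k) * p**k * (1 - p)**(n_w - k)

for k in range(5):
    print(f"{k} leptons: {frac_with_k_leptons(k):.1%}")
# Roughly 41% for one lepton, ~31% for zero, ~22% for two and ~6% for
# three or four, consistent with the fractions quoted above.
```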
Kinematic reconstruction techniques and multivariate analyses (MVA) are used to discriminate the tttt signal from the tt background. Data and simulation The data are collected using triggers based on the presence of a muon candidate with p_T > 24 GeV, or an electron candidate with p_T > 27 GeV. The signal process is modelled at LO using the MADGRAPH (v5.1.3.30) Monte Carlo (MC) generator [14]. The tt background is modelled at LO with up to three additional partons in the zero-lepton, one-lepton and two-lepton final states, also using MADGRAPH, as is the production of tt + electroweak (EW) bosons and the production of EW bosons with additional partons. Single top quarks and any additional partons produced in the t- and s-channels, and in association with a W boson, are modelled at NLO using POWHEG (v1.0 r1380) [15-19]. The tt + Higgs boson and diboson (WW, WZ and ZZ) processes are modelled at LO using PYTHIA 6.426 [20]. The decays of τ leptons are simulated with TAUOLA (v27.121.5) [21]. The simulation of additional initial-state and final-state radiation, and the fragmentation and hadronisation of quarks and gluons, are performed using PYTHIA with the Z2* tune [22]. The (LO) CTEQ6L1 [23] set of parton distribution functions (PDF) is used with the MADGRAPH and PYTHIA samples, while the (NLO) CTEQ6M set is used with the POWHEG samples. The generated events are passed through a full simulation of the CMS detector, based on the GEANT4 package [24]. Next-to-leading-order or, when available, next-to-next-to-leading-order (NNLO) cross sections are used to normalise the predictions. For all samples of simulated events, multiple minimum-bias events generated with PYTHIA are added to simulate the presence of additional proton-proton interactions in the same or neighbouring proton bunches (pileup). To refine the simulation, the simulated events are weighted to reproduce the distribution in the number of reconstructed vertices observed in data. The cross section ratio σ_ttbb/σ_ttjj is measured by CMS to be 2.2 ± 0.3 (stat) ± 0.5 (syst)%, where the ttjj process is defined as the production of a tt pair with any two additional jets (j) [25]. As the MADGRAPH tt simulation used in this analysis predicts σ_ttbb/σ_ttjj to be 1.2%, the simulation is corrected to the ratio of cross sections measured in Ref. [25]. To illustrate the composition of this dominant background, the sample is divided into two subcategories. The categories correspond to the production of tt with additional light quarks or gluons, "tt + ll/gg", and to the production of tt with additional charm or bottom quarks, "tt + cc/bb". The samples of electroweak bosons with additional partons and of single top quarks are grouped into a category termed "EW", while samples of tt + electroweak bosons, and tt decaying into final states containing zero or two leptons, are grouped into a category termed "tt other". Event reconstruction and selections Events are reconstructed using a particle-flow (PF) algorithm [26,27]. This proceeds by reconstructing and identifying each final-state particle using an optimised combination of all subdetector information. Each event is required to have at least one reconstructed vertex. The primary vertex is chosen as the vertex with the largest value of Σp_T² of the tracks associated with that vertex. Additional selection criteria are applied to each event to reject events with features consistent with arising from detector noise and beam-gas interactions.
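As an illustration of the primary-vertex choice described above (the vertex with the largest Σp_T² of its associated tracks), here is a minimal sketch in Python; the input structures are hypothetical stand-ins, since the actual reconstruction runs inside the CMS software framework.

```python
def choose_primary_vertex(vertices):
    """Pick the vertex with the largest sum of squared track pT.

    `vertices` is a list of dicts like {"z": ..., "track_pts": [...]},
    a stand-in for the reconstructed vertex collection.
    """
    def sum_pt2(vertex):
        return sum(pt**2 for pt in vertex["track_pts"])

    return max(vertices, key=sum_pt2)

# Illustrative example with two pileup-like vertices and one hard-scatter vertex
vertices = [
    {"z": 0.3, "track_pts": [0.8, 1.2, 2.0]},
    {"z": -1.5, "track_pts": [45.0, 30.0, 5.0, 3.0]},   # hard scatter
    {"z": 2.2, "track_pts": [1.0, 0.9]},
]
pv = choose_primary_vertex(vertices)
print("primary vertex z =", pv["z"])
```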
The energy of electrons is determined from a combination of the track momentum at the primary vertex, the corresponding ECAL energy cluster, and the energy sum of the reconstructed bremsstrahlung photons associated with the track. The energy of muons is obtained from the corresponding track momentum obtained in a combined fit to information from the inner silicon trackers and outer muon detectors. The energy of charged hadrons is determined from a combination of the track momentum and the corresponding ECAL and HCAL energies, corrected for the suppression of small signals, and calibrated for the non-linear response of the calorimeters. Finally, the energy of neutral hadrons is obtained from the corresponding calibrated ECAL and HCAL energies. As charged leptons originating from top quark decays are typically isolated from other particles, a variable (I_rel) is constructed to select lepton candidates based on their isolation. It is defined as the scalar sum of the p_T values of the particles reconstructed within an angle ∆R of the axis of the momentum of the lepton candidate, excluding the lepton candidate, divided by the p_T of the lepton candidate. The ∆R is defined as ∆R = √((∆η)² + (∆φ)²), where ∆η and ∆φ are the differences in pseudorapidity and azimuthal angle between the lepton candidate and any other track or energy deposition. A muon or an electron candidate is rejected if I_rel is greater than or equal to 0.12 or 0.1, for respective values of ∆R of 0.4 and 0.3. Jets are clustered from the reconstructed particles using the infrared- and collinear-safe anti-k_T algorithm [28], with distance parameter R = 0.5, as implemented in the FASTJET package [29]. The jet momentum is defined by the vectorial sum of the momenta of all of the particles in each jet, and is found in the simulation to be within 5% to 10% of the true jet momentum, for the entire p_T spectrum of interest and detector acceptance [30]. Corrections to the jet energy scale (JES) and the jet energy resolution (JER) are obtained from the simulation and through in situ measurements of the energy balance of exclusive dijet and photon + jet events. An offset correction is applied to take into account the extra energy clustered into the jets from pileup. Muons, electrons, and charged hadrons originating from pileup interactions are not included in the jet reconstruction. Missing transverse energy (E_T^miss) is defined as the magnitude of the vectorial sum of the p_T of all the selected jets and leptons in the event. Charged hadrons originating from pileup interactions are also not included in the reconstruction of E_T^miss. Jets are classified as b quark jets through their probability of originating from the hadronisation of bottom quarks, using the combined secondary vertex (CSV) b tagging algorithm, which combines information from the significance of the track impact parameter, the jet kinematics, and the presence of a secondary vertex within the jet [31].
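The relative-isolation variable I_rel and the ∆R separation defined above can be illustrated with the following sketch (Python; the lepton and particle-flow candidate records are hypothetical stand-ins).

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation Delta R = sqrt(Delta eta^2 + Delta phi^2)."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:                      # wrap the azimuthal difference
        dphi = 2 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, pf_candidates, cone=0.4):
    """Scalar pT sum of candidates within the cone, excluding the lepton itself,
    divided by the lepton pT (the I_rel variable described in the text)."""
    pt_sum = 0.0
    for cand in pf_candidates:
        if cand is lepton:
            continue
        if delta_r(lepton["eta"], lepton["phi"], cand["eta"], cand["phi"]) < cone:
            pt_sum += cand["pt"]
    return pt_sum / lepton["pt"]

# Hypothetical muon plus a few nearby candidates
muon = {"pt": 40.0, "eta": 0.5, "phi": 1.0}
cands = [muon,
         {"pt": 2.0, "eta": 0.6, "phi": 1.1},
         {"pt": 1.5, "eta": 0.2, "phi": 1.3},
         {"pt": 20.0, "eta": 2.0, "phi": -2.0}]   # far away, outside the cone
print("I_rel =", relative_isolation(muon, cands, cone=0.4))
# A muon candidate would be kept if I_rel < 0.12 for the 0.4 cone, as described above.
```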
To preferentially select tttt events while suppressing backgrounds, events that pass the muon or electron trigger are required to pass baseline selections corresponding to the µ + jets or e + jets channels. The selections comprise a series of criteria applied to the objects in the offline-reconstructed event. The selections require the presence of one well-identified and isolated muon or electron [32,33] with p_T > 30 GeV and with respective muon or electron |η| < 2.1 or 2.5. Jets are required to have p_T > 30 GeV and |η| < 2.5. All events are required to have the number of selected jets (N_jets) be at least six. For a jet to be b tagged, it must pass a requirement of the CSV algorithm [31] that provides a misidentification rate of ≈1% for light quark and gluon jets, and corresponds to b tagging efficiencies of 40-75%, depending on jet p_T and η. Events are required to have the number of b-tagged jets (N_btags) be at least two. The requirements on N_jets and N_btags strongly suppress background events arising from vector boson + jets and single top quark production. The H_T of an event is defined as the scalar sum of the p_T of all the selected jets in the event. Events are also required to have H_T > 400 GeV and E_T^miss > 30 GeV, which removes the residual background arising from multijet processes. Small corrections are applied to the simulated events to account for the differences between efficiencies in data and simulation for the above lepton and b tagging requirements. Event classification with an MVA algorithm To obtain the greatest possible discrimination between tttt events and the dominant tt background, variables sensitive to the different processes are exploited in a multivariate (MVA) discriminant. The selected variables are grouped into three categories based on the underlying physical characteristics that they exploit: content of top quarks, jet activity, and b quark jet content. Multiplicity of top quarks The presence of multiple all-jet decays of top quarks in tttt events can be exploited to distinguish such events from the tt background, which contains only a single all-jet top quark decay. The challenge in the kinematic reconstruction of such top quarks in an event containing many jets is to find correct selections of three jets that arise from any single top quark when many incorrect three-jet combinations are possible. Such correctly selected combinations are referred to as "correct trijets", while combinations containing one or more jets not originating from the same top quark are referred to as "incorrect trijets". The large number of incorrect trijets in signal and background events motivates the use of MVA methods as in Ref. [34] to distinguish between correct and incorrect trijets by combining information from a set of input variables.
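Before the individual input variables are described below, the following sketch (Python, with hypothetical jet four-vectors; the actual analysis is implemented in the CMS framework with TMVA) illustrates how trijet candidates could be enumerated and how two typical input variables, the W-candidate dijet mass and the trijet mass, could be computed for each candidate.

```python
import math
from itertools import combinations

def p4(pt, eta, phi, m):
    """Convert (pt, eta, phi, mass) into a Cartesian four-vector (E, px, py, pz)."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(m * m + px * px + py * py + pz * pz)
    return (e, px, py, pz)

def add(*vectors):
    """Component-wise sum of four-vectors."""
    return tuple(sum(c) for c in zip(*vectors))

def mass(v):
    """Invariant mass of a four-vector."""
    e, px, py, pz = v
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def delta_r(j1, j2):
    """Delta R separation between two jets given as dicts with 'eta' and 'phi'."""
    dphi = abs(j1["phi"] - j2["phi"])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(j1["eta"] - j2["eta"], dphi)

def trijet_candidates(jets):
    """Enumerate all 3-jet combinations; inside each, take the two jets with the
    smallest Delta R as the W-candidate dijet and compute the dijet and trijet
    invariant masses (two of the MVA input variables described in the text)."""
    for trio in combinations(jets, 3):
        w_pair = min(combinations(trio, 2), key=lambda pair: delta_r(*pair))
        yield {
            "jets": trio,
            "m_dijet": mass(add(*(p4(j["pt"], j["eta"], j["phi"], j["m"]) for j in w_pair))),
            "m_trijet": mass(add(*(p4(j["pt"], j["eta"], j["phi"], j["m"]) for j in trio))),
        }

# Hypothetical four-jet event (pt and mass in GeV)
jets = [
    {"pt": 120.0, "eta": 0.3, "phi": 0.1, "m": 10.0},
    {"pt": 80.0, "eta": -0.5, "phi": 2.4, "m": 8.0},
    {"pt": 60.0, "eta": 1.1, "phi": -1.2, "m": 7.0},
    {"pt": 45.0, "eta": 0.9, "phi": 1.9, "m": 6.0},
]
for cand in trijet_candidates(jets):
    print(f"m(dijet) = {cand['m_dijet']:.1f} GeV, m(trijet) = {cand['m_trijet']:.1f} GeV")
```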
Within the trijet, the dijet system originating from the W boson decay is attributed to the two jets with the smallest ∆R separation. The invariant mass of this dijet and the invariant mass of the trijet are used as input variables in the MVA. For correct trijets, these variables have respective values close to that of the W boson and of the top quark. The azimuthal separations between the trijet and dijet systems and between the trijet and the jet not selected for the dijet system are also used. These variables typically have smaller values for correct trijets. The ratio of the magnitude of the vectorial p_T sum to the scalar p_T sum of the trijet system has typically larger values for correct trijets and is included as an input variable. Finally, the discriminant of the CSV b tagging algorithm for the jet not selected in the dijet provides, for correct trijets, values expected for b quark jets and is therefore included. These variables are combined in a boosted decision tree algorithm (BDT_trijet) using the TMVA package [35]. Of the simulated tt events that pass the baseline selections, approximately 61% have the correct trijet from the all-jet top quark decay as the trijet with the largest BDT_trijet discriminant value. Following the baseline selection, it should be possible to reconstruct multiple all-jet decays of top quarks in tttt events, but not in tt events. The BDT_trijet distribution of the highest-scoring trijet has a similar form for tttt and tt events. However, the second-highest-ranking trijet frequently reflects correct trijets in tttt events and incorrect trijets in tt events. Hence, the trijet with the largest value of the BDT_trijet discriminant is removed from the event, and the BDT_trijet discriminant of the highest-ranked trijet formed from the remaining jets (BDT_trijet2) is used to distinguish between tttt and tt events. Distributions in this variable, in N_jets and in N_btags, in data and simulation are shown in Fig. 2. The uncertainty labelled "scale" in the legend refers to the changes produced through changes by factors of two and one half in the factorisation and renormalisation scales of the calculation, as discussed in Section 6. The successful reconstruction of a third all-jet decay of a top quark is unlikely, and is only possible in the small fraction of events containing at least nine jets. Hence it provides negligible additional discriminating power and is not used. The reduced event (RE) is constructed by removing the jets contained in the highest-ranked BDT_trijet trijet. In tt events, the RE will typically contain only jets arising from the t → bℓν decay of the top quark, initial- and final-state radiation, the underlying event, and pileup interactions. Conversely, a tttt RE can contain up to two all-jet top quark decays, and as a result numerous energetic jets. Two variables based on the RE are (i) H_T^RE, i.e., the H_T of the RE, and (ii) M^RE, i.e., the invariant mass of the system comprising all the jets in the RE.
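A minimal sketch of building the reduced event described above (Python; the trijet score used here is a toy placeholder, since the real BDT_trijet discriminant comes from the TMVA training described in the text): drop the jets of the highest-scoring trijet and compute H_T^RE from the remaining jets.

```python
from itertools import combinations

def reduced_event(jets, trijet_score):
    """Remove the jets of the highest-scoring trijet and return what remains.

    `trijet_score` is a stand-in for the BDT_trijet discriminant: a function
    mapping a 3-jet combination to a number (higher = more top-quark-like).
    """
    best_trijet = max(combinations(jets, 3), key=trijet_score)
    remaining = [j for j in jets if j not in best_trijet]
    ht_re = sum(j["pt"] for j in remaining)       # H_T of the reduced event
    # M^RE would be the invariant mass of the summed four-vectors of
    # `remaining`; it is omitted here to keep the sketch short.
    return remaining, ht_re

# Hypothetical event with six jets and a toy score that simply prefers trijets
# with the largest summed pT (a placeholder, not the real discriminant).
jets = [{"pt": pt} for pt in (150.0, 110.0, 90.0, 70.0, 50.0, 35.0)]
toy_score = lambda trio: sum(j["pt"] for j in trio)

remaining, ht_re = reduced_event(jets, toy_score)
print(len(remaining), "jets left, H_T^RE =", ht_re, "GeV")
```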
Jet activity Because tttt events can contain up to ten hard jets from top quark decays, while tt events contain up to four, the following variables based on the jet activity of the event possess discrimination power: (i) N_jets, (ii) H_T^b, (iii) H_T/H_p, (iv) H_T^ratio, (v) p_T5, and (vi) p_T6. The H_T^b variable is defined as the H_T of the b-tagged jets. In the H_T/H_p ratio, H_p is the scalar sum of the total momenta of the selected jets. The ratio of the H_T of the four leading jets to the H_T of the other jets is defined as H_T^ratio. The p_T5 and p_T6 variables represent, respectively, the p_T values of the jets with the 5th and 6th largest p_T. All these variables are used in the discriminant described in Section 5.4. Multiplicity of bottom quarks The analysis assumes that the top quark decays with the SM branching ratio of B(t → bW) = 1. Hence, tttt events contain four bottom quarks from top quark decays, whereas tt events contain only two bottom quarks from top quark decays. Therefore the multiplicity of b-tagged jets is a potential source of discriminating power, and is also used in the discriminant discussed in Section 5.4. Event-level BDT The ten variables described in Sections 5.1, 5.2, and 5.3 are combined using a second, event-level BDT (BDT_event). To maximise sensitivity, the events are divided into three categories corresponding to N_jets = 6, 7, and > 7, where the N_jets = 6 category is used as a sideband region to constrain the tt background in the calculation of limits on tttt production. In Fig. 3, distributions in the BDT_event discriminant are shown in data and in MC for each of these categories.
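To make the jet-activity variables above concrete, the sketch below (Python, with hypothetical jet records; in the analysis these quantities are computed from reconstructed jets and fed, together with the other variables, into the event-level BDT) computes H_T^b, H_T/H_p, H_T^ratio, p_T5, and p_T6 for a single event.

```python
import math

def jet_activity_variables(jets):
    """Compute the jet-activity variables listed in the text.

    Each jet is a dict with 'pt', 'eta' and a boolean 'btag' flag.
    """
    jets = sorted(jets, key=lambda j: j["pt"], reverse=True)
    ht = sum(j["pt"] for j in jets)
    hp = sum(j["pt"] * math.cosh(j["eta"]) for j in jets)   # scalar sum of |p|
    ht_b = sum(j["pt"] for j in jets if j["btag"])
    ht_lead4 = sum(j["pt"] for j in jets[:4])
    ht_rest = ht - ht_lead4
    return {
        "N_jets": len(jets),
        "H_T^b": ht_b,
        "H_T/H_p": ht / hp,
        "H_T^ratio": ht_lead4 / ht_rest if ht_rest > 0 else float("inf"),
        "p_T5": jets[4]["pt"] if len(jets) > 4 else 0.0,
        "p_T6": jets[5]["pt"] if len(jets) > 5 else 0.0,
    }

# Hypothetical seven-jet event
jets = [
    {"pt": 180.0, "eta": 0.2, "btag": True},
    {"pt": 140.0, "eta": -0.8, "btag": False},
    {"pt": 95.0, "eta": 1.3, "btag": True},
    {"pt": 70.0, "eta": 0.5, "btag": False},
    {"pt": 55.0, "eta": -1.7, "btag": False},
    {"pt": 42.0, "eta": 2.1, "btag": False},
    {"pt": 33.0, "eta": -0.1, "btag": False},
]
for name, value in jet_activity_variables(jets).items():
    print(f"{name}: {value:.3g}")
```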
Systematic uncertainties and limits on tttt production The systematic uncertainties considered in this analysis are separated into two categories: (i) those that affect the normalisations of the BDT_event discriminant distributions of both signal and backgrounds, and (ii) those that affect the form of the distributions of just the backgrounds. The normalisations are affected by the uncertainty in the integrated luminosity of the data and the theoretical cross sections of the signal and background processes. An uncertainty in the integrated luminosity of 2.6% is included [36]. The uncertainty in the tt cross section is expected to dominate, and is taken from Ref. [5]. Uncertainties in the second category affect the form of the distributions of the BDT_event discriminant. As tt is the dominant background, systematic effects on the form of the distributions are considered only for tt events. The impact of contributions from higher-order corrections in the tt simulation is quantified by comparing alternative tt samples that are generated with the renormalisation and factorisation scales simultaneously changed up and down by a factor of two relative to the nominal tt sample. The matching of partons originating from the matrix element to the jets from the parton showers is performed according to the MLM prescription [37]. The uncertainty arising from this prescription is estimated by changing the minimum k_T measure between partons by factors of 0.5 and 2.0 and the jet matching threshold by factors of 0.75 and 1.5. To evaluate the uncertainty due to imperfect knowledge of the JES, JER, b tagging, and lepton-identification efficiencies, and the cross section for minimum-bias production used in the pileup-reweighting procedure in simulation, the input value of each parameter is changed by ±1 standard deviation of its uncertainty. A systematic uncertainty due to the imperfect knowledge of the contribution from the ttbb component in tt events is also estimated. As mentioned previously, a correction is applied to the tt simulation to reproduce the observed [25] ratio of σ_ttbb to σ_ttjj. The systematic uncertainty associated with the imperfect knowledge of this ratio is estimated by changing this correction by ±50%. Figure 2: The distribution in the BDT_trijet2 discriminant for the µ + jets and e + jets channels in (a) and (b), respectively, and the same for N_jets in (c) and (d), and for N_btags in (e) and (f). The ratios plotted at the bottom of each panel reflect the percent differences between data and MC events. The hatched areas show the changes in the calculated predictions produced by factor-of-two and factor-of-one-half changes in the factorisation and renormalisation scales in the tt simulation. Figure 3: The distribution in the BDT_event discriminant for data and simulation in events with N_jets = 6 for the µ + jets and e + jets channels in (a) and (b), respectively, and the same in events with N_jets = 7 in (c) and (d), and in events with N_jets > 7 in (e) and (f). The ratios plotted at the bottom of each panel reflect the percent differences between data and MC events. The hatched areas reflect the changes in the calculated predictions produced by factor-of-two and factor-of-one-half changes in the factorisation and renormalisation scales (see Section 6).
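Shape systematics such as the scale, matching, JES, and JER variations above are commonly encoded as nuisance parameters by interpolating per bin between the nominal template and its ±1 standard deviation variations. The sketch below (Python with NumPy) is a generic illustration of that idea under assumed template values, not the specific implementation used in the paper.

```python
import numpy as np

def morphed_template(nominal, up, down, theta):
    """Per-bin linear interpolation/extrapolation between the nominal template
    and its +/-1 sigma systematic variations, as a function of the nuisance
    parameter theta (theta = 0 -> nominal, +1 -> 'up', -1 -> 'down')."""
    nominal, up, down = (np.asarray(a, dtype=float) for a in (nominal, up, down))
    shifted = np.where(theta >= 0,
                       nominal + theta * (up - nominal),
                       nominal - theta * (down - nominal))
    return np.clip(shifted, 0.0, None)    # bin contents stay non-negative

# Hypothetical BDT_event discriminant templates for the tt background
nominal = [120.0, 90.0, 60.0, 30.0, 10.0]
scale_up = [126.0, 93.0, 58.0, 27.0, 8.5]     # e.g. scales doubled
scale_dn = [115.0, 88.0, 63.0, 33.0, 11.5]    # e.g. scales halved

print(morphed_template(nominal, scale_up, scale_dn, theta=+0.5))
print(morphed_template(nominal, scale_up, scale_dn, theta=-1.0))
```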
No significant excess of events that would represent SM tttt production is observed above the background prediction. Therefore, an upper limit on σ_tttt is set by performing a simultaneous maximum likelihood fit to the distributions in the BDT_event discriminant for signal and background in the six event categories described in Section 5. The systematic uncertainties in the normalisation and the form of the distributions of the discriminant are accommodated by incorporating nuisance parameters into the fit. The contributions of the nuisance parameters to the likelihood function are modelled using log-normal functions for normalisations and Gaussian functions for the forms of the distributions. The functions have widths that correspond to the ±1 standard deviation changes of the systematic sources described in the previous section. Statistical uncertainties in the simulation are taken into account by applying a "lightweight" version of the Beeston and Barlow method [38], where one nuisance parameter is associated with the estimate of the total simulation and the statistical uncertainty in each bin. The best-fit values of the nuisance parameters show only statistically insignificant deviations from their input values. In particular, the best-fit value of the parameter corresponding to the ttbb correction is consistent with the result obtained in Ref. [25]. The modified frequentist CLs approach [39,40] using the asymptotic approximation is adopted to determine the upper limit using the ROOSTATS package [41,42]. The limit calculated at a 95% confidence level (CL) on the production cross section σ_tttt is 32 fb, where a limit of 32 ± 17 fb is expected. These limits are approximately 25 × σ_tttt^SM. Summary A search for events containing four top quarks was performed using data collected with the CMS detector in lepton + jets final states at √s = 8 TeV, corresponding to an integrated luminosity of 19.6 fb−1. The analysis had three stages. First, a baseline selection was used to select signal events while suppressing backgrounds. Second, to further discriminate between signal and background, an event classification scheme based on a BDT algorithm was defined to exploit differences in the multiplicity of top quarks, jet activity, and the multiplicity of bottom quarks. Third, a simultaneous maximum likelihood fit of the BDT_event discriminant distributions was performed, from which an upper limit on σ_tttt of 32 fb was calculated at a 95% CL, where a limit of 32 ± 17 fb was expected. These limits are approximately 25 × σ_tttt^SM. This result raises the prospect of the direct observation of SM tttt in future CMS data at the higher centre-of-mass energies of 13 and 14 TeV, where σ_tttt^SM is predicted to be ≈9 fb and 15 fb, respectively [3,14]. Furthermore, this result has the potential to constrain BSM theories producing tttt final states with kinematics similar to the SM process, and to enhance the discovery reach of BSM searches where SM production of tttt constitutes a possible background. Figure 1: Leading-order Feynman diagrams for tttt production in the SM from gluon-gluon fusion (left) and quark-antiquark annihilation (right).
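The limit above comes from a full binned maximum-likelihood fit with nuisance parameters; as a much-reduced illustration of the CLs criterion itself, the sketch below (Python with SciPy) sets a 95% CL upper limit for a single counting bin with a fixed background expectation. All numerical inputs are hypothetical.

```python
from scipy.stats import poisson

def cls_value(n_obs, b, s):
    """CLs = CL_{s+b} / CL_b for a single-bin counting experiment,
    using the observed count itself as the test statistic."""
    cl_sb = poisson.cdf(n_obs, b + s)     # p-value under signal + background
    cl_b = poisson.cdf(n_obs, b)          # p-value under background only
    return cl_sb / cl_b

def upper_limit(n_obs, b, lumi_fb, eff, step=0.1):
    """Scan the signal cross section (in fb) until CLs drops below 0.05."""
    xsec = 0.0
    while cls_value(n_obs, b, xsec * lumi_fb * eff) > 0.05:
        xsec += step
    return xsec

# Hypothetical single-bin inputs: 19.6 fb^-1, 3% signal efficiency,
# 45 expected background events, 47 observed events.
limit = upper_limit(n_obs=47, b=45.0, lumi_fb=19.6, eff=0.03)
print(f"95% CL upper limit: {limit:.1f} fb")
```

Here the observed count serves as the test statistic; the paper instead uses a profile-likelihood test statistic in the asymptotic approximation, with the N_jets = 6 sideband category constraining the tt background.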
5,479.6
2014-09-25T00:00:00.000
[ "Physics" ]
Toward Understanding the Molecular Role of SNX27/Retromer in Human Health and Disease Aberrations in membrane trafficking pathways have profound effects in cellular dynamics of cellular sorting processes and can drive severe physiological outcomes. Sorting nexin 27 (SNX27) is a metazoan-specific sorting nexin protein from the PX-FERM domain family and is required for endosomal recycling of many important transmembrane receptors. Multiple studies have shown SNX27-mediated recycling requires association with retromer, one of the best-known regulators of endosomal trafficking. SNX27/retromer downregulation is strongly linked to Down’s Syndrome (DS) via glutamate receptor dysfunction and to Alzheimer’s Disease (AD) through increased intracellular production of amyloid peptides from amyloid precursor protein (APP) breakdown. SNX27 is further linked to addiction via its role in potassium channel trafficking, and its over-expression is linked to tumorigenesis, cancer progression, and metastasis. Thus, the correct sorting of multiple receptors by SNX27/retromer is vital for normal cellular function to prevent human diseases. The role of SNX27 in regulating cargo recycling from endosomes to the cell surface is firmly established, but how SNX27 assembles with retromer to generate tubulovesicular carriers remains elusive. Whether SNX27/retromer may be a putative therapeutic target to prevent neurodegenerative disease is now an emerging area of study. This review will provide an update on our molecular understanding of endosomal trafficking events mediated by the SNX27/retromer complex on endosomes. INTRODUCTION Cells communicate with the extracellular environment via cell surface transmembrane proteins that direct processes such as nutrient uptake, cellular adhesion, and intracellular signal transduction. Homeostasis of these molecules is precisely controlled by balancing exocytic, endocytic, and intracellular trafficking pathways. How these pathways are connected and regulated is a question of fundamental and immense interest, both for understanding normal cell physiology and the etiology of important human diseases. Mechanisms that regulate transmembrane cargo sorting within endosomes remain poorly understood. In neurons, proteins and lipids must be exchanged and remodeled at the cell surface to maintain synaptic plasticity and cognitive development (Anggono and Huganir, 2012). Many critical transmembrane proteins and lipids must be internalized, while others undergo selective sorting, either through recycling to the cell surface or trafficking to lysosomes for downregulation or degradation. The endocytic network regulates both structural and functional synaptic remodeling by controlling the trafficking of numerous transmembrane proteins cargoes; examples include cell adhesion molecules, receptors required in signaling pathways, and ion channels (Anggono and Huganir, 2012;Di Fiore and von Zastrow, 2014). Retromer is thought to coordinate cargo sorting in two ways: by selecting cargo based on specific sequences and by promoting membrane remodeling to form tubular carriers enriched in certain cargoes (Cullen and Korswagen, 2011;Seaman et al., 2013;Burd and Cullen, 2014). Cargo recognition and binding use at least two mechanisms. Some cargoes directly bind VPS35 and VPS26 subunits (Seaman, 2007;Tabuchi et al., 2010;Fjorback et al., 2012). 
Increasing evidence suggests retromer uses various SNX proteins as cargo adaptors (Strochlic et al., 2007): SNX3 (Xu et al., 2001; Strochlic et al., 2007; Burd and Cullen, 2014), SNX5 (Simonetti et al., 2017, 2019), and SNX27 (Temkin et al., 2011) are all implicated in cargo recognition. Mutations in or loss of functional retromer have been increasingly linked with neurological disorders (Willnow and Andersen, 2013; Reitz, 2015). In this review, we provide an update on the current understanding of SNX27/retromer biology with a focus on molecular details and the link between SNX27 and retromer in sorting critical cargoes required for human health. Retromer-Mediated Cargo Recognition Following its identification as a multiprotein trafficking complex in budding yeast, mammalian retromer has been implicated in sorting hundreds of transmembrane cargoes either to the TGN or to the cell surface by re-routing them away from degradation in lysosomes (Cullen and Steinberg, 2018). Retromer recycles many important transmembrane cargoes from the endosome to the TGN, including sortilin (Mari et al., 2008), SorLA (Fjorback et al., 2012), and SorCS1 (Lane et al., 2010). FIGURE 1 | Overview of SNX27/retromer pathway in metazoan cells. Metazoan retromer is implicated in three distinct endosomal pathways through direct interactions with SNX proteins to form elongated tubules. The SNX27/retromer pathway is specific to metazoans and mediates cargo recycling from endosomes to the plasma membrane. In this pathway, cargoes including β2 adrenergic and glutamate receptors contain PDZ binding motifs recognized by SNX27. In addition, the SNX27 FERM domain binds NPxY motifs found in transmembrane cargoes. SNX-BAR/retromer and SNX3/retromer pathways occur in both yeast and metazoans. SNX-BAR/retromer retrieves cargoes from endosomes to the TGN, while SNX3/retromer is implicated in sorting receptors like Wntless (WLS) from endosomes to the TGN. However, in metazoans, retromer was later found to sort hundreds of transmembrane proteins from endosomes to the plasma membrane via an interaction with SNX27, which has become an emerging player implicated in recycling of solute carriers, glutamate receptors, and potassium channels (Lunn et al., 2007; Lauffer et al., 2010; Steinberg et al., 2013; Clairfeuille et al., 2016; Yang et al., 2018). In recent years, the role of retromer in the endosome-to-plasma membrane recycling pathway has emerged as critical to cellular and human health. SNX27 Membrane and Cargo Binding Several studies have shown SNX27 must be directed to endosomes in order to ensure trafficking of protein cargoes (Stockinger et al., 2002; Joubert et al., 2004; van Kerkhof et al., 2005; Czubayko et al., 2006; Lunn et al., 2007; Rincon et al., 2007; Simonetti et al., 2017). SNX27 demonstrates absolute specificity for PI3P headgroup association (micromolar binding affinities), and its PX domain drives localization to PI3P-enriched membranes (Stockinger et al., 2002; Joubert et al., 2004; Knauth et al., 2005; van Kerkhof et al., 2005; Rincon et al., 2007; Lee et al., 2008; Donoso et al., 2009). Structure-based mutagenesis established the dependence of membrane recruitment on the PX-PI3P interaction (Misra et al., 2001). However, synergistic binding of other modules (PDZ and FERM) to membrane-anchored cargo proteins promotes cooperativity for membrane localization (Lauffer et al., 2010; Rincon et al., 2011; Ghai et al., 2013, 2015).
This process is referred to as "coincidence detection" and is established as a fundamental physical requirement for the highly specific assembly of transport machineries at different organelles. The SNX27 PDZ domain binds both transmembrane and cytosolic proteins using type-I PDZ binding motifs (consensus sequence: X-S/T-X-Φ, where Φ represents any hydrophobic residue) (Joubert et al., 2004; Lunn et al., 2007; MacNeil et al., 2007; Rincon et al., 2007; Lauffer et al., 2010). Structural studies (Figure 4) have elucidated the molecular basis for PDZbm cargo recognition. The PDZbm sequences are often found at the protein C-terminus, and they possess acidic side chains at the −3 and −5 positions that form an "electrostatic clamp" with a conserved arginine on the SNX27 surface and thereby enhance affinity (Figure 4A; Clairfeuille et al., 2016). Many SNX27 PDZbms, including those found in NMDARs and the β2 adrenergic receptor (β2AR), lack these upstream acidic residues; instead, they possess conserved phosphorylation sites on serine and threonine residues. Crystal structures of the SNX27 PDZ domain bound to different phosphorylated peptides showed how Ser/Thr phosphorylation functions to "mimic" the acidic side chains required for high-affinity binding (Figures 4B,C; Clairfeuille et al., 2016). The FERM domain comprises three sub-domains called F1, F2, and F3 (Figures 2, 4E). This domain has been proposed to regulate interactions with endosomal cargos and/or to serve as a scaffold for signaling complexes (Ghai and Collins, 2011). The F1 subdomain contains a predicted ubiquitin-like fold. The F3 subdomain is predicted to have a structure similar to the pleckstrin homology (PH) and phosphotyrosine binding (PTB) domains based on sequence predictions (Stolt et al., 2003, 2005). The F1 and F3 modules are oriented by the central F2 subdomain, which contains four α-helices. The sequence identity of SNX27 FERM compared with the "canonical" FERM domain from SNX17 and SNX31 is low, and F2 is much smaller than its equivalents. The SNX27 FERM domain can bind Ras GTPases via the F1 module (Ghai and Collins, 2011), while the F3 subdomain binds cargo receptors using short NPxY motifs present in the cytosolic tails of activated signaling receptors. The ability of both the PDZ and FERM domains to bind cargo motifs significantly extends the repertoire of potential cargo molecules. SNX27 also binds negatively charged phosphoinositides via the F3 module, which contains a binding site with high affinity for specific phosphoinositide head groups (Mani et al., 2011; Ghai et al., 2015) enriched at the PM and within late endosomal compartments. This suggests a potential mechanism for activation-dependent redistribution of SNX27 to the plasma membrane. The role of SNX27 at the plasma membrane remains uncharacterized, but its recruitment to the contact zone between T cells and the APC (antigen presenting cell) may be important for maintaining the immunological synapse (IS) by controlling endocytic sorting and signaling downregulation of receptors, such as Disabled homology 1 (Dab1) in reelin signaling (Stolt et al., 2003, 2005; Ghai et al., 2015). Overall, the distinct endosomal and PM localization of SNX27 may be partly explained by the presence of both phosphoinositide and cargo binding modules in its C-terminal FERM domain. SNX27/Retromer In metazoans, SNX27/retromer has been established as a coat that recycles specific cargoes from endosomes to the plasma membrane.
SNX27/retromer cargo recycling is thought to occur on Rab4-positive early endosomes and requires the SNX-BAR complex (Cullen and Korswagen, 2011;Temkin et al., 2011;Steinberg et al., 2013). SNX27 acts as a major trafficking regulator through binding PDZ cargo in mammalian cells (Cao et al., 1999;Lauffer et al., 2010;Temkin et al., 2011;Steinberg et al., 2013;Gallon et al., 2014). In this pathway (Figure 1), SNX27 must first be recruited to endosomes when its the PX domain binds PI3P, perhaps with an additional contribution from the FERM domain (Ghai et al., 2015); data indicate disrupting the FERM/PI3P interaction reduces SNX27 association with endosomal recycling compartments (Ghai et al., 2015). Structural studies have demonstrated a direct interaction between VPS26 and the SNX27 PDZ domain (Gallon et al., 2014). An X-ray crystal structure revealed an exposed β-hairpin on the PDZ domain that binds a conserved groove on the VPS26 surface ( Figure 5). Association between the SNX27 PDZ domain and VPS26 increases affinity for PDZ binding motifs, which hints how cargo sorting may be allosterically regulated by SNX27/retromer complex formation (Gallon et al., 2014). SNX27 Binding Partners Overall, SNX27 has been primarily linked to membrane trafficking of cargo proteins via binding to either PDZbm or NPxY motifs. However, SNX27 has been demonstrated to bind other molecules and perhaps influence their mode of action; understanding these additional protein-protein interactions remains important (Teasdale and Collins, 2012), especially since we lack molecular details for many binding partners. Another established interaction partner for SNX27 is a protein from the WASH complex called FAM21 (or WASHC2) (Temkin et al., 2011;Freeman et al., 2014;Kvainickas et al., 2017). The WASH complex contains five proteins: WASH1; WASHC2 (FAM21); WASHC3 (formerly KIAA1033); WASHC4 (or strumpellin); and WASHC5 (formerly CCDC53) (Derivery et al., 2009). Unlike retromer, the WASH complex is not conserved across evolution and is absent in multiple organisms including yeast. The WASH complex mediates F-actin filament formation on endosomal membranes and is required for endosome-to-cell surface recycling (Seaman et al., 2013). FAM21 has been reported to prevent retrieval of the glucose transporter, GLUT1, to the Golgi and direct it into SNX27/retromer recycling pathway (Lee et al., 2016). In mammals, WASH complex is recruited to endosomes through a direct interaction between FAM21 and VPS35 (Harbour et al., 2012;Jia et al., 2012;Helfer et al., 2013). The interaction between SNX27 and FAM21 may thus be important in the context of SNX27/retromer coat assembly or regulation, but molecular details describing how SNX27 engages FAM21 are currently unknown. Sorting nexin 27 also interacts with the monomeric small GTPase, Ras (Ghai and Collins, 2011;, which is associated with multiple signaling pathways implicated in oncogenic signaling (Herrero et al., 2016). The Ras interaction occurs through the FERM F1 subdomain, which is also implicated in binding NPxY cargo proteins (Burden et al., 2004;Ghai and Collins, 2011;. These data may suggest other FERM domain proteins possess similar binding activity. Krit1 has been identified as an effector for Rap1, another Ras family protein; Krit1 interacts with Rap1 through its FERM domain and stabilizes epithelial junctions (Serebriiskii et al., 1997;Wohlgemuth et al., 2005;Glading et al., 2007;Francalanci et al., 2009). 
The GIRK (G-protein regulated inward rectifying potassium) class of potassium channels regulates neuronal excitability, and they also depend on the SNX27 FERM domain for localization and trafficking (Balana et al., 2013). In cells expressing dominant negative Ras, SNX27 cannot effectively regulate cell surface levels of GIRK potassium channels (Balana et al., 2013), which suggests a link between Ras regulation and cargo sorting by SNX27. Finally, the deubiquitinating enzyme (DUB) called OTULIN has recently been shown to interact with SNX27 (Stangl et al., 2019). OTULIN specifically hydrolyzes Met1-linked ubiquitin chains. OTULIN binds two separate surfaces on the SNX27 PDZ domain (Figure 5) with high affinity; it is thought to compete non-catalytically for cargo and retromer binding (Stangl et al., 2019). OTULIN contains a conserved class I PDZbm (sequence: ETSL) essential for binding SNX27. An X-ray crystal structure of OTULIN-SNX27 PDZ revealed a second interface in addition to the canonical PDZ-PDZbm interaction. In the second interface, part of the OTULIN catalytic domain (residues 67-79 containing an exposed β3-β4 hairpin loop) is located in close proximity to the SNX27 PDZ domain, which also engages VPS26A. Compared to the PDZbm alone, OTULIN catalytic domain affinity for SNX27 PDZ is increased over 30-fold (∼30 nM). This represents the tightest interaction ever reported for a PDZ interactor. It appears SNX27 cannot undertake simultaneous binding to both VPS26A and OTULIN (Figure 5), because OTULIN and VPS26A use a partially overlapping binding site located in the SNX27 β3-β4 hairpin loop and would experience clashes between atoms. The existence of other secondary interfaces could modulate the affinity, and thus the selectivity, of SNX27 PDZ interactors; it will be interesting to see if others are identified in future work. Open Questions in Cell Biology Sorting nexin 27 has been linked to endosomal trafficking through a direct interaction with retromer. There is good evidence for describing SNX27 as a retromer "cargo adaptor." But multiple important questions remain, and further work must be undertaken to understand functional links between the SNX27/retromer complex and Ras. For instance, does SNX27 associate with Ras, retromer, and/or WASH simultaneously? Biochemical experiments and biophysical assays could address this question in the context of artificial membranes to more closely mimic cellular conditions. Is SNX27 endosomal cargo binding affected or regulated by its interaction with Ras? Or might SNX27 somehow regulate Ras function in different signaling pathways? Overall, the ability of SNX27 to associate with and sort transmembrane receptors, and its potential for interacting with small GTPases on endosomal membranes, implicates SNX27 as a potential hub where both endosomal trafficking and integrated signaling processes meet. It is also important to note how other evidence suggests SNX27 can operate at least somewhat independently of retromer. For example, siRNA-mediated knockdown of SNX27 does not seem to affect VPS35 steady state protein levels, and vice versa. Knockdown experiments indicate SNX27 and retromer only partially overlap in their cargo repertoires (Cullen and Korswagen, 2011;Simonetti et al., 2017;Yong et al., 2020), which further suggests SNX27 can function independently of retromer. 
Finally, data have suggested a role for SNX27 in internalization from the cell surface, as opposed to the recycling role revealed by studies on β 2 AR (Lunn et al., 2007;Lauffer et al., 2010;Temkin et al., 2011;Hayashi et al., 2012). The role of SNX27 in the intracellular transport of GPCRs, ion channels, and kinases, suggests a possible role in attenuation or propagation of signal transduction, but how SNX27 fulfills both roles remains an important open question. TOWARD A MOLECULAR UNDERSTANDING OF SNX27/RETROMER The SNX27 PDZ domain also directly interacts with the retromer VPS26 subunit (Figure 5; Steinberg et al., 2013;Gallon et al., 2014;Clairfeuille et al., 2016). A β−hairpin located on the SNX27 PDZ domain binds between the two β−sandwich sub−domains of the VPS26 arrestin fold; this binding site is located next to the PDZbm site, but they do not overlap. The interaction with PDZ cargo does not require a "dual recognition surface, " but PDZ cargo binding affinity is enhanced when SNX27 binds retromer. Therefore, cargo sorting can be synergistically coordinated by a specific SNX27/retromer interaction. Structurally, the SNX27 PX domain ( Figure 4D) adopts a globular fold containing three anti-parallel β-strands followed by three α-helices (Seet and Hong, 2006;Chandra and Collins, 2019;Li et al., 2019). Sequence alignments of SNX27 PX domain with other PX family members (Seet and Hong, 2006) indicate multiple conserved regions. This includes specific basic residues, as well as the so-called "PPK loop" located between helices α1 and α2, which contains the consensus sequence defined as PxxPxK ( : large aliphatic amino acids V, I, L, and M) (Seet and Hong, 2006;Chandra and Collins, 2019;. The structure of SNX27 PX domain revealed a shallow and positively charged surface pocket in a location generally considered to be the binding site for negatively charged headgroups (Seet and Hong, 2006;Chandra and Collins, 2019;Li et al., 2019), although this structure did not explicitly contain the head group. In contrast to its N-terminus, we lack experimental structural information about the SNX27 C-terminus. SNX27 belongs to the same PX subfamily as SNX17, and there are two X-ray crystal structures of SNX17 FERM domain bound to NPxY motifs (Pselectin, PDB: 4GXB; and KRIT1, PDB: 4TKN) (Knauth et al., 2005;Francalanci et al., 2009;Ghai and Collins, 2011;Ghai et al., 2013). However, the sequence identity between SNX27 and SNX17 FERM domains is around 25%, so it will be useful to obtain structural information on the SNX27 FERM domain in the presence of its multiple ligands, including NPxY cargo motifs, Ras, and FAM21. Such information would provide key insights into how SNX27 uses its multi-domain architecture to organize a range of binding partners on membranes. Retromer Coats New and emerging structural studies have been invaluable for understanding how retromer assembles into coats on tubules. Recently, multiple new structures of retromer have been published. Thermophilic yeast SNX-BAR/retromer (Kovtun et al., 2018) and both fungal and metazoan SNX3/retromer (Leneva et al., 2020) coats have been reconstituted and visualized using cryo-electron tomography (cryoET). Different oligomers of murine retromer heterotrimer have been observed using single particle cryoEM . We will focus discussion here on implications from the newly determined SNX3/retromer structures because a recent review (Chandra et al., 2020) covered other new structures. 
Until recently, SNX-BAR proteins were believed to generate the curvature required for tubulation, and there are no reports of retromer alone driving tubulation. SNX-BARs contain amphipathic helices which can insert into lipid bilayers to robustly drive proteins to membranes and to induce membrane curvature (Pylypenko et al., 2007; Bhatia et al., 2009; van Weering et al., 2012b). Tubule extension is promoted by BAR dimerization to form higher-order tubular lattices composed of SNX-BAR complexes (Frost et al., 2008; Mim et al., 2012). The ability of retromer to transport many different cargo proteins has been explained by its ability to bind different adaptors, including sorting nexin proteins that lack membrane-deforming BAR domains; examples include SNX3, SNX12, and SNX27. SNX-BAR/retromer coats exhibit less regularity than do tubules coated with BAR dimers alone (Wassmer et al., 2009; Kovtun et al., 2018; Sun et al., 2020). Recently, the first structures of mammalian SNX3/retromer coats (Leneva et al., 2020) revealed formation of elongated coated tubules. These data demonstrate SNX-BARs are not required for in vitro tubulation. SNX3/retromer coats consist of arch-like units formed by asymmetrical VPS35 homodimers with VPS26 dimers and two SNX3 molecules located at the membrane. SNX3 binds retromer at an interface located between the VPS26 and VPS35 subunits, with its PI3P binding pocket facing the membrane. SNX3 is also attached to the membrane using a membrane insertion loop (MIL), which is a feature found in other membrane-binding PX domains (Cheever et al., 2006; Seet and Hong, 2006; Chandra and Collins, 2019). The VPS35-mediated arches in SNX3/retromer coats lack two-fold symmetry: one VPS35 monomer appears more curved, and the two VPS35 subunits form an asymmetric dimer interface using electrostatic residues that were proposed and tested in previous structural and biochemical studies. Asymmetric assembly of arches in SNX3/retromer coats was proposed to impose directionality and stoichiometry of adaptor binding to coats. These newly observed SNX3/retromer coat structures provide an important foundation for understanding retromer assemblies with other SNX adaptors that lack BAR domains, including SNX27, which may remodel membranes using a similar mechanism. FIGURE 6 | Modeling SNX27/retromer on membranes. Thermophilic yeast SNX-BAR/retromer (PDB ID: 6H7W) (A) and mammalian SNX3/retromer coats (B) (PDB: not yet available) have been reconstituted in vitro. The view in panel (B) was generated using PDB: 5F0J, which approximates the reported SNX3/retromer architecture. Both complexes drive tubulation, and reconstructions indicate retromer forms conserved asymmetrical V-shaped arches across eukaryotes. VPS35 is shown as red ribbons, VPS26 as blue ribbons, VPS29 as green ribbons, Vps5 as yellow ribbons, and the SNX3 PX domain as gray ribbons. On each model, potential locations for SNX27 domains are marked. In the SNX-BAR/retromer model (A), the SNX27 PX domain (gray ribbons) appears to be occluded by BAR dimers, and the PDZ domain would likely be blocked from engaging membrane cargo by the BAR layer. In the SNX3/retromer model (B), the SNX27 PX (gray ribbons) and PDZ (purple ribbons) domains are both located close to the membrane. There are currently no structural data regarding the overall architecture of SNX27, either on its own or as part of a retromer coat, so the location of the FERM domain remains unknown. Both models assume the SNX27 PX occupies a similar location to SNX3 PX.
This hints toward a generalized concept for retromer function in which retromer arches form a scaffold that contributes to or helps support membrane bending and can help propagate curvature and tubulation over long distances by oligomerization. Open Questions in Structural Biology Sorting nexin 27 has been well-established as a binding partner for retromer (Steinberg et al., 2013;Gallon et al., 2014). A major outstanding experimental question is whether SNX27 alone or together with retromer is sufficient to generate tubular carriers in vitro. It will be interesting to test directly whether SNX27/retromer forms tubules and if or how SNX27 will generate curvature. SNX27 may use its PX domain to orient itself on the membrane, in a manner similar to SNX3 (Leneva et al., 2020). However, SNX3 contains only one structured domain (PX domain), while SNX27 can engage various membrane-embedded ligands through multiple domains. SNX27 may therefore have additional constraints when engaging retromer. If SNX27/retromer forms coats, then what might these look like? Modeling the SNX27 PDZ interaction with VPS26 in the context of reconstituted arches provides some hints (Figure 6) and raises important new questions. It does not appear SNX27 could bind assembled SNX-BAR/retromer ( Figure 6A): the BAR dimers adjacent to the membrane may block or occlude the PDZ domain, which itself needs to bind cargo motifs embedded in the membrane. However, SNX-BAR/retromer has been functionally linked to formation of SNX27/retromer carriers (Steinberg et al., 2013). Could SNX27/retromer somehow "hand off " cargoes to SNX-BAR/retromer, or otherwise engage with SNX-BAR proteins? It is possible SNX27/retromer could assemble arches in a manner reminiscent to SNX3/retromer, since the SNX27 PX domain contains conserved residues that would allow it to dip into membranes in a similar manner to SNX3. In this scenario (Figure 6B), the SNX27 PX and PDZ domains appear to be located close enough for their linkers to connect the two domains. However, we currently lack information about location and orientation of the FERM domain, so this remains an open question. Alternatively, metazoan retromer has been shown to form oligomers in vitro, including longer and flatter chains (Chandra et al., 2020;Kendall et al., 2020) with different and poorly resolved VPS26 links. It remains possible that SNX27/retromer may form a "flatter" coat (Simonetti and Cullen, 2018); superposition of the VPS26/SNX27 PDZ crystal structure onto flat chains reveals the PDZ could in principle be located near membranes, but this has not been observed in the presence of membranes. Overall, reconstitution of SNX27/retromer on PI3P membranes remains an important biochemical and structural target. Reported reconstituted retromer coats appear to assemble tubules with slightly different diameters (Figure 6; Leneva et al., 2020), and this may reflect experimental conditions rather than reality. Could retromer combine with different SNX adaptors to form tubules having different dimensions? This would provide a way for cells to physically direct or sequester cargoes to different tubular carriers originating at the endosome. It will be interesting to see if retromer retains the arches observed in the presence of both SNX-BARs and SNX3; whether retromer can serve as a more adaptable scaffold with the metazoan-specific SNX27; and whether or how SNX-BAR/retromer may be linked structurally to SNX27/retromer. 
Mouse models revealed SNX27 is required for postnatal growth and survival (Cai et al., 2011). SNX27 −/− embryos are viable and develop during embryonic stages, but they show inhibited postnatal growth, including delayed weight gain, reduced organ size, and early death prior to weaning (Cai et al., 2011). The phenotype may arise from aberrant trafficking of NR2C, an ion channel receptor with a C-terminal PDZ motif that binds SNX27. Cai et al. (2011) report NR2C protein, but not mRNA, levels are higher in SNX27 −/− mice, and NR2C is not robustly endocytosed in SNX27-deficient neurons. This provides an important molecular link between SNX27 and a key transmembrane protein cargo needed during development. Disruption of SNX27/retromer-mediated endosomal sorting is linked to multiple debilitating neurodegenerative disorders, including Parkinson's disease (PD) (Harterink et al., 2011;Gallon et al., 2014), Alzheimer's disease (AD) , and Down's syndrome (DS) (van Weering et al., 2012a). Finally, identification of small molecules that stabilize retromer expression (Mecozzi et al., 2014;Berman et al., 2015;Muzio et al., 2020) underscores the importance of understanding how retromer complexes undergo assembly and regulation. In this section, we briefly highlight the range of cellular pathways influenced by SNX27/retromer. Signaling Sorting nexin 27/retromer has been shown to recycle important signaling receptors, including β 2 ARs (Lee et al., 2008) after ligand-induced endocytosis. Retromer influences cyclic AMP (cAMP) signaling when it recycles PTHR from the early endosome after it dissociates from β-arrestin; this event switches off the signaling pathway (Seaman, 2012;Teasdale and Collins, 2012). Specifically, PTHR has been shown to bind the SNX27 PDZ domain to ensure its recycling (Donoso et al., 2009). SNX27/retromer also trafficks the interferon receptor 2 (IFNAR2) subunit after its endocytosis, a process in which retromer appears to regulate both JAK/STAT signaling termination and gene transcription (Cullen and Korswagen, 2011). SNX27/retromer reduces RANK (receptor activator of NF-κB) signaling in osteoclasts by trafficking RANK in a retrograde pathway to the Golgi (Mim et al., 2012). A study has reported retromer is involved in nucleotide binding-leucine-rich repeat (NB-LRR)mediated signaling implicated in autophagy , but it remains unclear how retromer functions in this pathway and whether SNX27 is also involved. Autophagy Autophagy is the process by which cells degrade damaged organelles, misfolded or damaged proteins, and pathogens by enclosing them in a double membrane-bound structure called autophagosomes; these molecules are then delivered to lysosomes for degradation (Cheever et al., 2006). Autophagy is highly conserved across eukaryotes, and cells require autophagy to cope with stress tolerance and signaling induced by nutrients (Simonetti and Cullen, 2018). The involvement of SNX27/retromer in autophagic processes is emerging; it remains unclear whether its role is indirect and whether it is required for autophagy in certain cells (Belotti et al., 2013). SNX27 knockout cells exhibit impaired mTOR complex 1 (mTORC1) activation, which leads to increased autophagy . The WASH complex regulates trafficking of an essential protein called Atg9 to forming autophagosomes, where it is reported to undertake lipid scrambling and promote autophagosome formation (Maeda et al., 2020); the VPS35 D620N mutation blocks its transport and inhibits autophagy (Fujiyama et al., 2003). 
Retromer knockdown in Drosophila has been shown to disrupt autophagy, with undigested cytoplasmic and endosomal material building up in autophagosomes (Pim et al., 2015). Retromer has very recently been indirectly implicated in regulation of mTORC1 (Hesketh et al., 2020). Future research is required to understand how SNX27 and/or retromer function alone or together to influence or regulate both autophagy and nutrient sensing. Neurodegeneration Disruption of the endosomal system, and mutations in genes encoding proteins that play central roles in endosomal trafficking, contribute to pathologies associated with both AD and Parkinson's disease (PD) (Vilarino-Guell et al., 2011; Wen et al., 2011; Siegenthaler and Rajendran, 2012; Follett et al., 2014; Wang et al., 2014; Reitz, 2015; Small and Petsko, 2015; Mohan and Mellick, 2017; Zhang et al., 2018; Rahman and Morrison, 2019; Vagnozzi and Pratico, 2019; Arbo et al., 2020). SNX27/retromer maintains homeostasis of cell surface receptors, including AMPA and NMDA receptors (Cai et al., 2011; Anggono and Huganir, 2012; Wang et al., 2013; Choy et al., 2014; Hussain et al., 2014; Clairfeuille et al., 2016), in neurons and thus is essential for normal synaptic communication and brain function. Aberrant overactivation of these receptors leads to neuronal hyperactivity and ultimately to seizures commonly associated with epilepsy. In contrast, neuronal hypoactivity can cause synaptic depression linked to many neurodegenerative diseases like AD and PD, as well as to neuropsychiatric disorders like schizophrenia. Collectively, these diseases have an enormous socio-economic impact. A great deal of evidence now supports a direct link between aberrant endosomal trafficking and neurodegenerative disease onset (Chandra et al., 2020). SNX27/retromer clearly plays a critical role in receptor trafficking during synaptic transmission and neuronal function, and thus these proteins have become attractive putative drug targets for brain disorders. Homozygous deletion of SNX27 leads to epilepsy and psychomotor defects; patients typically die within 2 years of birth (Damseh et al., 2015). Here we will highlight specific SNX27 links to DS and AD, since other recent reviews (Wang et al., 2013, 2014; Small and Petsko, 2015; Chandra et al., 2020) have focused on retromer. In DS, chromosome 21 trisomy drives overexpression of a negative regulator (miR-155) of SNX27, leading to decreased SNX27 expression. SNX27 loss in turn leads to NMDA and AMPA receptor dysfunction associated with DS. Importantly, mouse models suggest that the synaptic and cognitive phenotypes associated with DS can be rescued through SNX27 overexpression (Wang et al., 2013). Sorting nexin 27 is linked to amyloid precursor protein (APP) trafficking (Steinberg et al., 2013; Wang et al., 2014; Huang et al., 2016) based on proteomic studies of surface protein levels following siRNA knockdown. The link to APP trafficking may occur through another protein, because no direct interactions have been detected between APP and SNX27 (Steinberg et al., 2013). SNX27 has further been implicated in reducing Aβ generation through interactions with PS1/γ-secretase (Wang et al., 2013, 2014). SorLA/SorL1, an intracellular sorting receptor, interacts with APP, and changes in SorLA expression or function affect the cellular distribution and processing of APP. There are now multiple links between SNX27, retromer, and SorLA/SorL1.
The retromer VPS26 subunit has been shown to interact with SorLA in vivo (Fjorback et al., 2012), but how retromer regulates APP trafficking and processing remains largely unknown. Biochemically, the SNX27 PDZ domain has been shown to bind SorLA with its cytosolic C-terminal FANSHY motif (Nielsen et al., 2007;Huang et al., 2016;Milne et al., 2019), but no structures have been reported. Down-regulation of SNX27/retromer is strongly implicated in AD through increased intracellular production of β-amyloid peptides from endosomal APP breakdown (Nielsen et al., 2007;Lane et al., 2010;Fjorback et al., 2012;Willnow and Andersen, 2013;Huang et al., 2016;Milne et al., 2019). Retrograde transport of APP from endosomes to the TGN involves interaction of SorLA with retromer (Nielsen et al., 2007;Fjorback et al., 2012;Willnow and Andersen, 2013). An endosomal shunt mechanism (Nielsen et al., 2007;Huang et al., 2016) has been proposed to explain how the SNX27/SorLA interaction can shift endosomal APP trafficking toward nonamyloidogenic processing at the cell surface, but molecular details remain elusive. Neither retromer nor SNX27 have been shown to interact with APP directly, and thus SorLA has been proposed as the molecular link between SNX27/retromer function and APP processing. Therefore, it remains critical to obtain structural and molecular details surrounding the crosstalk between SorLA, SNX27, and retromer in APP trafficking and homeostasis. Cancers Sorting nexin 27 is increasingly linked to cancers by mediating multiple protein-protein interactions important in trafficking, protein sorting, and membrane remodeling . The Cancer Genome Atlas database reveals SNX27 is highly expressed in invasive breast cancer tissue (Zhang et al., 2019;Bao et al., 2020;Sharma et al., 2020). Multiple studies have suggested how SNX27 affects tumor growth both in vitro and in vivo (Zhang et al., 2019;Sharma et al., 2020;Yang et al., 2020). SNX27 increases expression of vimentin and claudin−5 proteins, both of which promote tumor growth, and SNX27 has been proposed as a potential breast cancer biomarker (Zhang et al., 2019;Sharma et al., 2020). In breast cancer cells, SNX27 knockdown results in reduced motility, lower proliferation, less colony formation, and upregulated E−cadherin and β−catenin expression levels (Zhang et al., 2019). Additional studies using mouse models report decreased cell proliferation, tumor growth inhibition, and longer survival times (Frost et al., 2008;Pim et al., 2015). Finally, SNX27 may regulate matrix invasion by recycling specific matrix proteins, such as MT1-MMP metalloprotease, through a direct interaction (Bao et al., 2020;Sharma et al., 2020). Understanding the underlying cell biology remains important for uncovering specific mechanisms underlying the role of SNX27 in breast cancer. SNX27 governs glucose transport by an interaction with phosphatase and tensin homolog deleted on chromosome 10 (PTEN); this prevents glucose transporter type 1 (GLUT1) accumulation at the cell surface (Steinberg et al., 2013) and suppresses cancer progression (Shinde and Maddika, 2017;Zhang et al., 2019). SNX27 affects nutrient uptake in cancer cells through recycling of different energy transport receptor proteins. Finally, SNX27 is involved in cellular uptake of specific amino acids like glutamine, as well as mTORC1 activation Zhang et al., 2019), which may affect how cancer cells proliferate. 
Sorting nexin 27 is also implicated in progression of acute myeloid leukemia (AML) with potential for therapeutic treatment strategies (Wermke et al., 2015). An RNA interference (RNAi) screen in primary leukemia cells linked SNX27 loss to impaired cellular growth and viability; this suggests SNX27 could be considered as a diagnostic target (Wermke et al., 2015). However, the mechanisms by which SNX27 functions in different cancers overall remain unclear. Future exploration should clarify the underlying cellular functions of SNX27/retromer in the context of specific cancer cells. Overall, SNX27 may serve as an important target based on its established roles in promoting tumorigenesis, cancer progression, and metastasis. Viral Pathogenesis Retromer is targeted by numerous pathogens, including bacterial effectors and viral proteins. Viruses and their effectors have evolved many different strategies to target retromer in cells. Viral proteins can recruit retromer and cargo to replication sites to aid infection; the NS5A protein made by hepatitis C virus (HCV) interacts with VPS35 using this strategy (Yin et al., 2016;Elwell and Engel, 2018). Recruiting retromer directly to replication sites may redirect host factors that can be used to drive viral growth; one example is Tip47 (Ploen et al., 2013a,b;Vogt et al., 2013;Elwell and Engel, 2018), which has been reported to interact with CI-MPR (Diaz and Pfeffer, 1998;Orsel et al., 2000). Other viral proteins copy or mimic the motifs found in endogenous retromer cargo proteins; this likely allows them to hijack an important retrograde pathway in order to circumvent lysosomal degradation or to access genetic material in the nucleus. Both influenza virus M2 (Bhowmick et al., 2017) and HIV envelope (Env) proteins adopt this strategy (Groppelli et al., 2014). Another example is the HPV16 L2 major capsid protein that harbors multiple motifs associated with retromer; this includes X (L/M), NPxY, and a non-canonical PDZ-binding motif. Together, these three sequences permit L2 to engage retromer, SNX17, and SNX27, respectively (Bergant Marusic et al., 2012;Pim et al., 2015;Popa et al., 2015;Campos, 2017); engaging all three proteins would substantially increase interaction affinity and allow the viral protein to outcompete cellular cargo. It will be interesting to determine biochemically whether most viral proteins directly bind retromer or one of the sorting nexins now proposed as cargo adaptors. Some viral effectors instead change the activity or localization of retromer. One example is tyrosine kinase-interacting protein (Tip) from Herpesvirus saimiri; this protein redistributes VPS35 from early endosomal membranes to lysosomes (Kingston et al., 2011). The Tip protein is not required for replication, but retromer inhibition by Tip may contribute to transformation observed in T cells (Duboise et al., 1998). Human papilloma virus (HPV) E6 protein interacts with the SNX27 PDZ domain and affects GLUT1 trafficking; this interaction drives substantially increased glucose uptake and has been proposed to explain HPV malignancy (Ganti et al., 2016). The non-essential Vaccinia virus K7 protein interacts with both VPS26 and VPS35 (Li et al., 2017), and this interaction has been suggested to affect virus transport or uncoating (Benfield et al., 2013). 
Recently, multiple groups used orthogonal methods including CRISPR knockout, RNA interference, proteomics, and smallmolecule inhibitors, to show retromer and SNX27 may be involved in SARS-CoV-2 viral life cycle and infection (Daniloski et al., 2020;Zhu et al., 2020). One group identified host genes required for SARS-CoV-2 infection in a human A549 (lung adenocarcinoma) cell line that overexpresses the ACE2 receptor . Another group identified SNX27/retromer and other trafficking coat complexes as important host factors that influences spike (S) protein sorting in cells. Overall, understanding the fundamental trafficking pathways and mechanisms that govern SNX27/retromer assembly and regulation are likely to provide key insights into how pathogens hijack host cells. PERSPECTIVE Many important studies have now linked SNX27 to physiology and disease. It is now vital for the field to move toward integrating structural and biochemical data with experiments in model systems to understand the molecular role of SNX27 in cellular pathways linked to human disease. The field needs additional biochemical and structural information to judge the suitability of SNX27 as a viable therapeutic target. Structural data have demonstrated how SNX27 engages PDZ motifs, and indirect evidence from a related protein (SNX17) suggests how the FERM domain likely binds NPxY motifs. It is difficult to envision targeting either of these binding pockets with a small molecule, because SNX27 sorts many different important cellular cargoes. Broad disruption of cargo binding, especially for diseases requiring long term intervention, would likely have undesirable physiological effects. Such an approach may have merit for shorter treatments, such as inhibiting pathogen binding. Furthermore, the field needs to determine how SNX27 engages cargos in the context of binding both retromer and membranes. This would allow us to understand what conformation SNX27 adopts when bound to retromer, cargo, and phospholipids. Such studies would also reveal whether (or not) SNX27/retromer possesses interfaces distinct from those found in SNX-BAR/retromer or SNX3/retromer coats. If yes, might specific SNX27/retromer interfaces be stabilized or destabilized by small molecules, depending on disease? These are exciting and important questions to explore in the coming years. AUTHOR CONTRIBUTIONS MC and AK wrote early drafts. MC made figures. LJ conceived the review, assisted in writing structural sections, and undertook editing with input from all authors. All authors contributed to the article and approved the submitted version. FUNDING The authors are supported by NIH R35GM119525. LJ is a Pew Scholar in the Biomedical Sciences, supported by The Pew Charitable Trusts.
9,077.6
2021-04-15T00:00:00.000
[ "Biology", "Chemistry" ]
Mechanism of Coal Burst Triggered by Mining-Induced Fault Slip Under High-Stress Conditions: A Case Study Coal burst disaster is easily triggered by mining-induced fault unloading instability involving underground engineering. The high-static stress environment caused by complex geological structures increases the difficulty in predicting and alleviating such geological disasters caused by humans. At present, the mechanism of coal burst induced by mining-induced slip fault under high-stress conditions still cannot be reasonably explained. In this study, the burst accidents occurring near mining-induced slip fault under high-stress conditions were carefully combined, and the “time–space–intensity” correlation of excavation, fault, and syncline and anticline structure of the mining areas was summarized. On this basis, the rotation characteristics of the main stress field of the fault surface subjected to mining under high-stress conditions and the evolution law of stress were analyzed. Last, based on the spectrum characteristics of mining-induced tremors, the first motion of the P-wave, and the ratio of Es/Ep, the source mechanism behind mining-induced fault slip under high-stress conditions was revealed. The results demonstrate that the coal burst triggered by the fault slip instability under high-stress conditions is closely related to the excavation disturbance and the fold structure. Mining activities trigger the unloading and activation of the discontinuous structural surface of the fault, the rotation of the stress field, and the release of a large amount of elastic strain energy and cause dynamic disasters such as coal bursts. The research results in this study are helpful to enrich the cognition of the inducing mechanism of fault coal burst. INTRODUCTION Coal burst can generally be classified into three types, i.e., the fault-induced type, the coal pillarinduced type, and the strain-induced type (Kaiser et al., 2000), in which fault-induced coal burst is caused by the superposition of the mining-induced quasi-static stress in the fault coal pillar and the seismic-based dynamic stress generated by fault activation (Cai et al., 2020). Coal burst triggered by mining-induced fault slip (CBTMIFS) refers to the dynamic phenomenon that the deep excavation activities lead to the fault's transformation from a locked state to an activated state, consequently resulting in sudden instability accompanied by violent energy release (Pan, 1999). Unlike natural earthquake induced by fault activation, mining activities are a key factor in the occurrence of CBTMIFS (Ortlepp and Stacey, 1992). A strong mining tremor of magnitude 5.2 in 1997 is considered one of the largest seismic events recorded at the Klerksdorp mine in South Africa, and the analysis result of ground motion parameters indicates that the violent earthquake was attributed to an existing fault slip in the region (McGarr et al., 2002). In 2005, 112 shallow earthquakes were recorded during the construction of the MFS Faido tunnel in Switzerland, which were felt strongly on the ground and caused considerable damage to the tunnel. The focal mechanism solution was consistent with the strike and tendency of natural fault (Husen et al., 2013). On November 3, 2011, the F16 thrust fault was activated at the Qianqiu coal mine in Yima, Henan Province, China, causing 10 fatalities and trapping 75 miners. On March 27, 2014, another devastating burst accident of magnitude 1.9 in this coal mine caused 6 fatalities and trapped 13 miners. 
The accident investigation report pointed out that the key factor of the accident was slip activation of the thrust fault (Cai et al., 2018). The abovementioned dynamic disasters closely related to human mining activities have attracted extensive attention from the media and the public. If the internal mechanism of CBTMIFS can be revealed, important ideas can be provided for predicting and remitting the risk of such engineering disasters. Different from the brittle shear deformation of faults, the fold structures such as syncline and anticline reflect the continuous ductile deformation of rocks under crustal movement and sedimentation (Suppe, 1983). Both faults and folds are widely distributed in nature, often in the same tectonic unit. For largescale crustal movements, multiple fold and fault structures interact and mutually transform through interlayer slip, uplift, and fold during the long historical tectonic movement and sedimentation process, and the specific forms include fault-related fold, fault-transition fold, fault-propagation fold, fault-detachment fold, imbricate structure, wedge structure, and interference structure (Bieniawaki, 1967). For the medium-and small-scale production range of mining areas, the frequent geological movement dominated by ancient stress leads to the complex regional tectonic stress field. Therefore, it will be more difficult to investigate the disaster-triggering mechanism of the mining-induced fault slip under a high-stress engineering background. In order to clarify the occurrence mechanism of CBTMIFS in geological anomaly areas, plenty of studies have been carried out through theoretical analysis, laboratory experiment, numerical simulation, and field experiment,including the mechanical response and mineral composition of fault gouge (Morrow and Byerlee, 1989), hydraulic pressure and stress state of the fault zone (Segall and Rice, 1995), slip and failure criterion of fault (Fan and Wong, 2013), and energy accumulation and release law of the fault surface (Zhao and Song, 2013). On this basis, the key scientific issues condensed include the following: 1) How engineering dynamic disturbances, such as blasting, TBM excavation, hydraulic fracturing, geological drilling and rockburst, natural earthquake, driving load, and continuous explosion, will lead to slip, failure, and even instability of faults in high-stress geological anomaly areas? 2) What response characteristics will be caused to the stress field, vibratory field, and energy field of surrounding rock in the adjacent production area once the fault instability occurs in the high-stress geological anomaly area? Relevant studies suggest that local high-stress concentration is likely to occur and develop when the mining working face or the excavation boundary is close to the fault in the high-stress geological anomaly area, and the corresponding burst risk increases (Cook, 1976;Blake and Hedley, 2003;Yin et al., 2014). When the fault approaches the critical stress state, the normal stress and the shear stress decrease sharply due to the reduction of intergranular force and the contact fracture of particles, and the evolution of fault state depends on the initial stress condition and excavation process (Wu et al., 2017;Yin et al., 2012). 
Field observations and theoretical analysis show that the development height of mining-induced fault rupture and slip is controlled by the magnitude and direction of principal stress, while the intensity of seismic events is related to the stratum matrix and local fractures involved in the rupture process (Duan et al., 2019). At the same time, many investigations have explored the response behavior of faults to static and dynamic load disturbances by changing stress conditions in laboratory tests. Marone (1998) pointed out that static friction and aging strengthening of faults are systematic responses that depend on loading rate and elastic coupling. Li et al. (2011) simplified the normal behavior of faults to elastic stiffness, adopted the coulomb-slip model to characterize the shear behavior of faults, and conducted a quantitative study on the propagation and attenuation law of seismic waves in discontinuous rock masses. Bai et al. (2021) introduced the displacement-related moment tensor method to reproduce the phenomenon of mining-induced fault slip of coal mine site in numerical simulation. To sum up, the stress distribution and evolution characteristics of conventional fault activation instability have been well researched on. However, there are few studies on CBTMIFS under high-stress environments, and the existing research results ignore the influence of mining quasi-static loading and unloading stress paths and ground motion stress on the fault slip instability. Therefore, it is necessary to further study the mechanism of CBTMIFS under high-stress conditions, for providing guidance for the monitoring and prevention of coal bursts induced by fault instability. Geological Structures Mengcun coal mine mainly mines 4# coal seam, where is located in Binchang coal district, Shaanxi province, China, with a mining depth of 620-750 m. The 401101 working face is the first working face of the Mengcun coal mine, with a length of 2090 m and a width of 180 m. The layered fully-mechanized sub-level caving mining technology is adopted. See Figure 1, the north wing, west wing, and east wing of the working face are solid coal, and there is a 200 m protective coal pillar between it and the main entry group. The development roadway includes five main entries, which are no.2 return air main entry, no.2 belt main entry, band conveyer main entry, no.1 belt main entry, and no.1 return air main entry from north to south, with a width of coal pillars between the entries of 35 m. The average thickness of 4# coal is 20 m, and the average dip angle of the coal seam is 4°. The roof is mainly made of sandy mudstone, fine-grained sandstone, and coarse-grained sandstone, and the bottom plate is mainly made of aluminum mudstone which tends to expand when meeting water. After identification, 4# coal seam has strong burst liability, the roof has weak burst liability, while the floor has no burst liability. Tectonic Parameters Xiejiazui anticline (B2), Yuankouzi syncline (X1), and F29 normal fault occur from west to east in the minefield area. The faults' location between the syncline and anticline structures forms a special geological structure group, thus mainly controlling the gestation, evolution, and occurrence process of coal burst accidents in this region. Detailed geological and tectonic parameters are displayed in Table 1. 
To further explore the degree to which the geological structure influences the distribution of the ground stress field in the mining area, three ground stress measurement points were arranged in the areas of the central main entries, the panel main entries, and the 401101 working face, and the in-situ stress was measured with hollow-inclusion stress gauges. As shown in Figure 2, the vertical stress in this area is the minimum principal stress; the results of the three measurement points are basically consistent with the average stress level of the Binchang coal district and the Chinese mainland. Due to the presence of fault and fold structures, horizontal tectonic stress is the main stress component in the regional high-stress environment. The horizontal stresses at the three measurement points are obviously higher than the average stress levels of both the Binchang coal district and the Chinese mainland, especially at measurement point 3#, which is closer to the X1 axis and the F29 fault and where σH/σV reaches 2.1. The results indicate that the closer a location is to the fault and the synclinal axis, the more abnormal its horizontal stress is. As shown in Figure 3, four geological exploration boreholes, M2-1, M3-2, M4-2, and M5-2, were selected in the main entries and the 401101 working face to analyze the influence of the composite geological tectonic group on regional stratigraphic sediment characteristics. It can be concluded that: 1) The regional strata are mainly composed of alternately deposited fine-grained sandstone, medium-grained sandstone, coarse-grained sandstone, and sandy mudstone, and no thick whole layer of hard sand-gravel rock exists. 2) In the long process of crustal movement and evolution, stratum inversion and deletion occurred frequently among the upper overburden strata. Affected by the east-west compressive tectonic stress, faults and relative slips occur in weak coal seams, which are upright or inverted and thus form compression-torsion faults. The strata are obviously controlled by the tectonic movement of faults and folds. 3) The difficulty of mining increases because the strata near the M4-2 borehole are not only squeezed by the horizontal tectonic stress of the fold but also affected by the vertical dislocation of the F29 normal fault, which results in discontinuity of the strata, with stress concentration and energy accumulation. Therefore, unstable stratum deposition and phase transition provide a favorable external environment for frequent coal burst accidents in this region. COAL BURST HISTORY The inducing process of coal burst is very complex; it is not only affected by geological structures such as folds and faults but also closely related to the mining activities involved (Zhang et al., 2017;Yin et al., 2019). Therefore, it is of great significance to clarify the relationship between coal bursts, geological structure, and mining activities in this region. The mining of the 401101 working face started in June 2018 and ended in March 2020, during which a total of 10 coal burst accidents occurred. The SOS microseismic system was arranged in the working face and main entries to monitor the vibratory signals during the mining process. The distance between the seismic source and the fault and the on-site failure range monitored when each burst occurred are presented in Figure 4, and the on-site burst damage is shown in Figure 5. 
Figure 4 demonstrates that all previous coal burst accidents have significant common characteristics: 1) Most of the coal bursts occurred in the main entry area, which is obviously inconsistent with the distribution law of mining-induced tremors commonly seen near the mining working face. 2) Most of the coal bursts occurred on the hanging wall of the F19 fault, while there are relatively few occurred on the footwall of the F19 fault, showing an obvious hanging wall effect. This is in agreement with the existing research conclusions: in the field of seismology, the hanging wall's ground motion is stronger than the footwall, its vibration attenuation is weaker than the footwall, and its vibration distribution area is larger than the footwall in natural fault shear slip. 3) The minimum destructive microseismic energy detected during a coal burst is 2.6 E+04 J, and the corresponding failure range of the entry is only 3 m; the maximum microseismic energy is 3.5 E+05 J, and the corresponding failure range of the entry is 55 m, showing that the more elastic energy released during a coal burst, the greater the influence on entry stability. 4) The burst frequency and range of the five main entries are not completely consistent, the situation of <C> band conveyer main entry and <D> no.1 belt main entry is the most dramatic, showing the overall characteristics of multiple and repeated bursts. As shown in Figures 4A-E, considering different actual exposure of each entry of the F29 fault surface, especially when the band conveyer main entry passes through the fault, there is an obvious fracture zone between the hanging wall and the heading wall with a thickness of about 194 m, which has a negative effect on the stability of surrounding rock of the entry, it is speculated that the heterogeneity of coal burst frequency and intensity of each main entry is related to the heterogeneous evolution of shear stress along the F29 fault surface. 5) The occurrence of coal bursts is significantly affected by mining disturbance. Only one coal burst accident occurs after the stoppage of mining activities, suggesting that the frequent occurrence of coal bursts on the fault surface is closely related to mining activities. 6) The occurrence of coal burst is affected by B2 anticline to a certain extent, but it is affected by F29 fault and X1 syncline significantly. Especially in the composite area of F29 fault and X1 syncline, coal burst accidents occur intensively. Figure 5 shows the entry damage caused by the coal burst. The overall damage characteristics can be divided into four categories: floor bulge, roof caving, support failure, and equipment damage. Floor damage can be divided into raising, side lifting, overall drum, and cracking; equipment damage can be divided into belt tilting, tub overturning, platform overturning, and pipeline falling; support failure can be divided into anchor slipping, cable overhanging, bolt shearing, and tray rushing out; roof caving can be divided into step sinking, drossy coal falling, roof separation, and roof falling. Different from conventional coal bursts, this kind of coal burst has significant shear seismic failure characteristics, with a more complex failure type, a larger failure range, and a higher failure degree. 
Therefore, it is preliminarily inferred that the controlling factors of the frequent coal burst accidents in the main entry area are as follows: influenced by bedding or lateral compression of the regional strata, the compound tectonic condition formed by the B1 anticline and the X1 syncline provides a high-static-stress environment for the main entry area. The F29 fault, with a throw of 15-18 m, diagonally crosses both the main entries and the 401101 working face. Under the tectonic influence, a large amount of elastic energy is accumulated on the fault surface, which lays an energy foundation for the occurrence of coal bursts. Human activities, including excavation, mining, and entry expansion, lead to different stress adjustment ranges and intensities and result in different degrees of damage in the main entries. To further verify the rationality of this hypothesis, the stress evolution law on the fault during the mining process was analyzed by numerical simulation. STRESS FIELD MODELING OF THE F29 FAULT INTERFACE Model Setup and Constraints FLAC 3D (Fast Lagrangian Analysis of Continua in 3 Dimensions) was employed to analyze the stress distribution on the F29 fault surface and in the coal pillar around the main entries of the 401101 working face during the whole stoping period. The geometric dimensions of the model are 3,000 × 1,000 × 600 m. Normal displacements are fixed at the base and the sidewalls of the model. In view of the complex layout in which the five main entries cross the F29 fault as well as the coal-rock seam, the high-precision model shown in Figure 6 was built with Rhino software, in which the X1 syncline and the B1 anticline were restored as 3D undulating strata from the real coal floor contour lines. The F29 fault was generated according to the actual fault parameters, with a fault surface dip angle of 75° and a height difference of 20 m. The strain-softening criterion was adopted to judge the yield state of the materials and to determine the physical and mechanical parameters of both the coal rock mass and the fault slip surface. The parameters are listed in Tables 2 and 3, respectively. As displayed in Figure 6, four monitoring points were successively arranged along the F29 fault surface in the model. The steps of the numerical simulation are as follows: 1) According to the real ground stress test results, 30 MPa horizontal stress and 20 MPa vertical stress were applied to the model to balance the initial ground stress field, and the excavation of the development roadway and main entries was completed. 2) The 401101 working face was excavated step by step, and the stress distribution in the coal pillar near the main entries and the normal stress and shear stress distributions on the F29 fault surface were monitored. 3) Taking σh = 30 MPa, σv = 20 MPa, and the lateral pressure coefficient b = 1.5 as initial values, the effects of different horizontal ground stresses on the normal stress and shear stress of the fault surface during mining were analyzed for b equal to 0.5, 1, 1.5, 2, and 2.5, respectively. Influence of Mining Activities on the Stress Distribution of the Fault Surface Figure 7 presents the stress evolution of the fault surface at P1, P2, P3, and P4 when the lateral pressure coefficient b is 0.5. The stress at each monitoring point evolves dynamically with the advance of the working face. 
The change laws of normal stress and shear stress between the monitoring points are different, indicating that the stress of the fault surface is not evenly distributed due to mining disturbance, which leads to the regional difference in burst risk. As illustrated in Figure 7A, due to mining disturbance, the normal stress and shear stress of the fault surface at P1 on the north side of the working face almost synchronously fluctuate sharply, in which the normal stress drops rapidly first and then recovers to a certain level with a fluctuation value of about 1.2 MPa; the shear stress first increases slightly and then decreases rapidly to a certain level, with a fluctuation value of about 3.0 MPa. As shown in Figure 7B, when the working face continues to advance, both the normal stress and shear stress at P2 on the south side of the working face show an upward trend, with a fluctuation value of shear stress of about 3.0 MPa, while that of normal stress of about 0.4 MPa. It suggests that the remote mining activities still have slight disturbance to the fault surface in a metastable state, which results in the instability of the fault with a quantity of originally accumulated strain energy, thus releasing strain energy outward, and this process is likely to induce coal burst accidents. As illustrated in Figure 7C, when P3, located between no.2 return air main entry and no.2 Belt main entry, is 700 m away from the fault of the working face, the normal stress plunges, while the shear stress increases sharply. It is worth noting that the value of normal stress experiences a "positive-negative-positive" change process, suggesting that the main stress field of the fault surface rotates due to mining disturbance, which can be considered as a precursor of fault slip. As displayed in Figure 7D, due to the long distance between the working face and P4 located in the south of no.1 return air main entry, basically no stress response is generated during the whole mining process of the working face and the burst risk is relatively low, which is consistent with the situation that only one burst accident occurred in no.1 return air main entry. Influence of Ground Stress on the Stress Distribution of the Fault Surface To further investigate the inducing law of high stress caused by the fold structure to mining-induced slip fault, Figure 8 shows the change laws of normal stress and shear stress at P2 located between the 401101 working face and the main entry group with mining advancement under different ground stress conditions (that is, when the lateral pressure coefficient b is 0.5, 1.0, 1.5, 2.0, and 2.5, respectively). As can be seen from Figure 8, the evolution laws of normal stress and shear stress at this measurement point are basically consistent at different ground stress levels. When the horizontal stress is low while b is 0.5 and 1.0, the working face advances to about 400 m of the fault, the normal stress rises slightly, and the shear stress rises more sharply than the normal stress. When horizontal stress further rises, while b is 2.0 and 2.5 m, the normal stress of F29 fault shows a drastic downward trend, on the contrary, the shear stress dramatically increases, and the response time is advanced, showing that a highstress environment provides a good condition for strain energy accumulation of the fault, which leads to a significant expansion of the range affected by mining activities. 
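The "positive-negative-positive" excursion of the monitored normal stress described above, interpreted as rotation of the principal stress field and a precursor of fault slip, can be screened for automatically. The following Python sketch is only a minimal illustration under stated assumptions: the vertical stress and lateral pressure coefficients are taken from the model setup, but the tolerance and the stress history are synthetic values, not the study's monitoring data.

```python
import numpy as np

def initial_horizontal_stress(sigma_v_mpa: float, b: float) -> float:
    """Horizontal boundary stress from the lateral pressure coefficient b (sigma_h = b * sigma_v)."""
    return b * sigma_v_mpa

def sign_reversal_precursor(normal_stress_mpa: np.ndarray, tol: float = 0.05) -> bool:
    """
    Return True if the normal-stress history goes positive -> negative -> positive,
    the pattern interpreted in the text as rotation of the principal stress field
    on the fault surface, i.e. a possible precursor of fault slip.
    `tol` (MPa) ignores tiny oscillations around zero; the value is illustrative.
    """
    signs = []
    for s in normal_stress_mpa:
        if s > tol:
            sgn = 1
        elif s < -tol:
            sgn = -1
        else:
            continue  # treat near-zero readings as ambiguous
        if not signs or signs[-1] != sgn:
            signs.append(sgn)  # keep only sign changes
    # look for the sub-sequence +1, -1, +1 anywhere in the compressed sign history
    for i in range(len(signs) - 2):
        if signs[i:i + 3] == [1, -1, 1]:
            return True
    return False

if __name__ == "__main__":
    sigma_v = 20.0                        # MPa, vertical stress used in the model
    for b in (0.5, 1.0, 1.5, 2.0, 2.5):   # lateral pressure coefficients examined
        print(f"b = {b}: sigma_h = {initial_horizontal_stress(sigma_v, b):.1f} MPa")

    history = np.array([2.1, 1.4, 0.3, -0.6, -1.1, -0.2, 0.8, 1.5])  # synthetic MPa history
    print("slip precursor flagged:", sign_reversal_precursor(history))
```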
When the horizontal stress is higher, the normal stress changes more markedly from a positive value to a negative value and then back to a positive value, with higher intensity of energy accumulation and release, higher corresponding burst risk, and thus a greater likelihood of coal bursts. This conclusion reflects the influence of the fold structure on mining-induced fault slip. MECHANISM OF COAL BURST REVEALED BY MS EVENTS The focal rupture mechanism of spontaneous coal rock mining-induced tremors resembles that of natural earthquakes. In the past, researchers mainly used the occurrence mechanism of natural earthquakes to explain the seismic source rupture process of most mining-induced tremors (McGarr, 1984;Gibowicz and Kijko, 1994). However, the geological structure, excavation environment, and overburden structure of a mine determine the particularity and complexity of the seismic source of a mining-induced tremor. Different causes and focal rupture mechanisms of mining-induced tremors lead to disparities in the amount of energy released and in the radiation pattern of the displacement wave field. As displayed in Figure 9, according to the acting force modes of the seismic source and the relative position of the coal rock failure area and the working face, mining-induced tremors can be simply divided into three types: tension type, implosion type, and shear type (Horner and Hasegawa, 1978;Hasegawa et al., 1989). The tension type and the implosion type are dominant, while the shear type caused by dynamic slip of a fault is infrequent. At the same time, the waveform of a mining-induced tremor contains abundant information on the focal rupture mechanism, and applying seismological methods to coal rock burst rupture is beneficial for studying the focal rupture mechanism of mining-induced tremors. This section attempts to discuss the focal rupture mechanism of the frequent coal bursts in the main entries from the aspects of the spectrum characteristics of the shake displacement, the P-wave first motion, and the ratio of Es/Ep. Waveform and Frequency-Spectra Characteristics of Mining-Induced Tremors The clear waveforms of three typical tremor events, "2019.09.12," "2020.01.08," and "2020.05.24," monitored by the microseismic stations were selected to determine the main frequency band by frequency-spectrum analysis. In the process of propagation, the tremor wave carries important information reflecting the characteristics of the stratum and the source, such as faults, fractured zones, geological acoustic characteristics, and focal mechanism characteristics, which are mainly expressed in the attenuation of seismic wave intensity, the frequency structure, and the local singularity of the signal. Among the many methods for computing the fractal dimension, the box dimension index Dq is employed in this study to characterize the burst waveform. Taking N(Δ) as the minimum number of square boxes with side length Δ required to cover a point set, the box dimension of the point set is defined as

Dq = lim(Δ→0) ln N(Δ) / ln(1/Δ). (1)

As shown in Figure 10, the spectrum of each burst event is noisy and sharp, the signal is complex, and there is a great difference between channels. The dominant frequency of each event is lower than 10 Hz, and the middle and high-frequency components attenuate notably, reflecting the "high energy and low frequency" characteristic of mining-induced tremor waveforms. 
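For reference, the box-counting estimate in Eq. (1) can be sketched in a few lines of Python. The sketch below normalizes a (time, amplitude) point set of a waveform into the unit square, counts occupied boxes over a range of box sizes, and fits the slope of ln N(Δ) against ln(1/Δ); the synthetic trace and the number of scales are illustrative assumptions rather than the monitored waveforms.

```python
import numpy as np

def box_counting_dimension(points: np.ndarray, n_scales: int = 8) -> float:
    """
    Estimate the box-counting dimension of a 2-D point set.
    points: array of shape (n, 2), here normalised (time, amplitude) samples of a waveform.
    Returns the slope of ln N(delta) versus ln(1/delta), i.e. Dq in Eq. (1).
    """
    # normalise the point cloud into the unit square
    pts = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
    deltas = 1.0 / (2 ** np.arange(1, n_scales + 1))   # box side lengths 1/2, 1/4, ...
    counts = []
    for d in deltas:
        # index of the box each point falls into; unique rows = occupied boxes
        idx = np.floor(pts / d).astype(int)
        counts.append(len(np.unique(idx, axis=0)))
    slope, _ = np.polyfit(np.log(1.0 / deltas), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    # illustrative synthetic "tremor-like" trace, not recorded data
    t = np.linspace(0.0, 2.0, 4000)
    trace = np.sin(2 * np.pi * 6 * t) * np.exp(-1.5 * t) + 0.05 * np.random.randn(t.size)
    D = box_counting_dimension(np.column_stack([t, trace]))
    print(f"estimated box dimension D = {D:.2f}")
```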
The fractal dimension D of each channel ranges from 0.95 to 1.78, which fully indicates the disorder and complexity of these burst tremors. Different from small and medium energy mining-induced tremors, the burst mininginduced tremor has a more rapid process from fracturing to receiving signal, its amplitude of P-wave first motion is weak and even difficult to identify. Furthermore, its energy release and dissipation are more violent. Another evident feature is that the shear rupture waveform reflecting the tangential deformation of coal rock units is relatively developed, and the amplitude of S-wave first motion is obvious, which accords with the common shear seismic waveform characteristics . Taking stations 1# and 10# in the "2019.09.12" burst event, stations 1# and 17# in the "2020.01.08" burst event, and stations 6# and 8# in the "2020.05.24" burst event as examples, the following phenomena can be observed. The coda waves of mining-induced tremor waveform monitored by them are relatively developed; the spectrum development shows a violent oscillation characteristic; the middle part of the waveform presents a shape of "inverted triangle graben"; and the whole waveform exhibits a nonlinear and multi-period disturbance characteristic. This corresponds to the ultra-low friction effect of faults and the dynamic activation instability of faults under the effect of dynamic load, which is consistent with the propagation characteristics of ultra-low-frequency, low speed, and high energy of over range pendulum-shaped waves under the effect of discontinuous and uncoordinated deformation of faults. In addition, the waveforms monitored by station 10# in the "2019.09.12" burst event, station 14# in the "2020.01.08" burst event, and station 10# in the "2020.05.24" burst event still have secondary vibrations and even frequent vibrations after the end of the mainshock. The phenomenon of "main shock-after shock" reflects that these burst events have waveform characteristics like earthquakes induced by fault slip. Byerly (1928) was the first to use the four-quadrant distribution of the compression and expansion of P-wave first motion to explore the nature of the seismic source, and he believed that the direction of P-wave first motion on the vibration sociogram was directly related to the seismic source force. As the physical image of the waveform was clear and not affected by the crustal velocity structure, it can be employed to preliminarily determine the focal mechanism solution (Herrmann, 1975). The coal rock mass fracture, such as horizontal tension fracture of the roof, longitudinal separation, and roof caving, generates the compression P-wave leaving the seismic source and the front part pushing outward. The P-wave first motion received by the microseismic station is "+," and this kind of tremor belongs to a typical tension type. The vibration, such as roof rotation instability and coal pillar compression fracture, generates the expansion P-wave pointing to the seismic source and its front part pulling outward. The P-wave first motion received by the microseismic stations is "−," and this kind of coal rockinduced tremor belongs to a typical implosion type. In coal rock tremors, such as roof shear rupture, masonry beam structure slip instability, dynamic burst of coal pillar, and mining-induced fault activation, the P-wave first motion is distributed in four quadrants in space, which is in line with the focal rupture mechanism of a typical double-couple source. 
They can be regarded as shear-type (Gibowicz et al., 1990;Du et al., 2020a). Mining-induced tremors of this type whose failure process is intense with more vibratory energy released and the highest burst risk. P-Wave First Motion of Mining-Induced Tremors Since the arrangement of each microseismic station is affected by the fluctuation of the coal seam and does not have a planar position relationship with the tremor events, the confirmed P-wave first motion of mining-induced tremors is not only simply upward or downward, but also should be determined according to the spatial position relationship between specific tremor events and corresponding stations. The P-wave first motion of previous burst events is displayed in Table 4. It can be seen that: 1) the P-wave first motions of the same station in different burst events are not completely the same, indicating that the microseismic waveform can fully reflect the characteristics of mine geological structure, mining environment, and overburden structure. 2) The P-wave first motions of different stations in the same burst event are different, including compression P-waves and expansion P-waves with roughly the same proportions. Each burst event distributes in four quadrants, which is consistent with the focal rupture mechanism of a typical double-couple source. However, some channels were too far away from the source to receive an effective waveform. Moreover, as some stations were affected by mining activities with much background noise, the accurate direction of the P-wave first motion cannot be recognized. Ratio of Es/Ep of Mining-Induced Tremors The limitation of the P-wave first arrival method is that the closer the source is to the fault discontinuous surface, the weaker the P-wave is and the more difficult it is to identify the direction of the first motion. In the process of focal mechanism research in the Ruhr mining area, Germany, researchers found that the energy ratio of shear wave (S-wave) to compression wave (P-wave) is an important indicator to reveal the rupture mechanism of surrounding rocks. In recent years, with abundant on-site failure cases, many studies have been conducted on the discrimination criterion for determining the mechanism of coal rock mass fracture based on the distribution of the ratio of Es/Ep (Cai et al., 1998;Hudyma and Potvin, 2010;Kwiatek and Ben-Zion, 2013;Li et al., 2014). Subsequent studies demonstrate that the S-wave radiation energy is much larger than the P-wave radiation energy in earthquakes induced by fault slip (Boatwright and Fletcher, 1984), and that this kind of earthquake is dominated by shear failure. The P-wave and S-wave energies detected by the MS system can be calculated by using thefollowing equation (Mendecki, 1997): where E p and E s are the radiation energies of P-wave and S-wave, respectively; ρ is the rock density; v p and v s are the wave velocities of P-wave and S-wave, respectively; R is the distance between the station and the source; t s is the duration of the source; and u · 2 corr is the square of the far-field corrected radiation direction of the velocity pulse. In this study, the burst mining-induced tremors whose Es/Ep is larger than 10 in each channel are regarded as shear rupture; those whose Es/Ep is smaller than three are regarded as tension rupture and those whose Es/Ep ranges from three to 10 are regarded as mixed rupture (Wang et al., 2019;Du et al., 2020b;Wang et al., 2021). 
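The Es/Ep discrimination rule just quoted (shear rupture for Es/Ep > 10, tension rupture for Es/Ep < 3, mixed rupture otherwise) can be applied per channel and aggregated per event. The Python sketch below only illustrates that bookkeeping; the example channel ratios are synthetic, not the monitored values.

```python
from collections import Counter
from typing import Dict, List

def classify_channel(es_over_ep: float) -> str:
    """Rupture type for one channel from the Es/Ep ratio, using the thresholds
    quoted in the text: >10 shear, <3 tension, 3-10 mixed."""
    if es_over_ep > 10.0:
        return "shear"
    if es_over_ep < 3.0:
        return "tension"
    return "mixed"

def classify_event(ratios: List[float]) -> Dict[str, object]:
    """Aggregate the per-channel classifications for one burst event."""
    labels = [classify_channel(r) for r in ratios]
    counts = Counter(labels)
    total = len(labels)
    percentages = {k: round(100.0 * v / total, 1) for k, v in counts.items()}
    dominant = counts.most_common(1)[0][0]
    return {"per_channel_percent": percentages, "dominant_type": dominant}

if __name__ == "__main__":
    example_ratios = [12.4, 25.1, 8.7, 65.7, 3.0, 18.9, 4.2, 40.3]  # synthetic Es/Ep values
    print(classify_event(example_ratios))
```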
Attenuation correction in the frequency domain was carried out for the body wave frequency spectrum detected in each channel of all burst events, and frequency integration was carried out for the velocity power spectrum to estimate the radiated energy fluxes of the P-wave and S-wave and to further identify the rupture type. According to the results given in Figure 11, the basic rules are as follows: 1) The distance and path between each station and the source are different, and the difference in attenuation of the seismic wave results in different P-wave and S-wave energies at each station. The closer the station is to the source, the shorter the vibration time and the less the attenuation, and therefore the higher the energy monitored by the station; conversely, the farther the station is from the source, the lower the energy monitored. The energy of the same source detected at different stations can differ by up to two orders of magnitude. 2) In every channel of the signal waveforms of previous burst accidents in the Mengcun coal mine, Ep is smaller than Es, with Es/Ep ranging from 3.01 to 65.69. The burst tremor waveform channels whose Es/Ep is smaller than three account for 1.28%, those whose Es/Ep lies in the range of 3-10 account for 19.23%, and those whose Es/Ep is greater than 10 account for 79.49%. Such a result demonstrates that the mining-induced tremors in the main entry area are mainly of the shear and mixed types, which accords with the focal mechanism of CBTMIFS. Based on the aforementioned analysis, the cause of the frequent destructive mining-induced tremors in the main entry area can be summarized as follows: the mining stress affects the F29 fault in a closed state, releasing the clamping normal stress imposed by long-term geological tectonic movement in the normal direction of the fault surface, which results in a rapid rise of the shear stress on the fault surface and the "activation" of the previously stable fault. The slip dislocation of the fault surface in the main entry area gives rise to shear failure, which leads to slip instability of the fault and dynamic burst failure of the entries. At the same time, it should be noted that, different from the common burst failure induced by mining across faults on the working face, this kind of burst failure mainly occurs in the main entries far from the working face, rather than on the working face. Research shows that more than 91% of coal burst events occur in the two roadways ahead of the working face, which seems to contradict the research object of this study. The coal rock mass in the fault area is weakened by advance pressure-relief measures such as large-diameter coal drilling, coal blasting, and roof pre-splitting blasting during entry excavation and mining, which undermines the F29 fault structure to a certain extent. The release of massive accumulated elastic energy lowers the degree of stress concentration and energy accumulation when the working face is mined across the fault. However, the main entries, as development roadways that need to be used for a long time, lack the conditions for frequent pressure-relief engineering that would ensure a sufficient entry support effect. 
Consequently, coal stress becomes highly concentrated at the junction of the main entries and the fault. Under the influence of mining stress and tectonic stress, the hanging wall and the heading wall of the fault slip relative to each other, releasing massive strain energy instantly. In addition, different from the common tension-type mininginduced tremors caused by roof stretching and implosion-type tremors by coal pillar compression, the frequency of fault slip is lower than that of the former two, but the vibration energy released is more and the failure is greater due to the volume of rock mass reaching the limit state at the source is larger (the potential source radius is larger). CONCLUSION Through the establishment of numerical simulation and the analysis of the microseismic signal characteristics of the burst events, the dynamic evolution characteristics of normal stress, and shear stress on the fault surface of the working face during the mining process, the influence of different horizontal stresses on the evolution of the stress field and energy field of fault slip, and the P-wave first motion and focal mechanism revealed by the ratio of E S /E P based on the spectrum characteristics of shake displacement are investigated, respectively. The main conclusions are as follows: 1. The geological structure leads to significant abnormal horizontal stress in the accident area, and the stratum deposition is obviously controlled by the tectonic movement of faults and folds, leading to stratum inversion, and deletion of overburden. The faults under high-stress conditions provide a favorable external environment for frequent coal burst accidents. 2. Different from common coal burst accidents in the working face, coal burst accidents induced by mining fault slip under high-stress conditions have significant shear seismic failure characteristics, i.e., with more complex failure type, larger failure scope, and higher failure degree. The failure characteristics in common are as follows: the hanging wall effect is obvious; the more energy released during a coal burst, the more destructive it will be to the entry; the heterogeneous stress evolution of the fault surface leads to the characteristics of multiple and repeated bursts of the entry. 3. The normal stress and shear stress on the fault surface show a dynamic heterogeneous evolution due to mining unloading, and the normal stress gradually decreases with mining, while the shear stress increases gradually due to shear slip, and the change rate of shear stress is greater than that of normal stress. The value of normal stress experienced a "positive-negativepositive" change process with mining, indicating that the main stress field on the fault surface rotates due to mining disturbance. This can be regarded as the precursor of fault slip. Under different initial ground stress levels, the higher the horizontal stress is, the higher the normal stress and shear stress on the fault surface will be. Besides, the greater the strain energy accumulated before the fault slip and released now of the slip is, the higher the corresponding burst risk will be. 4. The microseismic signals of burst accidents feature "high energy and low frequency," and the value of the fractal dimension D is high, and the S-wave is relatively developed, which accords with the characteristics of the shake displacement spectrum of typical shear mininginduced tremors. 
The P-wave first motion of each channel is distributed in four quadrants, which conforms to the focal rupture mechanism of a typical double-couple source. According to the E S /E P ratio of burst waveform, the destructive mining-induced tremors are mainly shear and mixed types. Therefore, it can be concluded that frequent destructive tremors in the main entry area are caused by the mining stress affecting the F29 fault in a closed state, releasing the clamping normal stress of the long-term geological tectonic movement in the normal-line direction of the fault surface. Resultantly, the shear stress of the fault surface rises rapidly, and the previously stable fault becomes activated. The slip dislocation of the fault surface in the main entry area gives rise to shear failure, which leads to slip instability of fault and dynamic burst failure of the entries. The research conclusions disclose the mechanism of CBTMIFS under high-stress conditions, which is of great significance to further enriching the cognition of the inducing mechanism of fault coal burst. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors. AUTHOR CONTRIBUTIONS JB contributed to conceptualization, methodology, software, data curation, validation, writing-original draft, and funding acquisition. LD contributed to data curation, writing-review and editing, and funding acquisition. JL contributed to supervision, writingreview and editing, and funding acquisition. KZ contributed to writing-review and editing and funding acquisition. JC contributed to writing-review and editing. JK contributed to funding acquisition and writing-review and editing.
9,472.4
2022-05-27T00:00:00.000
[ "Engineering", "Geology", "Environmental Science" ]
Research on the Semantic Composite Network Recognition Method of Power Distribution DTU Automation Acceptance Virtual Seat System Aiming at semantic inaccurate recognition problem caused by the lack of professional vocabulary in the existing corpus, a composite network recognition method for the semantics of the power distribution DTU acceptance virtual agent system is proposed. This method is used to extract and merge the entity, attribute and relationship information of power equipment from multiple data sources to construct a power distribution DTU acceptance knowledge graph. In addition, the graph database is the main body and the SQL database is the extension, and the corpus is stored in a distributed manner. The acceptance test results show that the overall recognition rate of the system platform for the instructions issued by the acceptance personnel in the complex environment is 92.8%, which reduces the acceptance time by more than 70%. INTRODUCTION DTU (Data Transfer Unit) is the key equipment of the distribution network, and its automated acceptance efficiency determines the delivery cycle of the distribution network upgrade [1]. At present, the automatic acceptance of distribution terminals is still generally based on the mode of communication and cooperation among field operation, maintenance personnel and the staff of the main station to test the trigger signal of electronic control. Statistics show that the acceptance of column opening and 6-interval ring network, etc. , takes 20 and 90 minutes, respectively, and opening and closing the cell change acceptance takes longer [2]. In the stage of extensive construction of intelligent distribution network, the manual intervention method of the existing distribution terminal acceptance has been difficult to meet the speed requirements of rapid distribution network construction. Intelligent and efficient configuration and automatic acceptance of DTU construction has become a key technical problem that needs to be solved urgently in the development of intelligent distribution networks. In view of the current situation that the installation process of DTU is still generally used by operation and maintenance personnel, it is more important to realize the intellectualization of acceptance links that the main station acceptance is intellectualized. Through intelligent extraction of communication semantic information and automatic comparison with DTU telemetry information, the construction quality and efficiency assessment can be effectively completed [3]. The intelligent virtual agent technology, which integrates intelligent speech recognition, text recognition, and multi-round human-machine dialogue, has the advantages of repetitive tasks and automatic execution of massive transactions, and is widely used in customer service, call centers, sales transactions and other fields. If the potential of virtual agent technology can be fully tapped to form an automated acceptance mode for virtual agents as shown in Figure 1. to assist on-site personnel, it will greatly promote the application and verification of data-driven concepts in the construction of intelligent terminals in distribution networks. Scholars in domestic and foreign fields have begun to use virtual agent technology in the field of electric power customer service. 
For example, literature [4]~[6] proposes building dedicated knowledge graphs for electric power communication services that provide intelligent support for power communication network management, operation inspection, risk monitoring, business scheduling, and knowledge accumulation. To address the problem of repetitive command issuing in power grid operation and dispatching, literature [7] constructed a power operation dispatching knowledge graph and proposed a smart assistant program for power grid regulation and control operations. Literature [8] mined historical operation information with a BP neural network model and thereby built a professional dictionary library for the power system. Because general-purpose corpora do not include the specialized vocabulary involved in the electric power business, the semantic recognition accuracy achieved with them can hardly meet the needs of engineering applications; therefore, the most important issue in using virtual agent technology for distribution automation DTU acceptance is to build a dedicated corpus. A reasonable semantic extraction network and corpus storage structure have a great influence on the accuracy and speed of speech recognition. In reference [9], a hidden Markov model is used to segment existing power equipment defect records, and the cosine similarity of discriminant vectors is used to eliminate co-referential words, which improves the accuracy of semantic retrieval. In reference [10], a semantic enhancement method based on topic comparison is used to reduce the redundancy of the entity recognition results of an LSTM network and to improve the speech recognition speed of a virtual power customer service system. The above research provides important theoretical support and reference for constructing a data transfer unit acceptance knowledge graph. However, current research on knowledge graphs in the power field mainly focuses on customer service corpora and virtual customer service systems, while a special corpus and its construction method for distribution network DTU acceptance have rarely been reported. Based on this, this paper proposes a semantic composite network recognition method for the distribution DTU automatic acceptance virtual agent system according to the command joint-triggering scenario and requirements of the power DTU automatic acceptance business. The purpose of this paper is to identify and extract the professional terms in the operation procedures, alarm information, and historical operation logs of the distribution DTU automatic acceptance business, construct a special corpus for distribution DTU acceptance, and form an operation mode of semantic search and joint triggering based on the virtual agent. MODEL BUILDING In the process of implementing the DTU automatic acceptance business, the first problem the virtual agent system must solve is to build a knowledge graph of the distribution terminal automatic acceptance domain (as shown in Figure 2) that includes all acceptance-related professional vocabulary and supports efficient retrieval, so as to improve the accuracy and recognition speed of keywords in acceptance speech. In this paper, a composite network and the ESIM model are used to complete the recognition, extraction, and fusion of professional terms from multiple data sources, and a distributed database model is used to store the corpus, with nodes assigned according to their degree of entity correlation. 
This improves the recognition rate of keywords while reducing node redundancy in the graph database and increasing retrieval speed. 2.1. Composite network knowledge extraction model The objects of knowledge extraction are the semi-formatted and unformatted sentences in distribution network regulations, historical alarms, and operation logs. One difficulty is aligning data when graph mapping is used to obtain the relationships between power equipment from circuit diagram topology (semi-formatted data) [11]. Another is that when traditional machine learning methods such as conditional random fields, support vector machines, and decision trees extract information from unformatted text such as distribution network regulations and historical operation records, the highly discrete and sparsely distributed differences among co-referential vocabulary in the NLP data set keep the accuracy and coverage of knowledge extraction below engineering requirements [12]. Deep neural networks have powerful vector-distance analysis capabilities and can effectively identify identical vocabulary in unformatted text, but different network architectures differ markedly in how well they extract text of different kinds. A CNN determines the length of its sliding extraction window from the number of convolution kernels, so it effectively extracts device attribute information, whose input structure is compact and of fixed length. An RNN, with its backward linear information conduction structure, better extracts acceptance business entities whose input length is more uncertain. A BiLSTM introduces a direction-control factor to extract relationship information that correlates strongly with context [13]. On NLP general data sets from three different websites, the F1 scores of these three neural network sub-modules are 91.7%, 99%, and 87%, so the accuracy and coverage of keyword extraction from text data sources are comparatively high. The core idea of knowledge extraction is to use deep neural networks to format the unformatted text so that the entities, relationships, attributes, and other knowledge elements in it can be distinguished [11]. The task is divided into three sub-tasks (named entity recognition, attribute extraction, and relationship extraction), implemented by the CNN, RNN, and BiLSTM, respectively; a hypothetical sketch of one such sub-module follows below. The data obtained by knowledge extraction contains much repetitive information. For example, 3U0, ABC-phase zero-sequence voltage, and three-phase zero-sequence voltage all refer to the same object, and "distribution DTU" and "distribution DTU device" have the same meaning. Repetitive co-referential text strongly affects the uniqueness of keyword references in the automatically generated acceptance report and the standardization of that report. In this paper, relationship extension calculation is used to achieve coreference resolution (Figure 2 shows the knowledge graph architecture of the automated acceptance domain). The filtered-out data is not stored in the graph; it can be stored in a relational database and then linked with entities in the knowledge graph [14]-[17].
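A rough, hypothetical sketch of one such sub-module, written in PyTorch with an illustrative vocabulary size, layer dimensions, and tag set (none of which are specified by the paper), might look as follows:

import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    # Labels each token of an acceptance sentence with an entity/relation tag.
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, num_tags=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True supplies the forward/backward "direction control"
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):              # (batch, seq_len)
        emb = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        out, _ = self.lstm(emb)                # (batch, seq_len, 2*hidden_dim)
        return self.head(out)                  # per-token tag scores

model = BiLSTMTagger(vocab_size=3000)
scores = model(torch.randint(0, 3000, (2, 20)))  # two sentences of 20 tokens
print(scores.shape)                              # torch.Size([2, 20, 5])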
The following takes the processing of the BiLSTM network output as an example to show the coreference resolution process; the CNN and RNN outputs are processed similarly. The input to the ESIM coreference resolution model is the output of the BiLSTM network, where 1 and -1 denote the directions in which the LSTM reads the context, and x_i denotes the attention coefficient, the accumulated weight of the co-referential words within sequence span i. The ESIM coreference resolution model is shown in Figure 3. A weighted summation of the BiLSTM output x* yields the sequence integrated with the attention mechanism, denoted x̂. Each sequence is then evaluated comprehensively on accuracy, recall, and the harmonic F value through a three-layer scoring structure, and the sequence with the highest score is taken as the target sequence to be stored in the graph (a sketch of this step is given after this subsection). Data source information is classified and modularized according to input text length and contextual relevance, and repeated information in the extraction results is resolved by the ESIM model. This improves the accuracy and coverage of keyword extraction, eliminates the effect of repetitive co-referential text on the standardization of the acceptance report, saves the time cost of node traversal in the graph database, and improves keyword retrieval speed. 2.2. Distributed storage model combining a graph database and a relational database The power equipment of a distribution network is deeply coupled, so its data are highly correlated. When an SQL database stores the target sequences obtained from knowledge extraction, complex relationships must be built between rows and across multiple tables to characterize the coupling among power devices; the entity-relationship indexes are hard to construct and retrieval is slow, which cannot meet the real-time keyword extraction required for on-site acceptance speech. A graph database is a kind of NoSQL database; unlike the two-dimensional tables of a relational database, it stores data as a node-relationship (N-R) graph [18]-[20]. In engineering practice, the topology of the power equipment involved in joint debugging is customarily represented by circuit diagrams, which map naturally onto graph database storage and so effectively reduce the complexity of relationship representation. An instruction in DTU automatic acceptance usually involves multiple interrelated entities. Compared with relational databases that store data in tabular form, native graph databases such as Neo4j have more efficient built-in traversal algorithms: retrieving a group of highly related entity nodes requires no joins across data tables, only a starting node and a traversal direction, which greatly reduces retrieval time and complexity and makes create, read, update, and delete operations more convenient.
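The attention-and-scoring step just described (weighted summation of the BiLSTM output x* into x̂, then a three-layer scorer) might be sketched as below; the dimensions and layer sizes are assumptions, not details given by the paper:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionScorer(nn.Module):
    # Pools BiLSTM outputs x* with attention weights, then scores x-hat
    # through a three-layer structure; the highest-scoring candidate wins.
    def __init__(self, dim=128):
        super().__init__()
        self.attn = nn.Linear(dim, 1)
        self.score = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim // 2), nn.ReLU(),
            nn.Linear(dim // 2, 1))

    def forward(self, x_star):                    # (batch, seq_len, dim)
        w = F.softmax(self.attn(x_star), dim=1)   # attention coefficients x_i
        x_hat = (w * x_star).sum(dim=1)           # weighted summation
        return self.score(x_hat).squeeze(-1)      # one score per sequence

scorer = AttentionScorer()
best = scorer(torch.randn(4, 30, 128)).argmax()   # index of target sequence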
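On the storage side, a minimal sketch of graph-first retrieval with a pivot-node pointer into a relational table, anticipating the distributed design described next, could look as follows; the connection URI, credentials, node label, and the sql_ref convention are all illustrative assumptions:

import sqlite3
from neo4j import GraphDatabase          # pip install neo4j

# Relational side: frequently updated attributes live in a SQL table.
sql = sqlite3.connect("dtu_attributes.db")
sql.execute("CREATE TABLE IF NOT EXISTS dtu_attr "
            "(entity_id TEXT PRIMARY KEY, battery_v REAL, last_log TEXT)")
sql.execute("INSERT OR REPLACE INTO dtu_attr VALUES (?, ?, ?)",
            ("new1Tq1", 24.1, "telemetry ok"))
sql.commit()

# Graph side: the pivot node carries only an index label pointing into SQL.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))
with driver.session() as session:
    session.run("MERGE (d:DTU {name: $name}) SET d.sql_ref = $ref",
                name="new1Tq1", ref="dtu_attr:new1Tq1")
    rec = session.run("MATCH (d:DTU {name: $name}) RETURN d.sql_ref AS ref",
                      name="new1Tq1").single()
driver.close()

# Follow the pivot reference to the relational row for rich attributes.
table, key = rec["ref"].split(":")
row = sql.execute("SELECT battery_v FROM dtu_attr WHERE entity_id = ?",
                  (key,)).fetchone()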
However, repeated access tests on data of differing attribute richness in the two databases show that for entity classes with rich attribute information and frequently updated data (events, logs, fault records, and the like), relational database storage can update an entity's attributes repeatedly without traversing all associated entities. For example, with a group of entities of relevance degree 10, each carrying three attributes, as the experimental sample, updating one attribute of any entity 100 times showed that, for the same memory footprint, update speed increased by about 25%. To ensure real-time keyword retrieval during on-site speech recognition while preserving data update speed when the corpus is used for case recording, reference, reasoning, and other in-depth applications, a distributed storage technology combining a graph database and a relational database is proposed. Pivot nodes are selected by extension calculation over entities and relationships whose entity relevance exceeds 10; a formatted index label created in the attribute column of each pivot node in the graph database points to the corresponding table in the relational database, linking the two stores (as sketched above). With entity relevance as the trigger condition for extension calculation, this distributed storage structure simplifies relationship construction, retains all knowledge extraction results, overcomes the low speech recognition efficiency caused by tight equipment coupling, and improves data update speed. 2.3. Semantic search model with an aggregation engine based on sentence pattern matching and the knowledge graph The DTU automated acceptance site is complex and its question-and-answer sentences vary widely, so this paper proposes a dual-engine semantic search technique based on sentence pattern matching and the distribution DTU automated acceptance knowledge graph. First, text analysis clarifies the relationship between the node to be identified and the object to be searched. Relatively fixed command responses such as "Hello, I am XXX", "Dial XXX", and "Correct, please start" are matched by the ESIM model against sentence patterns in a predefined corpus, and when the results agree, the predetermined operation in the template library is executed. For long sentences containing multiple equipment operating parameters, the knowledge graph is called directly to form the answer. In virtual agent assisted DTU acceptance, this aggregation-engine search shortens the time to return speech recognition results and improves acceptance efficiency. EXPERIMENTAL RESULTS AND ANALYSIS Based on the distribution DTU automated acceptance business scenario and the Open5200 distribution automation management platform of a State Grid electric power company, the IVCA system was used to test both the semantic recognition accuracy of the distribution DTU acceptance corpus constructed by this method and the speed of the virtual agent assisted acceptance mode. 3.1. Accuracy and analysis In an environment with 40 dB background noise, using audio sampled at 11.025 kHz with 16-bit quantization in mono WAV format, the IVCA system was subjected to an incremental acceptance-corpus test.
The acceptance corpus is composed of four voice data sets: standard Mandarin data, accented Mandarin data, natural Mandarin conversation data, and real online customer service voice data. The results are shown in Figure 4. When trained only on standard Mandarin speech, the speech recognition model has a word error rate of 28.42%; after superimposing the accented Mandarin, natural Mandarin conversation, and live online customer service voice data, the word error rate falls to 7.2%. The word error rate is defined as $WER = \frac{S + D + I}{N} \times 100\%$, where S, D, and I are the numbers of substituted, deleted, and inserted words in the recognition result and N is the total number of words; the overall system recognition accuracy rate can then be expressed as $ACR = 100\% - WER$. Applying knowledge-driven aggregate AI scheduling engine technology, the overall recognition rate ACR of the IVCA system (calculated from the word error rate) is 92.8%. In multi-bay ring network DTU acceptance, the working hours of main-station operation and maintenance personnel fall from the hour level to the minute level, easing the low efficiency of the traditional acceptance mode as massive numbers of DTUs come online during distribution network upgrading. 3.2. Rapidity and analysis 3.2.1 Rapidity when the acceptance result is normal: On September 15, 2020, acceptance personnel used the IVCA system to conduct automated acceptance testing on the distribution DTUs of 10 switching stations (the green part of Figure 5) between the Fuqian and Xixing main distribution substations. Taking the acceptance of one DTU (equipment name new1Tq1) as an example: the on-site personnel measured the opening and closing status of the interval 1 switch (equipment name 1Tqkg1) and the interval 2 switch (equipment name 1Tqkg2) and, after the three-phase current of the distribution bus was applied, reported the readings to the IVCA by voice. Once the speech recognition result was confirmed correct, the system reported it to the distribution master station and automatically generated the acceptance report shown in Figure 6. The aggregate AI confirmed that the returned values in the report were consistent with the acceptance rules (remote signaling correct, telemetry accuracy within the operating range) and then prompted for DTU acceptance at the next switching station. After the system was put into operation, with on-site measurements consistent with telemetry, the main-station operation and maintenance work time for accepting a single switching station DTU fell from 20 minutes to essentially zero, greatly improving DTU automated acceptance efficiency and significantly easing the workload of main-station personnel.
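As a sanity check on the reported figures, the WER and ACR relations reduce to a few lines of Python; the operation counts below are invented solely to reproduce the reported 7.2% WER and 92.8% ACR pair:

def wer(substitutions, deletions, insertions, total_words):
    # WER = (S + D + I) / N * 100%
    return 100.0 * (substitutions + deletions + insertions) / total_words

def acr(substitutions, deletions, insertions, total_words):
    # Overall recognition rate, ACR = 100% - WER
    return 100.0 - wer(substitutions, deletions, insertions, total_words)

print(acr(substitutions=40, deletions=16, insertions=16, total_words=1000))
# 92.8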
3.2.2 Rapidity when the acceptance result is abnormal: When the returned values in the acceptance report are inconsistent with the acceptance rules, the aggregate AI dispatch engine sends alarm information to the main-station operation and maintenance personnel. Following the acceptance rules, the distribution master station staff examine the master station graphics and alarm window information and, together with the on-site acceptance personnel, check the actual position of the interval switch, the actual three-phase current of the distribution bus, the voltage of the switching station's DTU battery, and other data. The terminal debugging plan is then revised and executed according to the processing flow for abnormal DTU terminal data. In this process, the acceptance working time of main-station operation and maintenance staff falls from the original 20 minutes to 5 minutes, effectively alleviating the personnel shortfall in distribution network automation joint commissioning; in addition, the handling of DTU acceptance abnormalities becomes more refined and standardized. CONCLUSION This paper proposes a composite network recognition method based on the semantics of the virtual agent system for automatic acceptance of power distribution DTUs. For the professional vocabulary in unformatted texts such as distribution network regulations, a composite network extraction and distributed storage method is adopted to construct a dedicated corpus for the distribution DTU acceptance domain. The virtual agent assists field staff in DTU acceptance tests with a very high recognition rate for acceptance speech in a complex environment, reducing main-station operation and maintenance acceptance time by more than 70%. Joint debugging results show that the method significantly improves the efficiency of automated DTU acceptance. During distribution network upgrading, it can also cope with the influx of newly added distribution terminals, the cumbersome maintenance of old terminals that leaves the traditional acceptance mode underpowered, and the growing personnel shortage in distribution network automation and joint debugging. Research on constructing special knowledge graphs for distribution network automation management is still in its infancy, and effective methods for mining the internal regularities of historical operation logs, alarm information, and fault records have yet to take shape; manual intervention is still needed to revise the acceptance plan when an acceptance report is abnormal. How to fully mine the regularities contained in the data sources and use them to guide adjustment of the acceptance plan is therefore a direction for future research.
Multi-level perspectives in stock price forecasting: ICE2DE-MDL This study proposes a novel hybrid model, called ICE2DE-MDL, that integrates secondary decomposition, entropy, and machine and deep learning methods to predict stock closing prices. First, the noise contained in the financial time series is eliminated with a proposed denoising method that utilizes entropy and a two-level ICEEMDAN methodology. Subsequently, several deep learning and machine learning methods, including long short-term memory (LSTM), LSTM-BN, the gated recurrent unit (GRU), and support vector regression (SVR), are applied to the IMFs classified as noiseless after decomposition. The best training method is then determined for each IMF, and the proposed model's forecast is obtained by hierarchically combining the prediction results of all IMFs. The ICE2DE-MDL model was applied to eight stock market indices and three stock data sets to predict the next day's closing price of each item. The results show RMSE values ranging from 0.031 to 0.244, MAE values from 0.026 to 0.144, MAPE values from 0.128 to 0.594, and R-squared values from 0.905 to 0.998 for stock index and stock forecasts. Furthermore, comparisons with various hybrid models proposed for stock forecasting show that ICE2DE-MDL outperforms existing models in the literature for both stock market indices and individual stocks. Additionally, to our knowledge, this study is the first to effectively eliminate noise in stock data using the concepts of entropy and ICEEMDAN together, and only the second to apply ICEEMDAN to a financial time series prediction problem.
INTRODUCTION Forecasting stock prices or market trends, positioned at the confluence of finance and computational technology, is a captivating subject of keen interest to both researchers and investors. The allure stems from the potential of forecasting models to yield successful predictions, helping investors optimize their investment returns. Nonetheless, the inherent complexity of the stock market, influenced by myriad unpredictable factors such as corporate strategies, political developments, investor sentiment, and overall economic conditions, makes accurate prediction challenging. Numerous forecasting models have been proposed in the literature for stock market index or stock price forecasting, in other words, financial time series forecasting. These models encompass a variety of techniques, including statistical methods such as the autoregressive integrated moving average (ARIMA) (Babu & Reddy, 2015) and generalized autoregressive conditional heteroscedasticity (GARCH) (Ariyo, Adewumi & Ayo, 2014); machine learning methods such as the support vector machine (SVM) (Wen et al., 2010), random forests (Lin et al., 2017), and artificial neural networks (ANN) (Altay & Satman, 2005); and deep learning methods such as long short-term memory (LSTM) (Akşehir & Kılıç, 2019), the gated recurrent unit (GRU) (Gupta, Bhattacharjee & Bishnu, 2022), and the convolutional neural network (CNN) (Akşehir & Kiliç, 2022). Although prediction models based on statistical methods have seen some success, they are ill-suited to stock forecasting because they presume that time series data are linear and stationary, whereas financial time series are typically non-linear and non-stationary. Machine learning and deep learning based models were introduced to address these constraints of statistical methods. Nevertheless, despite their superiority over statistical methods, these approaches remain susceptible to the intricate and dynamic character of financial time series; given such dynamic, non-stationary, and non-linear data, a single machine learning or deep learning approach rarely yields dependable forecasts. Research indicates that these disadvantages can be overcome with hybrid forecasting models rather than basic models that rely on a single machine learning or deep learning method (Cui et al., 2023; Lv et al., 2022; Chopra & Sharma, 2021; Kanwal et al., 2022; Zhang, Sjarif & Ibrahim, 2022). Literature review Studies on stock prediction have emphasized, with experimental support, that the proposed hybrid models outperform basic models. Building on this inference, Cui et al.
(2023) conducted research on the hybrid models proposed in the realm of stock forecasting and grouped them into three categories: • Pure machine/deep learning hybrid model: This hybrid model combines deep learning methods with machine learning or other deep learning techniques, and the literature contains various stock forecasting models of this kind. Chaudhari & Thakkar (2023) introduced three feature selection approaches, top-m, k-means, and a median range based on the coefficient of variation, for stock and stock market index forecasting, and evaluated them with deep learning and machine learning methods such as back-propagation neural networks (BPNN), LSTM, GRU, and CNN. Albahli et al. (2023) proposed a forecasting model to predict the future trend of stocks and help investors in the decision-making process. Historical price data were obtained for the stocks and 18 technical indicator values were calculated from these data; an auto-encoder was then applied to the indicators to decrease the feature count and obtain a leaner set, and the historical prices and the reduced indicator set were combined and given as input to a DenseNet-41 model for prediction. In another study by Albahli et al. (2022), the same prediction framework was used; the only difference is a 1D DenseNet model in place of DenseNet-41. In another auto-encoder based study (Yoo et al., 2021), the DAE-LSTM hybrid model, consisting of a denoising auto-encoder (DAE) and an LSTM, was proposed to predict change points in stock market data: after irrelevant features were removed from the market data with the DAE, training proceeded with the LSTM. Chen, Wu & Wu (2022) noted that most stock forecasting studies use data from a single stock, which ignores the way stocks in a similar group influence one another, and accordingly proposed a new deep learning based hybrid model to improve prediction performance. In their KD-LSTM model, 16 bank stocks with similar trends listed on the Chinese stock market were clustered with the k-means method under a dynamic time warping (DTW) distance metric, and the four bank stocks assigned to the same cluster each time were used to train the LSTM. Rekha & Sabu (2022) proposed a cooperative hybrid model combining deep learning, a deep auto-encoder, and sentiment analysis for stock market prediction: the noise in the stock data is first eliminated with the deep auto-encoder, sentiment analysis is performed on news related to the stock using VADER to obtain a sentiment index, and the denoised historical prices and sentiment index are combined as input to an LSTM/GRU model. Tao et al.
(2022) introduced a hybrid deep learning model focusing on the correlation between stocks and mutation points. This model consists of three subnetworks, each of which obtains different features about the stock; these attributes are then combined to predict the stock's closing price. Polamuri, Srinivas & Mohan (2022) observed that the generative adversarial network (GAN) is not preferred in stock prediction because its hyperparameter tuning is difficult, and took this as the primary motivation for a GAN-based hybrid prediction framework. Their model used LSTM and CNN for the generator and discriminator, the two essential components of a GAN, respectively, and employed Bayesian optimization and reinforcement learning to overcome the hyperparameter tuning difficulty; an auto-encoder extracted features from the stock market data, XGBoost performed feature selection, and PCA provided dimensionality reduction. Zhou, Zhou & Wang (2022) introduced a hybrid prediction model termed FS-CNN-BGRU, integrating feature selection, a bidirectional gated recurrent unit (BiGRU), and CNN techniques for stock prediction. Another investigation introduced the BiCuDNNLSTM-1dCNN hybrid approach for stock price forecasting, employing a bidirectional CUDA deep neural network LSTM and a 1-D CNN (Kanwal et al., 2022). Hybrid models of this first category enhance the accuracy of prediction results but are susceptible to constraints stemming from the sample-dependency issue. • Time series-deep learning hybrid model: This hybrid model combines deep learning with traditional time series methods such as ARIMA and GARCH. Relative to the other two categories, comparatively few studies fall into this group. Rather, Agarwal & Sastry (2015) introduced a hybrid model that combines linear models, including the autoregressive moving average (ARMA) and exponential smoothing, with a non-linear model, the recurrent neural network (RNN), for predicting stock returns. The HAR-PSO-ESN model, introduced by Ribeiro et al. (2021), integrates heterogeneous autoregressive (HAR) specifications with echo state networks (ESN) and particle swarm optimization (PSO) to enhance the prediction of stock price return volatility. Hybrid models of this type, built on traditional time series methods, can separate the linear and non-linear components of the data but are not effective enough at filtering out excessive noise such as unusual spikes and jumps. • Decomposition-deep learning hybrid model: Various stock market prediction models in the literature are based on this structure. It has been observed that these hybrid models pair deep learning with the wavelet transform (Qiu, Wang & Zhou, 2020; Bao, Yue & Rao, 2017; Tang et al., 2021; Wen et al., 2024), the Fourier transform (Song, Baek & Kim, 2021), and decomposition algorithms (Cui et al., 2023; Lv et al., 2022; Liu et al., 2022a; Wang, Cheng & Dong, 2023; Yan & Aasma, 2020; Liu et al., 2022b; Rezaei, Faaljou & Mansourfar, 2021; Cao, Li & Li, 2019; Wang et al., 2022; Liu et al., 2024; Nasiri & Ebadzadeh, 2023; Yao, Zhang & Zhao, 2023). Tang et al.
(2021) applied a denoising approach consisting primarily of the wavelet transform and singular spectrum analysis (SSA) to the financial time series to predict the Dow Jones Industrial Average (DJIA), then trained an LSTM on the denoised data. The Fourier transform employed in this context decomposes a time series into frequency components but cannot describe changes across time and frequency scales, so it is not effective for analyzing time-varying signals. To overcome this disadvantage, the wavelet transform is also used with deep learning methods in stock forecasting models (Qiu, Wang & Zhou, 2020; Bao, Yue & Rao, 2017; Tang et al., 2021). Nevertheless, the efficacy of the wavelet transform hinges on parameter selection, including the number of layers and the choice of the fundamental wavelet function, which constrains the predictive model's performance. To overcome these disadvantages of the Fourier and wavelet transforms, decomposition approaches were proposed that split the time series into distinct frequency spectra. In the proposed deep learning based hybrid models, decomposition approaches such as variational mode decomposition (VMD) (Cui et al., 2023; Liu et al., 2022a; Wang, Cheng & Dong, 2023; Liu et al., 2024; Nasiri & Ebadzadeh, 2023), empirical mode decomposition (EMD) (Rezaei, Faaljou & Mansourfar, 2021; Yao, Zhang & Zhao, 2023), complete ensemble empirical mode decomposition (CEEMD) (Yan & Aasma, 2020; Liu et al., 2022b; Rezaei, Faaljou & Mansourfar, 2021), complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) (Lv et al., 2022; Cao, Li & Li, 2019), and improved CEEMDAN (ICEEMDAN) (Wang et al., 2022) have been preferred over the wavelet and Fourier transforms. Decomposition approaches enable a better understanding and modeling of the complex structure of financial time series and thus contribute to more accurate predictions of future values; however, the choice of decomposition approach directly affects a model's prediction performance. A review of the literature shows that VMD and CEEMDAN have been preferred over other decomposition approaches, with more successful results than both basic and other types of hybrid models. Additionally, ICEEMDAN, one of the mode decomposition approaches, has achieved significant success in removing noise from time series in fields such as biomedical signal processing (Colominas, Schlotthauer & Torres, 2014), solar irradiance forecasting (Sibtain et al., 2021), and traffic flow prediction (Gao, Jia & Yang, 2022). When analyzing the stock prediction models in the existing literature, it was noted that despite this success, ICEEMDAN has been used in only one study (Wang et al., 2022). Following these analyses, this study presents a novel hybrid model named ICE2DE-MDL for predicting the closing value of stocks and stock market indices. The proposed model comprises a two-level decomposition based on the ICEEMDAN algorithm, together with entropy and deep learning/machine learning methods. In the ICE2DE-MDL model, the techniques LSTM, LSTM-BN, GRU, and SVR are preferred for specific reasons: • LSTM was chosen for its proficiency in capturing temporal dependencies in time series data.
• LSTM-BN was favored for its ability to expedite training and enhance generalization performance through batch normalization. • GRU was selected because it captures temporal dependencies similarly to LSTM while using fewer parameters. • SVR was included for its effectiveness in handling complex relationships and mitigating overfitting. Integrating these techniques aims to strengthen the ICE2DE-MDL model against the complexities inherent in stock market prediction. Motivation and contributions This study addresses the challenge of predicting stock market index and stock closing prices, a crucial task for investors and financial institutions amid volatile financial markets. Improving the accuracy of these predictions requires integrating not only traditional statistical methods but also newer techniques such as deep learning and machine learning; this study tackles the challenge with the developed ICE2DE-MDL model. The ICE2DE-MDL model amalgamates several techniques to handle the complexity of financial time series data. Initially, a secondary decomposition of the financial data is performed with the ICEEMDAN algorithm, separating the data into more meaningful and noise-free components. The components are then classified using entropy metrics, namely sample and approximate entropy, which help identify the high-frequency and noise-free components. Different deep learning and machine learning models are subsequently trained for each component separately, ensuring a better fit to each component's unique characteristics and enabling the model to predict fluctuations in financial data more effectively. The contributions and innovations of this study are as follows: 1. The literature review revealed the effectiveness of ICEEMDAN in eliminating noise from various time series, yet despite its successful application in diverse domains it has been applied to financial time series in only one prior study. As far as we are aware, this study therefore represents the second application of ICEEMDAN to a financial time series prediction problem, expanding the current understanding of its potential in this domain. 2. This study introduces an innovative denoising approach designed to remove noise from stock data. Recognizing the crucial role of noise reduction in accurate stock market analysis, the proposed method combines ICEEMDAN-based secondary decomposition with sample and approximate entropy. Previous forecast models lack a denoising approach incorporating both ICEEMDAN and entropy; to our knowledge, this study is the first to combine the two, providing a unique and effective method for eliminating noise in stock market index and stock data. Rather than directly discarding high-frequency components, the approach applies a second decomposition to them, since these components may contain valuable information; this allows a more nuanced and effective noise elimination process. 3.
Previous studies proposing decomposition-based deep learning hybrid models usually train the IMFs, or sub-series, obtained from decomposition with the same model. However, since each IMF has distinct characteristics, it is better to identify and train the most suitable model for each IMF individually rather than apply a uniform model to all intrinsic mode functions (IMFs). Here, the noiseless IMFs were trained with four different models, LSTM, GRU, LSTM with batch normalization (LSTM-BN), and support vector regression (SVR), and the model achieving the lowest error metric was chosen as the optimal prediction model for the corresponding IMF. 4. The proposed ICE2DE-MDL model has demonstrated superior performance to both fundamental models and decomposition-based deep learning hybrid models, as evidenced by compelling experimental results; its enhanced accuracy and predictive capability mark a notable advancement in predictive modeling for this domain. Organization The remainder of this paper is organized as follows: the "Related Methodology" section discusses the methodologies; "The ICE2DE-MDL Prediction Model" outlines the framework of the proposed hybrid model; details of the data sets, the model's hyperparameter settings, the selected benchmark models, and the performance evaluation metrics are provided in "Experimental Settings"; "Results and Discussion" comprehensively analyzes the experimental results; and the conclusion and recommendations for future research close the paper. RELATED METHODOLOGY This study introduces the ICE2DE-MDL prediction model, a novel framework combining ICEEMDAN-based secondary decomposition, entropy, and the LSTM, GRU, LSTM-BN, and SVR models to enhance the accuracy of stock index predictions. The following subsections outline the fundamental principles of each method integrated into the proposed model. Improved complete ensemble empirical mode decomposition with adaptive noise The empirical mode decomposition (EMD) technique, pioneered by Huang et al. (1998), dissects a time series into IMFs characterized by distinct frequencies and scales. Nonetheless, EMD is susceptible to "mode mixing," in which similar oscillations appear across distinct modes or with varying amplitudes within a single mode. To address this limitation, ensemble empirical mode decomposition (EEMD) was introduced (Wu & Huang, 2009). EEMD recovers the actual IMF components by averaging over multiple trials, adding white noise to each trial and thereby yielding cleaner, less noisy IMFs than EMD. Despite these advantages, EEMD suffers from a high computational burden and inadequate elimination of the added white noise. In light of these challenges, Torres et al. (2011) presented the CEEMDAN technique, aiming to rectify the deficiencies of EEMD and provide a more efficient decomposition approach.
In addition to effectively eliminating the mode mixing issue, CEEMDAN offers minimal reconstruction error and substantially reduced computational cost, and it resolves the problem of variable mode counts across different signal-plus-noise realizations, a notable advancement over EEMD. Nonetheless, CEEMDAN still has aspects requiring enhancement: (i) its modes retain some residual noise, and (ii) the signal information appears "delayed" compared to EEMD, manifesting in the later stages of decomposition. To further enhance the method, Colominas, Schlotthauer & Torres (2014) introduced an improved version known as improved CEEMDAN (ICEEMDAN), which mitigates the residual noise issue and addresses the "delayed" signal problem. The fundamental steps of ICEEMDAN, in which the t-th IMF is recovered as the difference of successive residues, $IMF_t = residue_{t-1} - residue_t$, after the EMD components of the added noise realizations $wn^{(i)}$ are removed, are outlined in Algorithm 1. Entropy Entropy, grounded in information theory, serves as a metric quantifying the intricacy or irregularity of a time series; understanding the unpredictable, speculative nature of price movements in financial markets is crucial. Although various entropy metrics exist in the literature, two widely utilized types are sample entropy (Richman & Moorman, 2000) and approximate entropy (Pincus, 1991). Approximate entropy quantifies the likelihood that similar patterns of observations persist unchanged over consecutive data points, whereas sample entropy quantifies the regularity or predictability of fluctuations in a time series; the computation of both metrics is outlined in Algorithm 2. In financial time series, elevated entropy values may signal speculative and unpredictable price movements: higher entropy suggests increased complexity and irregularity in market dynamics, which may prompt investors to take risks but also carries the risk of financial losses. Consequently, assessing a financial time series's entropy value is crucial when formulating a robust investment strategy.
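Since both metrics recur throughout the pipeline, a minimal self-contained sketch of sample entropy may help; this is a plain NumPy reading of the standard SampEn definition with m = 2 and r = 0.2 (matching the settings reported later), not the paper's own code:

import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B): A and B count length-(m+1) and length-m template
    # matches within tolerance r * std(x), self-matches excluded.
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n_templates = len(x) - m      # same template count for both lengths

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n_templates)])
        matches = 0
        for i in range(n_templates - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            matches += np.sum(dist <= tol)
        return matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=400)))              # irregular: high
print(sample_entropy(np.sin(np.linspace(0, 40, 400))))   # regular: low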
Long short-term memory and long short-term memory with batch normalization Long short-term memory is a variant of the recurrent neural network (RNN) architecture tailored to address the vanishing gradient issue commonly experienced by conventional RNNs, especially when processing long sequences. While both LSTM and traditional RNNs are built from repeating modules, their architectures differ significantly in how they handle sequential information and mitigate the vanishing gradient problem. In a standard RNN, a single layer processes the input sequence through a hyperbolic tangent (tanh) activation, and capturing long-range dependencies remains difficult because of vanishing gradients. In contrast, the LSTM architecture (see Fig. 1) tackles this issue with a more intricate structure comprising four interconnected components within each repeating module: the cell state, output gate, input gate, and forget gate. These components work in tandem to selectively retain or discard information, update the cell state, and produce the output at each time step, allowing LSTMs to capture and remember long-term dependencies in sequential data more effectively than traditional RNNs. The LSTM transition equations are given as follows (Hochreiter & Schmidhuber, 1997): $\mathrm{input}_l = \sigma(W_{i}[\mathrm{hidden}_{l-1}, X_l] + b_{i})$, $\mathrm{forget}_l = \sigma(W_{f}[\mathrm{hidden}_{l-1}, X_l] + b_{f})$, $\mathrm{output}_l = \sigma(W_{o}[\mathrm{hidden}_{l-1}, X_l] + b_{o})$, $\mathrm{Cell}_l = \mathrm{forget}_l \odot \mathrm{Cell}_{l-1} + \mathrm{input}_l \odot \tanh(W_{c}[\mathrm{hidden}_{l-1}, X_l] + b_{c})$, $\mathrm{hidden}_l = \mathrm{output}_l \odot \tanh(\mathrm{Cell}_l)$. These equations describe an LSTM unit at time step l for an input vector X_l: input_l is the input gate, forget_l the forget gate, output_l the output gate, Cell_l the memory cell, hidden_l the hidden state, W the weight matrix, b the bias vector, and σ the activation function. The default connections between these units are illustrated in Fig. 1. Long short-term memory with batch normalization (LSTM-BN), proposed by Fang et al. (2023) for predicting financial time series movement, enhances the conventional LSTM architecture with a batch normalization (BN) layer, which makes the network faster and more stable during training. Gated recurrent unit The gated recurrent unit (GRU) (Cho et al., 2014) is a form of RNN engineered, like LSTM, to tackle the vanishing gradient issue and capture long-range dependencies in sequential data. The GRU, illustrated in Fig. 2, has a simpler architecture than LSTM, achieved by merging the memory cell and hidden state into a single state and thereby reducing the number of parameters. The GRU comprises an update gate (update_l) and a reset gate (reset_l), which manage the flow of information within the unit and enable selective updates of its hidden state. The following equations define the computations within a GRU: $\mathrm{update}_l = \sigma(W_{u}[\mathrm{hidden}_{l-1}, X_l] + b_{u})$, $\mathrm{reset}_l = \sigma(W_{r}[\mathrm{hidden}_{l-1}, X_l] + b_{r})$, $\widetilde{\mathrm{hidden}}_l = \tanh(W[\mathrm{reset}_l \odot \mathrm{hidden}_{l-1}, X_l] + b)$, $\mathrm{hidden}_l = (1 - \mathrm{update}_l) \odot \mathrm{hidden}_{l-1} + \mathrm{update}_l \odot \widetilde{\mathrm{hidden}}_l$. Here σ represents the sigmoid activation function, tanh the hyperbolic tangent, and ⊙ element-wise multiplication; [hidden_{l-1}, X_l] is the concatenation of the previous hidden state hidden_{l-1} and the current input X_l, W denotes the weight matrix, and b signifies the bias vector.
Support vector regression Support vector regression (Cortes & Vapnik, 1995), employed mainly for regression problems, belongs to the support vector machine (SVM) family. SVR constructs a hyperplane and fits the data points to it optimally; as in classification problems, its success rests on the concept of the margin, the distance between the data points and the hyperplane, and SVR improves its generalization ability by maximizing this margin. The main parameters of SVR are: • Kernel type: the kernel function used to transform data points into a high-dimensional space; options include polynomial, radial basis function (RBF), and linear kernels. • C (cost): a hyperparameter controlling the width of the margin; larger values of C narrow the margin but can increase the model's tendency to overfit. • Epsilon (ε): another hyperparameter controlling the width of the margin; larger values widen it. THE ICE2DE-MDL PREDICTION MODEL This study presents a novel hybrid model, termed ICE2DE-MDL, tailored to predicting the closing prices of both stock market indices and individual stocks. The architectural layout of the model is shown in Fig. 3, and each step is outlined below: 1. The closing-price series is first decomposed with ICEEMDAN, and the resulting IMFs are classified by their entropy ratios: the noiseless IMFs are used in training, while the high-frequency IMFs are forwarded to the next stage, the second decomposition. 2. Rather than directly discarding the IMFs identified as high-frequency in the first step, a second decomposition is applied, since they may contain useful information: the high-frequency IMFs are summed and the ICEEMDAN method is applied to the resulting series. The IMFs from this secondary decomposition are again categorized as high-frequency or noiseless, as in the first step; the high-frequency components are discarded, while the noiseless ones are used in training. Applying these first two steps effectively eliminates the noise in the financial time series, and the algorithmic flow of this ICEEMDAN and entropy based denoising method is given in Algorithm 3. 3. After noise removal, training is performed separately on the noiseless IMFs obtained from both decompositions. Because each IMF has different characteristics, the best training model for each is sought rather than training all with a uniform model: each IMF is trained with four models, LSTM, LSTM-BN, GRU, and SVR, the error metrics are compared, and the model exhibiting the lowest error is selected as the best training model for that IMF. This takes place in the best model selection unit of the framework in Fig. 3. 4. After each noiseless IMF has been trained with its best model, the prediction results are combined hierarchically: the predictions of the noiseless IMFs from the second decomposition are combined first, and that result is then fused with the predictions of the noiseless IMFs from the first decomposition, yielding the final prediction of the ICE2DE-MDL model. This takes place in the hierarchical unit of the framework in Fig. 3.
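The two-level denoising of steps 1-2 can be sketched roughly as follows. PyEMD's CEEMDAN is used here as a stand-in decomposer (the paper's ICEEMDAN variant is not shipped by that package), the sample_entropy helper from the earlier sketch is assumed, and interpreting the 20% rule as each IMF's share of the summed entropies is our assumption, not a detail stated by the paper:

import numpy as np
from PyEMD import CEEMDAN   # pip install EMD-signal

def entropy_ratios(imfs):
    ent = np.array([sample_entropy(imf) for imf in imfs])
    return ent / ent.sum()                       # each IMF's entropy share

def denoise(series, threshold=0.20):
    decomposer = CEEMDAN()
    imfs = decomposer(np.asarray(series, float))     # first decomposition
    high = entropy_ratios(imfs) > threshold          # high-frequency IMFs
    clean = list(imfs[~high])                        # kept for training
    if high.any():                                   # second decomposition on
        imfs2 = decomposer(imfs[high].sum(axis=0))   # the summed noisy part
        high2 = entropy_ratios(imfs2) > threshold
        clean += list(imfs2[~high2])                 # discard 2nd-round noise
    return clean                                     # noiseless IMFs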
EXPERIMENTAL SETTINGS The proposed ICE2DE-MDL model was employed to forecast the next day's closing prices of both stock market indices and individual stocks, and various experiments were conducted on a computer equipped with an Intel i9-12900K processor to evaluate its predictive capabilities. Initially, the proposed denoising method was applied to remove noise from the closing values of the indices and stocks; the resulting noiseless components were then used for prediction. This section introduces the stock market index and stock data sets used in the study, discusses the hyperparameter settings of the proposed ICE2DE-MDL model, and describes the statistical metrics employed to evaluate the prediction model. Dataset The prediction model's performance was evaluated and compared with current studies in the literature using 11 data sets comprising stock market indices and individual stocks. The selected indices were the Standard and Poor's 500 (S&P 500), the France CAC 40, the Tokyo NIKKEI (Japan), the Shanghai Stock Exchange Composite (SSE), the SET (Thailand), the FTSE 100 (London), the Seoul KOSPI (Korea), and the NASDAQ 100 (USA); the individual stocks, from China's A-share market, were Xinning Logistics, Zhongke Electric, and CITIC Securities. This diversified data set allows a comprehensive assessment of the model's predictive capabilities across various financial instruments and markets. Daily closing prices from January 1, 2010, to January 1, 2020, were gathered from https://finance.yahoo.com/. The statistical analysis outcomes, comprising data quantity, minimum and maximum values, mean, and standard deviation for each data set, are presented in Tables 1-2. The analysis shows a substantial gap between the maximum and minimum values as well as high standard deviations for both the index and stock data sets, suggesting that the chosen series exhibit considerable volatility and non-stationary characteristics. Hyperparameter settings of the proposed model The hyperparameter values of the ICE2DE-MDL forecasting model were determined through a series of experiments and are detailed in Table 3. For the LSTM training model, three LSTM layers were utilized, containing 128, 64, and 16 neurons, respectively, with a dropout layer of ratio 0.1 between them. The LSTM-BN network adopts two LSTM layers of 128 and 32 neurons, a batch normalization layer after each LSTM layer, and a dropout layer of ratio 0.1 after the final batch normalization layer. The GRU-based training model consists of four layers with 128, 64, 32, and 8 neurons, each followed by a dropout layer of ratio 0.2. Across the LSTM, LSTM-BN, and GRU models, the Adam optimizer was selected, mean squared error served as the loss function, and the time step was set to 10; activation functions were ReLU for the LSTM and LSTM-BN models and tanh for the GRU model. In the SVR model, a linear kernel was selected, the cost and epsilon parameters were set to 10 and 0.001, respectively, and 5-fold cross-validation was performed. The maximum number of epochs was 200 for all four models, with early stopping implemented to mitigate potential overfitting. The noise-free IMFs used in training were scaled with a standard scaler, and each IMF data set was split 75%-25% into training and test sets.
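As one concrete reading of the Table 3 settings, the LSTM branch could be assembled as below; this is a sketch, with the single-neuron output head and univariate input being assumptions rather than stated details:

from tensorflow import keras
from tensorflow.keras import layers

def build_lstm(time_steps=10, n_features=1):
    # Three LSTM layers (128/64/16 units, ReLU) with 0.1 dropout between them.
    model = keras.Sequential([
        layers.Input(shape=(time_steps, n_features)),
        layers.LSTM(128, activation="relu", return_sequences=True),
        layers.Dropout(0.1),
        layers.LSTM(64, activation="relu", return_sequences=True),
        layers.Dropout(0.1),
        layers.LSTM(16, activation="relu"),
        layers.Dense(1),     # next-day closing value (assumed output head)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

early_stop = keras.callbacks.EarlyStopping(patience=10,
                                           restore_best_weights=True)
# model.fit(X_train, y_train, epochs=200, callbacks=[early_stop], ...)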
In calculating the sample and approximate entropy values of the obtained IMFs, the embedding dimension and tolerance were set to 2 and 0.2, respectively. To classify the IMFs as high-frequency or noiseless, a threshold of 20% was established for the entropy ratio: IMFs whose approximate or sample entropy ratio exceeded 20% were designated high-frequency, while those below the threshold were categorized as noiseless. Benchmark models selection Four prediction models were selected to compare the performance of the proposed ICE2DE-MDL forecasting model with current studies in the literature: • VMD-SE-GBDT-BiGRU-XGBoost (Wang, Cheng & Dong, 2023): a hybrid model integrating VMD, sample entropy (SE), gradient boosting decision trees (GBDT), a bidirectional gated recurrent unit (BiGRU), and XGBoost for stock market index prediction. The index data are first decomposed into sub-series with VMD and the sample entropy of each sub-series is calculated; sub-series with similar sample entropy values are grouped and restructured. The price data and technical indicators most influential on the index are then selected with GBDT, the restructured sub-series and the GBDT-selected features are fed to the BiGRU model, and the BiGRU predictions are combined with XGBoost to obtain the final result. The model was applied to the CSI 500, NASDAQ 100, FTSE 100, and France CAC 40 indices. • IVMD-ICEEMDAN-ALSTM (Wang et al., 2022): a hybrid prediction model consisting of secondary decomposition, multi-factor analysis, and attention-based LSTM (ALSTM) for forecasting stock closing prices. Two decomposition approaches, ICEEMDAN and VMD, were employed to eliminate noise in the financial time series, with ALSTM as the training model. The model was applied to the SSE, NIKKEI, KOSPI, and SET indices. • MS-SSA-LSTM (Mu et al., 2023): a hybrid model integrating two data types for predicting stock closing prices. Comments related to the stocks were first gathered and subjected to sentiment analysis to obtain a sentiment score; the stocks' historical price data and sentiment scores were then combined as input to an LSTM optimized by the sparrow search algorithm (SSA). The model was applied to six stocks on the Chinese market: PetroChina, CITIC Securities, Guizhou Bailing, HiFuture Technology, Xinning Logistics, and Zhongke. • P-FTD-RNN/LSTM/GRU (Song, Baek & Kim, 2021): a hybrid forecasting model combining padding-based Fourier transformation with deep learning methods to predict stock market indices. Padding-based Fourier transformation reduces the noise in the financial time series, after which the denoised data are used to train GRU, RNN, and LSTM models; the model was applied to S&P 500, SSE, and KOSPI index data. Evaluation metrics We evaluated the proposed forecasting model using metrics commonly employed in the literature, whose calculations are defined in Table 4 (RMSE, root mean square error; MAE; MAPE; and R-squared). Within these mathematical expressions, n stands for the number of data points, P symbolizes the predicted value, A denotes the actual value, and Ā represents the average of the actual values.
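For completeness, the four metrics can be computed directly as below (a minimal sketch; MAPE is expressed here in percent):

import numpy as np

def evaluate(actual, predicted):
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return {
        "RMSE": np.sqrt(np.mean((a - p) ** 2)),
        "MAE": np.mean(np.abs(a - p)),
        "MAPE": 100 * np.mean(np.abs((a - p) / a)),
        "R2": 1 - np.sum((a - p) ** 2) / np.sum((a - a.mean()) ** 2),
    }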
RESULTS AND DISCUSSION In this section, we examine the outcomes of experiments conducted on different stocks and stock market indices to evaluate the predictive performance of the proposed ICE2DE-MDL model. Because of the page limit, detailed results cannot be shown for all of the stock market indices and stocks considered in this study; the KOSPI index data set was therefore selected as an example to detail the stages of the ICE2DE-MDL model. First, the denoising approach given in Algorithm 3 was applied to the closing prices of the KOSPI index. Ten IMFs were obtained from the first ICEEMDAN decomposition; Figure 4 shows the KOSPI closing series and these IMFs. After the sample and approximate entropy ratios were calculated for the IMFs, the first two IMFs, whose ratios exceeded the predefined threshold, were identified as high-frequency components. These two IMFs were summed, and the ICEEMDAN method was applied for a second decomposition, yielding another 10 IMFs; evaluating the entropy ratios showed that the first four IMFs of this set were high-frequency components. The sum of the high-frequency components from the first decomposition and the IMFs obtained from the second decomposition are illustrated in Fig. 5. Furthermore, Table 5 lists the entropy values and ratios of the IMFs from both decompositions (IMF-i(j) denotes the i-th IMF of the j-th decomposition); the values highlighted in bold mark the IMF components classified as high-frequency. After the two decomposition steps were complete, the denoised IMFs were fed to the LSTM, LSTM-BN, GRU, and SVR models for training: each denoised IMF was trained individually with all four methods, and the model with the lowest error metric, such as mean squared error, was selected as the prediction model for that IMF. Comparison with other models To assess the effectiveness of the ICE2DE-MDL forecasting model, we conducted a comparative analysis with the existing stock forecasting models detailed in the preceding section; the performance comparison results with the four benchmark models are provided in the corresponding tables. The proposed model's predictive capabilities could be enhanced further in future studies through more effective hyperparameter tuning. When compared to the IVMD-ICEEMDAN-ALSTM model, the ICE2DE-MDL prediction model exhibits the lowest values for all three error metrics, indicating that both our denoising approach and the individual training of each IMF with an appropriate model positively affect performance. These results underscore the effectiveness of reducing noise in stock data and optimizing each IMF component separately, particularly in enhancing the accuracy of the prediction model.
When compared with the sentiment analysis based MS-SSA-LSTM prediction model and the P-FTD-RNN/LSTM/GRU model employing the padding-based Fourier transform denoising approach, the ICE2DE-MDL model demonstrated superior performance across all stocks and stock market indices. This indicates that the suggested approach, as an alternative to sentiment analysis based models and other denoising methods, effectively enhances overall prediction accuracy, and the model's success supports its robust and reliable forecasting capabilities under various conditions. CONCLUSION AND FUTURE WORKS This study introduced a novel hybrid model, ICE2DE-MDL, integrating decomposition, entropy, and machine and deep learning techniques to improve stock prediction accuracy. ICE2DE-MDL was deployed to forecast the next day's closing prices of eight stock indices and three individual stocks, and its performance was assessed against various machine/deep learning based hybrid models for stock market prediction from the existing literature. These comparisons reveal that ICE2DE-MDL outperformed current models on both stock index and individual stock forecasting. The two-level denoising approach based on ICEEMDAN effectively removed noise from the financial time series without sacrificing valuable information, and the experiments also demonstrated that training each IMF with its most appropriate model, rather than a uniform model, significantly enhances overall model performance. While ICE2DE-MDL exhibited notable advancements in forecast accuracy compared with alternative models, opportunities for further refinement and optimization of the proposed hybrid approach remain; future research could explore leveraging deep learning methods that have succeeded in other domains to elevate the overall performance of the prediction model. Figure 6: Graph depicting the performance of the proposed prediction model on the test data set of stock market indices (full-size DOI: 10.7717/peerjcs.2125/fig-6).
Stage 5: Calculate the t-th IMF as IMF_t = residue_(t-1) - residue_t. Stage 6: Proceed to Stage 4 for the subsequent t.

To further enhance the method, Colominas, Schlotthauer & Torres (2014) introduced an improved version known as Improved CEEMDAN (ICEEMDAN), which mitigates the residual noise issue and addresses the 'delayed' signal problem. The fundamental steps of ICEEMDAN are outlined in Algorithm 1.

Table 2: Statistical description of stock closing prices. The selected stock market indices included the Standard and Poor's 500 Index (S&P 500), the France CAC 40 Index, the Tokyo NIKKEI Index (Japan), the Shanghai Stock Exchange Composite Index (SSE), the SET Index (Thailand), the FTSE 100 Index (London), the Seoul KOSPI Index (Korea), and the NASDAQ 100 Index (USA). Additionally, the study considered individual stocks from China's A-share market, specifically Xinning Logistics, Zhongke Electric, and CITIC Securities. This diversified dataset aims to comprehensively assess the model's predictive capabilities across various financial instruments and markets. The daily closing prices of stocks and stock market indices from January 1, 2010, to January 1, 2020, were gathered from https://finance.yahoo.com/.

P-FTD-RNN/LSTM/GRU: This hybrid forecasting model combined padding-based Fourier transformation with deep learning methods to predict stock market indices. The model uses padding-based Fourier transformation to reduce noise in financial time series data; the denoised data were then used to train GRU, RNN, and LSTM models. The model was applied to S&P 500, SSE, and KOSPI index data.

Table 9: Performance comparison with the IVMD-ICEEMDAN-ALSTM (Wang et al., 2022) forecasting model. Values highlighted in bold represent the best prediction results.

Table 10: Performance comparison with the MS-SSA-LSTM (Mu et al., 2023) forecasting model. Values highlighted in bold represent the best prediction results.
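The padding-based Fourier denoising used by the P-FTD baseline can be approximated with a few lines of NumPy. The exact padding scheme and cutoff frequency of the original model are not given here, so the reflection padding and the keep-lowest-10%-of-frequencies rule below are purely illustrative assumptions.

```python
import numpy as np

def fourier_denoise(series, pad=64, keep_ratio=0.10):
    """Reflect-pad the series, zero out high-frequency FFT coefficients, then crop the padding."""
    x = np.asarray(series, dtype=float)
    padded = np.pad(x, pad_width=pad, mode="reflect")     # padding reduces edge artefacts

    spectrum = np.fft.rfft(padded)
    cutoff = max(1, int(len(spectrum) * keep_ratio))      # number of low-frequency bins to keep
    spectrum[cutoff:] = 0.0                               # hard low-pass filter

    smoothed = np.fft.irfft(spectrum, n=len(padded))
    return smoothed[pad:pad + len(x)]                     # drop the padded ends

# Example: denoise a noisy random-walk "closing price" series.
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100
clean = fourier_denoise(prices)
```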
8,855
2024-06-24T00:00:00.000
[ "Computer Science", "Business" ]
The Increasing Trend in Commercial Real Estate Lending by Community Banks: The Role of Deliberate Risk-Taking, 2001-2017 Much attention focuses on the role of real estate lending by banks as a precipitating factor in past financial crises, and especially with respect to the 2007-2008 crisis. Over the past five years, U.S. banks have increased their commercial real estate lending dramatically, raising concern among regulators about the potential for another financial crisis. In this paper, we analyze post-recessionary trends to determine whether the same dangerous pre-recessionary risk-taking trends are emerging. Regulators devote most of their attention to the banking sector with little regard to the role played by its various subgroups. This may explain why there is little research analyzing the specific role of community banks in sparking a financial crisis. In this study, we present a disaggregated analysis that focuses on the potential risks of increased commercial real estate lending from a comparative perspective, examining community banks vis-a-vis larger banking institutions, paying particular attention to the role of deliberate bank risk-taking as a causal factor in increased community bank commercial real estate (CRE) lending since the Great Recession.

Introduction
Most U.S. bank lending categories have seen only modest growth since the end of the Great Recession. An exception is Commercial Real Estate (CRE) lending (Note 1) (Regehr & Sengupta, 2016). As CRE loan growth has surged in recent years (see Appendix A, Figure 1), the total volume of CRE loans outstanding, which declined significantly during and in the immediate aftermath of the Great Recession, has rebounded sharply, especially since 2013. If this trend continues, some banks could be vulnerable should economic conditions deteriorate, particularly those with high concentrations of CRE loans and reliance on risky funding sources. Regulators today fear a repeat of the widespread commercial real estate failures that roiled the banking sector in the 1980s, 1990s, and late 2000s. As a result, they encourage lending institutions to maintain strong risk management oversight, especially regarding their CRE lending risk-taking practices (Federal Reserve Board of Governors, 2017). The Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, and the Office of the Comptroller of the Currency issued two guidelines in 2006 to address their concern that financial institutions were over-extended in the CRE loan market (Bassett & March, 2006). In particular, their concern was that concentration in CRE loans had reached a level that could lead to unstable outcomes in the event of a significant economic downturn. More recently, the same federal agencies once again urged caution in a December 2015 interagency statement that noted substantial growth in many CRE loan markets. The statement noted that, spurred by increased competitive pressures, CRE loan underwriting standards appeared to be easing (Note 2). Such exposure to the real estate sector is a legitimate cause for concern, especially when it coincides with rapidly changing property prices (Kiyotaki & Moore, 1997). Guidelines emphasize the banking sector overall and pay less attention to the roles played by its various subgroups. Over-weighting by the largest national institutions tends to obscure the impact of smaller banks in aggregate statistics, leaving open many questions.
An unintended consequence is the scant amount of extant research analyzing the specific role of community banks (Note 3) in this regard. Our research aims to help move this oversight in a different direction by exploring two main concerns. The first is the degree to which increased CRE lending has exposed community banks to financial risks associated with a potential downturn in the commercial real estate market. The second is whether we observe the same disturbing CRE lending pre-recession trends emerging in the post-recession era. Our inquiry, guided by primary consideration of the role of deliberate community bank risk-taking as a causal factor in increased CRE lending, presents a comparative perspective of the potential risks of increased CRE lending by analyzing differences in the risk orientations of community banks versus their larger, non-community bank counterparts, focusing on two distinct periods: the pre-recessionary and post-recessionary eras. The focal point of our analysis is the role of risk, as a deliberate choice of action, in the lending decisions of banks. Banks that exhibit a higher tolerance for risk should not only be willing to make higher risk loans such as commercial real estate loans, but should also be willing to incur other forms of concurrent risk as well. We contend that risk orientation is an inherent trait. Consequently, we expect a high degree of correlation between a bank's decision to invest higher portions of its assets in CRE loans and its desire to engage in other forms of risk. Bank risk-taking involves decisions about the riskiness of the bank's loan portfolio relative to the quantity and type of funds used to make loans and capital reserve constraints. Given that riskier loan portfolios result from the discretionary actions of bank managers, we contend that such decisions are the result of deliberate risk-taking. While risk is important to consider, bank risk management should not focus exclusively on reducing exposure. Banks should take on good risks by undertaking activities that have an expected positive return on a standalone basis (Stulz, 2016). Broadly speaking, banks must address four risk categories: operational, business, event, and financial (Van Greuning & Brajovic Bratanovic, 2009). Yet, it is not always easy to categorize some types of risk. For example, nontraditional risks such as ATM failures or employee fraud come under the rubric of operational risks (DeYoung & Torna, 2013; Lopez, 2002). Similarly, the Basel Committee on Banking Supervision defines operational risk as the risk of monetary losses resulting from failed internal processes, people, and systems (Basel Committee on Banking Supervision, 2015b) (Note 5). Taking actions that reduce risk can be costly. Taking lower risks means avoiding potentially profitable investments that may come with higher risks (Note 6). Banks differ from nonfinancial firms in general because they can create value through their liabilities, yet each does so differently, depending on their particular risk-taking profiles (DeAngelo & Stulz, 2015). Bank managers, along with bank regulators, have long sought to understand the determinants of risk-taking, which often proves difficult due to the many types of risks banks face (Apelado & Gies, 1972; Asea & Blomberg, 1998; Christoffesen, 2011; Cohen, 1970; Van Greuning & Bratanovic Brajovic, 2003; Kaplan & Mikes, 2012). Business risks stem from a bank's business environment and exposure to external regulatory policies and macroeconomic factors (Acharya & Naqvi, 2012). Event risks occur exogenously.
A military conflict, for example, could jeopardize a bank's operations (Bessis, 2011). In any one year, business and event risks impact all banks simultaneously and are therefore not likely to account for any resulting variation in realized financial ratios among banks. Our study, however, focuses specifically on financial risks. Altman and Saunders (1997) highlight three critical financial risks: credit, liquidity, and interest rate. A bank's primary business is to generate returns on its assets, which it does through the loans it makes. To accomplish this, banks must strike a delicate balance between investing in high- and low-risk loans, managing credit risk proactively. Banks must not overlook liquidity risk because their ability to meet demand obligations can influence customer and shareholder confidence about not just profitability but continued viability. Banks also face potential risks from increased interest rate exposure (Rosenberg & Schuermann, 2006). Credit risk relates to a bank's inability to recoup the money it loans or invests. It is a direct function of the quality of a bank's loan portfolio. Alternatively put, credit risk is the risk that a borrower will fail to repay its loan in part or in full, resulting in diminished bank asset value. Liquidity risk rises as liquid assets decrease. Sharp decreases can place banks in the untenable positions of forced liquidation or acquisition to meet obligations. Liquidity risk results from a mismatch in size and maturity of assets and liabilities (for an in-depth discussion on liquidity policies and their implications for systemic risk, see Adrian & Boyarchenko, 2013) (Note 7). Interest rate risk occurs when an unexpected interest rate change affects the market value of a bank's assets, potentially threatening solvency (Feldman & Schmidt, 2000). Interest rate risk is particularly important to bank regulators, who place great emphasis on the evaluation of interest rate risk associated with individual banks. Arguably, this emphasis has increased in importance since the implementation of risk-based capital charges recommended by the Basel Committee (Van Greuning & Brajovic Bratanovic, 2009). Managing the level of bank risk-taking involves numerous decisions manifesting themselves in objective bank financial measures. Though we have discussed each of the risks separately, there is an inherent interaction between all forms of risk, which means that banks need to manage all of them simultaneously (Stulz, 2016). All the while, banks must be aware that efforts to reduce one kind of risk may increase another. For example, loan sales can reduce interest rate and credit risk, but this could force banks to rely more heavily on income from off-balance sheet activities. A generalized method to measure a bank's overall level of realized risk-taking is the risk-weighted asset ratio. This composite measure incorporates credit, liquidity, and interest rate risks (Das & Sy, 2012). Banks with higher ratios tend to invest more money in loans and comparatively less in safe, short-term liquid assets such as U.S. Treasury securities. Higher risk-weighted asset ratios typically reflect riskier asset composition. Banks would also tend to rely more on volatile sources of funds and would be less willing to back their assets with equity capital. The percentages of bank assets invested in loans and in government securities are two important measures of asset risk exposure (Shrieves & Dahl, 1992). Widely considered to be risk-free, U.S.
Treasury securities provide banks with both safety and liquidity, while loans held expose banks to the highest risks. Intuitively, risk-taking decisions leading to higher loan-asset ratios and correspondingly lower security-asset ratios suggest a bank's preference for taking higher risks with its assets. Because negative effects on liquidity can be critical, ensuring adequate liquidity is one of the most important tasks in the management of a bank. Recent research indicates that inadequate liquidity is often one of the most important signals that a bank is in serious financial trouble (Duttweiler, 2011). To be sure, there are trade-offs between ensuring adequate liquidity and seeking high profitability. The more resources a bank devotes to its liquidity needs, the lower its expected profitability. Storing liquidity in the form of short-term assets and relying on borrowed liquidity to meet cash demands are two strategies available to banks to meet short-term liquidity needs. Achieving liquidity by investing in short-term assets is a less risky strategy than relying on borrowed funds; however, lower risks are also less profitable (Adrian & Shin, 2009). Borrowing as a source of liquidity is the riskiest approach to solving a bank's liquidity problems (Darrat et al., 2004); higher yields await, but money market interest rate volatilities can prove problematic. Banks which rely on large, volatile sources of funds such as negotiable certificates of deposit, and other liabilities with short-term maturities, are more likely to have unanticipated deposit outflows (Matz, 2007). It follows that banks whose strategies are to accept lower net liquidity ratios, either because they hold a smaller fraction of liquid assets or because they rely more heavily on volatile sources of funds, are those which also should be inclined to accept higher levels of risk. Another common risk estimation measure examines the ratio of core deposits to total deposits. Core deposits are total deposits less time deposits over $100,000 (Sheehan, 2013). They are not particularly interest rate sensitive and consist of small-denomination accounts from local customers who are unlikely to withdraw on short notice. The risks of withdrawal for large negotiable certificates of deposit and other open market-purchased funds are much greater than for core deposits obtained from local customers (Horcher, 2011). It may be possible for a bank to acquire more assets and earn higher average profits by relying more on volatile funds and less on core deposits (Dam, Escrihuela-Villar, & Sanchez-Pages, 2015). However, purchased funds tend to be more responsive to changes in interest rates and, hence, may provide a less stable source of funds to banks than do demand deposits. For that reason, banks with high ratios of core deposits to total assets, and conversely, low ratios of volatile funds to total assets, are more likely to be risk-averse than banks with low core deposit to total asset ratios. Our study utilizes the ratio of net liquid assets to total assets to determine a bank's ability to meet unanticipated cash demands. Net liquid assets are the difference between short-term liquid assets and highly volatile borrowed funds. Although a bank can strengthen its liquidity position by holding more liquid assets, it will not necessarily be in a strong position if the demands for liquidity made against it are excessive. A final form of concurrent risk stems from decisions to adopt low capital-asset ratios (Peek & Rosengren, 1995).
Lower ratios provide less cushion against any potential loss and create incentives for banks to make loans with higher probabilities of default. The incentive to increase asset and bankruptcy risk as capital-asset ratios decline can explain why banks may very well choose to hold much riskier loan portfolios than they would have with higher capital-asset ratios. Bank owners have less to lose in the event their investments perform poorly if capital ratios fall. At the same time, they also have much to gain if the higher risk loans perform well. Several studies suggest that poorly capitalized institutions actively seek to take additional risk (Nichols, 2011). It follows then that those banks anxious to avoid losses should also be averse to making higher risk loans and investments. Regulatory agencies specify acceptable capital-asset ratios and pressure banks to maintain them, even in the face of external pressures beyond their control. Problems occur when it appears the capital-asset ratio is a result of managerial choice. Banks have certain flexibilities under current risk-based capital guidelines; they can reduce their ratios by pursuing safer investments, and vice versa. Either way, because banks have more freedom in deciding how to meet their capital requirements, their risk-taking activities are of interest. Our risk-taking profile measure takes into consideration the riskiness of a bank's investment activities.

Method
To examine the increasing trend in CRE lending, we focus on two periods: a pre-recessionary period and a post-recessionary period, asking whether the CRE lending risk-taking trends witnessed in pre-recessionary years re-emerge in the post-recessionary period. More specifically, we ask whether significant risk-taking trend differences exist between community and non-community banks during each period. We use both univariate and multivariate procedures to examine the role of deliberate risk-taking as a causal factor in differing levels of CRE lending among banks. In our univariate analysis, we use a series of financial ratios constructed from a bank's balance sheet to determine the extent to which risk-taking tendencies influence the prevailing levels of CRE lending (see Appendix B, Table 1). We select 16 financial ratios as proxies for concurrent risk measures. We test for mean differences in our set of concurrent risk measures between bank groups across two distinct time-periods (see Appendix B, Table 2). We use principal component analysis to avoid multicollinearity problems between the financial ratios. We develop a single principal component that serves as our risk-taking profile measure. By using principal component analysis, we can determine how the various financial ratios correlate with our risk-taking profile measure by examining the eigenvector scores for each ratio (see Appendix B, Table 3). Higher scores represent higher degrees of correlation with a given principal component. For any variable, a positively related score indicates that higher financial ratios are associated with higher levels of risk. Thus, heavier reliance on brokered deposits (positive loading) would indicate higher levels of risk-taking. A negatively related score indicates the opposite. For example, heavier reliance on core deposits (negative loading) would indicate lower levels of risk-taking. Our procedure reduces our 16 financial ratios to four principal components as predictors of CRE lending. The first, (PosUse), is composed of ratios representing uses of funds that are positively associated with our risk-taking profile measure.
The second, (NegUse), is composed of ratios representing uses of funds that are negatively associated with our risk-taking profile measure. The third and fourth measures, (PosSou) and (NegSou), represent variables that load positively and negatively on our risk-taking profile measure, respectively. We estimate the following equation for each set of banks within and across each time-period: Specifically, we test the following hypotheses: Next, we test differences in coefficients across two specific equations for community banks and non-community banks. We estimate two equations, one each for community and non-community banks over each specific interval of time. We then test across both equations to determine differences in the corresponding coefficients (see Appendix A, Figure 3). Specifically, we test the following hypotheses:

Results
High CRE/RBC ratio banks are of particular concern to regulators because they experienced a higher likelihood of failure or forced acquisition. The study notes that during the 2008-2012 period, 35% of those banks with CRE/RBC ratios higher than 400% experienced either failure or forced acquisition. Nationwide during the same period, failure and acquisition rates were 5% and 13%, respectively. In recent years, only a handful of community banks have had ratios exceeding 400%. Another question the Richmond Fed study considers is whether certain characteristics within this group (+400% banks) can help predict its probability of survival or failure (Fessenden & Muething, 2017). It finds that those banks that failed in the recession following the 2007-2008 financial crisis had significantly higher CRE loan growth rates in the years leading up to the recession than was the case for the banks that survived. The study shows that banks holding between $1 billion and $10 billion in assets not only maintained the highest concentrations of CRE loans but also had the fastest growth rates. It also reports that 50% of community banks have mean CRE/RBC ratios below 130% in the post-recession era while only 5% have ratios above 400% (see Appendix A, Figure 4). In stark contrast, the respective ratios for non-community banks are just over 200% and 395%.

Effects of Geographical Region
Given that the financial crisis was more severe in certain areas of the country, we also examine the effect of geographical region on the CRE/RBC ratio during the three periods of our study using each of the 12 Federal Reserve Districts as proxies (see Appendix A, Figures 5-7). With a lone exception, in every period, non-community banks have higher CRE/RBC ratios. In the San Francisco region, community banks have higher ratios than do their larger bank counterparts.

Differences in Individual Ratios
Our univariate analysis aligns with the theoretical basis for using concurrent risk measures as proxies for inherent risk tendencies. Remarkably, results are consistent across all three time periods (see Appendix B, Tables 2-6). Community banks hold significantly more investments in stable, non-risky sources of funds such as demand deposits and core deposits and significantly less in riskier sources of funds such as volatile liabilities, short-term liabilities, and brokered deposits. They also have significantly higher capital ratios as indicated by higher equity ratios and higher ratios of core capital. It is also clear that relative to non-community banks, community banks invest higher portions of their assets in less risky assets and lower portions in riskier assets.
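The coefficient-difference test described above can be illustrated with a pooled regression in which the risk components are interacted with a community-bank indicator; the interaction coefficient then measures how much the sensitivity of CRE lending to risky funding or investment differs between the two bank groups. The data below are synthetic, the statsmodels formula interface is only one possible implementation, and the coefficient values are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400

# Synthetic bank-year observations: CRE share, two risk components, and a group indicator.
df = pd.DataFrame({
    "posuse": rng.normal(size=n),                 # risky uses of funds component
    "possou": rng.normal(size=n),                 # risky sources of funds component
    "community": rng.integers(0, 2, size=n),      # 1 = community bank, 0 = non-community bank
})
df["cre_ratio"] = (0.3 + 0.2 * df["posuse"] + 0.1 * df["possou"]
                   + 0.15 * df["posuse"] * df["community"]
                   + rng.normal(0, 0.1, n))

# Pooled regression with an interaction term: a significant interaction coefficient means the
# response of CRE lending to risky uses of funds differs for community banks.
model = smf.ols("cre_ratio ~ posuse + possou + community + posuse:community", data=df).fit()
print(model.params["posuse:community"], model.pvalues["posuse:community"])
```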
With significantly higher levels of short-term assets and lower levels of short-term liabilities, community banks take on less liquidity risk than do non-community banks. However, as measured by their higher negative interest rate gap (Note 9), community banks take on significantly more interest rate risk than do their non-community bank counterparts. While community bank loan loss provision ratios are much lower than for non-community banks, community banks also have significantly lower rates of noncurrent loans and leases. Whether analyzing sources or uses of funds, community banks appear to take on less risk than do non-community banks. Community banks hold significantly less of their assets in CRE loans than non-community banks in all three periods. They also hold fewer CRE loans in the post-recession period than in the pre-recession period. Regarding community bank post-recession risk-taking activities relative to their performance in the pre-recession era, we find no consistent disturbing trends. In the post-recession era, community banks do hold more brokered deposits, but they also hold higher proportions of safer sources such as demand deposits and core deposits as well. Yet at the same time, community banks also hold significantly higher proportions of risky assets such as commercial and industrial loans but lower proportions of CRE loans (see Appendix B, Tables 5-6).

Principal Component Analysis
We use principal component analysis to provide a robust, objective analysis of our set of concurrent risk measures. This method allows us to determine how the various financial ratios correlate with an overall risk profile. Eigenvector scores indicate how each ratio loads on our risk-taking profile measure (see Appendix B, Table 7). Higher absolute scores, whether positive or negative, represent higher loadings and thus higher degrees of correlation with a given principal component. Some of the financial variables represent sources or uses of funds and load either positively or negatively in our risk profile analysis. A positive loading indicates a higher degree of bank risk-taking. For instance, a positively correlated loan ratio means that banks holding a higher portion of CRE loans are undertaking more risk. The opposite is true for U.S. Treasuries; a negative loading indicates a lower risk orientation. In the same vein, banks that rely more heavily on core deposits (negative loading) and less so on brokered deposits (positive loading) are those with lower risk profiles. As almost all the variables load in the expected direction, these results give us confidence in our various measures of concurrent risk. A dummy variable with the value of 1 represents the 2011-17 period, and the value of 0 represents all prior years. Using stepwise regression procedures, the only significant geographical region is San Francisco, which we then test in isolation and, as we expect, the result is a positive and significant coefficient. Surprisingly, the coefficient for the post-recessionary period is also positive and significant. Relative to the pre- and post-recessionary time periods, the estimated coefficient indicates that, after controlling for all other variables, there were more CRE loans made in the post-recessionary period.

Regression Analysis
Tests for differences in coefficients across both sets of equations reveal significant differences in the risk orientation between the two groups of banks. Community banks tend to make safer investments than do non-community banks.
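A minimal sketch of the principal-component construction discussed above is given below: the concurrent-risk ratios are standardized, a PCA is fitted, and the signs of the loadings on the first component are inspected to see which ratios (e.g., brokered deposits versus core deposits) pull the risk profile up or down. The synthetic data, the scikit-learn implementation, and the use of a single component are illustrative simplifications of the authors' procedure, which used 16 ratios and four components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
ratio_names = ["loans/assets", "treasuries/assets", "brokered_dep/assets",
               "core_dep/assets", "equity/assets", "volatile_liab/assets"]

# Synthetic bank-level financial ratios standing in for the ratios used in the study.
X = rng.normal(size=(200, len(ratio_names)))

# Standardize, then extract the first principal component as a risk-taking profile score.
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=1).fit(Z)
risk_profile = pca.transform(Z).ravel()            # one score per bank

# Eigenvector loadings: positive loadings mark ratios associated with higher risk-taking.
for name, loading in zip(ratio_names, pca.components_[0]):
    print(f"{name:>22s}: {loading:+.2f}")
```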
However, our regression results show that an equal percentage increase in risky investments for both sets of banks will cause a significantly larger percentage increase in CRE loans among community banks. Community banks also rely less on volatile and unstable sources of funds than do non-community banks. Nonetheless, again, results indicate that an equal percentage increase in unstable funding sources will cause a significantly larger percentage increase in CRE loans among community banks. Thus, our single- and multi-variate results give us somewhat of a mixed reading. Community banks hold significantly less risky investments and rely on less risky sources of funds than do non-community banks. However, it seems clear that community banks also need close monitoring. As measured by the risk-weighted asset ratio, community banks hold significantly less risky assets than do non-community banks as a percentage of their assets. At the same time, however, they also invest more heavily in longer-term assets and unstable funding sources.

Discussion
Despite the regulatory concerns expressed in late 2015, the concentrations of CRE loans continued to rise at many banks across the nation in 2016 and 2017. Regulators use the risk-based capital CRE/RBC ratio to assess how much capital a bank has on hand to protect itself against operating losses. A study by the Richmond Federal Reserve Bank looked at banks with especially high concentrations of CRE loans, defined as having a CRE/RBC ratio of more than 400% (Fessenden & Muething, 2017). For community banks, we find that these rates are much lower than is the case with the nation's largest banks. Compared to the pre-recession period, we find that after the recession, CRE lending community banks only increased their mean CRE/RBC ratios by an average of 12 percentage points, from 144% to 166%. For non-community banks, ratios increased over two and a half times more, rising from 170% to 200% over the same period. Given the recent rise in CRE lending and in loan concentrations nationally, a key question becomes how bank risk-taking trends today compare to the years before the financial crisis. The Richmond Fed study found that banks' CRE loan exposures are still not as elevated as they were in 2007-2008.

Figure 3: Testing for differences in coefficients across equations.

Financial Ratios (variable code and definition):
- Total assets (asset): The sum of all assets owned by the institution, including cash, loans, securities, bank premises and other assets. This total does not include off-balance-sheet accounts.
- Total risk weighted assets adjusted (rwajt): Total risk weighted assets are assets adjusted for risk-based capital definitions, which include on-balance-sheet as well as off-balance-sheet items multiplied by specified risk-weights. A conversion factor is used to assign a balance sheet equivalent amount for selected off-balance-sheet accounts.
- Net loans and leases (lnlsnet): Total loans and lease financing receivables minus unearned income and loan loss allowances.
- Commercial real estate, other nonfarm nonresidential (lnrenres): The amount of nonfarm nonresidential real estate loans that are not secured by owner-occupied nonfarm nonresidential properties.
- Commercial and industrial loans (lnci): Commercial and industrial loans. Excludes all loans secured by real estate, loans to individuals, loans to depository institutions and foreign governments, loans to states and political subdivisions, and lease financing receivables.
- Total securities (sc): Total investment securities (excludes securities held in trading accounts).
- U.S. Treasury securities (scust): Total U.S. Treasury securities held-to-maturity at amortized cost and available-for-sale at fair value, not held in trading accounts.
- Asset-backed securities (scabs): The amortized cost of held-to-maturity or available-for-sale asset-backed securities (other than mortgage-backed securities), including asset-backed commercial paper, not held for trading.
- Loan loss allowance (lnatres): Each bank must maintain an allowance (reserve) for loan and lease losses that is adequate to absorb estimated credit losses associated with its loan and lease portfolio (which also includes off-balance-sheet credit instruments).
- Noncurrent loans and leases (nclnls): Assets past due 90 days or more, plus assets placed in nonaccrual status.
- Short-run liquid assets (asset_st): (1) Cash and balances due (chbal), (2) federal funds sold (frepo), (3) other short-term assets (idoa).
- Volatile liabilities: On a consolidated basis, includes: (1) federal funds purchased and securities sold under agreements to repurchase, (2) demand notes issued to the US Treasury and other borrowed money with remaining maturity of 1 year or less, (4) foreign office deposits, (5) trading liabilities less trading liabilities revaluation losses on interest rate, foreign exchange rate, and other commodity and equity contracts.
- Interest sensitive liabilities (int_sen_liab): Interest-bearing deposits (depi).
- Interest rate gap (int_gap): Interest-sensitive assets minus interest-sensitive liabilities.
- Bank equity capital (eqv): Total bank equity capital (includes preferred and common stock, surplus and undivided profits).
- Tier one (core) capital (rbct1j): Tier 1 (core) capital includes common equity plus noncumulative perpetual preferred stock plus minority interests in consolidated subsidiaries, less goodwill and other ineligible intangible assets.
- Total risk weighted assets adjusted (rwajt): Total risk weighted assets are assets adjusted for risk-based capital definitions, which include on-balance-sheet as well as off-balance-sheet items multiplied by specified risk-weights. A conversion factor is used to assign a balance sheet equivalent amount for selected off-balance-sheet accounts.
- Brokered deposits (bro): Brokered deposits represent funds which the reporting bank obtains, directly or indirectly, by or through any deposit broker for deposit into one or more deposit accounts. Fully insured brokered deposits are brokered deposits that are issued in denominations of $100,000.
- Demand deposits (ddt): Total demand deposits included in transaction accounts held in domestic offices.
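Putting a few of these call-report fields together, the sketch below computes some of the concurrent-risk proxies discussed in the paper (loan-asset, Treasury-asset, CRE-asset, net-liquid-asset, core-deposit and equity ratios) for a single bank-year. Most field names follow the variable codes in the list above; `core_dep`, `total_dep` and `volatile_liab` are hypothetical aggregates, and the example values are invented purely for illustration.

```python
def concurrent_risk_ratios(bank):
    """Compute illustrative risk proxies from call-report-style fields (codes as in the table)."""
    assets = bank["asset"]
    return {
        "loan_asset_ratio": bank["lnlsnet"] / assets,          # higher -> riskier asset mix
        "treasury_asset_ratio": bank["scust"] / assets,        # higher -> safer asset mix
        "cre_asset_ratio": bank["lnrenres"] / assets,          # CRE concentration
        "net_liquid_asset_ratio": (bank["asset_st"] - bank["volatile_liab"]) / assets,
        "core_deposit_ratio": bank["core_dep"] / bank["total_dep"],
        "equity_ratio": bank["eqv"] / assets,                  # capital cushion
        "risk_weighted_ratio": bank["rwajt"] / assets,
    }

# Hypothetical bank-year (thousands of dollars), purely for illustration.
example = {
    "asset": 500_000, "lnlsnet": 320_000, "scust": 40_000, "lnrenres": 110_000,
    "asset_st": 60_000, "volatile_liab": 35_000, "core_dep": 300_000,
    "total_dep": 380_000, "eqv": 55_000, "rwajt": 360_000,
}
print(concurrent_risk_ratios(example))
```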
6,445.6
2021-02-22T00:00:00.000
[ "Economics", "Law" ]
Slipping properties of ceramic tiles / Quantification of slip resistance Regarding the research and application of ceramic tiles, precisely defining the interaction and friction between surfaces is of great importance. Measuring the slip resistance of floor coverings is a complex problem; slipperiness is always interpreted relatively. In the absence of a consistent and clear EU standard, it is practical to use several methods in combination. It is necessary to examine the structure of materials in order to get adequate correlation. That is why measuring techniques for surface roughness, an important contributor to slip resistance and cleaning, are fundamental in this research. By comparing the obtained test results, relationships between individual methods of analysis and values may be determined, and based on this information recommendations shall be prepared concerning the selection and application of tiles.

Floor covering
Floor covering is responsible for ensuring the quality, properties and aesthetic appearance of flooring regarding its intended use. Some of the main requirements are sufficient strength, resistance to abrasion, volume stability, sufficient flexibility, walking comfort, slip resistance, cleanability and fire resistance, etc. A wide range of products is available on the market; products with different appearance (color, surface texture, decoration, etc.) and different technical characteristics associated with different expected performance levels. One has to consider all the tile characteristics that are relevant to a specific application. The selection of materials is a fundamental step in the design process of flooring, since it can significantly influence the achievement of a satisfactory compliance with the essential requirements of Construction Products Regulation (EU) No 305/2011 (CPR):
• Mechanical resistance and stability
• Safety in case of fire
• Hygiene, health and the environment
• Safety and accessibility in use
• Protection against noise
• Energy economy and heat retention
• Sustainable use of natural resources
Construction works as a whole and in their separate parts must be fit for their intended use, taking into account in particular the health and safety of persons involved throughout the life cycle of the works. Subject to normal maintenance, construction works must satisfy these basic requirements for construction works for an economically reasonable working life. The construction works must be designed and built in such a way that they do not present unacceptable risks of accidents or damage in service or in operation, such as slipping, falling, collision, burns, electrocution, injury from explosion and burglaries. In particular, construction works must be designed and built taking into consideration accessibility and use for disabled persons [1]. Among the 7 general criteria, my study deals with safety and accessibility in use, especially the slip resistance of ceramic floorings.

Ceramic tiling
Ceramic tiles belong to the third possibility of the declaration of conformity; basically, initial type testing and factory production control are done by the manufacturer. Since the publication of the harmonized standard EN 14411 for ceramic tiles, the number of expert assessments has increased, especially on the issue of slip resistance, which is the most controversial feature of these products. There is no requirement, but according to this standard, tiles intended for use on floors need to be tested.
In Hungary there is no clear instruction, guide or unified testing method for determining the slipperiness of ceramic tiles. EN ISO 10545-17 was never published, but it would have been an aid for testing, because it contained three suitable test methods.

Importance of slip resistance
Slip resistance of ceramic tiling is an existing and important problem. The risk of slipping depends not only on the choice and performance of tiles, but also on human and environmental factors. Information on this property is often incomplete or inaccurate, and the question of slip resistance, along with cleanability, appears in all areas of life. It was apparent that many suppliers/architects did not consider slip resistance to be an important property; however, in some workplaces there is an increased risk of slipping due to work with slippery materials. Accidents are affected greatly by the surface structure of floors and possible contamination: water, oil, grease, soap, dust and sand, etc. It can be stated that the multiple testing of properties resulting from surface patterns, fixing, maintenance and cleaning is essential.

Frictional mechanism, slipperiness
Friction is the force needed to pull a body across a surface. It resists the relative motion of solid surfaces in contact. Coulomb friction is governed by the equation F_f ≤ μ·F_n, where F_f is the friction force, μ is the coefficient of friction and F_n is the normal force pressing the surfaces together. This approximation provides a threshold value for this force. The coefficient of friction depends only on the material, especially on the roughness of the surface (figure 1). It is a value which describes the ratio of the force of friction between two bodies and the force pressing them together.

Testing methods and devices of slip resistance
In order to determine slipperiness, it is necessary that these values can be measured. The purpose of using the appropriate device is to try to imitate walking while nevertheless ensuring repeatability. In the last 50 years, many devices and methods were developed, both for laboratory use and as portable instruments. Picking a favorite test method to assess slip resistance and using a single result to select a product is no longer appropriate [2]. I do research on a composite solution, where multiple testing of slipping properties can provide a more precise definition of usability. It is advised to use more methods in combination, rather than to rely on a single result. For determining results obtained by different procedures, our laboratory applied the following methods:
• Inclined ramp test
• Skid-resistance test (SRT)
• Floor friction test

Inclined ramp test
The classical method for determining the static coefficient of friction is the inclined plane shown in figure 2. The so-called ramp test based on the methods of DIN 51097 and DIN 51130 is widely recognized. These tests involve a subject walking back and forth on a contaminated test panel. The angle of inclination of the panel is gradually increased until the test subject slips. The average angle at which slip occurs is compared to a classification range. This angle is a measure of the coefficient of friction. When testing on an oily surface, the scale runs from R9 to R13. In Germany, the BGR 181 regulation introduces requirements for different uses in this case. When the surface is contaminated with water, the scale runs from A to C. GUV-I 8527 (former GUV 26.17) gives precise instructions for areas where people walk barefoot, such as swimming pools and spas.

Skid-resistance test
The skid-resistance tester (SRT) illustrated in figure 3 was developed to measure the skid resistance of wet road surfaces. It operates by the principle of the Charpy pendulum.
On the swinging arm, the slider is allowed to fall from a certain angle, and it rubs against the surface that is being tested. The measured value is proportional to the potential energy absorbed by the slider. The pendulum tester has the advantage that it can give reliable results in both wet and dry conditions, and its portability means it can be used on site as well as in the laboratory. This explains why this method is used in several cases for stones and concrete products. Figure 3. Skid Resistance Tester.

Assessment of testing methods
Based on the evaluation of test methods, it is very difficult to draw direct comparisons and convert one test result to another. Furthermore, it is important that the acceptance of any method for a specific application should be based on subjective experience. The purpose of slip resistance testing should be to define the minimum result that is likely to be obtained [3]. In my research, the principles of available previous studies, regulations and international standards were used. By adapting their classifications presented in tables 1-4 and taking into account test results of products distributed in Hungary, the system of evaluation has been clarified and will be completed with the combined results of all three methods in the future.

Cleanability
The risk of slipping on flooring is mostly affected by the presence of contamination. Slip resistance of ceramic tiling can change with use. Different materials behave differently against various chemicals; they also differ in their tendency to stain and in their cleanability. The required or specified slip resistance can be maintained by frequent effective cleaning with appropriate detergent and cleaning tools. The question of resistance to chemicals and staining regarding ceramic tiling is necessary, so the surfaces of these tiles need to be examined in order to get adequate correlation.

Surface roughness
Surface roughness, an important contributor to slip resistance and cleaning, is significant due to the fact that production technology defines the surface quality of a flooring material. Roughness is a measure of the texture of a surface; it is quantified by the vertical deviations of a real surface from its ideal form. During measurement, a diamond stylus (figure 5) with a very small (μm-scale) tip radius is moved in contact with a sample for a specified distance and contact force. It scans the unevenness of the surface, and a filtered roughness profile is used for evaluation. The remaining profile is partitioned into adjacent segments, where height is assumed to be positive in the up direction. The value of surface roughness depends on the scale of measurement [4]. There are many different roughness parameters in use for describing the surface, each of them calculated using a formula. The most common parameters are the arithmetical mean deviation (Ra) and the maximum height of the assessed profile (Rz). By comparing the obtained test results, the relationship between the two parameters can be determined. A profilometer-type surface roughness meter, the Surftest SJ-301, can measure small surface variations in vertical stylus displacement as a function of position. The approach is to measure and analyze the surface texture in order to be able to understand how the texture is influenced by its history (manufacture, wear) and how it influences its behavior (adhesion, friction). Surface roughness measurements provide an additional indication of slip resistance potential [5].
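To make the quantities discussed above concrete, the snippet below converts a ramp-test critical angle into a static friction coefficient (using the standard inclined-plane relation μ = tan α) and computes the two roughness parameters mentioned, Ra and Rz, from a measured profile. The profile values are synthetic, and Rz is approximated here simply as the peak-to-valley height of the filtered profile rather than the five-segment definition used in the roughness standards.

```python
import numpy as np

def friction_from_ramp_angle(angle_deg):
    """Static coefficient of friction from the inclined-plane (ramp) critical angle."""
    return np.tan(np.radians(angle_deg))

def roughness_parameters(profile_um):
    """Ra (arithmetical mean deviation) and a simple peak-to-valley Rz, profile in micrometres."""
    z = np.asarray(profile_um, dtype=float)
    z = z - z.mean()                    # reference the profile to its mean line
    ra = np.mean(np.abs(z))
    rz = z.max() - z.min()              # simplified: single peak-to-valley height
    return ra, rz

# Example: a tile on which the test subject first slips at a ramp inclination of 19 degrees.
print(friction_from_ramp_angle(19.0))

# Synthetic roughness profile sampled along the stylus path (position in mm, height in um).
x = np.linspace(0, 4.0, 800)
profile = 2.5 * np.sin(40 * x) + 0.4 * np.random.default_rng(3).normal(size=x.size)
print(roughness_parameters(profile))
```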
Summary
The slip resistance of a ceramic floor in service depends on the characteristics of its surface, and these may change over the lifetime. The tendency to slipperiness and the efficiency of applicable cleaning detergents can be determined accurately by quantification and consideration of roughness. The polished surface of ceramic tiles has a low coefficient of friction and is therefore dangerous in terms of slip resistance. Roughening a surface has a positive effect on slip resistance, but at the same time it prevents the easy removal of dirt. For ceramic tiles to give satisfactory service, they must be selected and installed competently, and receive appropriate initial treatment, protection and maintenance.

Conclusions
There are no official requirements covering the slip resistance of floors in Hungary, but an acceptable level of safety shall be pursued and achieved; therefore, authorities have to specify requirements for different usages. This does not mean that from now on many types of tiles shall be forbidden to be installed. A possible solution would be the development of a useful guideline, like Annex N for abrasion resistance in EN 14411. In my research it is not relevant whether one approach is better than another; rather, with the knowledge of the collective test results obtained using the available equipment, a suitable threshold value shall be determined for a variety of application areas. In Hungary, as a member of the European Union, the CPR for ensuring reliable information on construction products in relation to their performances has already entered into force. The main parts of its substantial Articles shall apply first from 1 July 2013. Where an intended use requires threshold levels in relation to any essential characteristic to be fulfilled by construction products in Member States, those levels should be established in the harmonized technical specifications. As far as I am concerned, determining only whether a tile is safe or not regarding the influence of its surface is not adequate. Placing more emphasis on the quantification of slip resistance would allow customers to make comparisons and help them to select the most appropriate product for their needs. Manufacturers shall take responsibility for the conformity of their product with its declared performance; therefore, providing them with a recommendation can help reduce the risk of slipping.
2,802.8
2013-12-13T00:00:00.000
[ "Materials Science" ]
The economic burden of urinary tract infections in women visiting general practices in France: a cross-sectional survey Background Urinary tract infections (UTIs) are among the most common bacterial infections. Despite this burden, there are few studies of the costs of UTIs. The objective of this study was to determine the costs of UTIs in women over 18 years of age who visit general practitioners in France. Methods The direct and indirect costs of clinical UTIs were estimated from societal, French National Health Insurance and patient perspectives. The study population was derived from a national cross-sectional survey entitled the Drug-Resistant Urinary Tract Infection (Druti). The Druti included every woman over 18 years of age who presented with symptoms of UTI and was conducted in France in 2012 and 2013 to estimate the annual incidence of UTIs due to antibiotic-resistant Enterobacteriaceae in women visiting general practitioners (GPs) for suspected UTIs. Results Of the 538 women included in Druti, 460 were followed over 8 weeks and included in the cost analysis. The mean age of the women was 46 years old. The median cost of care for one episode of a suspected UTI was €38, and the mean cost was €70. The annual societal cost was €58 million, and €29 million of this was reimbursed by the French National Health Insurance system. In 25 % of the cases, the suspected UTIs were associated with negative urine cultures. The societal cost of these suspected UTIs with negative urine cultures was €13.5 million. No significant difference was found between the costs of the UTIs due to antibiotic-resistant E. coli and those due to wild E. coli (p = 0.63). Conclusion In the current context in which care costs are continually increasing, the results of this study suggest that it is possible to decrease the cost of UTIs by reducing the costs of suspected UTIs and unnecessary treatments, as well as limiting the use of non-recommended tests. Electronic supplementary material The online version of this article (doi:10.1186/s12913-016-1620-2) contains supplementary material, which is available to authorized users. Background Urinary tract infections (UTIs) are among the most common bacterial infections [1] and affect nearly half of all women at least once in their lives [2]. Women are more affected than men and exhibit two incidence peaks, i.e., early in the period of sexual activity and in the postmenopausal period [3]. Among those aged 18 years and over, 10.8 % of women reported having at least one UTI within the past 12 months [4]. Escherichia coli (E. coli) is the most common urinary pathogen and is found in 74 % of outpatient UTIs [5]. Antimicrobial resistance is increasing and varies between countries, and this variation is strongly related to antibiotic prescription practices [6][7][8][9]. Initial E. coli UTI episodes are followed in 44 % of cases by recurrence within 12 months [10]. Despite this burden, few studies have examined the costs of UTIs. In 1997, an American study estimated that the burden of UTIs represented 100,000 hospitalizations, 7 million visits and 1 million admissions to emergency services [11]. In 1995, UTI costs were estimated at $1.6 billion in the USA ($659 million in direct costs and $936 million in indirect costs) [4]. The direct cost per patient has been estimated to be between 112 and 172 dollars [12]. In France, these costs are unknown.
The main objective of this study was to calculate the direct and indirect UTI costs (including cystitis and acute pyelonephritis) in women over 18 years of age who visit general practices in France. The secondary objectives were to calculate the costs of suspected UTIs with negative urine cultures and to compare the costs of UTIs due to antibiotic-resistant E. coli with those of UTIs due to wild E. coli. Population The data were collected during the Drug Resistance in Community Urinary Tract Infection (Druti) survey. The Druti was a national cross-sectional survey that was conducted in France between January 2012 and February 2013 by general practitioners (GPs) of the Sentinelles network [13]. The aim of this survey was to estimate the annual incidence of UTIs due to antibiotic-resistant bacteria in women who visited GPs for suspected UTIs [14]. A two-stage sampling design that has been described elsewhere [14] was applied. The eligible patients were females over 18 years old who presented within the previous 7 days with at least one of the following symptoms: dysuria, frequent urination or urgent urination (Additional file 1). The patients who agreed to participate and had not taken antibiotics within the prior 7 days were included. Data available For each patient, a urine sample was collected, and urine cultures were performed on all samples in the same laboratory. The urine samples were analyzed, and the antimicrobial susceptibilities were tested. The bacteriological methods are described elsewhere [15,16]. The GPs were blinded to the urine culture results. When needed, the GPs prescribed another urine culture. The included patients completed an inclusion questionnaire that contained the patients' characteristics (i.e., age, clinical status (chronic diseases and comorbidities) and socio-economic data) (Additional file 2). The women completed a questionnaire within 8 weeks following inclusion in which they specified the daily presence or absence of symptoms within the first 14 days (Additional file 3). The women provided information about their health care usage (e.g., physician visits, diagnostic tests, prescription drugs and hospitalization) and sick leave from the baseline time point to 8 weeks. A research assistant contacted the women by phone at 2 and 8 weeks to collect the data. Costs The direct and indirect costs of clinical UTIs were estimated from the societal perspective, the French National Health Insurance perspective and the patient perspective (prior to private complementary health insurance participation) [17]. French National Health Insurance covers the costs of general and specialized medical visits, prescription drugs, diagnostic tests and hospitalizations. In cases of sickness, the insurance also provides daily allowances for economically active persons who are insured and unable to work. Private health insurance can be utilized to reimburse patients for health-related costs that are not covered by social security. For the most disadvantaged, state-run programs provide universal health coverage. The patient contribution corresponds to the costs that are not covered by the French National Health Insurance and the patient's private health insurance. Only the costs of the initial UTI episode and associated relapses were taken into account. The costs related to reinfection were not included. The definitions of relapse and reinfection were based on those in the literature [2,18,19]. All costs were calculated based on the reported data declared by the women.
The costs are presented in euros. In 2012, the Purchasing Power Parities (i.e., the rates of currency conversion that eliminate the differences in price levels between countries) were $1.1718 and £0.8145 for €1 [20]. Direct costs Direct costs include direct medical costs related to physician visits, diagnostic tests, prescription drugs and hospitalizations [21].
- Physician visits. All physician visits were considered, including GP visits at baseline. In 2012, the average cost for a physician visit for a woman was determined based on the General Sample of Beneficiaries (EGB), which is a permanent representative sample of the population protected by the French National Health Insurance [22]. This cost was estimated according to medical specialty and department of residence and was available for the societal, French National Health Insurance and patient perspectives. The French National Health Insurance paid for 70 % of the costs of physician visits.
- Diagnostic tests. Only tests performed for UTIs were considered. The costs of the urine cultures that were performed for the incidence study were not included in the analysis. The Nomenclature of Medical Biology Acts (NABM) was used to determine the costs of bio-medical analysis, and the Common Classification of Medical Acts (CCAM) was used to determine the costs of medical imaging procedures. When a patient did not provide the exact title of the diagnostic test, the weighted mean of the cost of the same family of investigations (e.g., blood tests or ultrasound) was used. The French National Health Insurance paid for 60 % of the costs of the bio-medical analyses and 70 % of the costs of the medical imaging procedures.
- Prescription drugs. Only treatments related to UTIs that were prescribed by a physician and partially (65, 35 or 15 % according to the drug) or totally paid for by the French National Health Insurance were considered. Over-the-counter drugs dispensed by pharmacists were not taken into account. Two French National Health Insurance databases were used, i.e., the drugs database (which contains baselines for allopathic medicines that are reimbursed by health insurance) and the MEDICAM (which contains detailed information about reimbursed drugs) [23,24]. These databases contained the costs of each box of drug (per molecule and by strength, packaging and laboratory), the number of boxes sold and the amount paid by the French National Health Insurance system in 2012. Pediatric and injectable (except third-generation cephalosporin) drugs were removed before the analysis. The average costs weighted by the number of boxes sold in 2012 per molecule and by strength and packaging were calculated. The prescriptions provided by the physicians at baseline were used to determine the average cost of a prescription per molecule. This cost was then related to the patient-declared drug consumption.
- Hospitalizations. Only admissions related to UTIs were considered. The hospitalization costs were determined based on the Hospital Stay-Related Group (GHS), which is a classification of hospital stays based on the care delivered to patients. A tariff order defined by the government was used to determine the cost of the GHS [25]. The GHS were determined based on the patient's age, disease and medical history [26]. This information was recovered from hospitalization reports that were obtained directly from the hospital after acquiring the patient's consent. The French National Health Insurance reimbursed 80 % of the GHS.
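The drug-cost weighting described above amounts to a sales-weighted mean price per molecule. A toy version is sketched below; the molecule names, prices and sales volumes are invented purely to show the computation.

```python
import pandas as pd

# Hypothetical reimbursed-drug records: one row per box presentation (molecule, strength, packaging).
boxes = pd.DataFrame({
    "molecule":   ["fosfomycin", "fosfomycin", "nitrofurantoin", "nitrofurantoin"],
    "price_eur":  [4.50, 4.90, 3.10, 3.40],           # cost of one box
    "boxes_sold": [120_000, 45_000, 80_000, 30_000],  # sales volume in 2012
})

# Average cost per molecule, weighted by the number of boxes sold.
totals = (boxes.assign(spend=boxes["price_eur"] * boxes["boxes_sold"])
               .groupby("molecule")[["spend", "boxes_sold"]].sum())
weighted_mean_cost = totals["spend"] / totals["boxes_sold"]
print(weighted_mean_cost)
```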
Indirect costs The indirect costs included only morbidity costs (loss of productivity due to absenteeism) [21]. The friction cost method was used to account for the ability of a company to adapt to the absence of an employee [27]. An elasticity of 0.8 was applied. The daily productivity lost (or gross daily pay) for each woman by socio-professional category was obtained by multiplying the gross hourly pay in 2010, based on data from the French National Institute of Statistics and Economic Studies (INSEE) [28], by the number of hours worked per day by a full-time equivalent [29]. Next, the employer's contributions (32.85 % of gross pay) were added [17,30,31]. In 2010, the average gross hourly pays were null for non-economically active persons (i.e., students, unemployed persons and retired persons), €19.42 for manual workers, €21.06 for clerical workers, €29.77 for intermediate occupations and €42.57 for managers. The French National Health Insurance pays patients daily allowances that represent 50 % of the gross daily pay [32,33], only from the fourth day of the sick leave until the end of the sick leave. The daily allowance amounts were also calculated based on the women's gross hourly pay according to socio-professional category [28]. On the first of January 2012, the daily allowances were capped at €42.77. The patient loss of income was taken as the net daily pay for the first 3 days off, and the difference between the net daily pay and the daily allowances for the following days. Economic and statistical analysis The sampling design (stratification, stages and sampling weights) was taken into account in all of the analyses to make inferences about the population and has been described elsewhere [14]. The average costs of clinical UTIs in France were calculated according to expense items (physician visits, diagnostic tests, prescription drugs, hospitalizations and productivity losses). The total costs according to expense items were calculated by multiplying the average costs by the estimated number of visits to general practices for UTIs in 2012. The mean costs of suspected UTIs that were confirmed and unconfirmed based on urine cultures were compared with Student's t-tests, as were the mean costs of UTIs due to wild and antibiotic-resistant E. coli. For the analysis, the E. coli were classified as resistant based on disclosed resistance or intermediate susceptibility to a particular antimicrobial agent; otherwise, the isolate was classified as susceptible. Multi-resistance was defined as acquired resistance to at least three classes of antibiotic [34]. The data management and analyses were performed using R version 2.10.1, in particular the survey package [35,36]. Population and urine cultures During the study, 87 GPs enrolled 1,569 women who visited with symptoms of UTI. Among these women, 538 were included, and 460 were followed for 8 weeks (Fig. 1). The mean age of the GPs was 53 years (ranging from 33 to 65), and 12 % were women. The mean ages of the eligible women and the women who were followed for 8 weeks were 47 and 46 years, respectively. The women who were followed for 8 weeks had more dysuria and pelvic or flank pain than the eligible women (Table 1).
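As a numerical illustration of the indirect-cost rules described above, the sketch below computes the societal productivity loss (friction-cost method with an elasticity of 0.8), the daily allowances paid by the French National Health Insurance (50 % of gross daily pay, capped at €42.77, from the fourth day of sick leave) and the residual patient loss for one sick leave. The 7-hour working day and the net-pay approximation (taken here as gross pay without employer contributions) are assumptions made only for the example, not values stated in the study.

```python
ALLOWANCE_CAP = 42.77       # euro cap on daily allowances (1 January 2012)
EMPLOYER_RATE = 0.3285      # employer contributions added to gross pay
ELASTICITY = 0.8            # friction-cost elasticity

def sick_leave_costs(gross_hourly_pay, hours_per_day, days_off):
    """Split the cost of one sick leave between society, health insurance and the patient."""
    gross_daily = gross_hourly_pay * hours_per_day
    daily_productivity = gross_daily * (1 + EMPLOYER_RATE)

    # Societal indirect cost: friction-cost method, elasticity applied to the lost production.
    societal = days_off * daily_productivity * ELASTICITY

    # Daily allowances start on the 4th day: 50 % of gross daily pay, capped.
    allowance_days = max(0, days_off - 3)
    daily_allowance = min(0.5 * gross_daily, ALLOWANCE_CAP)
    insurance = allowance_days * daily_allowance

    # Patient loss: full (approximated net) pay for up to 3 days, then pay minus the allowance.
    net_daily = gross_daily                      # net-pay conversion omitted in this sketch
    patient = min(days_off, 3) * net_daily + allowance_days * (net_daily - daily_allowance)
    return societal, insurance, patient

# Clerical worker (gross hourly pay of 21.06 euros), 7-hour day, 5 days of sick leave.
print(sick_leave_costs(21.06, 7, 5))
```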
There were more clerical workers and managers and fewer non-economically active persons in our population than in the general population. UTIs costs Physician visits After inclusion, 14 % (95 % CI: 10-20 %) of the women had further visits with a GP, a urologist or a gynecologist. Hospitalization Only one patient in this study was hospitalized, for pyelonephritis. This hospitalization cost was €1240.67 (Table 3). The mean hospitalization costs per patient were €1.13 (95 % CI: €0-3.28) from the societal perspective and €0.86 (95 % CI: €0-2.51) from the French National Health Insurance perspective. Indirect costs Nine percent of the women (95 % CI: 7-13 %) took sick leaves. Among these women, 15 % (95 % CI: 6-32 %) took sick leaves longer than 3 days and received daily allowances from the French National Health Insurance. The mean sick leave duration was 2.39 days. Discussion This initial study conducted in France on the costs of community UTIs in women over 18 years of age estimated a total cost from the societal perspective of €58 million in 2012, €44 million for direct costs and €14 million for indirect costs. Half of this cost was supported by the French National Health Insurance, and half was supported by the patients (before private complementary health insurance participation). Visits represented the largest expense item, followed by sick leave and prescription drugs. Although very expensive, hospitalizations were rare and therefore represented the smallest expense item. The costs for 75 % of the women were below the mean cost. Additional visits with specialist physicians, ultrasounds, hospitalizations and sick leave concerned less than one quarter of the women. The important costs of these additional health care procedures clearly increased the mean UTI cost. An important strength of this study was the use of a sampling design. This allowed us to correct for the bias due to drop-outs and geographical distribution and to generalize, with caution, the cost of UTIs to the population of women over 18 years of age who visit GPs for presumed UTIs. Furthermore, to estimate the possible costs according to expense items as accurately as possible, as much data as possible from the French National Health Insurance and the French National Institute of Statistics and Economic Studies were used. The use of the EGB allowed for the accounting of possible differences in care consumption according to gender and excess fees according to medical specialties and French departments, particularly in terms of the costs of physician visits [37]. The systematic collection of urine samples from all women who visited their GPs for suspected UTIs permitted the distinction between clinical UTIs with positive urine cultures and those with negative urine cultures, and the estimation of their related costs. Furthermore, to estimate the real cost, the women's declarations were preferred to the GPs' declarations in order to account only for prescriptions that were actually utilized or purchased. This study has some limitations that might have resulted in the over- or underestimation of the costs of UTIs. 
First, the cost generalization should be interpreted with caution: women suffering from UTIs without dysuria or frequent or urgent urination were not included, which could decrease the estimated cost of UTIs; the included patients had more symptoms than the eligible patients, which could have influenced the performance of diagnostic tests, especially for lower back pain with suspected pyelonephritis; and the socio-economic status was not available for the eligible women, preventing a comparison of eligible and included patients on this point. However, in our study, there were more workers and fewer non-economically active persons than in the general population. This is consistent with French inequalities in health care recourse: unemployed and retired persons seek less care than economically active persons [38]. Second, the costs calculated in this study were for women who visited GPs and not for the general population. The estimation of the costs of UTIs among the general population would have required estimates of the costs for women who visited hospital emergency departments and specialists (i.e., urologists and gynecologists). Third, the data needed to estimate the non-medical direct costs (i.e., time lost and monetary expenses), intangible costs (i.e., loss of well-being for the patient and her close family and friends) and presenteeism costs (i.e., the loss of productivity due to a UTI while the patient was working) were not available. Fourth, the friction cost method was chosen to determine the indirect costs. The cost results should be interpreted with caution because this method is controversial in cases of short-term disease. The human capital approach overstates the lost production because the sick employee's colleagues can offset the absence via increased productivity [27]. Consequently, the estimates of friction costs should represent the upper bound of the estimates of the short-term indirect costs [27]. However, in cases involving teamwork, the absence of an employee can also reduce the production of several employees [39]. Fifth, although the study was designed to exclude the costs of reinfection, the women who were symptomatic at 2 weeks might have experienced a reinfection between the second and the eighth weeks, and these costs were taken into account because relapses and reinfections could not be distinguished during this period. Sixth, the costs of long-term symptomatic failure (i.e., the persistence of symptoms after 8 weeks) could not be taken into account because the follow-up period stopped after 8 weeks. Seventh, from the patient perspective, the costs could have been overestimated because some companies might have paid their employees during their sick leaves, or underestimated because there were no data from which excess fees for medical imaging procedures could be assessed. Another point from the patient perspective is the cost of over-the-counter drugs, which still could not be accounted for because costs differ between retail outlets; none of these data were available. As a last limitation, the results of the diagnostic tests prescribed by physicians were not collected. According to Foxman [4], 10.8 % of women over 18 years of age experience at least one UTI per year, which represents more than 2.5 million people in France as of 1 January 2012 [40]. In our study, the number of women who visited a GP for a UTI in 2012 was estimated at 832,073 (95 % CI = 623,614-1,040,532). 
However, the rate of care seeking for UTIs in France is unknown. Some women with UTI symptoms might have recovered spontaneously [41], visited other specialists (e.g., gynecologists and urologists) or visited emergency departments. The yearly number of emergency visits for UTIs is estimated at 410,000 (2.3 % of the 18 million emergency visits), about half the number seen in primary care [42,43]. In Italy, the mean annual direct cystitis cost (i.e., physician visits, diagnostic tests and prescription drugs) from the Italian National Health Service (NHS) perspective was estimated at €229 per patient between January 2007 and December 2010 [44]. Each patient had an average of 4.5 episodes per year. The cost for the Italian NHS was higher than the cost for the French National Health Insurance. However, the women [45]. The cost differences between that study and the present French study may have been related to the systematic one-month follow-up (with no distinction between relapse and reinfection) used in the English study and to the higher remuneration of English GPs (approximately 30 % greater than that of French GPs in 2008 [46]). In the United States in 2010, the annual direct and indirect UTI cost was estimated at $2.3 billion [47]. Considering that the American population was approximately five times greater than the French population in 2012, the American cost was six times greater than the French cost. This difference could be explained by the costs of physician visits and diagnostic tests, which are three to ten times more expensive in the United States than in France [48]. Cost-effectiveness studies have found that the most cost-effective treatment is the empirical use of antibiotics that are effective against E. coli [12,49,50]. Furthermore, in the present French study, the mean UTI cost due to wild-type E. coli was not significantly different from the mean UTI cost due to antibiotic-resistant E. coli. This lack of a difference could have resulted from our use of systematic urine analyses. Even without antibiotics, 20 % of women recover from uncomplicated lower UTIs within 3 days and 26 % recover within one week [41,51]. The effect of inadequate antibiotic prescription should be studied in greater detail and might have little influence on the consumption of care by women who visit GPs for UTIs. Approximately one quarter of the suspected UTIs seen in general practice had negative urine cultures. This illustrates the limits of the clinical diagnosis of urinary tract infections. The probability of having a UTI for a woman with urinary symptoms is 48 % [52]; this probability increases if dysuria, urinary frequency or hematuria is present [53]. These suspected UTIs with negative urine cultures had a significant influence on cost, accounting for more than 23 % of the overall UTI cost. The environmental influence of antibiotic therapy for these women cannot be neglected due to the risk of the selection of resistant bacteria [54]. More than half of the diagnostic tests performed for UTIs were prescribed outside of the recommendations, as previously reported by other authors [55]. It is interesting to consider how the prescription of potentially unnecessary and environmentally harmful treatments and the administration of potentially inappropriate diagnostic tests can be prevented. 
Conclusions In the current economic context, in which the costs of care are continually increasing, the present study estimated that the cost of UTIs among women who visited their GPs was €58 million from the societal perspective, and half of this value was reimbursed by the French National Health Insurance. This study provides new perspectives regarding the possibility of reducing the cost of the management of this pathology without reducing the quality of care, particularly via the prescription of diagnostic tests. For women with negative urine cultures, the development of new effective diagnostic tools could reduce antibiotic prescriptions and the costs of these UTIs. Another study should be performed to estimate the total UTI costs in France, including the costs of women who visit hospital emergency departments and specialists (e.g., urologists and gynecologists). Ethics approval and consent to participate The Druti study and its ancillary studies obtained research authorization from the French independent administrative authority protecting privacy and personal data (CNIL) under number 911 485 and from the local human investigation committee of Ile de France V. Written informed consent was obtained from all study participants.
5,464.2
2016-08-09T00:00:00.000
[ "Medicine", "Economics" ]
A comprehensive analysis of the germline and expressed TCR repertoire in White Peking duck Recently, many immune-related genes have been extensively studied in ducks, but relatively little is known about their TCR genes. Here, we determined the germline and expressed repertoire of TCR genes in White Peking duck. The genomic organization of the duck TCRα/δ, TCRγ and unconventional TCRδ2 loci are highly conserved with their counterparts in mammals or chickens. By contrast, the duck TCRβ locus is organized in an unusual pattern, (Vβ)n-Dβ-(Jβ)2-Cβ1-(Jβ)4-Cβ2, which differs from the tandem-aligned clusters in mammals or the translocon organization in some teleosts. Excluding the first exon encoding the immunoglobulin domain, the subsequent exons of the two Cβ show significant diversity in nucleotide sequence and exon structure. Based on the nucleotide sequence identity, 49 Vα, 30 Vδ, 13 Vβ and 15 Vγ unique gene segments are classified into 3 Vα, 5 Vδ, 4 Vβ and 6 Vγ subgroups, respectively. Phylogenetic analyses revealed that most duck V subgroups, excluding Vβ1, Vγ5 and Vγ6, have closely related orthologues in chicken. The coding joints of all cDNA clones demonstrate conserved mechanisms that are used to increase junctional diversity. Collectively, these data provide insight into the evolution of TCRs in vertebrates and improve our understanding of the avian immune system. Scientific RepoRts | 7:41426 | DOI: 10.1038/srep41426 viruses and carries all 16 haemagglutinin and 9 neuraminidase subtypes 16 . Typically, ducks do not show apparent signs of disease upon infection with many strains of highly pathogenic H5N1, making them a Trojan horse for the maintenance of H5N1 in nature 17,18 . Recently, the molecular basis of the natural resistance of ducks to influenza infection has become a hot topic in avian immunology. Numerous innate immune-related genes, such as RIG-I (also called DDX58) 19 , RNF135 20 , and gene families of IFITM, BTNL, and β-defensin 21,22 , as well as the repertoire, expression and function of Ig isotypes 23 , have been extensively studied in ducks, but little is known about the duck TCR genes. In this study, we report the detailed genomic organization and repertoire diversity of all TCR loci in White Peking duck, including three conventional TCR loci (TCRα /δ , TCRβ and TCRγ ) and the recently discovered TCRδ 2 locus, providing a theoretical basis for further understanding of the avian adaptive immune system as well as the evolutionary relationships of TCRs in vertebrates. Results Genomic organization and germline repertoire of duck TCR genes. TCRα/δ locus. Based on the mallard TCRα cDNA sequence (accession number AF323922), we first identified a Cα gene-positive BAC clone, DHS1503D01. According to the end sequence of DHS1503D01, another BAC clone, DHS1008P13, was found to overlap the 5′ portion of DHS1503D01 and contain the Cδ gene. An analysis of the two BAC sequences showed that the δ locus was located within the α locus, resembling the genomic organization of the TCRα /δ locus in other tetrapods (Fig. 1a). The duck Cα and Cδ genes were encoded by three exons that successively encoded the Ig domain, connecting peptide (Cp), and transmembrane-cytoplasmic (Tm-Ct) domain, all of which contained the three conserved cysteines required for intra-and inter-chain disulfide bond formation and the conserved lysine and arginine residues responsible for the interaction with other TCR dimers ( Fig. 2a and b). 
A comparison of the amino acid sequences of duck Cα and Cδ with the corresponding sequences of other vertebrate species revealed maximum identity levels (71.6% for Cα and 66.2% for Cδ ) between the duck and chicken, but less than 35% identity between the duck and other animal species. One and five potential N-glycosylation sites were identified in duck Cα and Cδ , respectively ( Fig. 2a and b). At least 68 functional Jα segments were identified between the Cδ and Cα genes, and at least two Dδ and two functional Jδ segments were found upstream of the single Cδ gene (Supplementary Fig. S1A and B). Within the BAC sequences, we further identified 33 V segments, all of which were located 5′ upstream of the first Dδ segment (Fig. 1a). When the nucleotide identity was compared with the V segments defined in other species, the V segments could be further categorized into one of two distinct groups, 9 Vα at the 5′ end and 24 Vδ located downstream of the Vα group. Of the 33 V segments, four were found to be pseudogenes due to non-sense mutations (Vα 2.4 and Vα 3.1), frameshift (Vα 1.1) or the absence of the exon encoding the leader peptide (Vδ 2.6). Using 5′ RACE, 40 extra Vα and 6 extra Vδ segments were detected in the cDNA clones, indicating that the current TCRα /δ locus is incomplete. Based on the criterion that V segments belonging to the same subgroup should share 75% or greater nucleotide identity 24 , a total of 49 (9 + 40) Vα segments could be grouped into three subgroups (Vα 1 to Vα 3) (Table 1) (Supplementary Fig. S2A), and the total of 30 (24 + 6) Vδ segments were categorized within five subgroups (Vδ 1 to Vδ 5) ( Table 1). The Vδ 2 appeared to be the largest Vδ subgroup, consisting of 20 Vδ segments ( Supplementary Fig. S2B). Members within a Vα or Vδ subgroup exhibited more than 76.4% or 75.1% nucleotide identity. Within a subgroup, each Vα or Vδ segment cloned from 5′ RACE displayed 76.4% to 96.9% or 75.1% to 96.0% nucleotide identity with the remaining Vα or Vδ segments, respectively. The only exception is Vα 3.16, which displayed 97.1% nucleotide identity with the pseudogene Vα 3.1. Since the Vα 3.16 is functional, it was considered as a novel Vα segment. Dot plot analyses indicated that both Vα and Vδ regions had undergone multiple duplications ( Supplementary Fig. S4A). The current incomplete Vα region originated from tandem duplications of a homology unit containing one Vα 2 and one Vα 3 segment ( Supplementary Fig. S4B). The Vδ region contained several ~4 kb repeated units, which were composed of V segments from Vδ 2 and Vδ 3 subgroups ( Supplementary Fig. S4C). We also performed genomic Southern blotting using probes from C and selected V subgroups. The detection of only one band with the Cα probe verified that there was only a single copy of the Cα gene in the duck genome ( Supplementary Fig. S3A). However, one dark and two light bands were detected when the enzyme Pst I and the Cδ probe were used, indicating that another Cδ -like gene might be located outside the TCRα /δ locus ( Supplementary Fig. S3B), resembling the second TCRδ locus identified in chicken and zebra finch, as discussed later. The number and intensity of hybridizing bands substantiated the presence of larger number of Vα 3 and Vδ 2 segments in the genome. However, compared with the number of V segments obtained thus far, more bands were detected using the Vα 1 and Vδ 5 probes, suggesting the presence of additional germline members within the Vα 1 and Vδ 5 subgroups. (Supplementary Fig. S3A and B). TCRβ locus. 
According to the mallard TCRβ cDNA sequence (accession number AY039002), a Cβ gene-positive BAC clone, DHS0801D24, was identified and sequenced. Analysis of the BAC sequence revealed that the duck TCRβ D, J, and C genes were organized in a unique pattern, Dβ -(Jβ ) 2 -Cβ 1-(Jβ ) 4 -Cβ 2 (Fig. 1b), in contrast to the tandem-aligned D-(J) n -C clusters in most mammals or the translocon organization with a greater number of Jβ genes in some teleosts. Both Cβ 1 and Cβ 2 genes consisted of four exons. The first exon of the two Cβ genes, which encoded the Ig domain, was highly conserved with only three amino acid changes. However, the following exons were substantially divergent, with only 33% identity at the amino acid level. Maximal differences in length and nucleotide composition have been observed in exon 2, which was found to encode Cp. Exon 2 of Cβ 1 encoded as many as 14 amino acids, whereas exon 2 of Cβ 2 encoded only six amino acids. In Cβ 2, both Tm and the cytoplasmic Ct domain were encoded by exon 3, and exon 4 contained only the 3′ untranslated region (3′ UTR). By contrast, exon 4 of Cβ 1 encoded eight extra amino acids, forming a longer Ct domain. In addition to the canonical cysteine required for intra-and inter-chain disulfide bond formation, Cβ 1 encoded three extra cysteine residues, one in the Cp domain and the other two in the Tm domain. The Ct domains of both Cβ genes contained a lysine residue that was involved in the interaction with the CD3 complex. Two potential N-glycosylation sites were identified in both Cβ 1 and Cβ 2 (Fig. 2c). Southern blotting analysis further corroborated the presence of two Cβ genes in the duck genome ( Supplementary Fig. S3C). The single Dβ segment had a 13-bp G-rich coding region that could be productively read in all three frames ( Supplementary Fig. S1C). All six Jβ segments were functional and shared less than 60% amino acid sequence homology ( Supplementary Fig. S1C). Upstream of the Dβ gene, we identified ten Vβ segments. Among them, three were pseudogenes due to in-frame stop codon. As in mammals, a single Vβ gene (Vβ 4) with an inverted transcriptional orientation was located 3′ downstream of Cβ 2 (Fig. 1b). The current TCRβ locus is also incomplete because two extra Vβ Fig. S2C). Members within a Vβ subgroup shared more than 91.4% nucleotide identity. The two Vβ segments cloned from 5′ RACE exhibited 91.4% and 95.3% nucleotide identity with the remaining Vβ 3 members, respectively. Dot-plot matrix showed two regions containing tandem duplications, one corresponding to Vβ 3 subgroup and the other comprising of three copies of a homology unit, in which a PRSS2 gene and a Vβ 1 segment are located ( Supplementary Fig. S4D). Southern blotting analysis substantiated the presence of larger number of Vβ 3 segments and smaller number of Vβ 2 segments in the genome ( Supplementary Fig. S3C). TCRγ locus. The BAC clone DHS0702G12 was isolated using primers designed to amplify the first exon of the mallard Cγ cDNA (accession number AF378702). BAC end sequencing demonstrated that this clone likely encompassed most of the duck TCRγ locus. Shotgun sequencing of this BAC clone provided three contigs, contig 14 (5,853 bp), contig 34 (5,335 bp), and contig 53 (183,893 bp), which were located sequentially 5′ to 3′ but did not overlap. The duck TCRγ locus exhibited a translocon organization. A single Cγ gene containing three exons was identified in BAC clone DHS0702G12 and was also detected in the genomic Southern blotting assay ( Fig. 
1c and Supplementary Fig. S3D). Exon 1 encoded the extracellular Ig domain, which contained two conserved cysteine residues that were required for intra-chain disulfide bond formation and three N-glycosylation sites. Exon 2 encoded a short Cp containing the single conserved cysteine that formed the inter-chain disulfide bond with TCR Cδ , and exon 3 encoded the Tm, a positively charged Ct, and the 3′ UTR regions. As expected, pairwise alignments showed that the duck TCR Cγ chain exhibits the highest amino acid identity (67.1%) with the chicken TCR Cγ chain, but low amino acid identity (less than 30%) with those of other vertebrates (Fig. 2d). Thirteen Vγ segments were identified in BAC clone DHS0702G12. Of them, five were pseudogenes due to an in-frame stop codon (Fig. 1c). Two extra Vγ segments, designated as Vγ 1.6 and Vγ 3.4, respectively, were cloned by 5′ RACE PCR, suggesting that there are at least two germline Vγ segments located 5′ upstream of the Vγ 3.3 in the genome. All 15 duck Vγ segments could be divided into six subgroups (Vγ 1 to Vγ 6) based on the same criterion applied for TCRα /δ and TCRβ (Table 1) (Supplementary Fig. S2D). Members within a Vγ subgroup shared more than 79.3% nucleotide identity. The Vγ 1.6 or Vγ 3.4 displayed 81.0% to 93.3% or 78.9% to 89.6% nucleotide identity with the remaining Vγ 1 or Vγ 3 segments, respectively. Dot plot analysis indicated no duplicated units longer than 2 kb in the current Vγ region ( Supplementary Fig. S4E). The results of Southern blotting of the representative Vγ subgroups are shown in Supplementary Fig. S3D. The number and intensity of hybridizing bands substantiated the presence of larger number of Vγ 1 segments and smaller number of Vγ 5 segments in the genome. Between Vγ 1.1 and Cγ , five functional Jγ segments were identified ( Supplementary Fig. S1D). TCRδ2 locus. Based on the previously reported cDNA sequence encoded by the duck TCRδ 2 locus (accession number AF415216) 10 , we obtained the complete genomic sequence of this locus in a BAC clone, DHS0901N17. The duck TCRδ 2 locus spanned approximately 16 kb and had a conserved organization similar to that of chicken, containing a single cluster of VHδ , Dδ , Jδ , and Cδ genes (Fig. 1d). The VHδ gene was flanked by typical 3′ 23-RSS, and the D gene had 5′ 12-RSS and 3′ 23-RSS, which were canonically used in the TCRδ . The Jδ gene had 5′ 12-RSS and a conserved splice site at the 3′ end. The Cδ 2 gene consisted of five exons. Exon 1 encoded the extracellular Ig domain, exon 2 encoded a short Cp, and the last three exons together encoded the Tm and a long Ct containing Vδ 3 6 (6) - 62 amino acids, as well as the 3′ UTR regions (Fig. 2b). To determine whether duck has more than one TCRδ 2 locus in its genome, Southern blotting was performed using one probe from VHδ and one probe from exon 1 of Cδ 2. Because the enzyme sites of Pvu I and Pst I were located in the VHδ sequence, one dark and one light band were detected using the VHδ probe ( Supplementary Fig. S3E). We also found one dark and one light band using the enzyme Pst I and the Cδ probe ( Supplementary Fig. S3E). The single dark band corresponded to the actual Cδ 2 gene, but the single light band seemed to be the conventional Cδ gene, in which exon 1 shares 59.5% nucleotide identity with that of the Cδ 2 gene (Fig. 2b). Phylogenetic analyses of duck Vα, Vδ, Vβ, and Vγ gene segments. As shown in Fig. 
3a, the duck Vα 2 and Vα 3 subgroups were closely related to the chicken (and zebra finch) Vα 1 and Vα 2 subgroups, respectively, and orthologous genes have also been found in mammals. The duck genes from the Vδ 1, Vδ 2, Vδ 3, and Vδ 5 subgroups fell in the same phylogenetic clade with the Vδ 1 subgroup of chicken as well as the Vδ 1 and Vα 3 subgroups of zebra finch, but this clade was distinct and specific for birds. However, the duck Vα 1 and Vδ 4 subgroups did not clearly cluster with any Vα or Vδ genes from other birds or mammals (Fig. 3a). Although duck Vβ genes belonging to subgroup Vβ 2 and Vβ 3 were classified as distinct subgroups, both subgroups fell in the same phylogenetic clade as the chicken Vβ 1 subgroup, and all were derived from a common ancestral gene that was also present in amphibians. The duck Vβ 1 subgroup lacked orthologues in chicken and mammals but demonstrated a clear relationship with amphibian Vβ genes. Conversely, the duck Vβ 4 was closely related to the Vβ genes from chicken Vβ 2 and many mammalian Vβ subgroups but lacked a known orthologue in amphibians, suggesting its emergence after the separation of amphibians (Fig. 3b). In contrast to Vβ genes, all duck Vγ subgroups showed a high specificity to birds, except the Vγ 6 subgroup, which formed a weakly supported group (72% support) with the clade containing all mammalian Vγ genes. The duck Vγ 1 and Vγ 2 subgroups clustered with chicken Vγ 2, and the Vγ 3 and Vγ 4 subgroups clustered with chicken Vγ 3 and Vγ 1, respectively. The Vγ 5 subgroup appeared to have evolved separately in duck or anseriform species because it did not clearly cluster with any Vγ genes from other tetrapods (Fig. 3c). Expression of duck TCR genes in various tissues. The expression pattern of duck TCR genes in different tissues was assessed by quantitative real-time PCR. TCRα , γ , and δ 1 were highly expressed in the thymus and spleen, and relatively weakly in the large intestine, lung, and bursa, but they were barely detectable in other tissues (Fig. 4a,b and d). TCRβ was only expressed at high levels in the thymus; it was expressed at much lower levels in other tissues, including the spleen (Fig. 4c). Unexpectedly, TCRδ 2 was expressed at the highest levels in the lung but relatively weakly in lymphoid tissues, including the spleen, small intestine, thymus, and bursa (Fig. 4e), indicating that the TCRδ 2 may play a crucial role in the tolerance of ducks to avian influenza viruses. Diversity of conventional TCR transcripts in duck. Based on the results of 5′ RACE PCR, a total of 142 TCRα , 76 δ , 42 β 1, 43 β 2, and 102 γ cDNA clones were sequenced, and after removing duplicates, 134 α , 75 δ , 42 β 1, 43 β 2, and 102 γ remaining clones were considered unique. These clones were analysed for the use of V, D, and J gene segments and overall CDR3 diversity. TCRα. Of 134 unique TCRα cDNA clones, 112 clones were deemed potentially functional based on their complete ORFs. In general, members of subgroup Vα 3 (75 clones) appeared to be more frequently utilized than those of subgroups Vα 1 (29 clones) and Vα 2 (30 clones). However, excluding Vα 3.4, none of the germline Vα segments presented in the BAC sequence were found in the 5′ RACE clones (Supplementary Fig. S5A). For the potentially functional clones, the length of CDR3 was 28.4 ± 6.2 bp, encoding 4 to 16 amino acid residues with an average of 9.5 residues ( Supplementary Figs S5A and S6A). TCRδ. Among 75 unique TCRδ cDNA clones, 57 clones had an intact ORF. 
Forty-nine clones utilized 20 Vα segments, of which nine were also used by TCRα . Notably, none of the functional members belonging to subgroup Vα 1 were used in the TCRδ rearrangement, and in contrast to TCRα , all of the germline Vα segments identified in the BAC sequence, excluding the pseudo Vα 3.1, participated in TCRδ rearrangement, indicating that TCRα and TCRδ have different usage preferences for the Vα segments ( Supplementary Fig. S5B). In the remaining 26 clones containing Vδ segments, members of the subgroup Vδ 2 (18 clones) were more frequently used, whereas members of the subgroup Vδ 4 were not observed ( Supplementary Fig. S5B). There appeared to be a Jδ usage preference. The Jδ 1 segment, which has a more conserved heptamer in its RSS, accounted for more than two-thirds (57 clones) of the expressed Jδ repertoire (Supplementary Fig. S5B). Most VJ junctions contained either one (10 clones for Dδ 1 and 27 clones for Dδ 2) or both (31 clones) Dδ segments. Among them, N and P nucleotide additions between different gene segments were common. However, the remaining seven clones demonstrated evidence for N nucleotide additions but no D segment incorporation, indicating extensive trimming of D or direct Vα /δ -Jδ recombination during rearrangement. For the potentially functional clones, the length of CDR3 was 34.2 ± 9.0 bp and encoded 5 to 19 amino acid residues, with an average of 11.5 residues ( Supplementary Figs S5B and S6B). TCRβ. Of the 42 unique TCRβ 1 and 43 β 2 cDNA clones, 74 clones were considered to be potentially functional. Both TCRβ 1 and β 2 showed a similar usage pattern of Vβ segments. A total of 55 clones (24 of β 1 and 31 of β 2) used the V segments from subgroup Vβ 3, which contained the most germline members. Notably, the most frequently used V segment was Vβ 2, the single member of subgroup 2, accounting for more than 20% of the expressed Vβ repertoire in both β 1 (12 clones) and β 2 (9 clones), whereas Vβ 1.1, the only functional segment from subgroup 1, was not observed in the cDNA of either β 1 or β 2 ( Supplementary Fig. S5C). Similar to TCRδ , TCRβ also demonstrated a biased usage of Jβ segments, especially β 1, which utilized Jβ 1.2 more frequently than Jβ 1.1 (37 vs. 6 clones) (Supplementary Fig. S5C). Due to the single Dβ segment, the CDR3 length of TCRβ was 30.9 ± 7.1 bp, encoding 5 to 17 amino acid residues (average of 10.2 residues) (Supplementary Figs S5C and S6C). The features described above for TCRδ junctions were also found in the TCRβ junctions ( Supplementary Fig. S5C). TCRγ. Among 102 unique TCRγ cDNA sequences, 93 clones displayed an intact ORF. All potentially functional Vγ segments identified in the BAC sequence, excluding Vγ 5, were found in the cDNA clones. Members of subgroup 1 (49 clones) and 3 (37 clones) were preferentially used, especially Vγ 1.6 and Vγ 3.4, which were not located on the BAC sequence but each contributed to approximately 20% (20 clones) of the expressed Vγ repertoire ( Supplementary Fig. S5D). For all potentially functional clones, the average length of CDR3 was 24.1 ± 8.4 bp, encoding 2 to 16 amino acid residues with an average of 8 residues ( Supplementary Figs S5D and S6D). Table S1) were used in RT-PCR to investigate the junctional diversity of the duck TCRδ 2 transcripts. A total of 18 TCRδ 2 cDNA clones were sequenced, and after removing the duplicates, the remaining 16 clones were considered unique. 
The junctional diversity of the duck TCRδ 2 repertoire was characterized by clear P nucleotide additions to the 3′ ends of both V and D regions in almost all TCRδ 2 clones. For 13 productive rearranged clones, the average length of CDR3 was 36.6 ± 6.1 bp, encoding 9 to 14 amino acid residues with an average of 11.5 residues ( Supplementary Figs S5E and S6E). Discussion Compared with TCRα /δ and TCRγ gene loci, the germline repertoire of the TCRβ locus has been extensively studied in many vertebrates. Among all mammals studied to date, the genomic organization of the TCRβ locus is highly conserved, with a pool of Vβ genes positioned at the 5′ end and several tandem repeated Dβ -(Jβ ) 4~7 -Cβ clusters followed by a single V gene with an inverted transcriptional orientation located at the 3′ end [25][26][27][28][29][30][31] . Cβ genes within each mammalian species maintain a high degree of sequence similarity in the coding region but present high divergence in the 3′ UTR, indicating that the Cβ genes have undergone concerted evolution by intra-species homogenization using gene conversion 28,30,32,33 . However, the genomic organization of the TCRβ locus and concerted evolution of the Cβ genes that seems to be conserved in mammals are not present in other vertebrate species, especially in teleosts. The TCRβ locus of zebrafish resembles that observed in mammals, but the D gene is absent from the second Dβ -(Jβ ) n -Cβ cluster 34 . The TCRβ locus of channel catfish (Ictalurus punctatus) is arranged in a typical translocon organization containing a single Dβ gene followed by a total of 29 Jβ genes and two tandem Cβ genes 35 . Notably, the sequence similarity of Cβ isotypes within a single teleost species varies considerably. In the Japanese flounder (Paralichthys olivaceus) and Atlantic cod (Gadus morhua), different Cβ isotypes show more Scientific RepoRts | 7:41426 | DOI: 10.1038/srep41426 than 85% amino acid identity 36,37 . Conversely, in both zebrafish and catfish, the sequences of two Cβ isotypes are substantially different, sharing only 36% identity at the amino acid level 34,35 . Such multiple divergent Cβ isotypes have also been observed in bicolour damselfish (Stegastes partitus) 38 , as well as an urodele amphibian Mexican axolotl (Ambystoma mexicanum) 39 . Before this study, chicken was the only other bird for which the sequences of the TCRβ D-J-C region had been reported. The locus contains a single Dβ , 4 Jβ genes and a seemingly single Cβ gene 40 . In this study, we determined the complete sequence of the duck TCRβ locus, which is arranged in an unusual fashion, similar to that of the zebrafish, with a single Dβ gene followed by two tandem-aligned (Jβ ) n − Cβ clusters. The absence of the 2nd Dβ gene in ducks may have occurred as an independent event and happens to form a functional genotype that is similar to that of zebrafish. Another attractive feature of duck TCRβ lies in the sequence conservation of each domain between the two Cβ genes. The Ig domains of the two Cβ are well-conserved, whereas the following Cp, Tm, and Ct domains differ remarkably. This special distribution of Cβ identity has not been reported in any other vertebrates, in which the sequence identity is high (> 80%) or low (< 50%) throughout the whole coding region of the different Cβ genes. 
Furthermore, the coding sequence of the Ct domain is entirely included within exon 3 of Cβ 2 but separated into two exons by intron 3 in Cβ 1, suggesting that the two Cβ genes might be the result of an ancient duplication that occurred long before the speciation of Anas. The birth/death hypothesis has been postulated as an evolutionary mechanism of V genes from both Ig and TCR loci 41 . Recently, a phylogenetic analysis of genomic V-gene repertoires, which were extracted from mammals and reptiles with available WGS sequences, indicated that V genes from Ig and TCR loci might have markedly different evolutionary pathways. The Ig V genes undergo more pronounced birth/death processes, thereby permitting the frequent duplication of specific V subgroups that could directly recognize rapidly changing antigens in the external environment. By contrast, the V genes from the TCRα and TCRβ loci, which consist of multiple subgroups (Table 2) with relatively low duplication permissiveness throughout evolution, appeared to have undergone a co-evolution process with MHC molecules, resulting in natural evolutionary pressures [42][43][44] . As shown in Table 2, the most striking feature of duck Vα and Vβ genes is the presence of fewer subgroups in comparison to mammals. The same feature are also observed in the Vα and Vβ genes of chicken and zebra finch 10,45,46 . According to the co-evolution hypothesis, there might be some evolutionary connections between the diversity of Vα /Vβ subgroups and the number of expressed classical MHC loci. A larger number of expressed MHC genes would result in the positive selection of a more diverse TCR repertoire, but too many expressed MHC class I genes would also reduce the T cell repertoire during negative selection. Currently, the precise numbers of MHC class I and/or MHC class II genes have been ascertained in only a few birds. The chicken MHC-B locus contains two classical MHC class I genes (BF1 and BF2) and two classical MHC class II B genes (BLB1 and BLB2). However, only BF2 and BLB2 are dominantly expressed at the RNA and protein levels 47 . Similarly, among the five MHC class I genes in duck, only UAA is a dominantly expressed classical MHC class I gene; the others are the weakly expressed UDA and unexpressed pseudogenes (UBA, UCA, and UEA) 48 . Furthermore, in the genome sequence of zebra finch, only one functional MHC class I gene has been identified 49 . The above examples suggest that the evolution of fewer Vα /Vβ subgroups is probably due to the dominant expression of a single classical MHC class I gene in these avian species, providing an opportunity for the co-evolution of both MHC and TCR genes with associated roles in presenting and recognizing antigens. As summarized in Table 2, many more functional germline Vδ genes have been identified in "γ δ high" species than "γ δ low" species, indicating that the germline diversity of the Vδ gene is directly proportional to the percentage of peripheral γ δ T cells in mammals and chicken. Furthermore, three important points are relevant to the Vδ genes. First, the subgroup numbers of Vδ genes show no significant differences between the "γ δ high" and "γ δ low" species. Second, an enormous expansion of the germline repertoire of some Vδ subgroups is a striking feature observed in "γ δ high" species. For example, the Vδ 1 subgroup of cattle, sheep, and pig contains at least 52 50 , 40 51 , and 31 52 members, respectively. 
Finally, the single Vδ subgroup of chicken, which contains as many as 36 members 45 , falls into a bird-specific clade without any mammalian counterparts in the phylogenetic analysis. Table 2. Numbers of TCR V segments and subgroups in selected mammals and birds. a The Vα segments include the Vα expressed in either TCRα and/or TCRδ chains. b Numbers preceding the comma are the V segments, and numbers following the comma are the V subgroups. The numbers of functional segments or subgroups are shown in brackets. c The numbers of V segments were deduced based on the numbers of hybridizing bands with probes of specific subgroups using genomic Southern blotting. d The numbers of subgroups were deduced using cDNA sequences. "-" Indicates that no relevant information was available. Scientific RepoRts | 7:41426 | DOI: 10.1038/srep41426 Taken together, these findings suggest that the Vδ genes evolved following birth/death pathways similar to those that gave rise to Ig because antigen recognition by both Ig and the γ δ TCR complex is not MHC-restricted 3 . Although the distribution of T-cell populations in birds except the chicken remains to be determined, the presence of such a large number of Vδ genes as well as the expansion of the Vδ 2 subgroup suggest that the duck probably belongs to the "γ δ high" species. The length distribution of the CDR3 loop has been used as a metric in assessments of the possible range of binding paratopes generated by a given TCR type and has been analysed in human, mouse 53 , Japanese flounder 36 , and nurse shark 54 , albeit the data of the latter two species were reported from a limited sample size (Table 3). In the human, mouse, and nurse shark, the CDR3δ loops display a much broader length distribution than in the other three TCR types because of the presence of multiple D gene segments (for the mouse and human) that can join together, as well as the numerous N nucleotides (for the nurse shark) inserted into the V-D and D-J junctions. Notably, although 0 to 2 putative Dδ segments were shown to incorporate into a single CDR3δ , the duck CDR3δ loop lengths are 5-19 amino acid residues, similar to the ranges for the other TCR types of duck. However, the CDR3γ loops in the human, mouse, Japanese flounder, and nurse shark display a narrow length distribution (1-12, 4-11, 5-10, and 6-12, respectively), whereas the duck CDR3γ loops exhibit a broader distribution with 2-16 amino acid residues, which is far beyond the range exhibited by the listed counterparts. Given that the γ δ TCR can interact with diverse ligands in various ways, it is likely that the broad length distribution of CDR3γ compensates for the narrow length distribution of CDR3δ in ducks. The CDR3α and CDR3β loops of human, mouse, and Japanese flounder have, on average, very similar lengths (9.2 vs. 9.5, 8.5 vs. 8.9, and 11.2 vs. 11.2, respectively). The average lengths of the duck CDR3α and CDR3β loops show a tendency similar to the three species, although the average CDR3α loops appear to be 0.7 amino acid residues shorter than CDR3β (9.5 vs. 10.2). However, the duck CDR3α and CDR3β loops, ranging from 4-16 and 5-17 amino acid residues, display much wider distribution than those of the three species (6-12, 6-12 and 7-15 for CDR3β , as well as 6-12, 4-13, and 7-15 for CDR3β ). This indicates that duck CDR3α and CDR3β may have increased flexibility and are therefore better suited to recognize a larger number of antigenic conformations presented by MHC molecules. 
Methods Animal, DNA and RNA isolation and reverse transcription. A White Peking Duck aged 90 days post-hatching was purchased from Beijing Jinxin Duck Centre. Genomic DNA was extracted from blood cells following a routine phenol-chloroform protocol. Total RNA was isolated from various tissues using an RNeasy Mini Kit (Qiagen, Valencia, CA, USA). Reverse transcription was conducted using M-MLV reverse transcriptase (Invitrogen, Beijing, China) with an oligo(dT) adapter primer NotI-d(T)18 (Supplementary Table S1). Animal care was in accordance with the guidelines of China Agricultural University for animal welfare. All animal experiments in the present study were approved by the Animal Care and Use Committee of China Agricultural University. Bacterial artificial chromosome (BAC) genomic library. The White Peking Duck BAC (bacterial artificial chromosome) genomic library was constructed by Majorbio Co. Ltd., Shanghai, China. The BAC library was divided into two sub-libraries, each of which was prepared using blood cell genomic DNA that had been partially digested with the restriction enzymes Hind III or Bam HI. Each sub-library was composed of 49,152 clones, which were placed into 16 superpools of 8 × 384-well plates. Using pulsed-field gel electrophoresis analysis of 185 clones that were randomly selected from two sub-libraries, the average insert sizes were estimated to be 152 kb. BAC screening and sequencing. Positive BAC clones covering the duck TCRα /δ , β , γ and δ 2 loci were isolated from the BAC library via PCR-based screening with primers (Supplementary Table S1) designed based on the available TCR mRNA constant sequences of mallard from GenBank. For TCRα /δ , the first positive BAC clone was sequenced from both ends, and the end sequences were used to design primers (Supplementary Table S1) for the next round of screening to determine the BAC clone overlap. The positive BAC clones were subjected to shotgun sequencing and assembled using the next-generation sequencing platform by BGI (Beijing, China). Identification of germline V, D, J and C gene segments. To determine the locations of the V gene segments, BAC sequences were screened using the IgBLAST algorithm (http://www.ncbi.nlm.nih.gov/igblast/) by similarity to homologues from human and mouse. V gene segments are named 3′ to 5′ with the subgroup number followed by the gene segment number if there was more than one member in this subgroup. The D and J gene segments were annotated by searching the recombination signal sequences (RSS) using FUZZNUC (http:// Table 3. CDR3 length of TCR chains in selected vertebrates. The mean length is bracketed. The CDR3 length was defined as four amino acids less than the number of amino acid residues between the J region-encoded GXG triplet, where G is glycine and X is any amino acid, and the nearest preceding V region-encoded cysteine. embossgui.sourceforge.net/demo/fuzznuc.html) and the conserved motif FGXG encoded by the J segments manually. The exon-intron organization of the C regions was searched manually by comparing the cDNA sequence encoding the complete C region for each TCR with the duck genomic sequences. Non-TCR genes located in or flanking each TCR locus were identified using GENSCAN (http://genes.mit,edu/GENSCAN.html). 5′ RACE. 
The 5′ RACE System for Rapid Amplification of cDNA Ends (version 2.0, Life Technologies/Gibco BRL, Gaithersburg, MD, USA) was applied to thymus total RNA to obtain the expressed repertoire of each TCR type as well as the novel expressed V segments that were not located on the BAC clones. Specific primers for each constant region of the TCRα /δ , β and γ loci are listed in Supplementary Table S1. The resulting PCR products were cloned into the pMD-19T vector (TaKaRa, Dalian, China) and sequenced. 3′ RACE. The cDNA sequences encoding the complete C region of each TCR, including the immunoglobulin domain, Cp, Tm, Ct and 3′ UTR, were obtained by nested 3′ RACE PCR using thymus cDNA. Specific primers for each TCR gene were derived from the V region sequences. For the first round of PCR, sense primer located closer to the 5′ end of the cDNA (Supplementary Table S1) were paired with the antisense primer RT-P1. For the second round of PCR, a nested primer located 3′ to the original primer (Supplementary Table S1) was paired with antisense primer RT-P2, and a dilution of the first PCR was used as the template. The resultant PCR products were cloned into the pMD-19T vector and sequenced. Southern blotting. Genomic DNA was digested with different restriction enzymes and loaded into a 0.9% agarose gel, electrophoresed for 6 h, and transferred to a positively charged nylon membrane (Roche, Germany) for hybridization. The conserved Cα , Vα 1, Vα 2, Cδ , Vδ 2, Vδ 5, Cβ , Vβ 2, Vβ 3, Cγ , Vγ 1, Vγ 6, VHδ and Cδ 2 sequences from White Peking duck were used as probes. These cDNA fragments were labelled using a PCR DIG Probe Synthesis Kit (Roche, Beijing, China) using the primers listed in Supplementary Sequence analyses. DNA and protein sequence editing, alignments, and comparisons were performed using the DNASTAR Lasergene software suite 55 and Boxshade software (http://www.ch.embnet.org/software/ BOX_form.html). Dot plot analyses of the V regions of TCRα /δ , TCRβ and TCRγ loci were conducted with the dotter program 56 . For a given TCR type, if the V region (corresponding to FR1 through FR3) of a cDNA clone shared less than 97% nucleotide identity with the germline V segments identified in the BAC as well as V regions of every other cDNA clone, the V region was considered a novel V segment 57,58 . The CDR3 of the rearranged TCR V domain was defined as the region between the J region-encoded FGXG motif and the nearest preceding V region-encoded cysteine, according to the IMGT unique numbering system 59 . The length of CDR3 was defined as four amino acids less than the number of amino acid residues between the J region-encoded GXG triplet, where G is glycine and X is any amino acid, and the nearest preceding V region-encoded cysteine as described in ref. 53. Phylogenetic analyses. The nucleotide sequences corresponding to FR1 through FR3 of all V genes were aligned for tree construction using ClustalW. Phylogenetic trees were constructed in MEGA version 5.10 60 using the neighbour-joining method with 1,000 bootstrap replicates. The GenBank accession numbers of all sequences used are listed in Supplementary Table S2.
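The two sequence-identity criteria used above (V segments sharing at least 75 % nucleotide identity are assigned to the same subgroup, and a cDNA-derived V region sharing less than 97 % identity with all known germline and cDNA V regions is called a novel segment) can be sketched as follows. The snippet assumes the FR1-FR3 regions have already been aligned and trimmed to equal length, and the greedy single-linkage grouping is a simplification for illustration rather than the exact procedure used by the authors.

```python
# Sketch of the nucleotide-identity criteria described in the Methods.
# Input sequences are assumed to be pre-aligned FR1-FR3 regions of equal length
# (gaps as '-'); a real analysis would start from a multiple sequence alignment.

def percent_identity(a, b):
    """Percent nucleotide identity over positions where neither sequence has a gap."""
    compared = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    if not compared:
        return 0.0
    matches = sum(x == y for x, y in compared)
    return 100.0 * matches / len(compared)

def assign_subgroups(seqs, threshold=75.0):
    """Greedy single-linkage grouping: a V segment joins the first subgroup in which it
    shares >= threshold % identity with at least one member, otherwise it founds a new one."""
    subgroups = []
    for name, seq in seqs.items():
        for group in subgroups:
            if any(percent_identity(seq, seqs[member]) >= threshold for member in group):
                group.append(name)
                break
        else:
            subgroups.append([name])
    return subgroups

def is_novel(candidate, known_regions, cutoff=97.0):
    """A cDNA-derived V region counts as a novel segment if it shares < cutoff % identity
    with every germline segment and every other cDNA-derived V region."""
    return all(percent_identity(candidate, ref) < cutoff for ref in known_regions.values())
```

With aligned sequences in hand, assign_subgroups(seqs) returns lists of segment names playing the role of the Vα, Vδ, Vβ and Vγ subgroups reported in Table 1, and is_novel can be applied to each 5′ RACE clone against the combined germline and cDNA repertoire.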
8,979.6
2017-01-30T00:00:00.000
[ "Biology" ]
Annual modulation of dark matter signals: Experimental results and new ideas Direct detection experiments searching for the scattering of dark matter particles off nuclei expect an annual modulation in their event rate. In this presentation, I will review the theoretical predictions and the experimental status of the search for annual modulations, with a focus on ongoing and planned experiments using NaI detectors. In particular, I will discuss the interpretation of the DAMA signal and related model-building efforts. Introduction One of the most intriguing predictions for the direct detection of dark matter (DM) in low-background underground experiments is that the event rate should exhibit an annual modulation. This modulation is a result of the rotation of the Earth around the Sun, which leads to a seasonal variation of the velocity of the Earth in the Galactic rest frame at the level of 10% (see Ref. [1] for details). As a result, both the overall flux of DM particles and their incoming velocity are predicted to peak around early June, which in turn shifts the spectrum of nuclear recoils from DM scattering to higher energies [2]. To first approximation, the time-dependence of the differential event rate can be written as dR/dE_R (E_R, t) ≈ S_0(E_R) + A(E_R) cos[2π (t − t_0)/(1 year)], with t_0 around the beginning of June, where S_0(E_R) is the time-averaged rate and the modulation amplitude A(E_R) is predicted to switch sign at very low recoil energies (so-called anti-modulation) [3]. This variation of the DM signal with time can be exploited to remove unknown but time-independent backgrounds and may serve as the ultimate proof for the DM origin of an observed signal. 1 The elephant in the room In fact, this situation is not hypothetical. For many years the DAMA collaboration has been observing an annual modulation in their experimental single-hit data, which exhibits a time dependence compatible with the expectations for DM scattering, and which has by now reached a significance of more than 13σ [8]. Interestingly, the question of whether the energy dependence is also compatible with the expectations for DM scattering turns out to be more subtle. In the energy range 2-6 keVee it is indeed possible to fit the energy dependence of the modulation amplitude with various DM models, including both spin-independent and spin-dependent scattering on either iodine or sodium [9]. However, the DAMA collaboration has recently released new data with a low-energy threshold of only 0.75 keVee, which should make it possible to distinguish between these various possibilities [10]. For example, for spin-independent scattering on iodine, the anti-modulation should become visible at the lowest observable energies unless it is smeared out by the energy resolution. Preliminary results suggest that spin-independent scattering is strongly disfavoured (for the commonly assumed detector resolution and quenching factors), while spin-dependent scattering still gives an acceptable fit to data (see figure 1). Deeper insights can be expected if the threshold can be lowered even further. Unfortunately, there is no official analysis of the DAMA data that would allow for a comparison with other direct detection experiments. Nevertheless, such comparisons have been performed using publicly available data, most recently in Refs. [11][12][13][14][15]. These studies show very clearly that for standard astrophysical assumptions the DAMA signal is incompatible with existing exclusion limits for any type of nuclear scattering. The case of elastic scattering can even be excluded independent of astrophysical assumptions. 
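To make the modulation signature concrete, the following minimal sketch evaluates the first-order form quoted above for a single recoil-energy bin. The numerical values of S_0 and A are purely illustrative and are not taken from any experiment; only the functional form and the early-June phase follow the text.

```python
import math

# Minimal sketch of the first-order annual-modulation form quoted above:
# dR/dE_R(E_R, t) ~ S0(E_R) + A(E_R) * cos(2*pi*(t - t0)/year), with t0 around early June.
# A may be negative at very low recoil energies (the anti-modulation mentioned in the text).

YEAR = 365.25          # days
T0 = 152.5             # day of year around June 2, when the Earth's speed peaks

def rate(S0, A, t_days):
    """Differential event rate at time t (in days) for a single recoil-energy bin."""
    return S0 + A * math.cos(2.0 * math.pi * (t_days - T0) / YEAR)

def modulation_residual(S0, A, t_days):
    """Residual after subtracting the time-averaged rate; any constant background
    (absorbed into S0) drops out, which is why the modulation is such a powerful handle."""
    return rate(S0, A, t_days) - S0

# Illustrative bin: unmodulated rate 1.0 cpd/kg/keV, modulation amplitude 0.02 cpd/kg/keV.
for day in (0, 91, 152.5, 244, 335):
    print(day, round(modulation_residual(1.0, 0.02, day), 4))
```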
This leads to the sobering conclusion that if the DAMA signal is in fact due to the scattering of DM particles, we must be fundamentally wrong about their astrophysical distribution and fundamental interactions. While this may seem unlikely, scientific progress requires that we find ways to independently test DAMA without the need for any such assumptions. In other words, we need to develop new NaI detectors capable of searching for annual modulations over the same energy range as DAMA. Unfortunately, this task is far from easy, as it remains challenging to achieve the necessary crystal purity and remove enough radioactive contaminants to produce detectors with the same ultra-low background rate as DAMA (of the order of 1 cpd/kg/keV in the region of interest). (Footnote 1: It is worth noting that many backgrounds are not time-independent. Indeed, the underground muon flux correlates with atmospheric temperature, such that one may suspect backgrounds from neutron scattering to exhibit a seasonal variation [4]. And even slowly decreasing backgrounds from intrinsic radioactivity may lead to an apparent modulation of the signal if the background subtraction is performed periodically [5,6], as illustrated very recently by the COSINE collaboration [7].) (Caption of figure 1: This plot has been produced using a new version of the public code DDCALC [16][17][18], which will be released in the near future, assuming standard quenching factors (Q_Na = 0.3 and Q_I = 0.09) and the energy resolution from Ref. [19].) Testing DAMA with NaI experiments Nevertheless, two collaborations have recently achieved a level of background, and hence sensitivity, comparable to DAMA. The first is COSINE-100, which is a joint venture of KIMS and DM-Ice and operates at the Yangyang Underground Laboratory in South Korea. The total rate achieved by the experiment is low enough to exclude DAMA for standard assumptions [20], but the more model-independent search for annual modulations is currently compatible both with DAMA and with no modulation [21]. These constraints will however improve considerably with further data-taking, as well as with the planned upgrade COSINE-200, which is anticipated to achieve lower background and threshold. The second NaI experiment to test the DAMA claim is ANAIS-112 at the Canfranc Underground Laboratory in Spain, which has achieved a 1 keV analysis threshold. Although the background level is slightly higher than that of DAMA, its time dependence is well understood and there is no evidence for annual modulations. Based on the current exposure from three years of data taking, ANAIS-112 and DAMA are incompatible at more than 3σ [22]. The significance of the exclusion is expected to increase to over 4σ with additional data taking and improved background rejection. Additional experiments that aim for a particularly high radiopurity and expect to be able to independently test the DAMA claim in the near future are PICOLON [23] and SABRE [24]. The latter is particularly innovative in that it aims for two different sites in the two hemispheres: one in the Gran Sasso National Laboratory in Italy and one in the Stawell Underground Physics Laboratory in Australia. A novel strategy to probe the DAMA modulation has recently been proposed by the COSINUS collaboration [25]. The idea is to operate NaI crystals as low-temperature calorimeters, using the phonon channel to achieve much higher energy resolution and much lower threshold than otherwise possible. 
The simultaneous observation of heat and scintillation light furthermore allows for the discrimination of electron and nuclear recoils and a correspondingly strong background suppression. The potentially background-free environment makes it possible to test DAMA within a single annual cycle, using the fact that the modulation amplitude cannot be larger than the average absolute rate [26]. The required target mass of around 1 kg is expected to be available for data taking in 2023. If these experiments, as widely assumed, yield null results, it will be essential to understand whether there exists any residual model dependence in the comparison with DAMA. In this context, it will be essential to establish whether the quenching factors relating the observed and true recoil energy may vary from detector to detector. If for example it turns out that these quenching factors depend on the growth method of the crystal or the concentration of Tl doping, the energy range probed by different experiments would differ and the comparison would once again become model-dependent. Accurate measurements of the NaI quenching factors, as carried out for example in Ref. [27], are therefore an integral part of every effort to test the DAMA modulation. If, on the other hand, the DAMA signal is confirmed, we need to address awkward questions about our understanding of DM. Indeed, after a considerable effort by the entire community, there is not a single consistent model that would give a good fit to the DAMA modulation while evading all other constraints. A convincing signal in NaI detectors would therefore force us to fundamentally re-think the interactions between dark and visible matter and the distribution of DM in the Milky Way. Inelastic dark matter To illustrate the amount of innovation and creativity in the community trying to find viable models to explain the DAMA anomaly, I would like to review a particularly interesting idea, which has proven ultimately unsuccessful in explaining DAMA but has led to a variety of other activities. The idea is to consider a fermionic DM particle with a Dirac mass term m D and introduce an additional Majorana mass term m M through spontaneous symmetry breaking. 2 For m M ≪ m D this leads to two nearly degenerate mass eigenstates χ and χ * with mass splitting δ. Interestingly, the mass eigenstates turn out to have off-diagonal couplings, i.e. all interactions must involve either the transition χ → χ * or χ * → χ. Because of the energy required to overcome the mass splitting, this model has been named "Inelastic DM" (IDM) [29]. In IDM models, there may be a population of excited states, produced either in the early Universe (if the decay into the ground state is sufficiently slow) or produced via upscattering on cosmic rays, the Sun or the Earth [30]. These excited states can give rise to qualitatively new signals in direct detection experiments if they either de-excite spontaneously in the detector (χ * → χ + γ, called Luminous DM [31]) or if they release the energy δ upon scattering off nuclei or electrons (χ * + X → χ + X , called Exothermic DM [32]). As pointed out in Ref. [33] the resulting event rate may depend on the orientation of the detector relative to the velocity of the Earth through the Milky Way, leading to a characteristic daily modulation of the signal, which offers another way of distinguishing signal from background. Alternatively, one can look for the production and subsequent decays of χ * at colliders. 
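As a kinematic aside to the inelastic DM discussion above, the standard minimum-speed relation shows why a mass splitting δ suppresses scattering at low recoil energies (and why exothermic scattering, with δ < 0, enhances it). This relation is textbook kinematics rather than a formula quoted in the text, and all numerical inputs in the example are illustrative.

```python
import math

# Standard kinematic threshold for inelastic scattering off a nucleus: the minimum DM
# speed able to produce recoil energy E_R when a mass splitting delta must be supplied
# (delta < 0 corresponds to the exothermic case described above). Natural units are
# converted to km/s at the end; the example masses and energies are illustrative only.

C = 299_792.458  # speed of light in km/s

def v_min_km_s(E_R_keV, m_dm_GeV, m_N_GeV, delta_keV):
    """Minimum DM speed (km/s) for nuclear recoil energy E_R with mass splitting delta."""
    E_R = E_R_keV * 1e-6          # GeV
    delta = delta_keV * 1e-6      # GeV
    mu = m_dm_GeV * m_N_GeV / (m_dm_GeV + m_N_GeV)   # DM-nucleus reduced mass (GeV)
    return abs(m_N_GeV * E_R / mu + delta) / math.sqrt(2.0 * m_N_GeV * E_R) * C

# Example: 50 GeV DM scattering on iodine (m_N ~ 118 GeV) at 3 keV recoil energy,
# comparing elastic scattering with a 100 keV splitting.
print(v_min_km_s(3.0, 50.0, 118.0, 0.0))     # elastic
print(v_min_km_s(3.0, 50.0, 118.0, 100.0))   # inelastic, delta = 100 keV
```

For the 100 keV splitting the required speed exceeds typical galactic DM speeds, which is the kinematic origin of the strong rate suppression in inelastic models.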
This process is particularly interesting if the excited state is sufficiently long-lived that its decay gives rise to a displaced vertex. Ref. [34] proposed a strategy to search for such events at Belle II. The specific signature depends on whether DM is produced in isolation or in association with the dark Higgs boson responsible for generating the Majorana mass term [35]. Indeed, such a dark Higgs boson may also be long-lived, such that one can search for two displaced vertices involving each a pair of charged leptons or charged mesons. An official search for this signature by the Belle II collaboration is ongoing, so that exciting results may be expected in the near future. Conclusion Annually modulating event rates are among the most striking predictions for DM direct detection experiments and have triggered large experimental efforts. After more than a decade of data taking, the DAMA annual modulation keeps growing in significance and has now been observed for the first time for energies below 1 keVee. Although independent analyses of the DAMA signal are complicated by the lack of publicly available data, it seems clear that there is no consistent interpretation of all direct detection experiments in terms of any known model of DM-nucleus scattering. It is therefore essential to perform a completely model-independent test of the DAMA signal using independent NaI experiments. Two such experiments, namely COSINE-100 and ANAIS-112 have already published results that are in tension with the DAMA signal and are expected to become even more constraining in the near future. Further insights are expected from the upcoming ultra-pure detectors PICOLON and SABRE, the latter aiming for one detector each on both hemispheres. Finally, the COSINUS collaboration is pursuing the innovative idea to operate NaI crystals as low-temperature calorimeters. Given these developments, the next 2-3 years will be decisive for our understanding of DM annual modulations. Clearly, a confirmation of the DAMA anomaly would be groundbreaking, but its implications for DM research and model-building are completely unclear. Nevertheless, given the creativity that the community has exhibited in the past, we can certainly look forward to new DM models being proposed, which will in turn inspire novel search strategies.
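To make the earlier point about quenching factors concrete, the following is a minimal numerical sketch of how constant quenching factors map a fixed electron-equivalent analysis window onto nuclear recoil energies. It is illustrative only: the quenching factors are the constant values Q_Na = 0.3 and Q_I = 0.09 quoted above (real quenching factors may be energy-dependent), and the example window of 1–6 keVee is simply a representative choice, not a value taken from the text.

```python
# Minimal sketch: converting nuclear recoil energy (keVnr) to electron-equivalent
# energy (keVee) with constant quenching factors, and vice versa, to show how a
# change in Q shifts the recoil-energy window probed by a fixed keVee range.
Q_NA, Q_I = 0.30, 0.09          # constant quenching factors quoted in the text

def to_keVee(e_nr_keV, quenching):
    """Electron-equivalent energy for a given nuclear recoil energy."""
    return quenching * e_nr_keV

def recoil_window(e_ee_low, e_ee_high, quenching):
    """Nuclear recoil energies probed by a [e_ee_low, e_ee_high] keVee window."""
    return e_ee_low / quenching, e_ee_high / quenching

# A representative analysis window of, say, 1-6 keVee corresponds to very
# different recoil energies on sodium and on iodine:
print("Na recoils probed:", recoil_window(1.0, 6.0, Q_NA))   # ~3.3 - 20 keVnr
print("I  recoils probed:", recoil_window(1.0, 6.0, Q_I))    # ~11 - 67 keVnr

# If a crystal had a smaller sodium quenching factor, the same keVee window
# would probe higher recoil energies, changing the comparison with DAMA:
print("Na recoils if Q_Na = 0.2:", recoil_window(1.0, 6.0, 0.2))
```

This is the sense in which detector-dependent quenching factors would reintroduce model dependence into an otherwise model-independent comparison of NaI experiments.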
2,833.6
2023-07-03T00:00:00.000
[ "Physics" ]
Web GIS as a pedagogical tool in tourist geography course: the effect on spatial thinking ability and self-efficacy ABSTRACT While Web GIS has been supposed as a useful tool in improving the spatial thinking abilities of students, most existing empirical studies were seldom undertaken in an educational programme providing rather limited geo-technological training to students. This study focuses on tutorial sessions of a tourism geography course in a Singaporean university, in which students usually engaged with very few GIS and other geospatial technologies. Combining a standardised assessment, that is Spatial Thinking Abilities Test, and semi-structured interviews, the study suggests that the implementation of Web GIS, collaborating with students’ major background and pre-existing GIS experience, efficiently enhanced their performance in spatial thinking abilities test; moreover, the exposure to GIS practices during tutorial sessions also stimulated students’ interest in using GIS and enhanced self-efficacy, which further equipped students with stronger motivation to continue learning geography. The paper contributes to existing debates around (Web) GIS as a pedagogical tool through incorporating qualitative interview materials into the discussion. Introduction Geographical Information Systems (GIS) is an integrated set of data-driven programmes which enable users to easily collect, analyse, archive and retrieve spatial information derived from the real world (Fitzpatrick & Maguire, 2001).While taking geography as its home discipline, GIS shares great similarities with computational sciences, as it relies on a range of computational hardware and software to collect, store and process huge volumes of digital geospatial data (Nyerges, 2009).The computational feature of GIS differentiates it from other sub-disciplines of geography in which digital data and computational settings may not be necessary.Notably, the differences between GIS and other sub-disciplines of geography do not refer to a definite boundary; GIS as a critical analytical tool and a thinking way has been employed in studies on a wide range of geographical topics like gentrification (Zambrano et al., 2021), feminist space (Kwan, 2002) and ageing population (Ho et al., 2021). 
As an important component of the discipline of geography and a crucial analytical tool of geographical research, GIS has two implications in geography education: teaching about GIS and teaching with GIS (Sui, 1995).To put it simply, teaching about GIS aims to provide students with the knowledge and technology of GIS per se (such as the digitalisation, visualisation, management and analysis of geospatial data) as well as relevant training (Harvey & Kotting, 2011;Lukinbeal & Monk, 2015); teaching with GIS means that GIS acts as a toolkit to help students acquire knowledge of various disciplines (A.González et al., 2021;Yin, 2010), that is the primary entry point of this article.Teaching with GIS has undergone wide interrogations in the contexts of both universities and schools in the last two decades.Existing literature has emphasised that GIS could act as an efficient pedagogical tool to help students acquire specific knowledge of human geography, migration studies, public health and other subjects regarding geospatial processes (Baker et al., 2015).In addition to disciplinary knowledge, recent scholarship has noted the effect of GIS on students' spatial thinking capacities, self-efficacy and general achievement in their learning performance (e.g.Bearman et al., 2016;Fargher, 2018). Despite existing inspired discussion regarding GIS as a pedagogical tool, how GIS could contribute to higher geography education needs further exploration.As stated previously, scholars have respectively examined the effect of GIS on either learning disciplinary knowledge or developing thinking abilities.However, there is a lack of the conversations between the two branches of research.Little research has commented on whether and how the improvement of students' spatial thinking abilities, thanks to the adoption of GIS in teaching, could contribute to their general interest in learning geography (Hou et al., 2016).The possibility should be further examined by empirical evidence.Taking a tutorial session of a tourism geography course as the research case, the paper interrogates the role of Web GIS as a pedagogical tool in higher geography education.The choice of Web GIS was based on a considerable body of literature which deliberates the distinctions between desktop GIS and Web GIS in geography education, although the comparison between different formats of GIS will not be one of the research objectives.Specific research questions include: (1) how does the use of Web GIS impact students' spatial thinking ability?(2) how do undergraduates reflect on their use of Web GIS in a class which conventionally involves very few GIS technologies?Both standardised quantitative assessment and qualitative interviews were employed in this study to address the two research questions. GIS and Web GIS With the development of computational and geospatial technologies, desktop applications have not remained the only format of GIS; instead, mobile technology and internet infrastructure have been forcefully integrated into GIS capabilities.In the context of geography education, the use of Web(−based) GIS, or so-called internet(−based) GIS, has been notably increasing in geography classrooms, which is supposed to be an efficient alternative to desktop GIS (Baker, 2005).Web GIS is "a form of GIS that is deployed using an Internet Web browser" (Bodzin et al., 2016, p. 
280).Through specific web browsers, users of Web GIS can access up-to-date maps and other spatial data (Kim et al., 2013).Similar to desktop GIS, Web GIS mostly provides users with interfaces including a wide range of mapping tools to allow users to manipulate digital maps and conduct spatial analysis (Milson & Earle, 2008). Despite similarities, there exist many differences between desktop GIS and web GIS other than the medium of delivery (desktop applications or web browsers).While desktop GIS is described as fully functional, Web GIS usually refers to limited functions in terms of mapping and spatial analysis which weaken the performance of Web GIS in expert geospatial analysis (Songer, 2010).Meanwhile, the barriers for using traditional desktop GIS are also obvious.Lloyd (2001) has contended that the software complexity, the lack of curriculum materials and the shortage of experienced GIS faculty had impeded the wide use of desktop GIS in higher geography education.Limited by the technical development in the early 2000s, Lloyd (2001) merely suggest solutions, like "a combination of new funding sources and careful attention to the design of a new computer classroom", "writing a series of computer mini-applications and accompanying learning activities", and "working with part-time and junior faculty members and graduate teaching assistants" (p.162).Nowadays, thanks to the development of computational and internet sciences, Web GIS seems to be an efficient alternative to desktop GIS in the classroom.Some scholars have attempted to adapt pedagogy to the rapidly developed GIS.For instance, Ricker and Thatcher (2017) suggest CyberGIS which relies on not only internet technology but also big data.Those advanced pedagogy has largely remained ideas and lacks experimental practices.In general, Web GIS "is more intuitive than previous versions of GIS with user-friendly interfaces which do not require a great deal of expert GIS knowledge to use" (Fargher, 2018, p. 2).Moreover, as Web GIS is usually free to be accessed through web browsers, it avoids expensive expenditure on purchasing typical GIS applications and the effort to install desktop GIS applications. The user-friendly, time-saving and effort-saving features of Web GIS make it more suitable for school geography education (Baker, 2005; R. D. M. González & Torres, 2020).Compared to desktop GIS, school instructors and students do not have to experience a heavy cognitive load while employing Web GIS as a toolkit in the classroom and the users do not conduct complicated spatial analysis (Songer, 2010).Existing literature has suggested that both web GIS and desktop GIS can help students' learning in class and work better than a paper atlas or other conventional pedagogy (Bodzin et al., 2015;Milson & Earle, 2008).Given the similar functions of Web GIS and desktop GIS in geography education, the following literature review will take GIS as an umbrella term and does not distinguish specific kinds of GIS. 
The effect of GIS on learning disciplinary knowledge Either desktop GIS or Web GIS has been widely adopted as a pedagogical tool in higher geography education, particularly in courses at an introductory level which do not aim to deliver professional GIS or cartographic knowledge, like world geography, historical geography and health geography.Initially, GIS functioned as an efficient alternative to traditional teaching strategies.Rutherford and Lloyd (2001) make a comparative study between computer-aided instructional strategy, that is desktop GIS, and traditional lecture instruction in a world geography course at the undergraduate level and suggest GIS could significantly improve students' achievement.GIS also helpfully motivates students to actively enquire into problems proposed by instructors and creates a supportive and efficacious atmosphere in the classroom (Songer, 2010;Summerby-Murray, 2001).More importantly, researchers advocate GIS as a useful teaching approach because it benefits students' understanding of obscure concepts and complex spatial processes.In their class of earth and environmental system which transcends natural and social disciplines, Bodzin and Anastasio (2006) comment on the use of GIS "The implementation of Web-based GIS in conjunction with other content materials enables learners to analyse and synthesise large amounts data that would be much more difficult in other formats" (p.295).Park (2021) uses Web GIS to explain to students the spatial pattern of air pollution and health risks.More recently, researchers note the impact of social context on implementing GIS in higher education and advice on how higher education in developing countries better incorporate GIS into teaching (A.González et al., 2021). As for courses which teach about GIS, teaching with GIS becomes compulsory for educators.Compared to courses discussed previously, pedagogical studies regarding professional GIS courses are interested in how to innovate conventional teaching approaches.For example, while Web GIS is usually used in human geography or nongeographical courses, Clark et al. (2007) suggest that Web GIS is also efficient and more popular among students in an introductory GIS course due to the self-paced interactive atmosphere it creates.There are also other formats of innovations like Carlson's (2007) field-based GIS, in which students are asked to bring a portable device installed with GIS software and experience the studied field in person.However, those innovative attempts remain rather less compared to other geographical courses.With the rapid development of computing and Internet technology, researchers also interrogate how the digital age as students' growing-up contexts make a difference in teaching GIS in the classroom (Harvey & Kotting, 2011). 
Beyond higher education, GIS, particularly Web GIS or other kinds of minimal GIS, has been increasingly used as a pedagogical tool in school geographical or nongeographical classes.Similar to its use in universities, GIS in school classrooms also helps to illustrate disciplinary knowledge, abstract concepts and complex spatial processes, as well as creates a self-motivated classroom (Keiper, 1999;Reed & Bodzin, 2016).Due to the different purposes of higher education and school education, school educators intendedly emphasise that GIS helps children to establish common sense and basic knowledge of the whole society rather than merely specific disciplines.Milson and Earle (2008) find that Internet-based GIS stimulates students' cultural awareness and empathy for distant others through bringing remote issues to interfaces just in front of students.R. D. M. González and Torres (2020) argue that the implementation of Web GIS in school geography classes "contribute[s] to spatial citizenship to raise awareness of spatial values, civic engagement, and democratic participation" (pp.82).Pedagogical research in the school environment pays more attention to the interactions between students and teachers and takes the enthusiasm of teachers towards GIS into the discussion.While many teacher participants positively comment on the efficiency of GIS in teaching activities (Keiper, 1999;Kerski, 2003;Kim et al., 2013), teachers expressed varying attitudes towards the adoption of GIS (Bednarz & Schee, 2006;Walshe, 2017), which is quite understandable given the long-standing barriers regarding implementing GIS in school classrooms.In section 2.1, we have demonstrated the factors which limit the use of desktop GIS, such as high expenditures and complex installation.Although Web GIS can avoid most limitations of desktop GIS and seemingly suits school classrooms better, scholars have identified many other frustrations with using Web GIS.For example, school teachers may lack necessary Web GIS training in their trainee period; the use of Web GIS requires the re-design of the whole curriculum which brings extra work burdens; the irregular updates of selected Web GIS applications in the middle of academic terms may interrupt the pre-designed courses and any preparation; there are not satisfactory hardware facilities (e.g.computers and internet connections) in many schools, especially those in developing regions (Sinha et al., 2017;Walshe, 2017). The effect of GIS on students' spatial thinking abilities Spatial thinking abilities are directly relevant to geospatial technologies like GIS and become the primary concerns of geography educators and researchers (Marsh et al., 2007).In its report, Learning to Think Spatially, National Research Council (2006) conceptualises spatial thinking as a constructive amalgam containing knowledge in three aspects, which are the nature of space, the methods of representing spatial information and the processes of spatial reasoning, and the three aspects could reinforce each other.With the belief that spatial thinking ability as an important component of the educational curriculum at all levels could be taught and learnt, National Research Council suggests that "GIS had a clearly demonstrated potential as a support system for spatial thinking." 
(p.221) Inspired by the report, researchers started to explore the relevance between GIS and spatial thinking abilities through conducting various assessments involving experimental and control groups (Kim & Bednarz, 2013;Lee & Bednarz, 2012).The analyses of assessment scores hint towards a strong relationship between GIS learning and the improvement of spatial thinking abilities in both professional GIS courses and other geographical courses at the college level (Jo et al., 2016;Lee & Bednarz, 2009;Madsen & Rump, 2012).Bodzin et al. (2016) unpack the strong relationship in their study on a course of earth materials and earth history that GIS "supported their geospatial analysis for making inferences about space, geospatial patterns, and geospatial relationships among the data that were visualised in the Web GIS." (pp.279) Also, existing research argues that the efficiency of GIS in improving spatial thinking ability is conditional on pedagogical design, educational institutions and technological settings (Manson et al., 2014;Xiang & Liu, 2019).Studies at the level of higher education further suggest that the improvement of spatial thinking abilities potentially benefits students' acquisition of disciplinary contextual knowledge (Giorgis, 2015;Hou et al., 2016).As spatial thinking ability is key to everyday decision-making and applicable in various professional fields, enhanced spatial thinking due to GIS could increase students' employability (Bearman et al., 2015;Şeremet & Chalkley, 2015). At the school level, researchers also suggest that teaching with GIS should not focus too much on technical details but scrutinise the cultivation of students' spatial thinking (Marsh et al., 2007).Different from studies at the university level, which intend to figure out the improved scope of spatial thinking abilities through various standardised assessments, studies regarding school students mostly adopt interviews, surveys and questionnaires designed by researchers themselves to examine whether students have better performance in terms of spatial thinking abilities and how their final learning outcome and overall cognitive abilities have been impacted (Baker & White, 2003;Bodzin et al., 2015;Fargher, 2018;Nielsen et al., 2011).Most studies suggest a positive effect of GIS on the two perspectives, despite the exact GIS formats (like Web GIS, digital atlas and other minimal GIS).Given the positive impacts and limited use of GIS in school education, researchers call for greater incorporation of GIS into school classrooms (R. D. M. González & Torres, 2020). 
In short, while a large literature examines GIS as a pedagogical tool in either schools or universities, this topic still has great potential to stimulate more academic discussions.With a particular focus on the use of GIS in university geographical courses, we suggest that existing literature merely notes the curriculum of single courses while paying limited attention to the overarching design of whole educational programmes.Researchers should interrogate whether the Web GIS as a pedagogical tool would work differently in different contexts of educational programmes, such as between a programme involving abundant spatial technology training and a programme lacking this training.Moreover, although standardised assessments provide valuable quantitative data for the analysis of spatial thinking, the research in schools hints at the value of qualitative methods, which potentially support relevant research in universities (Jo et al., 2016).Qualitative methods might be the potential to further shed light on how the implementation of Web GIS evokes students' subjective feelings towards learning geography and using spatial technological tools.In this paper, we use the term of self-efficacy to refer to students' interest, desire and self-motivation in the future use of Web GIS or other spatial technological tools. Research contexts and participants The study was conducted among students of the Tourism Geography course in a Singaporean university, who were Year 1 or Year 2 undergraduates.According to the course curriculum, students needed to register for one tutorial session, which was designed to practise theoretical issues taught in lectures, and there were three time slots for students to choose according to their schedule.Given that all tutorial sessions would help students achieve compulsory learning goals, tutorial group 1 (W1) and group 3 (W3) were designed to be the experimental groups in this study which employed Web GIS as a pedagogical tool and group 2 (P2) was the control group which still used paper maps as usual.Before students registered for a specific tutorial group, they had been fully informed of the differences between groups, the overall course design and the research information.Thus, students could decide which tutorial group they were registering for and whether they participated in the study.According to the institutional regulations which ask students to free select tutorial sessions and restrict lecturers to interrupt students' selection, we as tutorial instructors did not advise on students' decisions and had rather limited capacities to control the balances between three tutorial sessions in terms of gender, disciplinary background, and GIS experience.Admittedly, self-selection bias might be caused in the following analyses.Through comparing the test scores of spatial thinking between experimental and control groups, the effect of Web GIS on spatial thinking could be examined.Although some researchers intend to test students' spatial thinking ability before class involving Web GIS and take the pre-class scores as the baseline for future analysis (e.g.Kim & Bednarz, 2013), we asked students to take the Spatial Thinking Abilities Test (STAT) only after class.The STAT is a standardised test to assess the spatial thinking ability of respondents, which will be demonstrated in detail in the methodology section.The reason why we did it was to avoid the analytical error caused by repeated participation in the same test.To put it more frankly, as all questions in the STAT adopted 
in this research have standard answers, students likely have better performance when they repeatedly answer the same questions. Finally, 83 students were participating in the study.Table 1 displays the overview of participants which includes the exact number of participants by group, gender, major and GIS experience.In terms of gender, female students (about 69.88%) were more than male students (about 30.12%) in total number of students.From the perspective of major backgrounds, more non-geography students (about 55.42%) than geography students (about 44.58%) participated in this research, despite it being a geographical course.Notably, most students (66 out of 83, about 79.52%) did not have any GIS experience before the tutorial sessions; particularly, the number of students who used GIS before was significantly less than the number of students majoring in geography (17 to 37).The lack of GIS experience was understandable as GIS was not a strong branch in the Department of Geography where the study was conducted, and most students registered for more human geography courses than physical and GIS ones.With such background, the study potentially examines how the use of GIS as a pedagogical tool could make up for the lack of professional GIS training in terms of spatial thinking and self-efficacy. The design of the researched tutorial sessions While the tutorial classroom had access to desktop GIS and professional GIS instructors were based in the department, we still selected Web GIS as the primary pedagogical tool in this research.Web GIS is "ideal for a majority of classrooms that are not interested in the time, commitment and energy required of desktop GIS" (Baker, 2005, p. 46).As can be seen from Table 1, most students who attended the tutorial lacked the necessary GIS experience to conduct analyses on desktop GIS and nearly half of the students were from non-geographical majors which usually did not require geospatial analytics in their everyday studies.In this regard, although the tutorials belonged to a professional geographical course and desktop GIS is expected to provide more precise spatial analyses (Fargher, 2018;Lloyd, 2001), Web GIS appeared to be more suitable for students and the overarching course design.Specifically, an online mapping tool, MyMap, was selected as the primary Web GIS application in this study, which allowed users to create and customise maps by plotting specific places using the Google map interface (Figure 1).Within each tutorial group, students were required to complete the following tasks regarding volunteering tourism in teams of four. Task 1 This task aimed to create a map of students' volunteer activities via either MyMap or paper maps.Students were asked to mark all overseas places where they had participated in volunteer activities in the past.Here we emphasised "overseas" because all students were Singaporeans and travelling within the city-state for volunteer or any other purposes could hardly be supposed as a kind of tourism.For the experimental groups, students were able to add map layers, put pins, draw lines and use any mapping tools on the base map after logging into the interface of MyMap with their Google accounts. Figure 2 shows a map made by one team in the experimental groups.For the control group, each team was given two paper maps at different scales and they could use stickers (to identify the location of volunteer places) and markers (to make lines between places) to make their own maps. 
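Although the study itself used the MyMap graphical interface rather than any code, a short illustrative sketch using the open-source folium library (an assumption for illustration; it is not a tool used in the study) shows the same kind of web-based pin-and-line mapping that Task 1 asked students to perform. The coordinates and labels below are hypothetical examples, not participants' data.

```python
# Illustrative sketch of the kind of web map produced in Task 1, using folium.
# The study used Google's MyMap interface; this is only an analogous example.
import folium

# Base map roughly centred between Singapore and common volunteering destinations
volunteer_map = folium.Map(location=[10.0, 105.0], zoom_start=4)

# Each pin marks an overseas place where a (hypothetical) student volunteered
places = [
    ("Siem Reap, Cambodia", 13.3633, 103.8564),
    ("Chiang Mai, Thailand", 18.7883, 98.9853),
]
for name, lat, lon in places:
    folium.Marker(location=[lat, lon], popup=name).add_to(volunteer_map)

# A line connecting the visited places, similar to drawing lines in MyMap
folium.PolyLine(locations=[[lat, lon] for _, lat, lon in places]).add_to(volunteer_map)

volunteer_map.save("task1_volunteer_map.html")  # open the result in a web browser
```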
Task 2

Each student was asked to answer a hypothetical question about where they would do volunteer work in the future, and to provide at least two reasons based on theories of volunteer tourism that had been presented in lectures and reading materials before the tutorial sessions. Students in the experimental groups marked the places they were interested in on the base map by placing digital pins and provided their reasons via the "add text" function in MyMap. Students in the control group wrote their reasons on stickers, which also marked the places they were interested in on the paper maps.

Task 3

Students were expected to discuss their answers to Task 2 with their teammates and to further edit their team's map. After the team discussion, all teams shared their maps with the whole tutorial group, through projectors (for the experimental groups) or as posters (for the control group).

In the tutorial design, only the MyMap tools for editing layers, points and lines on a map were used, rather than tools with stronger analytical power. There were two reasons for this. First, given the existence of the tutorial group using paper maps (P2), we had to ensure that the tasks assigned to the Web GIS groups (W1 and W3) could also be completed manually. Second, similar to the reason why Web GIS rather than desktop GIS was selected, student participants were unable to utilise many of the analytical tools of MyMap or to conduct complicated geospatial analyses.

Spatial thinking ability test

At the end of each tutorial session, students took the Spatial Thinking Ability Test (STAT) through an online Google form at their own discretion. The STAT used in this study was developed by Lee and Bednarz (2012). As a standardised assessment of spatial thinking, its rigour, reliability and feasibility have been demonstrated by existing work such as Jo et al. (2016). The STAT includes 16 questions requiring either multiple choices or short responses, which help to identify spatial thinking abilities across eight aspects: (1) comprehending direction and orientation; (2) comparing map information to graphic information; (3) choosing an ideal location based on given spatial factors; (4) visualising a slope profile based on a topographic map; (5) correlating spatially distributed phenomena; (6) mentally visualising 3-D images based on 2-D information [question 8]; (7) overlaying and dissolving maps [questions 9 to 12]; and (8) comprehending geographic features represented as points, lines or polygons [questions 13 to 16] (Lee & Bednarz, 2012, p. 18). In this study, the experimental groups and the control group used the same assessment so that final test scores could be compared. Each student was given 15 minutes to complete the test. The question types and the full question list of the STAT are provided in Appendices A and B. The analysis of STAT scores was achieved through a range of statistical processing.
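As a small illustration of this kind of processing, the sketch below scores a 16-item test against an answer key and aggregates sub-scores by spatial-thinking aspect. The answer key, the example responses, and the grouping of questions 1–7 are hypothetical placeholders; only the question ranges for aspects 6–8 follow the groupings quoted from Lee and Bednarz (2012).

```python
# Illustrative sketch of scoring the 16-item STAT and aggregating sub-scores
# by spatial-thinking aspect.  Key, responses and the mapping of questions 1-7
# to aspects are hypothetical placeholders.
answer_key = ["A", "C", "B", "D", "A", "B", "C", "D",
              "A", "B", "C", "D", "A", "B", "C", "D"]   # hypothetical key

aspect_of_question = {**{q: "aspects 1-5 (questions 1-7)" for q in range(1, 8)},
                      8: "aspect 6",
                      **{q: "aspect 7" for q in range(9, 13)},
                      **{q: "aspect 8" for q in range(13, 17)}}

def score_stat(responses):
    """Return the total score and per-aspect sub-scores for one student."""
    total, by_aspect = 0, {}
    for q, (given, correct) in enumerate(zip(responses, answer_key), start=1):
        point = int(given == correct)
        total += point
        aspect = aspect_of_question[q]
        by_aspect[aspect] = by_aspect.get(aspect, 0) + point
    return total, by_aspect

# Example: one (hypothetical) student's responses
total, by_aspect = score_stat(["A", "C", "B", "A", "A", "B", "D", "D",
                               "A", "B", "C", "C", "A", "B", "C", "D"])
print(total, by_aspect)
```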
Semi-structured interviews In this study, we also conduct semi-structured interviews to enhance our understanding of the effect of Web GIS on students' spatial thinking abilities.Although the STAT could shed light on the final impact of Web GIS, the standardised quantitative method hardly reflected students' self-efficacy towards using GIS and their personal feelings regarding the overarching design of tutorial sessions.We advertised the recruitment of interviewees after the STAT and provided available time slots for interviews to help them decide whether to participate in the follow-up research.Based on the interest expressed by students, six students were finally recruited to participate in interviews.As there were significantly more female students than male students in this class (58 to 25), we did not pursue a balanced gender structure while recruiting interviewees.Although the number of non-geography students (46) exceed the number of geography students (37) in the STAT, more geography students were recruited in the following-up interviews as very few non-geography students expressed interest in this.It might be because the research and relevant course were based in the department of geography and suited geography students better in terms of time availability.The overview of the interviewees could be seen in Table 2.All interviewees will be anonymised in the following texts. There were two stages contained in each interview.In the first stage, students were asked their opinions on tutorial experience with Web GIS, their comments on (dis) advantages of Web GIS and the challenges encountered during tutorial sessions.After the first stage, interviewees would receive their scores for the STAT and the interview processed to the second stage, which aimed to bring forth students' personal inference about what were the key factors influencing their spatial thinking and their performance in the STAT.The full question list is provided in Appendix C. Students' spatial thinking ability The STAT scores collected from all three tutorial groups were aggregated and analysed in this section.Although 83 students took the STAT at the end of tutorial sessions, we excluded two uncompleted test forms collected from W1 from the analysis, which possibly influenced the accuracy of the following statistical processing.Thus, all analyses made in this section were based on 81 valid STAT forms. STAT scores across tutorial groups Given the setting of the experimental groups (W1 and W3) and the control group (P2), the first question which should be answered is whether the implementation of Web GIS took any impact on students' spatial thinking ability.Table 3 displays the mean value, SD, median value and other information of the three groups.All three statistical indexes suggest that students in W1 (Mean = 11.67,SD = 2.04, Median = 12) and W3 (Mean = 11.32,SD = 2.06, Median = 11) who used Web GIS for their tutorial tasks had a better performance than students in P2 (Mean = 10.34,SD = 3.11, Median = 10) who used paper maps. 
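To show how such a between-group comparison can be carried out in practice, here is a minimal sketch, not the authors' actual analysis script, that computes the descriptive statistics reported above and adds a simple nonparametric test. The scores below are invented placeholders, and the choice of a Mann–Whitney U test is an illustrative assumption rather than the procedure used in the study.

```python
# Illustrative sketch: descriptive statistics and a nonparametric comparison
# of STAT scores between the Web GIS groups (W1, W3) and the paper-map group (P2).
# The data are placeholders, not the study's actual scores.
import pandas as pd
from scipy import stats

scores = pd.DataFrame({
    "group": ["W1"] * 3 + ["W3"] * 3 + ["P2"] * 3,      # hypothetical mini-sample
    "stat":  [12, 11, 13,   11, 10, 12,   10, 8, 13],    # STAT totals out of 16
})

# Mean, SD and median per tutorial group (the quantities reported in Table 3)
summary = scores.groupby("group")["stat"].agg(["mean", "std", "median"])
print(summary)

# Pool the two experimental groups and compare with the control group
web_gis = scores.loc[scores["group"].isin(["W1", "W3"]), "stat"]
paper   = scores.loc[scores["group"] == "P2", "stat"]
u, p = stats.mannwhitneyu(web_gis, paper, alternative="greater")
print(f"Mann-Whitney U = {u:.1f}, one-sided p = {p:.3f}")
```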
Table 3 also suggests differences between the two experimental groups.Students in W1 received higher scores on STAT than students in W3.Considering the tutorial sessions delivered to W1 and W3 were the same, the difference was likely due to different majors and GIS experience.As can be seen in Table 3, W1 had the highest percentage of students majoring in geography and having GIS experiences beforehand and W3 had the lowest percentage.The long-term learning experience in geography and relevant GIS training had possibly endowed students in W1 with better spatial thinking abilities before the tutorial sessions (Jo et al., 2016).The comparison between P2 and W3 further suggests that the impact of Web GIS implementation on spatial thinking abilities might be stronger than the impact of pre-existing geographical experiences, because P2 had a higher percentage of students in geography major and with GIS experience, but received the lowest scores on STAT.When the two impact factors, the implementation of Web GIS and the pre-existing geographical experiences, functioned together, students' spatial thinking abilities could be further enhanced and the significant gap between W1 and P2 becomes rather understandable. STAT scores across eight aspects of spatial thinking The STAT enabled researchers to unpack spatial thinking abilities across eight different perspectives.With a probe into more details of STAT scores and careful comparison between the experimental groups and the control group, the study further revealed how the implementation of Web GIS influenced students' spatial thinking from different perspectives.Table 4 displays students' performance in eight aspects of STAT questions, which have been demonstrated in section 3.3.It was still W1 that made the best scores out of all three groups.Students in W1 received higher scores in six (Type 1, 2, 5, 6, 7, 8) out of eight types than students in P2 did and in four types (Type 1, 2, 4, 7) than students in W3 did.The outstanding performance could be explained by both the use of Web GIS and the pre-existing learning experience of geography.Students in W3 had a notable performance in Type 5, 6 and 8, which examine students' abilities to correlate spatially distributed phenomena, visualise 3D images based on 2D information and comprehend geographic features.Considering the low percentage of students majoring in Geography and with pre-existing GIS experience, it emphasises that the implementation of Web GIS had particular advantages to improve students' ability to understand complicated spatial patterns and visualise spatial data (Bodzin et al., 2016).However, it is reluctant to assert that students in W3 achieved overwhelming scores over their counterparts in P2, as they scored higher on merely half of the types, namely Type 1, 5, 6, and 8, and received the same scores for questions of Type 2. The comparison between P2 and W3 further indicates the usefulness of Web GIS in improving spatial thinking, while also suggesting that there were perhaps some limitations of Web GIS. 
The limitations of Web GIS were further noted in questions of Type 3 and 4, in which students in P2 received the best scores. These questions examine students' ability to choose an ideal location based on given spatial factors and to visualise a slope profile based on a topographic map, both of which require contextual knowledge from geography and other social disciplines. The mixed disciplinary background of students in P2 likely helped them to solve these questions. In this regard, the cultivation of spatial thinking abilities may benefit from collaborations between geography and other relevant disciplines, which, however, would need to be supported by further empirical evidence in future research.

In general, the experimental groups (W1 and W3) performed better than the control group (P2) in most types. This result resonates with the data analysis in the last section, which indicates that both the use of Web GIS and the pre-existing learning experience of geography could enhance students' performance in the STAT. All three tutorial groups received their own best scores in questions of Type 1 (comprehending direction and orientation) and their lowest scores in questions of Type 6 (mentally visualising 3-D images based on 2-D information), which implies that satisfactory performance in spatial thinking is conditioned by more than the implementation of Web GIS alone.

Students' self-efficacy towards using GIS

The interviews with six students in the experimental groups who used Web GIS for their tutorial session reveal how students reflected on their experiences with Web GIS and how much the implementation of Web GIS altered their attitudes towards GIS and geography learning more generally. All interviewees agreed that using Web GIS platforms such as MyMap had made their tutorial experiences more interesting than conventional classroom teaching. According to two of our interviewees (S and J), incorporating Web GIS into tutorials provided an engaging approach to learning geography, especially since most tutorial sessions are devoted to discussing specific empirical cases or reading materials. The introduction of Web GIS activities can thus bring about novel ways of engaging students during classroom time.

I think it (Web GIS) can be interesting for a lesson because many of the Geography lessons focus a lot on the readings and just regurgitate the content in the readings. So, this (Web GIS) can add colour and fun elements to the tutorial session . . . (S, Geography, Year 1 UG)

(Web GIS gives) visual engagement to better grasp spatial aspects of geography and it helps to give variety to classroom learning besides discussion and such .
..(J,Sociology, Year 3 UG) Interestingly, the very nature of Web GIS being a hands-on learning platform for activities in a classroom setting can, in turn, bring about a higher level of students' involvement in the learning activity.Using Web GIS requires users to be keen on the geospatial technology and gives students opportunities to practice and apply the conceptual issues they had learnt via a more visually, understandable format.As mentioned by E: (Web GIS) gives students a more hands-on approach, even when we are not exactly out in the field.Through the implementation of Web GIS, students obtained the opportunity to train their ability to see spatially defined theories or concepts in concrete, virtual forms.Thus, the introduction of Web GIS into tutorials brought forth the connection between abstract contextual knowledge to real world events.As W (Philosophy, Year 4 UG) shared, "(Using Web GIS is) quite fun as I can see how Geography can be applied in the world rather than just as theories . . .". Therefore, Web GIS as a pedagogical tool can re-enforce the overarching teaching objective in a visual format where students are able to comprehensively understand abstract spatial knowledge with "real world" spatial patterns.MyMap, the main Web GIS tool used in the study, was also identified to be an interesting and user-friendly platform, particularly compared to traditional GIS platforms like ArcGIS.All six interviewees gave positive feedback when they were asked to describe their experience with MyMap through the tutorial, especially among students who have no prior knowledge of GIS.The user-friendly feature of Web GIS became the greatest attraction for student interviewees to use it, as Web GIS enabled them to spend less time and effort learning geospatial technologies while they could complete tutorial tasks on time.This finding recalls arguments in existing research, that Web GIS works effectively to remove multiple barriers of desktop GIS in teaching activities (Songer, 2010).In addition to greatly easing off their learning journey, the interviews further suggest that the experience with Web GIS changed the negative "stereotype" of GIS software in students' minds.Given that very few students in this study had GIS experiences before the tutorial sessions, they usually, while may imprecisely, thought that GIS is commonly associated with having good background knowledge of coding and other complicated computing skills, which was following the findings in existing literature like Lloyd (2001) and Songer (2010).This is especially so for human geographers who tended to view GIS as a tool associated with data driven methodology, coding and physical geography related concepts.Although students' idea about GIS has certain rationality (Bearman et al., 2015(Bearman et al., , 2016)), it inevitably prevents students' attempts from including GIS in their study, at least before they used MyMap in this tutorial session.S directly addressed this point as she was aware of the need to know computational coding when using GIS, and that had made her apprehensive in using the associated platform.Unlike desktop GIS like ArcGIS, Web GIS has simplified the technically and spatially complicated functions which makes it highly appealing to students, especially among those with no GIS background.Based on that, students were more receptive to the usage of such technology in the classroom when they viewed this as a tool with fewer technical requirements as compared to conventional desktop GIS.This, in turn, 
brought about stronger motivation to be engaged with such technology which could potentially increase their self-efficacy toward using GIS and the whole geographical discipline.As some students raised during interviews, students would generally be more engaged to learn about spatial concepts if there were more usages and practices in using Web GIS technology for tutorials and other classroom activities in their university. (I think when) students are interested in spatial technology, they would be more engaged in using such a platform to make the study processes more efficient.(L,Geography, Year 2 UG) (Using My Map) was a simple and fun way to see things spatially and that could be sufficient to intrigue students to learn more about Geography. (J, Geography, Year 2 UG) In short, all interviewed students had positively reflected on their learning experiences with Web GIS and indicated the changing ideas about geospatial technologies and the interest to adopt Web GIS in future learning.The above discussion has shown how Web GIS as a pedagogical tool can result in a higher level of self-efficacy towards using GIS software, which they had used to avoid in their study experience, and how the learning experience with Web GIS can evoke stronger interest in learning geography. Concluding remarks Combining the standardised assessment, Spatial Thinking Abilities Test, and semistructured interviews, the study investigates how Web GIS as a pedagogical tool impacted students' spatial thinking and how students personally reflected on their learning experience with Web GIS in a course of tourism geography in a Singaporean university.As for the impact of Web GIS on spatial thinking, the comparison of STAT scores between the experimental groups and the control group suggested that the implementation of Web GIS effectively enhanced students' spatial thinking.Our analyses also indicated that students' disciplinary background and pre-existing GIS experiences meditated their spatial thinking.The study experience in geography and relevant GIS training helpfully improved students' abilities to solve spatial issues, but the overall improvement of spatial thinking also required collaboration between knowledge of different disciplines. Regarding students' reflection on their experience with Web GIS, the inclusion of Web GIS into classroom teaching which usually involves very few geospatial technologies notably changed students' negative stereotypes of GIS and stimulated their self-efficacy, which further equipped them with stronger motivation to continue learning geography. The paper could strongly make up for existing debates around whether and how (Web) GIS could serve higher education of geography as a pedagogical tool.While the pedagogical study of (Web) GIS mostly fails to incorporate qualitative material into a discussion (similar arguments could be seen in Guan et al. (2019)), the interviews with students enabled us to unpack how students viewed their own learning experiences regarding GIS and spatial thinking.The study further proved the feasibility of the STAT in diverse social contexts and populations suggested by Jo et al. 
(2016) by investigating its implementation in Singapore, a context that has been less researched in scholarship adopting the STAT. Moreover, the university setting also constituted a distinctive research context in this paper. All data collection took place in a university where rather few GIS courses and little relevant technical training have been included in the classroom. With the specific design of the researched tutorial sessions, we would argue that, even if students lack GIS training across the whole programme, short-term exposure to GIS or perhaps other geospatial technologies can significantly improve their performance in spatial thinking.

It must be admitted that there is potential to further polish this research in the future. If the STAT scores and interview data could have been collected from more tutorial sessions, and if more students could have been recruited into this study, the study would have a richer database for more rigorous analysis. Moreover, given that spatial thinking might be subject to various pedagogical, institutional and technological conditions (Manson et al., 2014), a probe into these conditions and their relationship with the integration of GIS should also be critical for students' spatial thinking.

Figure 2. MyMap result of a team in the experimental groups.

Table 1. Summary of participants.

Table 3. Mean, SD, and median scores on STAT by tutorial groups.

Table 4. STAT scores by the eight types of spatial thinking abilities.

It's quite fun to mark out and see the world from MyMap. I think it (MyMap) gives a very quick and easy reference because it is not easy to zoom into details especially when you print out maps. (MyMap) gives a very big overview [in terms of understanding the world] . . .

I think based on what we know . . . GIS is a tough skill to do because there is coding involved in it eg: Python . . . so for most Arts major, that is seen as kind of a difficult skill to learn. Web-based GIS is a bit easier (for us to use) because it removes the coding component which by itself is quite difficult. We are still Arts students and in general, it can be quite difficult for us to see things spatially, using the GIS way of thinking, [let alone using codes to create maps]. (MyMap) is [thus] easier relative to the coding side of GIS. (S, Geography, Year 1 UG)
9,454.2
2023-04-05T00:00:00.000
[ "Geography", "Education", "Computer Science" ]
Graphene Oxide as a Nanocarrier for a Theranostics Delivery System of Protocatechuic Acid and Gadolinium/Gold Nanoparticles We have synthesized a graphene oxide (GO)-based theranostic nanodelivery system (GOTS) for magnetic resonance imaging (MRI) using naturally occurring protocatechuic acid (PA) as an anticancer agent and gadolinium (III) nitrate hexahydrate (Gd) as the starting material for a contrast agent,. Gold nanoparticles (AuNPs) were subsequently used as second diagnostic agent. The GO nanosheets were first prepared from graphite via the improved Hummer’s protocol. The conjugation of the GO and the PA was done via hydrogen bonding and π–π stacking interactions, followed by surface adsorption of the AuNPs through electrostatic interactions. GAGPA is the name given to the nanocomposite obtained from Gd and PA conjugation. However, after coating with AuNPs, the name was modified to GAGPAu. The physicochemical properties of the GAGPA and GAGPAu nanohybrids were studied using various characterization techniques. The results from the analyses confirmed the formation of the GOTS. The powder X-ray diffraction (PXRD) results showed the diffractive patterns for pure GO nanolayers, which changed after subsequent conjugation of the Gd and PA. The AuNPs patterns were also recorded after surface adsorption. Cytotoxicity and magnetic resonance imaging (MRI) contrast tests were also carried out on the developed GOTS. The GAGPAu was significantly cytotoxic to the human liver hepatocellular carcinoma cell line (HepG2) but nontoxic to the standard fibroblast cell line (3T3). The GAGPAu also appeared to possess higher T1 contrast compared to the pure Gd and water reference. The GOTS has good prospects of serving as future theranostic platform for cancer chemotherapy and diagnosis. Introduction The discovery of graphene and graphene derivatives in the field of nanoscience and nanotechnology has attracted a great deal of research attention, this is because of their wide range of exceptional properties, including electrical, mechanical and thermal properties, to mention a few [1]. the drug as the anticancer drug in this work and as a replacement for the established toxic anticancer agents, such as doxorubicin (DOX) [12,13], which has been reported in various articles. In this work, GO nanosheets were used to conjugate the natural compound (PA) simultaneously with the Gd contrast agent (GAGPA). Subsequently, the GAGPA nanohyrid was used to adsorb AuNPs as the second MRI contrast agent (GAGPAu). GAGPAu is also referred to graphene oxide (GO)-based theranostic nanodelivery system (GOTS). Although most articles use AuNPs for CT contrast improvement, it was used as supporting MRI contrast in this work. Results and Discussion This section highlights the results from the characterization of the pure phases and developed nanocomposites. Studies have shown that aromatic molecules mostly chemotherapeutics can be doped on the surface of GO nanosheets using their sp 2 -carbon sites as base for π-π stacking interactions or the COOH groups for hydrogen bonding with the guest molecules. The aforementioned interactions have been confirmed and meticulously discussed according to the methods of characterization. The GOTS is developed based on the concept of theranostic delivery system (TDS) with therapeutic and diagnostic agents both loaded on the GO layers. 
Figure 1a illustrates the theranostic arrangement of the GO nanosheets conjugated with protocatechuic acid and gadolinium, with AuNPs adsorbed on the surface in a typical TDS setting.

Protocatechuic Acid Release Pattern from GAGPA Nanocomposite

Profiles of the drug release from the GAGPA nanocomposite were obtained in PBS media at both pH 7.4 and pH 4.8. As shown in Figure 1b, the drug release started at around 5 min and stopped after about 3000 min, with over 65% of the drug released at pH 4.8 and 40% at pH 7.4. This is understandably due to the gradual dissolution and detachment of the anticancer agent from the GO nanocarrier, to which it is bound through hydrogen bonding and π–π stacking involving the hydroxyl groups and the sp² carbon atoms of the GO nanosheets [12,17]. Given the aromatic structure of the drug, hydrogen bonding between the carboxylic and epoxide groups of GO and the hydroxyl groups of protocatechuic acid [18] is favored in this case. The higher release in the acidic medium than in the alkaline medium could also be due to the increased hydrogen-bonding propensity under acidic conditions, which results in greater competition among the hydrogen-bearing groups. This process weakens the hydrogen bonding between the carboxylic groups of the GO nanolayers and the hydroxyl groups of the protocatechuic acid [5], hence the variation in the release profiles. The release in the alkaline medium could also be influenced by an ion-exchange reaction [19]. The simulated release mechanism implies a higher release in actual cancer cells than in the bloodstream or non-cancer cells, since the host environment of cancer cells is acidic, whereas that of blood and healthy tissues is not. This has been established since 1960 by Ehrlich, where tumor targeting is said to be based on the acidity of the pathological sites [20]. The release profiles indicate the successful adsorption of the anticancer agent on the GO nanosheets, which is in conformity with the anticancer evaluation of the nanocomposite as discussed in the cytotoxicity studies of this work.
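A minimal numerical sketch of how such release data can be confronted with the pseudo-second-order model discussed below is given here. The time points and release percentages are invented placeholders that only loosely mimic the reported pH 4.8 behaviour (release saturating near 65%), not the study's measured data.

```python
# Illustrative sketch: fitting hypothetical PA-release data with the
# pseudo-second-order model  t/q_t = 1/(k*q_e**2) + t/q_e  via its linear form.
import numpy as np

t = np.array([5, 30, 120, 480, 1500, 3000], dtype=float)      # time / min
release = np.array([8, 22, 40, 55, 63, 65], dtype=float)       # % PA released (hypothetical)

# Linearised pseudo-second order: plot t/q_t against t; slope = 1/q_e,
# intercept = 1/(k*q_e**2)
y = t / release
slope, intercept = np.polyfit(t, y, 1)
q_e = 1.0 / slope                       # equilibrium release (%)
k = 1.0 / (intercept * q_e ** 2)        # rate constant

# Goodness of fit (R^2) of the linearised model
residuals = y - (slope * t + intercept)
r_squared = 1 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

print(f"q_e = {q_e:.1f} %, k = {k:.2e}, R^2 = {r_squared:.3f}")
```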
It is worth mentioning that the percentage drug release profiles of our GAGPA nanocomposite are much higher than the reported drug release profiles of the GO-conjugated/DOX nanocomposite [21].

Protocatechuic Acid Release Kinetics from GAGPA Nanocomposite

The kinetic release of protocatechuic acid from the GAGPA nanocomposite was studied for a clear understanding of the drug release behavior. Three kinetic models were used to study the drug release:

Pseudo-first order: ln(q_e − q_t) = ln q_e − kt

Pseudo-second order: t/q_t = 1/(k q_e²) + t/q_e

Parabolic diffusion: (1 − M_t/M_0)/t = k t^(−0.5) + b

In the above equations, q_e and q_t represent the amounts of PA released at equilibrium (e) and at time (t), while M_t and M_0 represent the amount of the PA in the nanocarrier at release time t and 0, respectively, and k is the rate constant [22]. Although the drug release data were analyzed with the three models, only the best-fit plot is presented in this paper (Figure 1c,d). However, the correlation coefficients (R²) deduced from all three model plots are presented in Table 1. The pseudo-second order kinetic model was observed to be the best fit for the PA release data from the GAGPA nanohybrid, with correlation coefficients (R²) of 0.985 for pH 7.4 and 0.992 for pH 4.8. As reported in Table 1, the R² of the first-order model is 0.863 (pH 7.4) and 0.563 (pH 4.8), while that of parabolic diffusion is 0.936 (pH 7.4) and 0.932 (pH 4.8). Further, the rate constant (k) derived from the pseudo-second order model is 4.9 × 10⁻³ and 2.3 × 10⁻³ g/mg·h for pH 7.4 and 4.8, respectively. Other information deduced from the model and its plot is the percentage saturation (%) and t½ (min), all of which are summarized in Table 1.

Powder X-ray Diffraction Studies

The step-by-step loading of the guest molecules onto the GO nanocarrier was monitored by PXRD, in which the X-ray patterns were obtained at various stages. The patterns of the pure GO synthesized from the graphite material using the improved Hummer's method were obtained first. Subsequently, the diffractograms of the pure drug were taken, and lastly the diffractograms of GO after Gd and protocatechuic acid loading (GAGPA) as well as after coating with AuNPs (GO-Gd/PA-Au, named GAGPAu in Figure 2a). The XRD reflection of the pure GO nanosheets can be seen at a 2θ position of around 10° (d = 8.5 Å) in the GO diffractogram [23,24], indicating successful preparation of the GO nanolayers. However, the emphasis is on the GAGPA nanocomposite, which was obtained after drug and Gd loading. The GAGPA diffractogram appears to have shifted slightly towards a lower 2θ angle and is broader in shape than the pure GO diffractogram. This is an indication of hydrogen bonding and π–π stacking between the carboxylic groups of GO [25] and the hydroxyl groups of the drug, as suggested by the drug release profiles [5,26,27]. It could also be due to electrostatic interaction between Gd and the GO surface. The interactions appear to be at the surface only and not within the GO interlayer sheets. Generally, the PA and Gd loadings did not result in significant phase changes in the GO nanosheet structure. Nevertheless, after coating with AuNPs (GAGPAu), the diffractogram assumes that of the pure AuNPs. Only a weak reflection of the GAGPA can be noticed in the diffractogram. All other reflections are distinctive of the face-centered cubic (FCC) structure of the AuNPs (the (111), (200) and (220) reflections at 2θ positions of 38°, 45° and 65°, respectively) [11]. This is due to the electrostatic interaction between the negatively charged GO surface and the positively charged AuNPs [12].
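Since the PXRD discussion above reports a d-spacing derived from a 2θ position, a small sketch of the underlying Bragg relation may be useful. The Cu Kα wavelength is assumed here for illustration, as the diffraction source is not specified in this excerpt.

```python
# Illustrative sketch: interlayer spacing d from a PXRD reflection using Bragg's law,
#   n * lambda = 2 * d * sin(theta),  with n = 1.
# Cu K-alpha radiation (lambda ~ 1.5406 Angstrom) is an assumption; the source is
# not stated in this excerpt.
import math

WAVELENGTH = 1.5406  # Angstrom, Cu K-alpha (assumed)

def d_spacing(two_theta_deg, wavelength=WAVELENGTH):
    """Return the d-spacing (Angstrom) for a reflection at the given 2-theta (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# A GO reflection near 2-theta = 10.4 degrees corresponds to d of roughly 8.5 Angstrom,
# consistent with the value quoted for the pure GO nanosheets.
print(round(d_spacing(10.4), 2))   # ~8.5
```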
Powder X-ray Diffraction Studies

The step-by-step loading of the guest molecules onto the GO nanocarrier was monitored by PXRD, in which the X-ray patterns were obtained at various stages. The patterns of the pure GO synthesized from the graphite material using the improved Hummer's method were obtained first. Subsequently, the diffractogram of the pure drug was taken and, lastly, the diffractograms of GO after Gd and protocatechuic acid loading (GAGPA) as well as after coating with AuNPs (GO-Gd/PA-Au, named GAGPAu in Figure 2a). The pure GO nanosheet XRD reflection can be seen at a 2θ position of around 10° (d = 8.5 Å) in the GO diffractogram [23,24], indicating successful preparation of the GO nanolayers. However, the emphasis is on the GAGPA nanocomposite, which was obtained after drug and Gd loading. The GAGPA diffractogram appears to have shifted slightly towards a lower 2θ angle and is broader in shape than the pure GO diffractogram. This is an indication of hydrogen bonding and π-π stacking between the carboxylic groups of GO [25] and the hydroxyl groups of the drug, as suggested by the drug release profiles [5,26,27]. It could also be due to electrostatic interaction between Gd and the GO surface. The interactions appear to occur at the surface only and not within the GO interlayer sheets. Generally, the PA and Gd loadings did not result in significant phase changes in the GO nanosheet structure. Nevertheless, after coating with AuNPs (GAGPAu), the diffractogram assumes that of the pure AuNPs. Only a weak reflection of the GAGPA can be noticed in the diffractogram. All other reflections are distinctive of the face-centered cubic (FCC) structure of AuNPs (111, 200 and 220, at 2θ positions of 38°, 45° and 65°, respectively) [11]. This is due to the electrostatic interaction between the negatively charged GO surface and the positively charged AuNPs [12].
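The interlayer spacing quoted for GO can be cross-checked against the reflection position with Bragg's law, using the Cu Kα wavelength reported in the Characterization section. The helper below is a generic sketch for that conversion and is not part of the original analysis.

```python
import math

CU_K_ALPHA = 1.5406  # X-ray wavelength in angstroms (Cu K-alpha, as used in this work)

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA):
    """Bragg's law with n = 1: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

def two_theta(d_angstrom, wavelength=CU_K_ALPHA):
    """Inverse relation: the 2-theta position corresponding to a given d-spacing."""
    return 2.0 * math.degrees(math.asin(wavelength / (2.0 * d_angstrom)))

print(f"d at 2-theta = 10.4 deg : {d_spacing(10.4):.2f} A")   # ~8.5 A, the GO interlayer spacing
print(f"2-theta for d = 8.5 A   : {two_theta(8.5):.1f} deg")  # ~10.4 deg, near the reported ~10 deg
```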
Raman Spectroscopy Studies

The degree of disorder induced by drug loading through hydrogen bonding, as well as by surface coating with AuNPs, was assessed through Raman spectroscopy using the Raman shift. The disorder peaks (D band) and graphitic peaks (G band) obtained from the pure GO and after successive loadings with the guest materials (GAGPA and GAGPAu) were studied to support the other studies conducted on the GOTS and the nanocarrier. Visible variations in the band intensities of the samples at the different levels of loading can be seen in Figure 2b. The intensities of the bands appear to increase at every stage of modification, starting with the GO nanosheets in Figure 2b (A). The intensities of their D and G bands are lower than those of the GAGPA nanocomposite (Figure 2b (B)). In turn, the intensities of the D and G bands of GAGPA are lower than those of the GAGPAu nanocomposite (Figure 2b (C)). This is presumably due to the surface interactions that occurred between the GO and the PA molecules (mainly hydrogen bonding and π-π interactions), and subsequently between the GO nanosheets and the positively charged AuNPs (electrostatic interactions) [12]. In addition, the ID/IG intensity ratios appear to increase slightly in the order of the surface activities. The relative intensity ratio of the D to G bands (ID/IG) serves to indicate the degree of disorder/functionalization in a graphitic material [28] and is inversely related to the size of the sp2-carbon clusters [29]. The sp2-carbon atoms are believed to be the site of the π-π interactions [12]. The estimated ID/IG of the pure GO, GAGPA and GAGPAu nanocomposites are 0.84, 0.85 and 0.95, respectively. This slight increase after every stage of modification is a further indication of successive interactions at the various stages of the GO-based nanocomposite synthesis. Similar observations have been previously reported by other researchers who conjugated GO with other therapeutics in drug delivery applications [23].
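A minimal sketch of how an ID/IG ratio can be estimated from a baseline-corrected Raman spectrum is shown below. The spectrum used here is synthetic and the band windows are arbitrary choices, so the output only illustrates the procedure, not the reported measurements.

```python
import numpy as np

def id_ig_ratio(shift_cm, intensity, d_center=1350.0, g_center=1580.0, window=60.0):
    """Estimate I_D/I_G as the ratio of the maximum intensities found within fixed
    windows around the nominal D (~1350 cm^-1) and G (~1580 cm^-1) bands.
    A baseline-corrected spectrum is assumed."""
    shift_cm = np.asarray(shift_cm, dtype=float)
    intensity = np.asarray(intensity, dtype=float)

    def peak_max(center):
        mask = np.abs(shift_cm - center) <= window
        return intensity[mask].max()

    return peak_max(d_center) / peak_max(g_center)

# Synthetic demonstration spectrum: two Gaussian bands on a Raman-shift axis
x = np.linspace(1000, 2000, 2001)
spectrum = 0.85 * np.exp(-((x - 1352) / 45) ** 2) + 1.00 * np.exp(-((x - 1585) / 35) ** 2)
print(f"I_D/I_G = {id_ig_ratio(x, spectrum):.2f}")  # ~0.85, comparable to the value reported for GO here
```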
Thermal Studies

The thermal changes of the GO nanolayers at different levels of modification were monitored by TGA/DTG analysis. The thermal activities were used to support the other studies in confirming the GOTS formation at various stages. The thermal decomposition of the pristine GO nanosheets was first studied to confirm the initial formation of the GO nanolayers from the graphite source by the improved Hummer's method. Subsequently, the thermal studies of the pure PA, the GAGPA and the GAGPAu nanocomposites followed. Figure 3a-d and Table 2 present the thermograms of pure GO, pure PA, GAGPA and GAGPAu nanocomposites. Table 2 also highlights some of the important parameters associated with the thermal activities, which include the decomposition temperature range (T range), the maximum peak temperature (T max) and the change in mass (decomposition mass, Δm). The GO thermogram (Figure 3a) did not show much activity; only three decompositions can be observed, starting with a decomposition at 71 °C (16.4%) due to loss of water, a second and major decomposition at 197 °C (30.4%) due to the breakdown of GO bonds, that is, of the graphene-to-oxygen functional groups (reduction), and lastly the residue at 255 °C (9.7%) [25].

Table 2. Decomposition temperature range (T range), maximum peak temperature (T max) and change in mass (Δm).

The protocatechuic acid thermogram in Figure 3b equally shows three decompositions, starting with the removal of absorbed water at 122 °C (11.1%) [26], followed by PA decomposition at 262 °C (67.4%) [27]. The residue from PA decomposed at 308 °C (9.1%). The GAGPA thermogram in Figure 3c shows decomposition patterns similar to those of the drug and the GO, although it appears more thermally stable than the GO nanosheets, which could be attributed to the conjugation that occurred as a result of hydrogen bonding between the GO functional groups and the drug. The major decomposition of GO appears at 200 °C (34.1%), which is slightly higher than in its pure form (197 °C). In addition, the PA residue decomposed at a much higher temperature (888 °C, 18.1%), which is likewise higher than in the pristine PA (308 °C). Thus, GAGPA is considerably more stable than the pristine components. This has been observed by other researchers, where the developed composites appeared thermally more stable than the individual GO and therapeutic agents, which confirms the hydrogen bonding between GO and the drug [5]. At the last stage of loading (after coating with AuNPs), more thermal events appear than in the previous thermograms. Figure 3d shows the GAGPAu thermogram, which indicates various decompositions that are linked to the individual loaded guests. The decompositions appear fragmented at different temperatures; nevertheless, the major decompositions are at 73 °C (12.8%), 200 °C (13.7%), 537 °C (19.7%) and 747 °C (5.8%), representing the loss of physically adsorbed water, GO decomposition, PA decomposition and AuNPs decomposition, respectively. The decompositions confirm the successive doping of the guest molecules onto the GO nanocarrier and thus the formation of the GO-based TDS [9]. It is also evident that the GAGPAu nanocomposite is the most thermally stable amongst its counterparts.

Fourier Transformed Infrared Spectroscopy Analysis

The chemical interactions between the GO nanosheets and the adsorbed species were further studied by Fourier transform infrared spectroscopy (FTIR) analysis at the various phases of preparation. The absorption spectra of the pure GO, pure PA, pure Gd, the drug- and Gd-loaded GO nanocomposite (GAGPA) and the GAGPA nanocomposite coated with AuNPs (GAGPAu) are presented in Figure 4A-E. The GO nanocarrier FTIR spectrum in Figure 4A depicts a broad and intense -OH stretching band at 3278 cm⁻¹, which is attributed to the hydroxyl groups present in the GO. The broadness of the peak could be due to bonding of the OH to carbon atoms present in the structure [30]. The stretching vibration of C=O bonds is observed at 1721 cm⁻¹; this peak is ascribed to the carbonyl and carboxylic acid groups of GO. The band for C=C bonds appeared at 1617 cm⁻¹ and is linked to remnants of sp2 or unoxidized carbon structure of graphite [31]. The COH bonding is observed at 1360 cm⁻¹ [30]. The C-O stretching vibrations can be seen at 1162 and 1037 cm⁻¹, which are also attributed to the oxidation of GO. In the PA spectrum (Figure 4B), a broad band at 3187 cm⁻¹ is due to O-H stretching vibrations [32]. A band at 1666 cm⁻¹ is assigned to C=C and the one at 1291 cm⁻¹ is associated with the carboxyl group (C=O stretching) of protocatechuic acid [33]. The carboxylic group OH bending vibrations appeared between 935 and 551 cm⁻¹. The Gd(NO3)3 spectrum is shown in Figure 4C; two bands at 3453 and 3187 cm⁻¹ are linked to O-H stretching vibrations [30], and a band at 1650 cm⁻¹ is associated with the H2O bending vibration [34].
The two bands at 1444 and 1314 cm⁻¹ are due to the NO3⁻ stretching vibration of the nitrate group [35]. The GAGPA and GAGPAu nanocomposite spectra (Figure 4D,E, respectively) are similar to the GO spectrum. However, new bands and shifts in bands can be observed when compared to the pristine GO, which result from the surface interactions between GO and the loaded compounds. For example, the C=O stretching vibration band at 1721 cm⁻¹ is missing in all the nanohybrids. This is due to the hydrogen bonding between the carbonyl group of the GO and the hydrogen-bearing groups of PA. Further, the C=C absorption band has shifted to 1609 and 1623 cm⁻¹ and has become sharper and stronger in the GAGPA and GAGPAu, respectively. This is presumably due to π-π interactions, which usually occur at the sp2 carbon [36]. Likewise, the COH bonding absorptions have shifted to 1364 and 1383 cm⁻¹ and are even stronger in the GAGPA and GAGPAu, respectively. The C-O stretching vibrations have also shifted to 1059 and 1106 cm⁻¹ in the GAGPA and GAGPAu, respectively. These are all indications of surface interactions, as observed in the Raman shifts of the pure GO and the nanocomposites.
Transmission Electron Microscopy Studies

The micrographs of the nanocomposites were taken after drug loading and after AuNPs coating (GAGPA and GAGPAu, respectively). The purpose of this study is to understand the shapes, the sizes and, to a certain extent, the morphology of the developed nanohybrids. Although transmission electron microscopy (TEM) is used in the determination of shapes and sizes, the general morphology can be viewed in low-magnification micrographs. Figure 5a,b show micrographs of the GAGPA and GAGPAu nanocomposites at different magnifications. The GAGPA micrographs show the typical multiple-layered structure of GO with deposition of the drug on the surface [37]. The deposits of the drug can be seen in the micrographs at 200 nm magnification, as indicated by the red arrow. This confirms the earlier assertions from the XRD, FTIR, Raman spectroscopy and TGA analyses that suggest the successful loading of PA on the GO nanosheets. On the other hand, the GAGPAu nanocomposite micrographs show the expected outcome, where the positively charged AuNPs can be seen adsorbed on the surface of the GO nanosheets. The adsorbed species are believed to be bonded onto the GO surface by electrostatic interactions. The deposited AuNPs are spherical in shape and predominantly small in size. The mean size of the AuNPs is around 2 nm, as deduced from the histogram and distribution curve. The results are in agreement with the XRD diffractograms, where pure AuNPs reflections were observed in the GAGPAu nanohybrid. A similar observation has been previously reported by Usman et al. [9], where a layered double hydroxide (LDH)-based nanohybrid was surface coated with AuNPs. The micrographs also confirm the earlier assertion of the successful formation of a GO-based TDS.
Cytotoxicity Studies

Cytotoxicity studies were conducted to determine the effectiveness of the theranostic nanodelivery system of the anticancer agent. The tests also determine the level of toxicity towards healthy cells. Two cell lines were used in testing the cytotoxicity of the GO nanocarrier, the final TDS obtained after AuNPs doping (GAGPAu) and the pure anticancer drug (PA). A standard fibroblast cell line (3T3) was used to test the safety of the TDS towards healthy cells, whilst the human liver hepatocellular carcinoma cell line, also known as HepG2, was used for the cancer cytotoxicity test. Figure 6a,b show the data, expressed in the form of histograms obtained from the zones of inhibition of the study, in the 3T3 and HepG2 cell lines, respectively. The samples were dosed at various concentrations: 0.0, 1.6, 3.1, 6.3, 12.5, 25.0, 50.0 and 100.0 µg/mL for both cell lines. It is evident from the chart that the cells grew above average at all the concentrations, indicating that the TDS, GO and the PA drug are non-toxic towards the 3T3 cells (Figure 6a) even at the highest dose, which also suggests that they could be negligibly cytotoxic or nontoxic to healthy human cells. Contrary to the 3T3 cell line, HepG2 shows inhibited growth at the 100 µg/mL concentration. The PA and the GAGPAu TDS have shown high anticancer efficacy at 100 µg/mL, where the cancer cells are observed to show growth below average (Figure 6b). This implies that the GOTS could inhibit cancer growth. However, the GO nanocarrier did not indicate any efficacy even at the highest dose, since the cancer cells show almost 100% growth at the 100 µg/mL GO dose. This also suggests the non-susceptibility of the cancer cells towards the GO nanosheets or the absence of anticancer properties in the synthesized GO nanosheets. Similar results were reported by He et al. [38], where their GO nanocarrier did not indicate any cytotoxic activity against the cancer cell lines tested. The results were further tested with t-test statistical analysis for accuracy. No significant difference between the three tested samples (GAGPAu, PA and GO) was observed in the 3T3 cell lines when compared with the control cells, as revealed by the p-value (p > 0.5). However, in the HepG2 cells, the last three doses of PA (6.3, 25 and 100 µg/mL) and the last dose of GAGPAu (100 µg/mL) showed significant toxicity as compared to the control cells (p < 0.1). The high surface area and positively charged surface of the nanocomposites influence cell penetration, either through the low-affinity transmembrane protein pathway or by the high-affinity folate receptor. GO uptake is mostly via clathrin-mediated endocytosis [39]. The results of the cytotoxicity assay confirm that the nanocomposite is a good potential anticancer component in theranostics.
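For readers who want to see how such a significance test can be run, a minimal sketch is given below. The viability replicates are hypothetical, and because the text does not state whether equal variances were assumed, Welch's variant of the two-sample t-test is used here.

```python
import numpy as np
from scipy import stats

# Hypothetical % viability replicates (not the measured values from this study):
# cells treated at the highest dose versus untreated control cells.
control = np.array([100.0, 97.5, 102.1])
treated_100ug = np.array([48.2, 52.7, 45.9])

t_stat, p_value = stats.ttest_ind(treated_100ug, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference from the control is statistically significant at the 0.05 level")
```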
Magnetic Resonance Imaging Studies

The second theranostic component of the nanocomposite is the diagnostic modality, for which magnetic resonance imaging was employed. The GOTS was used in the test and was prepared at various Gd3+ concentrations: 2.0, 0.5 and 0.2 w/v. In addition, two references were used: Gd(NO3)3 (0.5 w/v) and water. Figure 7 depicts the T1-weighted image of the aqueous GAGPAu nanocomposite distributed in tubes together with the references. A steady rise in the brightness of the tubes can be clearly seen, which indicates signal enhancement. This is further confirmed by measuring the intensity of each of the tubes, which, as expected, shows the values increasing correspondingly with the Gd3+ concentrations: 2.0 (452.71), 0.5 (338.20) and 0.2 (331.80) w/v, versus Gd 0.5 w/v (235.45) and water (228.66). The outcome suggests that the developed nanocomposite has higher contrast properties than the conventional Gd-based contrast agents. Gd and AuNPs have individually been used for the enhancement of MRI and CT contrast, respectively [40-50]. Nevertheless, the agents have been combined for bimodal MRI/CT contrast enhancement [44,51-53] as well as for the sole purpose of MRI signal improvement using different nanocarriers [7,35]. The MRI signals are believed to be improved through enhanced interaction between the Gd3+ ions and the GO nanosheets. Moreover, the process is believed to be assisted by the ultrasmall AuNPs at the surface of the GO, which increase the surface area of the nanocomposite, thereby improving water molecular movement within the GO structure [52]. This in turn affects the longitudinal relaxation time (T1 signal) by shortening the relaxation time, which results in the increase in signal intensity. In addition, the GOTS has the advantage of low toxicity.
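The contrast gain implied by these intensity values can be expressed relative to the water reference with simple arithmetic. The percentage-enhancement definition used below is ours and is shown only to make the comparison explicit.

```python
# Relative T1-weighted signal enhancement of each tube versus the water reference,
# using the mean intensities quoted above (arbitrary units).
intensities = {
    "GAGPAu 2.0 w/v": 452.71,
    "GAGPAu 0.5 w/v": 338.20,
    "GAGPAu 0.2 w/v": 331.80,
    "Gd(NO3)3 0.5 w/v": 235.45,
    "water": 228.66,
}

water = intensities["water"]
for name, value in intensities.items():
    enhancement = 100.0 * (value - water) / water
    print(f"{name:17s}: {value:7.2f}  ({enhancement:+6.1f}% vs water)")
```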
Characterization

The developed nanocomposites were characterized via different techniques. Powder X-ray diffraction was performed on an XRD-6000 diffractometer (Shimadzu, Tokyo, Japan) with CuKα radiation (λ = 1.5406 Å, 40 kV and 30 mA) and a scan rate of 0.5° θ/min. Fourier transformed infrared spectroscopy (FTIR) (Thermo Nicolet model Nicolet 6700) was done with a KBr disc. The Raman spectroscopy study was done on a UHTS 300 Raman spectrometer (WITec GmbH, Ulm, Germany) at a laser excitation wavelength of 532 nm. A high-resolution transmission electron microscope (HRTEM), Tecnai TF20 X-Twin (FEI, Hillsboro, OR, USA), was used in studying the structures of the GO and the nanocomposites as well as the drug loading. UV-visible spectroscopy was used for the drug release studies and was conducted on a Lambda 35 ultraviolet-visible spectrophotometer (PerkinElmer, Boston, MA, USA). Thermogravimetric analysis (TGA)/differential thermogravimetry (DTG) was done on a TGA/DSC 1HT model (Mettler Toledo, Shah Alam, Selangor, Malaysia) at a heating rate of 10 °C/min and a nitrogen flow rate of 50 mL/min.

Graphene Oxide (GO) Synthesis

The improved Hummer's method was adopted in the preparation of the GO nanocarrier. Briefly, concentrated H2SO4 (360 mL) and H3PO4 (40 mL) were mixed and then added to graphite powder (3 g) in a 500 mL beaker. The mixture was stirred for about 5 min for homogeneity. Under stirring at room temperature, 18 g of KMnO4 was then slowly added. By the end of the KMnO4 addition, the temperature of the mixture had risen to about 40 °C. The mixture was further stirred for 12 h at 50 °C under dark conditions and then, after being allowed to cool to room temperature, poured into 400 mL of iced DI water. Hydrogen peroxide (3 mL) was added to the mixture, which melted the ice. The resultant suspension was filtered and washed via a centrifuge, first with DI water (200 mL) and subsequently with HCl and ethanol (200 mL each). Finally, the sample was washed with diethyl ether (200 mL), followed by filtering and drying at 40 °C in a vacuum oven [18].

Synthesis of Graphene Oxide-Gadolinium and Protocatechuic Acid Nanocomposite

The graphene oxide-gadolinium and protocatechuic acid nanocomposite (GAGPA) was prepared by first dissolving the protocatechuic acid (0.6 g) in aqueous medium (50 mL) by heating/stirring at 40 °C. Then 0.0008 M gadolinium nitrate was added and stirred for 20 min, after which a clear solution was obtained. GO (0.2 g) was then added to the solution. The pH of the mixture was adjusted and maintained at 5.5 using 0.5 M sodium hydroxide solution. The dispersion was allowed to stir for 24 h at room temperature in the dark. The slurry obtained was centrifuged to collect the precipitate, which was washed 3 times with DI water. The sample was dried at 40 °C in a vacuum oven over a period of 48 h.

Synthesis of Gold Nanoparticles on the GAGPA Nanocomposite Surface

The GAGPA nanocomposite (0.15 g) was ultrasonically dispersed in DI water (90 mL), and HAuCl4 (2%, 6 mL) was added to the dispersion under stirring at room temperature. NaOH (0.25 M, 2 mL) was then added, and the mixture was allowed to stir for 24 h under a nitrogen atmosphere with heating at 60 °C. The mixture was centrifuged and re-dispersed in DI water (30 mL); NaBH4 (1 M, 20 mL) was then introduced and stirred for an hour. The slurry was then centrifuged/washed six times and dried in a vacuum oven at 70 °C for 24 h. The resulting material is the gold nanoparticle-coated GAGPA nanocomposite.

Drug Loading and Release from GAGPA Nanocomposite

The release pattern and the amount of the anticancer agent loaded into the GAGPA nanocomposite were deduced from the data acquired using a Lambda 35 ultraviolet-visible spectrophotometer. The study was done by first dispersing GAGPA (25 mg) in PBS pH 7.4 and pH 4.8 (30 mL of each). The tubes were placed in an oil bath shaker set at 37 °C. The samples were shaken gently; release medium (3 mL) was withdrawn and replaced with fresh PBS (3 mL) at predetermined time intervals. The collected media containing the PA released from the nanocomposite were analyzed using the UV-Vis spectrophotometer at the wavelength maximum (λmax = 259 nm).
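Because a fixed volume of release medium is withdrawn and replaced with fresh PBS at each time point, cumulative release is normally corrected for the drug removed in earlier samples. The sketch below shows one common form of that correction; the concentration values and the loaded drug amount are hypothetical placeholders, not data from this study.

```python
import numpy as np

def cumulative_release_percent(conc_mg_per_ml, loaded_mg, v_total_ml=30.0, v_sample_ml=3.0):
    """Cumulative release corrected for sampling with medium replacement:
    released_i = C_i * V_total + sum_{j < i} C_j * V_sample."""
    conc = np.asarray(conc_mg_per_ml, dtype=float)
    removed_earlier = np.concatenate(([0.0], np.cumsum(conc[:-1]) * v_sample_ml))
    released = conc * v_total_ml + removed_earlier
    return 100.0 * released / loaded_mg

# Hypothetical PA concentrations (mg/mL) measured by UV-Vis at 259 nm at each sampling
# time, for a nanocomposite assumed to contain 5 mg of PA in 30 mL of PBS.
measured = [0.02, 0.05, 0.08, 0.10, 0.11]
print(np.round(cumulative_release_percent(measured, loaded_mg=5.0), 1))
```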
Cell Culture

The purchased standard fibroblast (3T3) and carcinoma (HepG2) cell lines were cultured using RPMI 1640 as the culture medium (Invitrogen, New Zealand). 10% fetal bovine serum and 1% antibiotics (penicillin/streptomycin) were used to supplement the culture medium. The cells were kept and incubated in a humidified chamber at 37 °C under a 5% CO2 atmosphere. Culture harvest was done via trypsinization.

Cytotoxicity Evaluation

For the toxicity evaluation, 96-well plates were seeded with grown cells at a density of 1.0 × 10⁵ cells per well, using 100 µL of cell culture medium. A 24 h attachment period was allowed before the addition of the PA, the nanocarrier (GO) and the GAGPAu nanocomposite at varied concentrations. An incubation period of 72 h followed. 3-[4,5-Dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT, 5 mg) was dissolved in PBS (2 mL) and distributed into the 96-well plates. The cells were incubated at 37 °C, after which the formazan product was obtained. DMSO (100 µL) was added to the cells, which were then shaken. Measurement of the optical densities of the cells at 570 nm followed. Cell viabilities were expressed in the form of percentages with the untreated cells as the reference. Experiments and measurements were done in triplicate throughout for accuracy, and means ± standard deviations were taken.

Magnetic Resonance Imaging Analysis

The imaging modality test was done on a 3.0 T MRI clinical instrument (3.0 T Siemens Magnetom, Erlangen, Germany). GAGPAu was prepared at three different concentrations (2.0, 0.5 and 0.2), based on the Gd3+ concentration in the samples. Gd(NO3)3 (0.5 w/v) and water were used as references. An MRI phantom was used as the holder of the samples, which were placed in the magnetic source area. The samples were imaged under the following conditions: TR/TE: (83/9000) 160 × 320 s and field of view (FOV): 120 × 120. Analysis of the T1-weighted images was done with Syngovia MRI software (Syngo MR E11, Siemens, Erlangen, Germany, 2013).

Conclusions

In this work, a theranostic nanodelivery system in which therapeutic and contrast agents were simultaneously loaded onto GO nanosheets for imaging and pharmaceutical applications was successfully prepared. The bimodal theranostic nanodelivery system (GAGPAu) was developed from the initial synthesis of the GO nanocarrier, followed by aqueous doping of Gd3+ and protocatechuic acid via hydrogen bonding and π-π interactions (GAGPA). Subsequently, AuNPs were surface coated through electrostatic interactions (GAGPAu). The drug release study showed that above 60% of the protocatechuic acid was released in the acidic PBS medium compared to less than 40% in the alkaline medium, suggesting higher drug delivery at the cancer target site. The cytotoxicity studies on the tumor cells further confirmed the drug release study, showing cancer cell death at a 100 µg/mL GAGPAu dose. The GOTS appeared nontoxic to normal cells. The GAGPAu contrast enhancement for imaging modalities was tested with 3.0 Tesla MRI equipment. The T1-weighted image of the GOTS dispersions showed the contrast enhancement of the nanocomposite to be higher than that of the pure Gd(NO3)3 and water references.
These preliminary in vitro results suggest a possible future alternative to highly toxic chemotherapy and improved diagnosis of cancer. Further studies, especially on the in vivo behavior of GAGPAu, are essential for the way forward.
A key region of molecular specificity orchestrates unique ephrin-B1 utilization by Cedar virus

An expanded hydrophobic cavity within the structurally constrained receptor-binding site of the Cedar virus attachment glycoprotein facilitates idiosyncratic utilization of ephrin-B1.

--An editable version of the final text (.DOC or .DOCX) is needed for copyediting (no PDFs).
--High-resolution figure, supplementary figure and video files uploaded as individual files: See our detailed guidelines for preparing your production-ready images, http://www.life-sciencealliance.org/authors
--Summary blurb (enter in submission system): A short text summarizing in a single sentence the study (max. 200 characters including spaces). This text is used in conjunction with the titles of papers, hence should be informative and complementary to the title and running title. It should describe the context and significance of the findings for a general readership; it should be written in the present tense and refer to the work in the third person. Author names should not be mentioned.

B. MANUSCRIPT ORGANIZATION AND FORMATTING: Full guidelines are available on our Instructions for Authors page, http://www.life-sciencealliance.org/authors

We encourage our authors to provide original source data, particularly uncropped/-processed electrophoretic blots and spreadsheets for the main figures of the manuscript. If you would like to add source data, we would welcome one PDF/Excel-file per figure for this information. These files will be linked online as supplementary "Source Data" files.

***IMPORTANT: It is Life Science Alliance policy that if requested, original data images must be made available. Failure to provide original images upon request will result in unavoidable delays in publication. Please ensure that you have access to all original microscopy and blot data images before submitting your revision.***

Reviewer #1 (Comments to the Authors (Required)):

In the manuscript, Rhys et al. report CedV could use not only ephrin-B2, which is a common HNV receptor, but also ephrin-B1 as an entry receptor. They determined the crystal structure of the CedV-G protein at a resolution of 2.78 Å, and the CedV-G bound to ephrin-B1 complex structure at a resolution of 4.07 Å. Structural analyses reveal that diverse HNV-G proteins bind to their distinct ephrin receptors in a conserved binding mode, while subtle structural features of CedV contribute to its unique ephrin ligand specificity. Overall, the paper is written clearly, and the findings represent an important advancement for HNV viral entry. However, the major concern is the novelty. There is a paper published in PNAS recently, titled "Structural and functional analyses reveal promiscuous and species specific use of ephrin receptors by Cedar virus". They demonstrate that CedV can use ephrin-B1, A2, A5 to enter cells, and determined the CedV-G structure at 3.7 Å resolution, and the complex structures of CedV-G with ephrin-B1 or B2 at 3.5 Å and 2.85 Å, respectively. It seems this PNAS paper has more data, especially the exact affinity data of the protein binding. Both studies have similar conclusions.

Minor concerns: In Figure 6, it is better to add a panel showing the sequence alignment of different HNV-G, to highlight the critical binding sites.

Reviewer #2 (Comments to the Authors (Required)):

In this manuscript, Pryce et al demonstrate that EphrinB1, in addition to EphrinB2, but not EphrinB3, is a functional receptor for CedV - a member of the Henipavirus (HNV) genus.
The data are supported by comprehensive structural/biochemical experiments including binding assay, pseudovirus entry assay and structural analysis. Utilization of EphrinB1 as an entry receptor has not been reported for any members of the HNV genus before this paper and a recently published paper by Laing et al (PMID: 31548390). This is an important study as it reveals the ability of HNVs to use different members of the Ephrin protein family for entry and sheds light on the molecular barriers that dictate specific receptor usage by different HNVs. The data are well presented and the discussion section is thorough. This reviewer has several comments/suggestions described below:

In the 2nd part of the Results section, since the authors use ephrin constructs that are fused to Fc, their dimeric nature leads to avidity and artificially enhances the affinity. Thus, these are not genuine Kd values.

The authors should cite and discuss the recently reported findings by Laing et al (PMID: 31548390) (including comparing their structure to theirs).

Reported buried surface area (BSA): the authors should clarify that the values in this manuscript are total BSA on EphrinB1 and CedV G (to be consistent with Laing et al, which reported the values on one interface only).

The authors state "Together, binding-induced structural transitions within both CedV-G and the ephrin G-H loops support a model of an induced-fit mechanism of ephrin recognition that is conserved across ephrin-tropic HNVs". How did the authors distinguish between true induced-fit and selection from a conformational equilibrium?

"Similarly, both ephrin-B2-Fc and ephrin-B1-Fc inhibited CedVpp entry into CHO-B2 cells (Fig. 5d, right panel), evidencing the ability of ephrin-B1 to block ephrin-B2-dependent CedVpp entry through competition for an overlapping binding site on CedV-G. Moreover, ephrin-B2-Fc inhibited CedVpp entry into CHO-B1 cells (Fig. 5d, left panel). In both CHO-B2 and CHO-B1 cells, ephrin-B2-Fc-mediated inhibition of CedV-G was more potent than ephrin-B1-Fc (Fig. 5d), further supporting our binding (Fig. 2) and entry (Fig. 3) data that suggest ephrin-B2 is more efficiently utilized than ephrin-B1." Are the structural data consistent with the authors' claim that CedV G utilizes EphrinB2 more efficiently than it does EphrinB1? Or what could rationalize this preferred utilization? Also, these results confirm the recently reported structures by Laing et al (PMID 31548390) and this should be mentioned.

How do the authors reconcile the virtually identical binding responses of NiV G to ephrin-B2 or ephrin-B3 (Fig 2a) with the 3 log difference observed between ephrin-B2 or ephrin-B3 for inhibition of NiVpp entry in CHO-B2 cells (Fig 5d)?

"Although CedV-G is unable to utilize ephrin-B3, our structural hypothesis suggests that acquired ephrin-B1 specificity does not necessarily come at the expense of ephrin-B3 usage, as the LW motif is common to both ephrin-B2 and ephrin-B3." Could the authors use their structural data to explain why Ephrin B3 is incompatible with CedV G?

Could the authors discuss how conserved EphrinB1 is among different species and whether CedV can transmit among species by utilizing EphrinB1?

The Editorial Board
Life Science Alliance

Dear Dr. Leibfried,

We wish to submit the revised manuscript by Pryce and Azarm et al. entitled 'A key region of molecular specificity orchestrates unique ephrin-B1 utilization by Cedar virus' to be considered for publication in Life Science Alliance.
We thank the reviewers for their responses, and below we address the comments made point by point (our responses in blue and changes to the text in red).

Reviewer #1:

In the manuscript, Rhys et al. report CedV could use not only ephrin-B2, which is a common HNV receptor, but also ephrin-B1 as an entry receptor. They determined the crystal structure of the CedV-G protein at a resolution of 2.78 Å, and the CedV-G bound to ephrin-B1 complex structure at a resolution of 4.07 Å. Structural analyses reveal that diverse HNV-G proteins bind to their distinct ephrin receptors in a conserved binding mode, while subtle structural features of CedV contribute to its unique ephrin ligand specificity. Overall, the paper is written clearly, and the findings represent an important advancement for HNV viral entry. However, the major concern is the novelty. There is a paper published in PNAS recently, titled "Structural and functional analyses reveal promiscuous and species specific use of ephrin receptors by Cedar virus". They demonstrate that CedV can use ephrin-B1, A2, A5 to enter cells, and determined the CedV-G structure at 3.7 Å resolution, and the complex structures of CedV-G with ephrin-B1 or B2 at 3.5 Å and 2.85 Å, respectively. It seems this PNAS paper has more data, especially the exact affinity data of the protein binding. Both studies have similar conclusions.

Response: We thank the Reviewer for their efficient encapsulation of our study. This manuscript, in its exact form, was deposited into bioRxiv before the PNAS study was published, or even available online. This journal has editorial policies that do not make prior publications a consideration in evaluating manuscripts under their scooping protection policies. We believe in the open and transparent peer review system that Life Science Alliance follows. We agree with the reviewer that any comparisons with the PNAS study by Laing et al. should and can be made in open peer review forums.

Minor concerns: In Figure 6, it is better to add a panel showing the sequence alignment of different HNV-G, to highlight the critical binding sites.

Response: We completely agree with the reviewer that providing the annotated sequence alignment of HNV-G proteins will help add clarity and give additional context to Fig. 6. Indeed, such an annotated sequence alignment with additional structural annotations was provided in Supplementary Figure 1. We felt these data were best presented as an independent figure, given (1) the discontinuous nature of the ephrin-binding site, (2) the size of the resulting sequence alignment, and (3) the additional information content provided by the structural annotations (e.g. contact residues, disulphide bonds, and N-linked glycan sites etc.)

Reviewer #2:

In this manuscript, Pryce et al demonstrate that EphrinB1, in addition to EphrinB2, but not EphrinB3, is a functional receptor for CedV - a member of the Henipavirus (HNV) genus. The data are supported by comprehensive structural/biochemical experiments including binding assay, pseudovirus entry assay and structural analysis. Utilization of EphrinB1 as an entry receptor has not been reported for any members of the HNV genus before this paper and a recently published paper by Laing et al (PMID: 31548390). This is an important study as it reveals the ability of HNVs to use different members of the Ephrin protein family for entry and sheds light on the molecular barriers that dictate specific receptor usage by different HNVs.
The data are well presented and the discussion section is thorough. This Reviewer has several comments/suggestions described below:

In the 2nd part of the Results section, since the authors use ephrin constructs that are fused to Fc, their dimeric nature leads to avidity and artificially enhances the affinity. Thus, these are not genuine Kd values.

Response: We thank the Reviewer for noting this point and allowing us to clarify the text. We agree that avidity effects resulting from Fc-tagged HNV-G proteins may yield avidity-enhanced binding and thus different estimates of Kd relative to experiments utilizing monomeric proteins. As such, we do not undertake direct quantitative comparison with other studies quoting bimolecular interaction kinetic values. We instead present values for all ephrin-tropic HNV-Gs, to permit quantitative comparison of all HNV-G ephrin pairs utilizing a uniform experimental set-up. To avoid any confusion, we have updated text in this section: Line 159: The section title has been updated to remove the phrase 'nanomolar affinity' and now reads 'CedV-G binds both ephrin-B1 and ephrin-B2'.

The authors should cite and discuss the recently reported findings by Laing et al (PMID: 31548390) (including comparing their structure to theirs).

Response: Since this manuscript was deposited into bioRxiv before the Laing et al. study was published (even online), we were not able to make any reference to it. In line with editorial advice, we now cite the Laing et al. (PMID: 31548390) reference as an endnote following the Acknowledgements section.

The authors state "Together, binding-induced structural transitions within both CedV-G and the ephrin G-H loops support a model of an induced-fit mechanism of ephrin recognition that is conserved across ephrin-tropic HNVs". How did the authors distinguish between true induced-fit and selection from a conformational equilibrium?

Response: We thank the Reviewer for bringing up this important point and agree the presented structures do not provide grounds to distinguish induced fit from conformational selection. As such, we have altered the text as follows: Line 314-316: Differences between the structural states of both CedV-G and ephrin-B1 may represent an induced-fit mechanism of ephrin recognition, which has been postulated for other ephrin-tropic HNV-Gs [36, 37, 45], or selection from a conformational equilibrium. Furthermore, wording elsewhere has been altered for the purpose of clarity: Line 294-295: Structural plasticity within this region is observed in other HNV-G proteins and their ephrin-bound complexes [36,37].

"Similarly, both ephrin-B2-Fc and ephrin-B1-Fc inhibited CedVpp entry into CHO-B2 cells (Fig. 5d, right panel), evidencing the ability of ephrin-B1 to block ephrin-B2-dependent CedVpp entry through competition for an overlapping binding site on CedV-G. Moreover, ephrin-B2-Fc inhibited CedVpp entry into CHO-B1 cells (Fig. 5d, left panel). In both CHO-B2 and CHO-B1 cells, ephrin-B2-Fc-mediated inhibition of CedV-G was more potent than ephrin-B1-Fc (Fig. 5d), further supporting our binding (Fig. 2) and entry (Fig. 3) data that suggest ephrin-B2 is more efficiently utilized than ephrin-B1." Are the structural data consistent with the authors' claim that CedV G utilizes EphrinB2 more efficiently than it does EphrinB1? Or what could rationalize this preferred utilization? Also, these results confirm the recently reported structures by Laing et al (PMID 31548390) and this should be mentioned.
How do the authors reconcile the virtually identical binding responses of NiV G to ephrin-B2 or ephrin-B3 (Fig 2a) with the 3 log difference observed between ephrin-B2 or ephrin-B3 for inhibition of NiVpp entry in CHO-B2 cells (Fig 5d)?

Response: We thank the reviewer for noting this 2-log (not 3-log) difference. Indeed, we also detect a similar difference in the ephrin-B2 and ephrin-B3 IC50 values of NiVpp entry inhibition on the Vero-CCL81s (Figure 2b). We have previously noted this apparent discrepancy (e.g. Ref 46: Negrete, O.A. et al., PLoS Pathog, 2006; compare Figures 1B & 3), which is a known observation that has repeatedly appeared in the literature. Envelope-receptor interactions that lead to binding versus inhibition of entry measure two different things. Binding measures direct protein-protein interactions under equilibrium conditions, whereas entry involves a host of allosteric signals from receptor binding, to F-triggering, to fusion-pore formation, which then results in delivery of the viral RNP into the host cell cytoplasm, and subsequent transcription and viral genome replication that leads to detection of the reporter gene signal. Thus, small differences in binding affinity and, more importantly, differences in the efficiency with which particular receptors (e.g. ephrin-B2 versus ephrin-B3) allosterically trigger F proteins (not measured by receptor binding affinities alone) can amplify any putative differences in envelope-receptor interactions that result in entry. How different receptors mechanistically trigger productive fusion and entry by various paramyxoviruses is a subject of intense investigation by aficionados of paramyxovirus entry and is beyond the scope of this current study.

"Although CedV-G is unable to utilize ephrin-B3, our structural hypothesis suggests that acquired ephrin-B1 specificity does not necessarily come at the expense of ephrin-B3 usage, as the LW motif is common to both ephrin-B2 and ephrin-B3." Could the authors use their structural data to explain why Ephrin B3 is incompatible with CedV G?

Response: We thank the Reviewer for the opportunity to clarify this point. The precise structural determinants that prevent recognition of ephrin-B3 by CedV-G are presently unclear. Indeed, answering such questions is a focus of ongoing work. We have updated the Discussion to highlight this uncertainty rather than propose speculative hypotheses: Line 456-459: Whilst the molecular features that preclude ephrin-B3 utilization by CedV-G remain unclear, our structural hypothesis suggests that acquired ephrin-B1 specificity does not necessarily come at the expense of ephrin-B3 usage, as the LW motif is common to both ephrin-B2 and ephrin-B3.

Could the authors discuss how conserved EphrinB1 is among different species and whether CedV can transmit among species by utilizing EphrinB1?

Response: Ephrin-B1 is highly conserved amongst mammalian species (96-99% sequence similarity), at least equal to if not slightly less so than ephrin-B2. We have added a statement with regard to ephrin-B1 conservation in the relevant part of the Discussion section: Line 472-475: Of note is the relatively high expression of ephrin-B1 in the lung, esophagus, and salivary glands (Supplementary Fig. 5), which suggests that ephrin-B1 utilization could augment aspects of oropharyngeal transmission postulated for HNV [66], especially since ephrin-B1 is almost as conserved as ephrin-B2 across mammalian species (96-99% sequence similarity).
"As expected [43,54,55], ephrin-B2-Fc and ephrin-B3-Fc inhibited NiVpp entry into CHO-B2 cells, while ephrin-B1-Fc failed to strongly inhibit entry at concentrations as high as 10 nM (Fig. 5d, If you are planning a press release on your work, please inform us immediately to allow informing our production team and scheduling a release date. To upload the final version of your manuscript, please log in to your account: https://lsa.msubmit.net/cgi-bin/main.plex You will be guided to complete the submission of your revised manuscript and to fill in all necessary information. Please get in touch in case you do not know or remember your login name. To avoid unnecessary delays in the acceptance and publication of your paper, please read the following information carefully. A. FINAL FILES: These items are required for acceptance. --An editable version of the final text (.DOC or .DOCX) is needed for copyediting (no PDFs). --High-resolution figure, supplementary figure and video files uploaded as individual files: See our detailed guidelines for preparing your production-ready images, http://www.life-sciencealliance.org/authors --Summary blurb (enter in submission system): A short text summarizing in a single sentence the study (max. 200 characters including spaces). This text is used in conjunction with the titles of papers, hence should be informative and complementary to the title. It should describe the context and significance of the findings for a general readership; it should be written in the present tense and refer to the work in the third person. Author names should not be mentioned. B. MANUSCRIPT ORGANIZATION AND FORMATTING: Full guidelines are available on our Instructions for Authors page, http://www.life-sciencealliance.org/authors We encourage our authors to provide original source data, particularly uncropped/-processed electrophoretic blots and spreadsheets for the main figures of the manuscript. If you would like to add source data, we would welcome one PDF/Excel-file per figure for this information. These files will be linked online as supplementary "Source Data" files. **Submission of a paper that does not conform to Life Science Alliance guidelines will delay the acceptance of your manuscript.** **It is Life Science Alliance policy that if requested, original data images must be made available to the editors. Failure to provide original images upon request will result in unavoidable delays in publication. Please ensure that you have access to all original data images prior to final submission.** **The license to publish form must be signed before your manuscript can be sent to production. A link to the electronic license to publish form will be sent to the corresponding author only. Please take a moment to check your funder requirements.** **Reviews, decision letters, and point-by-point responses associated with peer-review at Life Science Alliance will be published online, alongside the manuscript. If you do want to opt out of having the reviewer reports and your point-by-point responses displayed, please let us know immediately.** Thank you for your attention to these final processing requirements. Please revise and format the manuscript and upload materials within 7 days. Thank you for this interesting contribution, we look forward to publishing your paper in Life Science Alliance. Thank you for submitting your Research Article entitled "A key region of molecular specificity orchestrates unique ephrin-B1 utilization by Cedar virus". 
It is a pleasure to let you know that your manuscript is now accepted for publication in Life Science Alliance. Congratulations on this interesting work. The final published version of your manuscript will be deposited by us to PubMed Central upon online publication. Your manuscript will now progress through copyediting and proofing. It is journal policy that authors provide original data upon request. Reviews, decision letters, and point-by-point responses associated with peer-review at Life Science Alliance will be published online, alongside the manuscript. If you do want to opt out of having the reviewer reports and your point-by-point responses displayed, please let us know immediately.

***IMPORTANT: If you will be unreachable at any time, please provide us with the email address of an alternate author. Failure to respond to routine queries may lead to unavoidable delays in publication.***

Scheduling details will be available from our production department. You will receive proofs shortly before the publication date. Only essential corrections can be made at the proof stage so if there are any minor final changes you wish to make to the manuscript, please let the journal office know now.

DISTRIBUTION OF MATERIALS: Authors are required to distribute freely any materials used in experiments published in Life Science Alliance. Authors are encouraged to deposit materials used in their studies to the appropriate repositories for distribution to researchers.

You can contact the journal office with any questions <EMAIL_ADDRESS>. Again, congratulations on a very nice paper. I hope you found the review process to be constructive and are pleased with how the manuscript was handled editorially. We look forward to future exciting submissions from your lab.
The depth of sorbate penetration in periodic adsorption processes

Introduction

In the technical sciences, intuitive, dimensional criteria characterizing a particular physical process are widely used. Examples of such criteria are the thickness of the boundary layer, the velocity of sound, the critical temperature, the mean free path of molecules, and so on. In this paper, a new dimensional criterion is proposed - the depth of sorbate penetration into the adsorbent grain - which may be useful in preliminary calculations of pressure swing adsorption plants. This criterion is a direct analogue of a parameter widely used in electrical engineering - the depth of alternating current penetration into a conductor, or the skin depth. Application of the proposed criterion is most advisable in the analysis of rapidly proceeding adsorption processes with periodically changing direction. For the theoretical analysis of rapidly occurring sorption processes, the technical literature uses a kinetic coefficient of adsorption having the dimension [s⁻¹]. The equation of adsorption kinetics with the kinetic coefficient of adsorption has the form:

da/dτ = β(C₀ − C′), (1)

where a is the adsorption; τ is the time of adsorption; β is the kinetic coefficient of adsorption; C₀ is the concentration of the component at the entrance of the adsorbent layer; C′ is the concentration of the component in equilibrium with the adsorbent. This kinetic coefficient of adsorption is not very suitable for practical applications, since the actual time dependence of sorbate uptake by the grain of the adsorbent is, as a whole, not linear, and its linearization is possible only at the very beginning or at the very end of the adsorption process. Consequently, in the modes of adsorption that are most important from the practical point of view, the kinetic coefficient is not actually defined. The proposed criterion is free from this drawback and can be uniquely determined for each batch of adsorbent.

Materials and Methods

We consider the one-dimensional problem of diffusion and absorption of sorbate in a semi-infinite layer of the adsorbent. For convenience in solving the problem, we place the origin of coordinates on the boundary of the adsorption layer, as shown in Figure 1. If we assume that adsorption is described by the Henry isotherm, then the problem reduces to solving a one-dimensional molecular diffusion equation.
This equation can be written in the standard diffusion form, where C is the current molar concentration of the component in the gas occupying the pores of the adsorbent. The physical meaning of the dimensionless Henry constant is that it shows how many times the volume of absorbed gas is greater than the bulk volume of the adsorbent itself. Sometimes this value, by analogy with the absorption coefficient, is called the adsorption coefficient. If we know the value of Henry's constant K g [kmol/(m³·Pa)], then at a given pressure it is easy to find the value of gas adsorption a [kmol/m³]. Multiplying the obtained adsorption value by the molar volume of the gas absorbed at the given pressure, we obtain the value of the dimensionless Henry constant, K = K g RT, where R is the universal gas constant [J/(kmol·K)], T is the temperature [K], and P is the pressure [Pa]. As can be seen from the last formula, the dimensionless Henry constant does not depend on the pressure of the adsorbed gas and is uniquely determined by the Henry constant. We assume that 1 kmol of pure sorbate is instantaneously supplied to the boundary of the adsorbent layer. The solution of this adsorption problem can be represented in the form of a one-dimensional source function [1] in the adopted notation. It is easy to verify that the total amount of sorbate in the adsorbent bed remains constant, and over time only its distribution inside the adsorbent bed changes: integrating the source function over the layer shows that the total amount of sorbate inside the adsorbent layer does not change. We call the characteristic value entering this solution, which has the dimension of length, the depth of sorbate penetration into the adsorbent grain, σ. The physical meaning of this value is fairly obvious and can be easily derived from equation (4): at this characteristic distance from the surface of the adsorbent, the sorbate concentration is e = 2.718 times lower than the sorbate concentration on the surface of the adsorbent grains. The convenience of the proposed parameter for estimating the course of adsorption processes is explained by the fact that practically all adsorption processes used in engineering practice are periodic. Therefore, the time of the adsorption process, which is needed to calculate the depth of sorbate penetration, is always known: it is the time of adsorber operation in the adsorption regime, or the time of regeneration of the adsorbent. For effective operation of the adsorption apparatus, the depth of sorbate penetration into the grain of the adsorbent must be greater than the size of the grains themselves. If the depth of sorbate penetration into the adsorbent is less than the characteristic size of the adsorbent grains, the adsorption capacity of the adsorbent will not be fully utilized, and an adsorption apparatus with such an adsorbent will not work efficiently. It is very important that the amount of sorbate absorbed in an adsorbent layer of thickness x is described by the error function. This makes it possible not to calculate the values of this integral directly, but to use the readily available tables of its values, which are usually given in manuals on mathematical statistics. Therefore, without making calculations, it can be argued that 68.3% of the sorbate is absorbed in an adsorbent layer of depth σ. In a layer of depth 2σ, 95.4% of the sorbate will be absorbed, and in a layer of depth 3σ, 99.7% of the sorbate will be absorbed, that is, practically the entire sorbate absorbed by the adsorbent.
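The 68.3%, 95.4%, and 99.7% figures quoted above are the standard normal-distribution fractions evaluated through the error function. A minimal check is sketched below; it assumes that σ plays the role of the standard deviation of the Gaussian sorbate distribution given by the source-function solution, which is consistent with, but not stated explicitly in, the text.

```python
from math import erf, sqrt

# Fraction of sorbate absorbed within a layer of depth x, assuming the absorbed
# amount follows the one-dimensional Gaussian source-function solution with
# standard deviation sigma (assumption made explicit here).
def absorbed_fraction(x, sigma):
    return erf(x / (sigma * sqrt(2.0)))

sigma = 1.0  # arbitrary units; only the ratio x / sigma matters
for n in (1, 2, 3):
    print(f"depth {n}*sigma: {absorbed_fraction(n * sigma, sigma):.3f}")
# prints 0.683, 0.954, 0.997 -- the fractions quoted in the text
```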
If the sorbate is fed not in one portion but in several, it is obvious that later portions of the sorbate will penetrate to a lesser depth. In this case, it can be argued that more than 68.3% of the sorbate absorbed by the adsorbent will be absorbed in the adsorbent layer with a depth σ. If the surface of the adsorbent grains is not flat, but convex, the proportion of the adsorbent, absorbed near the surface of the grains, will become even greater. Therefore, it can be argued that for an arbitrary character of the change in sorbate concentration, on the surface of the adsorbent grains having a spherical or cylindrical shape, more than 70% of the sorbate, absorbed by the adsorbent, will be absorbed in the surface layer with a depth σ. In a layer of depth 2σ, more than 95% of the sorbate will be absorbed, that is, practically the entire sorbate that is absorbed by this adsorbent. Let us demonstrate the application of the proposed criterion for the analysis of processes in a pressure swing adsorption unit designed to produce oxygen. To absorb nitrogen in such installations, synthetic zeolites are usually used. The isotherm of nitrogen adsorption by zeolites at pressures below 10 bars is practically linear, so we can use the data available in the literature to determine the Henry constant. The typical value of the Henry constant under nitrogen absorption by a CaA zeolite is K g =2.0·10 -2 nsm 3 /(g·mmHg) [2], or in units more convenient for us − K g =4.7·10 -6 kmol/(m 3· Pa). Hence, we find the value of the dimensionless Henry constant The diffusion coefficient of nitrogen in air can be determined from the approximate empirical formula [3]: The equivalent value of the diffusion coefficient in porous bodies taking into account the relative pore volume, the variable pore cross section, the presence of dead-end pores, etc., is found by the formula [4]: where D is the diffusion coefficient of the absorbed gas in the gas mixture being separated, [m 2 /s]; ε is the porosity or relative volume of the volume occupied by the pores; for most porous bodies permeable to gases, the porosity value is close to 0.3; ξ is the diffusion resistance coefficient, taking into account the variable pore cross section, the presence of dead-end pores, and the like. The value of this coefficient is usually determined experimentally, and is in the interval from 4 to 9 [4]. In order to determine the equivalent value of the diffusion coefficient, we use the experimental value of the equivalent diffusion coefficient of nitrogen in the pores of the zeolite at atmospheric pressure, which is reviewed in [2]. This value of the equivalent diffusion coefficient turned out to be 2.4·10 -6 m 2 /s. The value of the diffusion resistance coefficient, adjusted from this value of the equivalent diffusion coefficient, was 7.7. Using this value of the diffusion resistance coefficient and equations (6) and (8), we find the dependence of the depth of penetration of the sorbate on the pressure and time of the process. Figure 2 shows the calculated values of the penetration depth of nitrogen into the synthetic zeolite, depending on the pressure and time of the process. The above graph can be used for preliminary calculations of pressure swing adsorption plants designed to extract oxygen from air. To obtain a relatively high concentration of oxygen − more than 90%, it is necessary to ensure, as far as possible, a more complete regeneration of the adsorbent. 
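For a quick numerical check of the conversion used in this example, the dimensionless Henry constant can be recomputed from the tabulated value K g = 4.7·10⁻⁶ kmol/(m³·Pa) quoted above, using the relation K = K g RT derived earlier. The sketch below assumes a temperature near 293 K, since the text does not state the temperature used.

```python
# Dimensionless Henry constant K = K_g * R * T (obtained above by multiplying the
# adsorption K_g * P by the molar volume R * T / P of the absorbed gas).
K_g = 4.7e-6   # Henry constant for N2 on CaA zeolite, kmol/(m^3*Pa), from the text
R   = 8314.0   # universal gas constant, J/(kmol*K)
T   = 293.0    # assumed temperature, K (not specified in the text)

K = K_g * R * T
print(f"dimensionless Henry constant K ~ {K:.1f}")   # on the order of ten
```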
This is possible only if the depth of penetration of nitrogen into the zeolite is greater than two radii of the zeolite grains. Under this condition, an almost complete regeneration of the adsorbent inside the zeolite grains will be ensured; more precisely, more than 95% of the nitrogen absorbed in the zeolite will be removed from the zeolite during the regeneration process. Thus, effective operation of a pressure swing adsorption unit producing oxygen is possible only when the characteristic grain size of the adsorbent is much smaller than the penetration depth of the absorbed component, nitrogen. The graph shown in Fig. 2 allows one, knowing the grain size of the adsorbent used, to determine the period of operation of the adsorbers and, conversely, from the adsorber operating time, to determine the maximum grain size of the adsorbent suitable for this installation. It can be seen from the graph shown in Fig. 2 that, for a constant period of operation, the depth of nitrogen penetration into the zeolite increases with decreasing regeneration pressure. This suggests that vacuum regeneration of the adsorbent is preferable to regeneration at atmospheric pressure. The results obtained are in good agreement with the data published in the technical literature. The experience of operating pressure swing adsorption plants, summarized in the review [2], suggests that a relatively high oxygen concentration, more than 90%, is easily achieved in installations using microspherical adsorbent granules with a diameter of less than 1 mm. The same concentration of product oxygen is difficult to achieve using a zeolite with granules of 2-3 mm in size, and it is practically not achievable when using zeolites with a granule size of more than 3 mm. It would be a mistake to conclude that the smaller the size of the adsorbent grains, the more efficiently the pressure swing adsorption system works. While the regeneration of the adsorbent proceeds the better the smaller the adsorbent grains are, the impurity absorption process proceeds better on an adsorbent with larger grains. Indeed, for a given adsorption time, the concentration of the absorbed component on the surface of the larger grains will always be smaller, since a larger fraction of the absorbed substance is taken up inside the grains. A lower concentration of the absorbed component on the grain surface means that it is easier to obtain a higher concentration of the product gas at the outlet of the adsorber using an adsorbent with coarse grains. Therefore, for a given grain size of the adsorbent, there is an optimal period of absorption and an optimal adsorbent regeneration time. It follows from the above analysis that high-purity oxygen obtained by the pressure swing adsorption method is easier to achieve in a plant with three or more adsorbers, since only in such installations is it possible to organize the operation so that the regeneration time of the adsorbent is substantially greater than the nitrogen absorption time. The experience of operating pressure swing adsorption plants for obtaining oxygen confirms these conclusions. It is in units with three or four adsorbers and vacuum regeneration that it is possible to achieve the greatest efficiency of the process of separating oxygen from air. Conclusion A new dimensional criterion for estimating periodic processes of gas adsorption and desorption is proposed: the depth of sorbate penetration into the adsorbent.
It is shown that it can be used for preliminary calculations of oxygen production plants using the pressure swing adsorption method.
3,201.8
2018-12-11T00:00:00.000
[ "Engineering" ]
Optimal Packet Length for Free-Space Optical Communications with Average SNR Feedback Channel. In this article, a method to enhance the data rates of free-space optical (FSO) systems using packet length optimization is proposed. The average signal-to-noise ratio (ASNR) is measured at the receiver and sent back to the transmitter to optimize the packet length. In addition, the packet length is optimized to enhance the average throughput. We conclude that the packet length can be reduced at low ASNR; however, the packet length should be increased at higher values of received ASNR. For each ASNR, we also choose the optimal modulation and coding scheme (MCS) and optimal packet length to maximize the throughput. Different MCSs are investigated, such as 4-pulse amplitude modulation (PAM) with and without channel coding, 8-PAM, 16-PAM, and 32-PAM. The proposed method gives a 0.8-1.9 dB gain with respect to conventional FSO with adaptive modulation and coding (AMC) and fixed packet length. This is the first paper to deal with packet length optimization for FSO systems. Introduction FSO communications allow high data rates as compared to radio frequency (RF) communications [1][2][3][4][5]. FSO links are easy to install and do not require digging up walkways to lay fiber links. Moreover, the FSO spectrum is license free. Therefore, FSO communications have become viable due to their low implementation cost on the one hand and as a means to overcome RF spectrum scarcity on the other. However, the performance of FSO systems can be degraded by rain, mist, haze, heat, or pointing errors [1][2][3][4][5][6][7][8][9][10]. To circumvent this drawback, we can use cooperative or spatial diversity combined with error control coding [11][12][13][14][15][16][17]. In cooperative FSO, relays amplify or decode the source message before transmitting it to the destination. Cooperative protocols allow us to benefit from cooperative diversity as the same information is transmitted over independent relayed channels [11][12][13][14][15][16][17]. Cooperative communications and channel coding techniques allow us to combat the effects of atmospheric turbulence and improve the performance of FSO systems. Adaptive modulation and coding (AMC) allows us to increase data rates in free-space optical (FSO) communications. In fact, the best modulation and coding scheme (MCS) is selected for each instantaneous or average SNR. At high SNRs, we can use 32-pulse amplitude modulation (32-PAM) to increase data rates since its spectral efficiency is 5 bit/s/Hz. At low SNR, robust on-off keying (OOK) modulation with channel coding can be used. It is well known that channel coding is required to overcome channel impairments and noise. Different MCSs can be deployed for each range of SNR: i.e., MCS i is used if D i ≤ SNR < D i+1. In this paper, we will provide a methodology to derive the values of the thresholds for fixed and optimal packet length. The thresholds for optimal packet length are denoted by D i, and the thresholds for fixed packet length are denoted by S i.
In all previous studies, packet length is fixed and only the MCS is varied, where different-order PAM can be used with different channel encoders [18][19][20][21][22][23][24][25][26]. The main contribution of this paper is to optimize the packet length in order to maximize the system throughput. For each average signal-to-noise ratio (ASNR), we choose the optimal packet length and MCS to have the largest throughput. The proposed solution allows a 0.8-1.9 dB gain with respect to conventional FSO systems with AMC and constant packet length. This is the first paper to deal with packet length optimization for FSO systems. The system model is presented in Section 2. The packet error probability is analyzed in Section 3 in the absence and presence of channel coding. The optimal packet length is derived in Section 4. Adaptive packet length and MCS using ASNR is described in Section 5. Numerical results are given in Section 6. Section 7 concludes the paper. Signal Model In FSO communications, the received signal is written as follows: where x l is the transmitted signal, l is the symbol index, g is the atmospheric fading coefficient, I p is the effect of pointing errors, n l is an AWGN with normalized variance, and Γ is the average SNR. The average SNR can be expressed as follows: where E b is the transmitted energy per bit and N 0 is the power spectral density of the noise. The instantaneous photo SNR is expressed as follows. We denote the atmospheric turbulence accordingly. Using a Gamma-Gamma distribution for atmospheric turbulence and in the presence of pointing errors, the probability density function (PDF) is equal to [27], where G is Meijer's G function and b is the ratio between the equivalent beam radius at the receiver and the pointing error displacement standard deviation. For small pointing errors, i.e., b → +∞, the above distribution converges to the non-pointing-error case, where ζ = 2.064 and ε = 1.342 correspond to strong turbulence, ζ = 2.296 and ε = 1.822 correspond to moderate turbulence, and ζ = 2.902 and ε = 2.51 correspond to weak turbulence [28,29]. The cumulative distribution function (CDF) of the SNR is given by the following equation [27]: Preliminary Results The average packet error probability (PEP) can be tightly upper bounded by the following equation [30]: where f Γ (c) is the PDF of the SNR Γ and w 0 is a waterfall threshold. Equation (7) shows that the PEP for a given instantaneous SNR c ≤ w 0 is approximated by 1, whereas the PEP for a given instantaneous SNR c > w 0 is approximated by 0. The waterfall threshold can be written as follows [30]: where g(c) is the PEP for a given instantaneous SNR c. Without Channel Coding. For uncoded M-PAM, we have the following, where N is the number of data bits per packet, n d is the number of parity bits per packet, and P es is the symbol error probability (SEP) of M-PAM [31]. With Channel Coding. For convolutionally coded communications, we have the following, where d f and a d are, respectively, the free distance and the distance spectrum, and R c is the rate of the convolutional encoder. Using equation (11), we have the following expressions, where i = 1 for coded communications and i = 2 for uncoded communications. Waterfall Threshold. The Appendix shows that the waterfall threshold is written as follows, where E ≃ 0.577 is the Euler constant.
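To make the waterfall-threshold idea concrete, the sketch below numerically reproduces the two-step recipe described above: the threshold w 0 is obtained by integrating an instantaneous-SNR PEP curve g(γ) over all SNRs, and the average PEP is then approximated by the probability that the instantaneous SNR falls below w 0. The uncoded 4-PAM symbol-error expression, the photo-SNR convention SNR = Γ·h², and the omission of pointing errors are illustrative assumptions and do not reproduce the paper's exact equations.

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import erfc
from scipy.integrate import quad

# --- illustrative instantaneous-SNR PEP model (assumption, not the paper's eq. (9)/(10)) ---
M, N, n_d = 4, 400, 10                       # 4-PAM, data + parity bits per packet
def sep(snr):                                # generic M-PAM symbol-error expression
    return (M - 1) / M * erfc(np.sqrt(3.0 * snr / (M**2 - 1)))
def pep_inst(snr):                           # packet error probability at instantaneous SNR
    return 1.0 - (1.0 - sep(snr)) ** ((N + n_d) / np.log2(M))

# Waterfall threshold: w0 = integral of g(gamma) over [0, inf)
w0, _ = quad(pep_inst, 0.0, np.inf)

# Gamma-Gamma turbulence samples (moderate turbulence, zeta = 2.296, eps = 1.822),
# built as the product of two unit-mean Gamma variates; pointing errors omitted.
rng = np.random.default_rng(0)
zeta, eps, n_mc = 2.296, 1.822, 200_000
h = gamma.rvs(zeta, scale=1/zeta, size=n_mc, random_state=rng) \
  * gamma.rvs(eps, scale=1/eps, size=n_mc, random_state=rng)

asnr_db = 20.0
snr = 10**(asnr_db / 10) * h**2              # assumed photo-SNR convention
pep_waterfall = np.mean(snr <= w0)           # average PEP ~ P(SNR <= w0) = CDF at w0
pep_mc = np.mean(pep_inst(snr))              # Monte-Carlo average of g(SNR) for comparison
print(f"w0 = {w0:.2f}, PEP (waterfall) = {pep_waterfall:.3f}, PEP (MC) = {pep_mc:.3f}")
```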
Optimal Packet Length Using ASNR The number of trials of HARQ is calculated as follows [32][33][34]. The throughput in bit/s/Hz is expressed as follows: where B = 1/T s is the used bandwidth and T s is the symbol period. In equation (23), we used the expression of the PEP provided in equation (7). The optimal packet length maximizing the throughput can be obtained using the gradient algorithm. Usually, the gradient search is used to minimize a function f, and we write N i+1 = N i − μ (∂f(N = N i )/∂N). Here, we aim to maximize the throughput, which is equivalent to minimizing f = −Thr. The derivative of the throughput with respect to the packet length is given next; this expression is valid for Gamma-Gamma atmospheric turbulence with pointing errors. The used packet length is the closest solution to that obtained by the gradient search such that N + n d is a multiple of log 2 (M), so that the bits can be converted to (N + n d )/log 2 (M) M-PAM symbols. Optimal Packet Length with AMC Figure 1 shows the average throughput of FSO communications with respect to the ASNR using the optimal N obtained with the gradient search of equation (24). From Figure 1, we can deduce the following strategy. As shown in Figure 2, a similar approach is used when the packet length is fixed, N = 410 and n d = 10, with thresholds S 1 = 16.1 dB, S 2 = 23.3 dB, and S 3 = 30 dB. Figure 3 shows the proposed adaptive packet length and adaptive MCS using the ASNR Γ. The ASNR is measured at the receiver to select the optimal packet length and MCS. The ASNR is compared to thresholds D 1 , D 2 , D 3 , and D 4 to determine the optimal MCS. The packet length is computed using the gradient algorithm as explained in equations (24) and (25). Numerical Results Numerical results were obtained in the presence of both atmospheric turbulence and pointing errors. Except in Figure 4, the ratio b between the equivalent beam radius at the receiver and the pointing error displacement standard deviation is 12. Weak, moderate, and strong turbulence were considered. Figures 5 and 6 show, respectively, the average throughput of FSO communications for 4-PAM with ASNR = 15 dB (respectively, ASNR = 20 dB) with respect to the packet length N. In all results, n d = 10. We notice that the throughput can be maximized by optimizing the packet length. In fact, we can choose the value of the packet length N that maximizes the throughput. As shown in Figure 5, increasing the packet length does not always increase the throughput, and there is a certain length that maximizes the throughput. The optimal packet length increases as the average SNR increases, as shown in Figures 5 and 6. The optimal packet length is 60 (respectively, 120) for E b /N 0 = 15 dB (respectively, 20 dB). By comparing the results of Figures 5 and 6, we conclude that the packet length should be increased as the ASNR increases.
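The packet-length optimization itself can be prototyped directly from a throughput expression of the form Thr(N) = (N/(N + n d))·log 2 (M)·(1 − PEP), using the gradient-style update N i+1 = N i − μ ∂(−Thr)/∂N described above. The sketch below uses a numerical derivative and a simplified PEP(N) model with a fixed bit-error probability; both the throughput shape and the PEP model are assumptions for illustration only, not the paper's equations (23)-(25).

```python
import numpy as np

def throughput(N, n_d=10, M=4, p_bit=2e-4):
    """Illustrative throughput in bit/s/Hz: payload efficiency times spectral
    efficiency times packet success probability (assumed i.i.d. bit errors)."""
    pep = 1.0 - (1.0 - p_bit) ** (N + n_d)
    return (N / (N + n_d)) * np.log2(M) * (1.0 - pep)

# Gradient ascent on Thr (equivalently, gradient descent on f = -Thr).
N, mu, h = 100.0, 2e4, 1.0
for _ in range(200):
    grad = (throughput(N + h) - throughput(N - h)) / (2 * h)   # numerical dThr/dN
    N += mu * grad
    N = max(N, 2.0)

# Round to the closest payload size such that N + n_d is a multiple of log2(M).
k, n_d = int(np.log2(4)), 10
N_opt = round((N + n_d) / k) * k - n_d
print(f"optimal payload length ~ {N_opt} bits, throughput ~ {throughput(N_opt):.3f} bit/s/Hz")
```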
Figure 7 compares the average throughput of 4-PAM when the packet length is optimal with respect to a fixed packet length (FPL) N of 200, 400, and 1000. We notice that packet length optimization allows a 0.8 dB gain with respect to N = 200 at a throughput equal to 0.5 bit/s/Hz. The optimal packet length allows a 1.3 dB gain with respect to N = 400 at a throughput equal to 0.5 bit/s/Hz. The optimal packet length allows a 1.9 dB gain with respect to N = 1000 at a throughput equal to 0.5 bit/s/Hz. At high ASNR, N = 1000 allows higher throughput than N = 400 and N = 200. At low ASNR, N = 200 allows higher throughput than N = 400 and N = 1000. Figure 2 shows the average throughput of FSO communications with N = 410 for different modulation and coding schemes (MCSs): 4-PAM with R c = 0.5 convolutional coding; and 4-PAM, 8-PAM, 16-PAM, and 32-PAM without channel coding. If 4-PAM is used, the packet of length N + n d = 420 is converted to 420/2 = 210 symbols. If 8-PAM is used, the packet is converted to 420/3 = 140 symbols. If 16-PAM is used, the packet is converted to 420/4 = 105 symbols. If 32-PAM is used, the packet is converted to 420/5 = 84 symbols. Therefore, the packet length should be a multiple of 60. The following AMC approach is suggested: the thresholds S i correspond to the intersections of the different curves. 4-PAM transmits 2 bits per symbol, whereas 5 bits are coded in one 32-PAM symbol. As the number of bits per symbol increases, the spectral efficiency increases, but the PEP also increases since the symbols are closer to each other for a fixed transmit energy per bit. Figure 8 shows the throughput of FSO systems with adaptive modulation and coding (AMC) for fixed and adaptive packet length. The optimal packet length offers a 1 dB gain with respect to N = 410. The curve of optimal packet length with AMC is the upper bound of the 5 curves of Figure 1. The curve of fixed packet length N = 410 with AMC is the upper bound of the 5 curves of Figure 2. The effects of pointing errors are investigated in Figure 4 for b = 3 and b = 12, where b is the ratio between the equivalent beam radius at the receiver and the pointing error displacement standard deviation. We notice that the proposed optimal packet length allows us to increase the throughput even in the presence of pointing errors. When b is small, the pointing errors increase and the throughput decreases. We compare in Figure 9 the throughput derived in our paper for N = 400 to that obtained from [27]. We used the PDF of the SNR [27] to compute the PEP and throughput of 4-PAM as an integral as follows: where g(x) is the PEP for an SNR equal to x, as given in equation (12). Another contribution of the paper is to show that the PEP can be deduced from the CDF of the SNR and yields results close to the PEP computed using the above integral. Figure 9 shows that our results are very close to those of [27]. We have provided in equation (10) a tight upper bound on the PEP; therefore, it is a tight lower bound on the throughput. Conclusion In this paper, we have optimized the throughput of FSO communications. For each average SNR (ASNR), we choose the optimal packet length and MCS to have the largest throughput. The optimal packet length was obtained using the gradient search algorithm and yields a 0.8-1.9 dB gain with respect to conventional FSO. We have shown that the optimal packet length should be increased with the average SNR to obtain higher throughput. We have also identified thresholds to select the appropriate MCS for each average SNR and ensure higher data rates. The average SNR is compared to the thresholds: when the average SNR (ASNR) is less than 14.
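To close the loop on the AMC procedure discussed in this paper, the threshold-based mode selection reduces at run time to a simple lookup: the measured ASNR is compared with the switching points and the corresponding MCS (and packet length) is applied. A minimal sketch is given below; the threshold values reuse S 1-S 3 quoted earlier for the fixed-length case, while the MCS labels attached to each interval are an assumption for illustration and should be read off Figures 1 and 2 in practice.

```python
from bisect import bisect_right

# Switching thresholds in dB (S1-S3 from the fixed-length example in the text) and
# an illustrative MCS label for each ASNR interval (assumed mapping, not the paper's).
thresholds_db = [16.1, 23.3, 30.0]
mcs_table = ["coded 4-PAM (Rc = 0.5)", "uncoded 4-PAM", "8-PAM", "16-PAM"]

def select_mcs(asnr_db):
    """Pick the MCS for the interval containing the measured average SNR."""
    return mcs_table[bisect_right(thresholds_db, asnr_db)]

for asnr in (12.0, 20.0, 27.0, 33.0):
    print(f"ASNR = {asnr:4.1f} dB -> {select_mcs(asnr)}")
```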
Figure captions: Figure 1: Average throughput of FSO systems for optimal packet length and different modulations and coding schemes with moderate turbulence. Figure 2: Average throughput of FSO systems for fixed packet length and different modulations and coding schemes with moderate turbulence. Figure 5: Average throughput of FSO systems with respect to packet length for an average SNR of 15 dB. Figure 7: Average throughput of FSO systems for optimal and fixed packet length: 4-PAM modulation and moderate turbulence. Figure 8: Throughput of FSO systems with AMC using ASNR: fixed versus optimal packet length and moderate turbulence.
3,260.6
2019-05-16T00:00:00.000
[ "Computer Science", "Business" ]
Lidar studies on microphysical influences on the structure and lifetime of tropical cirrus clouds Abstract. An understanding of the role of ice crystals in cirrus clouds is significant for the radiative budget of the planet and the consequent changes in temperature. The structure and composition of cirrus are affected by the microphysical parameters and by the size and fall speed of the ice crystals inside the clouds. In this study, the structure and dynamics of tropical cirrus clouds were analysed through microphysical characterisation using the data obtained by the ground based lidar system over the tropical site Gadanki [13.5 N, 79.2 E], India, for a period of 6 years from 2005 to 2010. The observed clouds have optical depths within the range 0.02 to 1.8, lidar ratios in the 20 to 32 sr range, and depolarisation ratios varying between 0.05 and 0.45. The altitude and temperature dependence of the parameters, their interdependence and the fall velocity – effective size analysis were investigated. The dependence of the microphysical parameters on the ice fall velocity, which is critical for climate change, was also analysed. The same are compared with the CALIPSO satellite based CALIOP lidar observations. Introduction In high altitude cirrus clouds, ice crystals appear in a variety of forms and shapes, depending on the formation mechanism and the various atmospheric conditions (Liou, 1986; Rogers, 1976; Heymsfield and Platt, 1984). In cirrus clouds, at temperatures lower than about −45 °C, ice crystals form and exist mainly as nonspherical particles (Petrenko and Whitworth, 1999). Moreover, high altitude clouds in the altitude range between 8 and 20 km have an important place in sustaining the energy budget of the earth-atmosphere system (Wallace and Hobbs, 2006) by interacting with solar radiation (Stephens, 1990). Ice clouds reflect solar radiation effectively back to space, the albedo effect, and absorb thermal emission from the ground and lower atmosphere, through the greenhouse effect (Stephens, 1990). Besides their occurrence height and temperature, the microphysical conditions have fundamental implications in terms of radiative transfer (Liou and Takano, 2002). Therefore it is important to analyse the microphysical properties of cirrus clouds, including their structure, for estimating their radiative properties accurately (Cess et al., 1990). Earlier extensive experimental studies analysed the importance of cirrus clouds in the radiation budget using various techniques employing lidars (Liou, 1986; Prabhakara et al., 1988; Ramanathan et al., 1989; Sassen et al., 1989; Wang et al., 1996; Hartmann et al., 2001; Stubenrauch et al., 2010; Motty et al., 2015, 2016). Heymsfield and McFarquahar (1996) found that cirrus clouds are widely distributed in the tropics and play an important role in radiative effects. Prabhakara et al. (1993) suggested that the greenhouse effect produced by the optically thin cirrus acts as a significant factor in maintaining the warm pool. Heymsfield and Miloshevich (2003) have shown that the top of the cloud layers is composed of thin ice crystals, whereas the lower parts consist of thicker crystals. Wang et al. (1994) and Stubenrauch et al. (2010) stated that tropical cirrus clouds have a strong potential to impact the tropical and global climate. Recent research by Kulkarni et al. (2008) and Vernier et al. (2015) shows that aerosols in the tropical tropopause layer (TTL) act as ice condensation nuclei and their abundance is higher during monsoon periods.
The vertical profiles on cirrus formations over a local station can be obtained from the ground based lidar system while for global coverage observations using the Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) onboard the Cloud Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite are widely used (Dessler et al., 2009;Meenu et al., 2010).During the last few years, significant efforts are being pursued to study the cirrus characteristics using the ground based lidar system over the tropical station Gadanki (Sivakumar et al., 2003,Parameswaran et al.,2003and Krishnakumar et al.,2014), but mainly for deriving the general features and their variations in different periods of the year.Sivakumar et al. (2003) found that the various cirrus formations are closely related to the tropospheric temperature.Parameswaran et al. (2003) stated that for the cirrus covered region, the decrease in the environmental lapse rate could possibly be attributed to the cloud induced IR heating.Also according to Krishnakumar et al. (2014), a notable dependence is observed between the ice crystal morphology in the clouds and the various dynamical conditions of the prevailing atmosphere.Thus the ice composition and the microphysics of cirrus can be understood by using the lidar data on their scattering properties in deriving the quantitative values of the extinction coefficient, optical depth, depolarisation properties, lidar ratio, effective size of ice crystal and fall velocity in order to analyse the radiative effects. The major objective of this paper is to contribute towards improving the understanding of the radiative effects of cirrus clouds in terms of their microphysical parameters over the Indian tropical station, Gadanki [13.5 0 N, 79.2 0 E], using the ground based NARL lidar system and the Caliop observations on a seasonal mean basis along with their interdependence.Cirrus cloud observations on 152 days during the period 2005-2010 were analysed using the data from both the lidar techniques and the results obtained were compared with the earlier reports. Ground based lidar (GBL) The ground based data obtained from the lidar system installed at the National Atmospheric Research Laboratory (NARL), Gadanki (13.5 0 N, 79.2 0 E) for the period (2005)(2006)(2007)(2008)(2009)(2010).The further specification for the ground based lidar system are the same as employed in Motty et al. (2015Motty et al. 
( , 2016)), and the following text is derived from there with minor modification.The NARL lidar is a monostatic, biaxial duel polarisation system which transmits Nd: YAG laser pulses of wavelength 532 nm at a rate of 20 Hz (50 Hz since 2007).Each pulse has pulse energy of 550 mJ (600 mJ since 2007) and pulse duration of 7 ns.The laser beam emerging with a divergence of 0.45 mrad from the source and to reduce the divergence to < 0.1 mrad before transmitting vertically into the atmosphere, the beam is expanded using a 10X beam expander.The backscattered photons from the atmosphere are collected by two independent telescopes.One of the telescopes is designed to collect the backscattered photons from the air molecules above 30 km in the atmosphere (called Rayleigh receiver) and the other is designed to collect the backscattered photons from altitude below 30 km to study aerosols and clouds (called Mie-receiver).The Mie-receiver contains a Schmidt-Cassegrain telescope of 35.5 cm diameter with a field of view of 1 mrad.In order to eliminate the unwanted background noise from the received signal, a narrow band interference filter with wavelength centred at 532 nm and a full-width at half-maximum of 1.1 nm is placed in front of a polarizing beam-splitter.The polarizing-beam splitter splits the beam into parallel and perpendicularly polarized beams which are then detected by two identical orthogonally aligned photomultiplier tubes.The counting system consists of a Multi-Channel Scalar card and the photon counts are accumulated in 300m resolution bins and integrated for 4 min.Lidar data were collected only during the nights that are free from low-level clouds and rain. Satellite based lidar (SBL) The CALIOP onboard the CALIPSO satellite provides high-resolution observations of the vertical distribution of clouds and aerosols and their optical properties along the satellite track (Winker et al., 2007).The following description about the lidar system is obtained from Motty et al., (2015Motty et al., ( , 2016) ) with minimal changes.In order to compare the properties of cirrus clouds obtained from NARL lidar, level-2, 5 km cloud layer and cloud profile data of product version 3 obtained for a grid (5 0 N -20 0 N; 60 0 E -85 0 E) during the period of June 2006-December 2010 was used (Motty et al.,2016).CALIOP is a near-nadir viewing space-based, dual-wavelength, dual-polarization, three channel elastic backscatter lidar that transmits linearly polarized laser pulses having an average pulse energy of 110 mJ both at first (1064 nm) and second harmonic (532 nm) wavelengths of Nd: YAG laser (Winker, 2003;Hunt et al., 2009;Winker et al., 2009;Pandit et al., 2015;Motty et al.,2016 ).The backscattered signal is received by a 1m diameter telescope with parallel and perpendicularly polarized channels at 532 nm wavelengths and one parallel channel at 1064 nm (Winker et al., 2007).The present study utilizes version 3 of the level-2 cloud layer data product from CALIOP, where the data is gridded at 5 km horizontal resolution.The data used in this work have a vertical and horizontal resolution of 60 m and1000 m respectively in the altitude region between 8.2 and 20.2 km. 
In order to obtain the properties more accurately, simultaneous observations using ground-based and space-borne lidar over a tropical station with opposite viewing geometry, night-time data collected when the CALIOP overpasses nearby the Gadanki region were only considered.Because of the Caliop's repeat cycle of 16 days, four overpasses at most can be obtained in each month, with 2 each daytime and night-time overpass. During the observation period a total number of 152 data files were collected from the NARL lidar system ( Motty et al.,2016) and 116 data files (Caliop observations are available from 2006 June onwards) from the nocturnal observations obtained using CALIOP in the region selected around Gadanki.In this study the Calipso observations with cloud top altitude greater than 8 km and those with CAD score in the range of 80-100 were only considered.Also the Caliop parameters of cloud layers are obtained directly from CALIOP 5 km cloud layer data products. Method of analysis The extinction coefficient as well as the IWC depends upon the particle size distribution of the clouds.The IWC can be determined from the extinction measurements by using the relation provided by Heymsfield et al. (2005). The effective diameter provides additional information for the ice cloud radiative characteristics and can be obtained from the equation by Heymsfield et al. (2014) as a function of IWC, σ and the density of solid ice (ρ). In order to analyse the fall velocity of ice crystals in terms of effective size, an aerodynamic equation suggested by Mitchell et al., (1996) was employed. Microphysical statistics of tropical cirrus The extinction coefficient, optical depth, lidar ratio and depolarization ratios are considered to be of special importance since they are related to microphysics of the ice crystals contained on cirrus clouds.The microphysical properties as well as the mid cloud altitude and temperature of the cirrus clouds together play an important role in the radiative properties (Sunil kumar et al., 2008;Seifert et al., 2007).Here the climatology of the cirrus over the tropical station Gadanki covering 152 days of lidar observations from 2005 to 2010 were analysed.The observation period is divided into the prominent seasons of the Indian subcontinent namely winter (December,January,February), summer(March,April,May),South West monsoon(June,July,August) North East monsoon(September,October,December). Extinction coefficient The extinction coefficient (σ) provides the information on the influence of scatters on the radiation.Figure1 shows the contour plot of seasonal variation of σ derived from ground based as well as satellite based lidar observations. 
It can be seen that there is significant variation in the values of σ during the period of observations for ground based system.It is found that the σ value ranges between 5.0 ×10 -3 to 8.91×10 -6 m -1 and it is noted to be highest during summer season having the range of value 1.5×10 -4 to 1.7×10 -6 m -1 .During winter the σ value is in the range of 2.0 ×10 -6 to 2.5×10 -5 m -1 range.The σ value during South-West Monsoon period is found to be higher than North East Monsoon period having σ value ranges between 1.5×10 -6 to1.7×10 -4 m -1 and 1.0 ×10 -6 to 1.5×10 -5 m -1 respectively.The satellite based Caliop lidar results also show similar behaviour and the σ values ranges between 1.7 ×10 -4 to 7.8×10 -6 m -1 .During the summer period σ ranges from 5.8 ×10 -4 to 4.6×10 -3 m -1 and during winter from to 4.5×10 -4 to 3.6×10 -3 m -1 .Similar to ground based observation during south west monsoon, Caliop observation also shows higher σ during south west monsoon period than the north east monsoon periods having values ranges from 6.2×10 -4 to 5×10 -3 m -1 and 3.9 ×10 -4 to 3.1×10 -3 m -1 respectively. Optical Depth The statistics of seasonal variation of optical depth by the two measurement techniques are shown in figure 3. The contour plot of seasonal variation of cirrus optical depths distribution for each season is depicted in the figure 3. From fig. 3(a), 59% of the observed clouds were optically thin clouds with τ<0.3, 33% were subvisual clouds with τc < 0.03 and only 8% were thick clouds with τ>0.3.Also in the region of 14 km and above, where the cloud frequency is high, it is noted that 50% of clouds were sub-visual (τ c < 0.03) and 36% were optically thin (τ c <0.3) and the remaining are thick clouds.During the summer season τ ranges within 0.005 to 1.4 and during winter from 0.005 to 1.During the south west monsoon period, most of the observed τ values are for thin clouds ranges between 0.01 and 1, and thick clouds were observed in all other seasons.North east monsoon clouds give the τ within 0.002 to 0.87.The uncertainty related to cloud optical depth estimation from CALIPSO data is less than 50% (Winker et al., 2007).The CALIPSO observations show that these clouds have an average optical depth 0.15-1.7km.Figure 3(b) shows that from the satellite observation, 44% of the observed clouds were optically thin clouds and the rest were thick clouds.The 80% of the observed clouds during the NE monsoon period were optically thin.The seasonal optical depth distribution behaviour shows the same pattern in the range 0-1km distributed over the observation periods for the two measurement techniques.The monsoon periods have clouds with high optical depth in ground based measurement but the values are less when compared to CALIPSO measurement.This may be due to the higher convective activities at tropics during the monsoon periods. The scatter plot of variation of optical depth with mid cloud height and temperature are shown in Fig. 4( a & b).From the height dependence of optical depth (fig.4(a)), it can be inferred that optical depth increased with the cloud height in the 11-15-km range and then decreases.Generally, the cirrus clouds that exist in the tropopause region are optically thin (Nee et al., 1998). 
Lidar Ratio The correct values of extinction to back scatter ratio, commonly known as lidar ratio depicts the idea of ice crystals in the cirrus clouds and reflect the cloud characteristics in the corresponding height region (Das et al., 2009;Chen et al., 2002).According to Heymsfield and Platt (1984), in cirrus clouds at temperatures ≤-50 0 C and altitude 12 km contains different kinds of ice crystals.Figure 6 (a) shows that the lidar ratio varies with mid cloud height randomly.It was found that within 12.5-15 km, the lidar ratio values are distributed mainly between 20-28.The lower and higher clouds shows relatively smaller LR values and over 13.5-14.5range, higher LR values of the range 28-32 sr were observed.This indicates that most of the observed clouds consist of hexagonal type crystals (Sassen et al., 1992) for both observations.As in Fig. 6(b), lidar ratio appears to vary with mid cloud temperature with no clear tendency.This can be due to the variations in the ice crystal properties at the particular temperature range or may be due to the heterogeneous cloud formations over the region (Das et al.,2009).But in some cases, high LR values are observed at lower temperature region which indicates the presence of thin cirrus clouds. Depolarisation Ratio Depolarisation ratios are influenced by the inhomogeneity of ice paricles in cirrus clouds in the lidar analysis.The depolarisation values for the water droplets are smaller rather than the ice particles.Figure 7 shows the seasonal distribution of depolarisation ratio and their height dependence and the wide range of variation observed can be due to the heterogeneous nucleation process of cirrus, which results in different cloud composition or may be due to the aspect ratio changes (Das et al., 2009).For ground based measurements, the depolarisation ratio ranges within 0.04-0.45as in fig.7 Interdependence of the optical properties The dependence of microphysical characteristics of cirrus clouds over Gadanki were further investigated. Figure 9(a) depicts the scatter plot of optical depth variation with lidar ratio.The lidar ratio of thin cirrus was observed in 14-28 sr range and has the maximum value obtained at an optical depth range between 0.05 and 0.3, which agrees with Wang et al. (2013).As the optical depth increases (thick clouds), the lidar ratio decreases and is more clear in the satellite based observation.Also most of the optically thin cirrus shows lower lidar ratio values.The variation of depolarisation ratio with optical depth is shown in figure 9(b) and from that clouds having large depolarisation values are observed with smaller optical depth.From the figure, it is clear that the sub-visual clouds show higher depolarisation values.The points with lower optical depth and higher depolarisation values indicate the presence of thin cirrus clouds.In both the observation techniques, the points denoting higher optical depth and higher depolarisation ratios show the occurrence of thick cirrus clouds.The points with smaller depolarisation values are for water clouds.Some of the previous studies revealed a relation between the particle depolarization ratio and particle lidar ratio, but found differing interdependencies. According to Reichard et al. (2002), the particle depolarization ratio splits into two branches and with increasing particle lidar ratio values, the difference of these two branches decreases.According to Chen et al. 
(2002), the particle depolarization ratio splits into two branches, but with increasing particle lidar ratio values, the difference of these two branches increases.Later by Sakai et al. (2006), the values for particle depolarization and particle lidar ratio are enclosed between two vertices and are possible to distinguish the phase and orientation of randomly oriented ice crystals by measuring the particle depolarization and particle lidar ratio.crystals, 30% were hexagonal thick plate crystals and 18.8% were long column type.From the satellite observation, 52.8% of the cloud crystals were long column, 42.1% were thin plates and only 5.2% were thick plate type crystals.The probable size and shape distribution of ice crystals in the cirrus clouds can be further analysed clearly by calculating the effective diameter and fall velocity analysis. Effective diameter and fall velocity analysis Das et al. ( 2010) obtained the possible ice crystal formations with respect to the fall velocityeffective diameter relation derived by Mitchell (1996).Here the average value of effective diameter is obtained using the equation by Heymsfield et al. (2014) as a function of IWC, σ and the density of solid ice (ρ) and according to the diameter obtained, the possible shapes are selected as suggested by Mitchell (1996).The obtained effective diameters were estimated by averaging the diameter obtained during each season for the observed year.In the present observation, ice crystals of various shapes, mainly hexagonal plates (HP) (15D 100 and 100D 3000) and hexagonal columns (HC) (100D 300 and D 300) have been analyzed and the fall velocity is obtained using their corresponding equations obtained by Das et al.(2010).These obtained ice crystal shapes are similar to the previous classification as by Sassen et al. (1992) and are commonly found in cirrus clouds.The seasonal average of the obtained parameters by the two measurement techniques are summarized as in Table 1and 2. Several earlier studies reported the size of most of the cirrus ice crystal to be about several hundred microns and falls with velocity between 30 and 100 cm/sec (Mitchell, 1996;Heymsfield, 2003;Deng and Mace, 2008;and Das, 2010).The results of the present study are in agreement with the previously obtained values. The effective size of ice crystals in cirrus usually increases with mid cloud temperature as in figure 11 indicates the presence of smaller sized ice crystals at lower temperature region.The lower temperature has been suggested for homogeneous ice nucleation (Kärcher and Lohmann, 2002).The results obtained are similar for both the The fall velocity, the rate at which an ice crystal falls through a cloud, is dependent on its mass, size and shape and thus can be used for effective size analysis.From the scatter plot of fall velocity variation with the mid cloud temperature as in figure12, the fall velocity is found to be broadly distributed and show an increasing tendency with temperature.This is in agreement with the results obtained by Das et al. 
(2010) observed over Chung-Li using the lidar measurements.As depicted in figure 12(b), the fall velocity varies linearly with the effective size of cirrus crystal.Figure 12(c) shows the dependence of fall velocity on the cirrus optical depth measurements.For larger ice crystals, the chances for homogeneous nucleation are lesser and the sedimentation rate increases.The smaller sized crystals having low fall velocities undergo homogeneous nucleation processes and thus the cloud remains for longer in the upper atmosphere.Also the water vapour in that region is higher and it acts as a blanket and prevents long-wave radiation being emitted by the Earth from escaping into space, enhancing warming.The effective size distribution of ice particles which decides the lifetime of the cirrus clouds has strong impacts on the cloud radiative forcing due to its influence on cirrus cloud coverage.Here, the fall velocity of cirrus ice crystals observed are relatively high, which may indicate that the ice particles will fall out rapidly.Therefore, it can be said that the lifetime of the cirrus clouds in the tropics will dissipate faster and causes LW dominance and thus have a significant effect on the radiation budget. Conclusion The important microphysical properties such as extinction coefficient, optical depth, lidar ratio and depolarisation ratio for cirrus clouds obtained during 2005-2010 were analysed using the observations made by the ground based lidar system at NARL Gadanki (13.5 0 N, 79.2 0 E) and are compared with the available night time observations from the CALIOP on board the CALIPSO satellite.The dependence between these quantities and its relationship with mid cloud height and mid cloud temperature were also obtained.The radiative effects of obtained cirrus clouds were discussed by the effective sizefall velocity analysis of the cirrus ice particles. The inter comparison of measurements by the two techniques showed that the satellite measurements match very closely with the ground based data.However, small differences were noticed since the observations were not being exactly obtained at the same place and the sampling frequencies were also different.Some of the generally observed results from both observation techniques are listed which are as follows:  The cirrus extinction coefficient ranges from 5.0 ×10 -3 to 8.91×10 -6 m -1 and 1.7 ×10 -4 to 7.8×10 -6 m -1 for ground based and satellite based measurements respectively and it decreases with the temperature.  Cirrus optical depth varies from 0.02 to 1.8 ranges with most frequent occurrence being in the range 0.04 to 0.3.It was noted that optical depth increases with temperature of the mid cloud within the range -50 0 C to -70 0 C and decrease with temperature outside this range.Within 11-15km range, the optical depth increases with height and decreases for higher altitudes.  The lidar ratio varies with mid cloud height randomly, and within 10 to 20 km the lidar ratio values are distributed in the range 20 to 32 sr.The lower and higher clouds show relatively smaller LR values and within 13.5 to 14.5 range higher LR values of the range 28 to 32 sr were observed.Lidar ratio varies with the mid cloud temperature with no clear trend. Depolarization ratio was observed to increase from 0.05 to 0.45 with height between 10 and 16 km and tend to decrease with the temperature varying from −80 to −30 0 C.  Most of the optically thin cirrus shows lower lidar ratio values and are scattered among 14-28 sr range. 
As the optical depth increases (thick clouds), the lidar ratio decreases.  The clouds having large depolarisation values are those with lower optical depth. Thin cirrus have lower optical depth (τ ≤ 0.3) and higher depolarisation values (0.3 to 0.35), whereas higher optical depth (τ > 0.3) together with higher depolarisation ratios (0.35 to 0.45) shows the occurrence of thick cirrus clouds; the smaller depolarisation values (below 0.1) are for water clouds.  The presented results are used to understand possible hexagonal crystal formations inside clouds using the depolarisation ratio-lidar ratio analysis. The ground based measurements give the occurrence of more hexagonal thin plate crystals (51.20 %) during the observation period, whereas the satellite based measurements show the presence of a higher long column crystal formation (52.80 %). The detailed study on the influence of the possible hexagonal crystal formation on the characterisation of radiative properties can be pursued in future.  The results obtained on the various microphysical properties of cirrus and their interrelationship, along with a comparative study of GBL and SBL data, are expected to be useful in the radiative budget analysis. Fernald's inversion method is used to derive the extinction coefficient for the ground based NARL lidar data in the region up to 20 km. The methods are the same as employed in Motty et al. (2016). Cloud optical depth (COD, τc) is calculated by integrating the extinction coefficient from cloud base to its top and is an important parameter that depends on the composition and thickness of the cloud. The following estimations of τ are obtained by Sassen et al. (1992) from the visual appearance of the clouds: τc ≤ 0.03 for sub-visible, 0.03 < τc ≤ 0.3 for thin, and τc > 0.3 for dense cirrus clouds. The depolarisation ratio δ(r) within the cloud indicates the phase of the cloud and thereby helps to identify the type of ice crystals present within the cloud. Most tropical cirrus clouds are composed of non-spherical ice crystals and will cause significant depolarisation. The lidar ratio (sc) depends on the structure and properties of the ice crystals within the cirrus and as such is range dependent. The range dependent lidar ratio is obtained using the method suggested by Satyanaryana et al. (2010).
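The cloud optical depth and its visual-appearance class follow directly from the retrieved extinction profile, as described above: τc is the integral of the extinction coefficient from cloud base to cloud top, and the Sassen et al. (1992) thresholds (0.03 and 0.3) separate sub-visible, thin, and dense cirrus. A minimal sketch with a synthetic extinction profile is given below; the profile values and bin spacing are illustrative, not retrieved data.

```python
import numpy as np

def cloud_optical_depth(z_m, sigma_m1, z_base_m, z_top_m):
    """Integrate the extinction coefficient [1/m] over altitude [m] between
    cloud base and cloud top (trapezoidal rule) to obtain tau_c."""
    mask = (z_m >= z_base_m) & (z_m <= z_top_m)
    zs, ss = z_m[mask], sigma_m1[mask]
    return float(np.sum(0.5 * (ss[1:] + ss[:-1]) * np.diff(zs)))

def classify_cirrus(tau_c):
    """Sassen et al. (1992) visual-appearance classes."""
    if tau_c <= 0.03:
        return "sub-visible"
    return "thin" if tau_c <= 0.3 else "dense"

# Synthetic example: 300 m range bins with a Gaussian-shaped extinction layer
# peaking at 5e-5 1/m near 14 km (illustrative numbers only).
z = np.arange(8_000.0, 20_000.0, 300.0)
sigma = 5e-5 * np.exp(-((z - 14_000.0) / 800.0) ** 2)
tau = cloud_optical_depth(z, sigma, 12_000.0, 16_000.0)
print(f"tau_c = {tau:.3f} -> {classify_cirrus(tau)} cirrus")
```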
Figure 2 ( Figure 2(a & b) shows the trend of the variation of average extinction coefficient with mid cloud height and temperature from both observation techniques.In both cases the observed σ value does not show any clear altitude dependence, but from the figure (2a), it is clear that most of the clouds formations in the altitude range 13-15 km range from both observation techniques.The figure (2b) shows the temperature dependence of  values and the points represent the average value of with mid cloud temperature and it can be seen that decreases with the temperature and the most favourable cirrus occurring temperature ranges between -70 to -55 0 C. Figure 4(b) shows the temperature dependence of optical depth and from that most of the thin clouds are having low temperature (below -68 0 C) and cloud formations are widely observed in the range -70 0 C to -55 0 C.As the temperature increases the optical depth also increases.The clouds observed in the tropical tropopause region are having an average temperature of ≈ 75 0 C, are normally Atmos.Chem.Phys.Discuss., doi:10.5194/acp-2016-456,2016 Manuscript under review for journal Atmos.Chem.Phys.Published: June 2016 c Author(s) 2016.CC-BY 3.0 License.optically thin (Nee et al., 1998).Also it demonstrates a slightly positive dependence with the mid cloud temperature which disagree with Wang et al. (2013) but in agreement with results of Sassen et al. (1992) which shows a positive dependence with the mid cloud temperature. Figure 5 Figure5shows the contour plot of seasonal distribution of lidar ratio during the period of observation.Above 12 km, the lidar ratio values are mainly distributed between 20 to 30 sr ranges.During monsoon periods, LR shows relatively higher values which varies between 20-27sr.By the satellite based observation, the LR values varies between 24-32 sr and during NE monsoon periods, all the observed values are in the range 25-26 sr.The calculated lidar ratio can be compared with the previously reported results.Grund and Eloranta (1990) reported the LR values about15-50 sr using high spectral resolution lidar during the FIRE field .Sassen and Comstock (2001) calculated the mean mid-latitude LR as ~24±38 sr with a median value of ~27 sr using LIRAD.Pace et al. (2002) found the mean LR as ~19±33 sr for the equatorial cirrus.Also Seifert et al. (2007) derived the mean LR during monsoon over tropics and derived a mean value of 32±10 sr.Giannakaki et al.(2007) derived LR as 28±17 sr for mid-latitude cirrus .Das et al.(2009) determined the mean LR value of 23±16 sr by using the simulation of lidar backscatter signals.Statistically; the above results are in agreement with the present study. (a) and 0.3-0.5 for satellite based measurement techniques as in fig.7(b) and in both cases, maximum values are shown during summer period.Between 11-16km range, Atmos.Chem.Phys.Discuss., doi:10.5194/acp-2016-456,2016 Manuscript under review for journal Atmos.Chem.Phys.Published: June 2016 c Author(s) 2016.CC-BY 3.0 License. 
the higher depolarisation values are observed (Motty et al., 2016) and above 16 km, smaller values < 0.2 are observed may be due to the presence of super cooled or mixed phase of water (Sassen, 1995) and the horizontally oriented ice crystals increases the same (Platt et al., 1978).The NARL Lidar observation showed that during winter periods depolarisation varies between 0.05-0.3and for monsoon periods it is 0.04-0.3.The highest depolarisation values are observed during the summer periods.The depolarisation variation by the satellite observation varies in the range 0.3-0.5 and for summer all observed values were higher and show relatively lower values during NE monsoon periods.During monsoon, since the cloud condensation nuclei are relatively higher the water content is more in the clouds. Figure 8 ( Figure8(a) presents the scatter plot of the dependence of average value of depolarization ratio on the mid cloud height and the values are found to be scattered for both the measurement techniques; but an increasing tendency is observed between 10-16 km.In most of the cases, depolarization ratio values ranges within 0.05 to 0.45. Figure 8 ( Figure 8(b) shows depolarisation ratio plotted as a function of mid cloud temperature and most of the ice clouds having depolarisation values above 0.3 show an increasing tendency with decreasing temperature within −80 to −30 •C (Motty et al.,2016).For water clouds, depolarisation values ranges between 0-0.09 and it doesn't show any temperature dependence.Since at higher altitude region, the cooler cirrus clouds are having bigger sized ice crystals results higher depolarisation values and are in agreement with by Chen et al. (2002). Figure9 Figure9(c) shows the relation between depolarisation ratio and lidar ratio.For the present study, it does not show any particular dependence.According to the hexagonal crystal classification bySassen et al. (1992), the observed data can be classified as in fig.10.Here for the ground based observation, 51.20% of the cloud containing hexagonal thin plate ice ground based and satellites based observation techniques and are in agreement with Das et al. (2010), Heymsfield et al. (2000), and Chen et al. (1997). Atmos.Chem.Phys.Discuss., doi:10.5194/acp-2016-456,2016 Manuscript under review for journal Atmos.Chem.Phys.Published: June 2016 c Author(s) 2016.CC-BY 3.0 License. The effective size of the ice crystal is lower at colder temperature and thus showing the large size at the warmer temperature. The fall-velocity of the ice crystals increases with temperature, indicating the influence of particle growth in cirrus coverage.The higher values of ice crystal fall velocity in cirrus clouds observed indicate rapid fall out of the ice particles which causes the faster dissipation of cirrus clouds over tropics causes the LW dominance and thus have a significant effect on the radiation budget. Figure 1 : Figure 1: Contour plot of seasonal variation of Extinction coefficient for (a) ground based lidar (GBL) and (b) satellite based lidar (SBL) observations. Figure 2 : Figure 2: Variation of extinction coefficient according to mid cloud height and mid cloud temperature based on GBL and SBL measurement techniques. Figure 3 : Figure 3: Contour plot of seasonal variation of optical depth based on GBL and SBL observations. Figure 4 : Figure 4: Variation of optical depth according to mid cloud height and mid cloud temperature based on GBL and SBL measurement techniques. 
Figure 5: Contour plot of seasonal variation of lidar ratio using GBL and SBL observations.
Figure 6: Variation of lidar ratio according to mid cloud height and mid cloud temperature based on GBL and SBL measurement techniques.
Figure 8: Variation of depolarisation ratio according to mid cloud height and mid cloud temperature based on GBL and SBL measurement techniques.
Figure 9: The interdependence of (a) optical depth, (b) lidar ratio and (c) depolarisation ratio based on GBL and SBL measurement techniques.
Figure 11: Scatter plot of the dependence of the effective size of ice particles in cirrus clouds on mid cloud temperature using GBL and SBL.
7,652.4
2016-06-20T00:00:00.000
[ "Environmental Science", "Physics" ]
Bifurcation values for a family of planar vector fields of degree five

We study the number of limit cycles and the bifurcation diagram in the Poincaré sphere of a one-parameter family of planar differential equations of degree five $\dot {\bf x}=X_b({\bf x})$ which has already been considered in previous papers. We prove that there is a value $b^*>0$ such that the limit cycle exists only when $b\in(0,b^*)$ and that it is unique and hyperbolic, by using a rational Dulac function. Moreover we provide an interval of length 27/1000 where $b^*$ lies. As far as we know the tools used to determine this interval are new and are based on the construction of algebraic curves without contact for the flow of the differential equation. These curves are obtained using analytic information about the separatrices of the infinite critical points of the vector field. To prove that the Bendixson-Dulac Theorem works we develop a method for studying whether one-parameter families of polynomials in two variables do not vanish, based on the computation of the so-called double discriminant.

Introduction and main results
Consider the one-parameter family of quintic differential systems

ẋ = y,  ẏ = −x + (a − x²)(y + y³),  a ∈ R.  (1)

Notice that without the term y³, (1) coincides with the famous van der Pol system. This family was studied in [24] and the authors concluded that it has only two bifurcation values, 0 and a*, and exactly four different global phase portraits on the Poincaré disc. Moreover, they concluded that there exists a* ∈ (0, (9π²/16)^(1/3)) ≈ (0, 1.77) such that the system has limit cycles only when 0 < a < a*, and that when the limit cycle exists it is unique and hyperbolic. Later, it was pointed out in [11] that the proof of the uniqueness of the limit cycle had a gap and a new proof was presented. System (1) has no periodic orbits when a ≤ 0 because in this case the function x² + y² is a global Lyapunov function. Thus, from now on, we restrict our attention to the case a > 0 and for convenience we write a = b², with b > 0. That is, we consider the system

ẋ = y,  ẏ = −x + (b² − x²)(y + y³).  (2)

Therefore the above family has limit cycles if and only if b ∈ (0, b*) with b* = √a* and b* ∈ (0, (9π²/16)^(1/6)) ≈ (0, 1.33). Following [24] we also know that the value b = 0 corresponds to a Hopf bifurcation and the value b* to the disappearance of the limit cycle in an unbounded polycycle. By using numerical methods it is not difficult to approximate the value b*. Nevertheless, as far as we know, there are no analytical tools to obtain the value b*. This is the main goal of this paper. We have succeeded in finding an interval of length 0.027 containing b*, and during our study we have also realized that a bifurcation value was missed in the previous studies. Our main result is:

Theorem 1.1. Consider system (2). Then there exist two positive numbers b̃ and b* such that: (a) It has a limit cycle if and only if 0 < b < b*. Moreover, when it exists, it is unique, hyperbolic and stable.

The phase portraits missed in [24] are (ii) and (iii) of Figure 1. The key steps in our proof of Theorem 1.1 are the following:
• Give analytic asymptotic expansions of the separatrices of the critical points at infinity, see Section 2.
• Use these expansions to construct explicit piecewise rational curves, and prove that they are without contact for the flow given by (2). These curves allow us to control the global relative positions of the separatrices of the infinite critical points, see Section 5.
Figure 1.
Phase portraits of systems (1) and (2). When a ≥ 0, then b = √ a. • Provide an alternative proof of the uniqueness and hyperbolicity of the limit cycle, which is based in the construction of an explicit rational Dulac function, see Section 4. By solving numerically the differential equations we can approach the bifurcation values given in the theorem, see Remark 2.6. We have obtained that b ≈ 0.8058459066, b * ≈ 0.8062901027 and then b * −b ≈ 0.000444. As we have said the main goal of this paper is to get an analytic approach to the more relevant value b * , because it corresponds to the disappearance of the limit cycle. Although all our efforts have been focused on system (2), the tools that we introduce in this work can be applied to other families of polynomial vector fields and they can provide an analytic control of the bifurcation values for these families. As we will see, our approach is not totally algorithmic and following it we do not know how to improve the interval presented in Theorem 1.1 for the valuesb and b * . One of the main computational difficulties that we have found has been to prove that certain polynomials in x, y and b, with high degree, do not vanish on some given regions. To treat this question, in Appendix II we propose a general method that uses the so called double discriminant and that we believe that can be useful in other settings, see for instance [1,22]. In our context this discriminant turns out to be a huge polynomial in b 2 with rational coefficients. In particular we need to control, on a given interval with rational extremes, how many reals roots has a polynomial of degree 965, with enormous rational coefficients. Although Sturm algorithm theoretically works, in practical our computers can not deal with this problem using it. Fortunately we can utilize a kind of bisection procedure based on the Descartes rule ( [12]) to overcome this difficulty, see Appendix I. Structure at infinity As usual, for studying the behavior of the solutions at infinity of system (2) we use the Poincaré compactification. That is, we will use the transformations (x, y) = (1/z, u/z) and (x, y) = (v/z, 1/z), with a suitable change of time to transform system (2) into two new polynomial systems, one in the (u, z)-plane and another one in the (v, z)-plane respectively (see [2] for details). Then, for understanding the behavior of the solutions of (2) near infinity we will study the structure of the critical points of the transformed systems which are localized on the line z = 0. Recall that these points are the critical points at infinity of system (2) and their separatrices play a key role for knowing the bifurcation diagram of the system. In fact, it follows from the works of Markus [16] and Newmann [17] that it suffices to know the behavior of these separatrices, the type of finite critical points and the number and type of periodic orbits to know the phase portraits of the system. We obtain the following result: Figure 2. Separatrices at infinity for system (2). Theorem 2.1. System (2) has six separatrices at infinity, which we denote by S 1 , S 2 , S 3 , S ′ 1 , S ′ 2 and S ′ 3 , see Figure 2. Moreover: (i) Each S ′ k is the image of S k under the transformation (x, y) → (−x, −y). (ii) The separatrices S 2 and S 3 near infinity are contained in the curve is an analytic function at the origin that satisfies In particular, S 2 corresponds to x b and S 3 to x b. 
(iii) The separatrix S 1 near infinity is contained in the curve {y − ϕ(x) = 0} where ϕ(x) =φ(1/x) andφ is an analytic function at the origin that satisfiesφ Remark 2.2. In the statements (ii) and (iii) of Theorem 2.1 the Taylor expansions of the functionsφ andφ can be obtained up to any given order. In fact, in Section 5 we will use the approximation ofφ until order 16. As a consequence of the above theorem we have the following result: Corollary 2.3. All the possible relative positions of the separatrices of system (2) in the Poincaré disc are given in Figure 3. To prove the above theorem we need some preliminary lemmas. Lemma 2.4. By using the transformation (x, y) = (1/z, u/z) and the change of time dt/dτ = 1/z 4 system (2) is transformed into the system where the prime denotes the derivative respect to τ . The origin is the unique critical point of (5) and it is a saddle. Moreover the stable manifold is the u-axis, the unstable manifold, S 1 , is locally contained in the curve {u − ψ(z) = 0}, where ψ(z) is an analytic function at the origin that satisfies see Figure 4. Proof. From the expression of (5) it is clear that the origin is its unique critical point. For determining its structure we will use the directional blow-up since the linear part of the system at this point vanishes identically. The u-directional blow-up is given by the transformation u = u, q = z/u; and by using the change of time dt/dτ = u 2 , system (5) becomes This system has a unique critical point at origin and it is a saddle with eigenvalues ±1. The z-directional blow-up is given by the transformation r = u/z, z = z. Doing the change of time dt/dτ = −z 2 , system (5) becomes This system has a unique critical point at the origin which is semi-hyperbolic. We will use the results of [2, Theorem 65] to determine its type. By applying the linear change of variables r = −ξ + η, z = ξ system (8) is transformed into . Therefore from [2, Theorem 65] we know that the origin is a semi-hyperbolic saddle. Moreover, its stable manifold is the η-axis and its unstable manifold is given by In the plane (r, z) the local expression of this manifold is Finally, in the (u, z)-plane the unstable manifold is contained in the curve (6) and from the analysis of phase portraits of systems (7) and (8) we obtain that the local phase portrait of system (5) is the one given in Figure 4. Lemma 2.5. By using the transformation (x, y) = (v/z, 1/z) and the change of where the prime denotes the derivative respect to τ . System (9) has a unique critical point at the origin and its local phase portrait is the one showed in Figure 5. Moreover, the separatrices S 2 and S 3 are locally contained in the curve {v−g(U) = 0} where U = z/v −1/b and g(U) is an analytic function at the origin that satisfies Figure 5. Topological local phase portrait of system (9). All the solutions are tangent to the v-axis but for aesthetical reasons this fact is not showed in the figure. Proof. From the expression of system (9) it is clear that the origin is its unique critical point. As in Lemma 2.4 we will use the directional blow-up technique to determine its structure since the linear part of the system at this point is identically zero. It is well-known, see [2], that since at the origin , all the solution, arriving or leaving the origin have to be tangent to z = 0. So it suffices to consider the v-directional blow-up given by the transformation This system has not critical points. 
However, by studying the vector field on the s-axis we will obtain relevant information for knowing the phase portrait of system (9). If s = 0 thenv = −1 andṡ = 0, that is, the v axis is invariant. If v = 0 theṅ v = −1+b 2 s 2 andṡ = s 5 , this implies thatv = 0 if s = ±1/b. In addition, a simple computation shows thatv > 0 at the points (0, ±1/b). Therefore the solutions through these points are as it is showed in Figure 6.(a), and by the continuity of solutions with respect to initial conditions, we have that the phase portrait of system (9), close to these points, is as it is showed in Figure 6.(b). Then by using the transformation (v, z) = (v, sv) and the phase portrait showed in Figure 6.(b) we can obtain the phase portrait of system (9). Recall that the mapping swaps the second and the third quadrants in the v-directional blow-up. In addition, taking into account the change of time dt/dτ = −v 3 it follows that the vector field in the first and fourth quadrant of the plane (v, z) has the opposite direction to the showed in the (v, s)-plane. Therefore the local phase portrait of (9) is the showed in Figure 5. To show that the separatrices S 2 and S 3 are contained in the curve (10) we proceed as follows. First, we will obtain the curve that contains the solution through the point (0, 1/b) in the plane (v, s). Second, by using the transformation (v, z) = (v, sv) we will obtain the corresponding curve in the (v, z)-plane and we will show that such curve is exactly the curve given by (10). Since the curve {v − g(s) = 0} is invariant then ∇(v − g(s)), X = 0 at all the points of {v − g(s) = 0}, where X is the vector field associated to system (11). Thus, we have a function, ∇(v − g(s)), X , for which all its coefficients have to be zero. From this observation we obtain linear recurrent equations in the coefficients, . Simple computations show that the first 3 terms of the Taylor series of g(s) are: Thus, in the plane (v, z), the curve corresponding Finally, if U = z/v − 1/b, we obtain (10). Remark 2.6. The proof of the above lemma gives a natural way for finding a numerical approximation of the value b * . Notice that in the coordinates (v, s) the point (0, 1/b) corresponds to both separatrices S 2 and S 3 . Since it is a regular point we can start our numerical method (we use a Taylor method) without initial errors and then follow the flow of the system, both forward and backward for given fixed times, say t + > 0 and t − < 0. We arrive to the points (v ± , s ± ) with s ± = 0 for t = t ± , respectively. These two points have associated two different points (x ± , y ± ) in the plane (x, y), because of the transformation (v, s) = (x/y, 1/x). Now, we integrate numerically the system (2) with initial conditions (x ± , y ± ) to continue obtaining approximations of the separatrices S 2 and S 3 , respectively. The next step is to compare the points of intersectionx + =x + (b) < 0 andx − =x − (b) > 0 of these approximations with the x-axis. We consider the function b → Π(b) := x + (b) +x − (b) and we use the bisection method to find one approximate zero of Π. Note that if Π(b) = 0 then S ′ 2 = S 3 and by the symmetry of the system S ′ 3 = S 2 , and therefore b * =b. Taking b 0 = 0.8062901027, t + = 0.05 and t − = −0.5 we obtain thatx Following a similar procedure, but now using Lemma 2.4 to have an initial condition almost on S 1 , we get thatb ≈ 0.8058459066. (ii). 
From (10) and by using the change of variables (v, z) = (x/y, 1/y) we obtain that the separatrices S 2 and S 3 are contained in the curve We can write the function φ(x) as where The function φ 1 (x) is analytical at x = b and it is not difficult to see that it has the following Taylor expansion Then (14) can be written as Hence from (13) and taking φ(x) =φ(x − b)/(x − b) 2 we complete the proof. The proof of (iii) follows by applying the previous ideas, considering the expression given by (6) and the change of variables (u, z) = (y/x, 1/x). Proof of Theorem 1.1 We start proving a preliminary result that is a consequence of some general properties of semi-complete family of rotated vector fields with respect one parameter, SCFRVF for short, see [7,18]. Proposition 3.1. Consider system (2) and assume that for b =b > 0 it has no limit cycles. Then there exists 0 < b * ≤b such that the system has limit cycles if and only if b ∈ (0, b * ). Moreover, for b = b * its phase portrait is like (iv) in Theorem 1.1 and when b > b * it is like (v) in Theorem 1.1. Proof. It is easy to see that the system has a limit cycle for b 0, which appears from the origin through an Andronov-Hopf bifurcation. If we denote by X b (x, y) = (P b (x, y), Q b (x, y)) the vector field associated to (2) then This means that system (2) is a SCFRVF with respect to the parameter b 2 . We will recall two properties of SCFRVF. The first one is the so called nonintersection property. It asserts that if γ 1 and γ 2 are limit cycles corresponding to different values of b, then γ 1 ∩ γ 2 = ∅. The second one is called planar termination principle: [19,20] if varying the parameter we follow with continuity a limit cycle generated from a critical point p, we get that the union of all the limit cycles covers a 1-connected open set U, whose boundaries are p and a cycle of separatrices of X b . The corners of this cycle of separatrices are finite or infinite critical points of X b . Since in our case X b only has the origin as a finite critical point we get that U has to be unbounded. Notice that in this definition, when a limit cycle goes to a semistable limit cycle then we continue the other limit cycle that has collided with it. This limit cycle has to exist, again by the properties of SCFRVF. If for some value of b =b > 0 the system has no limit cycle it means that the limit cycle starting at the origin for b = 0, has disappeared for some b * , 0 < b * ≤b covering the whole set U. Since U fills from the origin until infinity, from the non intersection property, the limit cycle cannot either exist for b ≥ b * , as we wanted to prove. Since for b > 0 the origin is a repellor, by Corollary 2.3 we know by the Poincaré-Bendixson Theorem that the phase portraits (i),(ii) and (iii) in Figure 1 have at least one limit cycle. Then, the phase portraits for b ≥ b * have to be like (iv) or (v) in the same figure. Since the phase portrait (iv) is the only one having a cycle of separatrices it corresponds to b = b * . Again by the properties of SCFRVF, the phase portrait (iv) does not appear again for b > b * . Hence, for b > b * the phase portrait has to be like (v) and the proposition follows. Remark 3.2. In Lemma 4.3 we will give a simple proof that when b = 1 system (2) has no limit cycles, based on the fact that for this value of the parameter it has the hyperbola xy + 1 = 0 invariant by the flow. From the above proposition it follows that b * < 1. This result already improves the upper bound of b * , given in [24], 6 9π 2 /16 ≈ 1.33. 
Theorem 1.1 improves again this upper bound, but as we will see, the proof is much more involved. Proof of Theorem 1.1. Recall that for a ≤ 0 the function V (x, y) = x 2 + y 2 is a global Lyapunov function for system (1) and therefore the origin is global asymptotically stable. Then it is easy to see that its phase portrait is like (o) in Figure 1. To prove the theorem we list some of the key points that we will use and that will be proved in the forthcoming sections: (R 1 ) System (2) has at most one limit cycle for b ∈ (0, 0.817] and when it exists it is hyperbolic and attractor, see Section 4. (R 2 ) System (2) has an odd number of limit cycles, with multiplicities taken into account, when b ≤ 0.79 and the configuration of its separatrices is like (i) in Figure 3, see Proposition 5.1. (R 3 ) System (2) has an even number of limit cycles, with multiplicities taken into account, when b = 0.817 and the configuration of its separatrices is like (v) in Figure 3, see again Proposition 5.1. The theorem for b ≥ b * is a consequence of Proposition 3.1. Notice that again by this proposition and (R 3 ), b * < 0.817. Hence, the limit cycles can exist only when b ∈ (0, b * ) ⊂ (0, 0.817] and by (R 1 ) when they exist then there is only one and it is hyperbolic and attractor. As a consequence of (R 2 ) and the uniqueness and hyperbolicity of the limit cycle we have that the phase portrait for b ≤ 0.79 is like (i) in Figure 1. To study the phase portraits for the remaining values of b, that is b ∈ (0.79, b * ), first notice that all of them have exactly one limit cycle, which is hyperbolic and stable. So it only remains to know the behavior of the infinite separatrices. We denote by x 2 (b) and x ′ 3 (b) the points of intersection of the separatrices S 2 and S ′ 3 of system (2) with the x-axis (when they exist), see also the forthcoming Figure 13. The properties of the SCFRVF imply that x 2 (b) is monotonous increasing and that x ′ 3 (b) is monotonous decreasing. Hence for b b * the phase portrait of the system is like (iii) in Figure 1. Since we already know that for b = 0.79 the phase portrait is like (i), it should exists at least one value, say b =b, with phase portrait (ii). Since for SCFRVF the solution for a given value of b, say b =b, becomes a curve without contact for the system when b =b, we have that the phase portraits corresponding to heteroclinic orbits, that is (ii) and (iv) of Figure 1, only appear for a single value of b (in this caseb and b * , respectively). Therefore, the theorem follows. Uniqueness of the limit cycle for b ≤ 817/1000 In this section we will prove the uniqueness of the limit cycle of system (2) when b ≤ 0.817. The idea of the proof is to find a suitable rational Dulac function for applying the following generalization of Bendixson-Dulac criterion. Proposition 4.1. Consider the C 1 -differential system and let U ⊂ R 2 be an open region with boundary formed by finitely many algebraic curves. Assume that: (I) There exists a rational function V (x, y) such that does not change sign on U. Moreover M only vanishes on points, or curves that are not invariant by the flow of (15). (II) All the connected components of U \ {V = 0}, except perhaps one, say U, are simple connected. The component U, if exists, is 1-connected. Then the system has at most one limit cycle in U and when it exists is hyperbolic and it is contained in U. Moreover its stability is given by the sign of −V M on U. 
The above statement is a simplified version of the one given in [9] adapted to our interests. Similar results can be seen in [4,10,14,25]. To give an idea of how we have found the function V that we will use in our proof we will first study the van der Pol system and then the uniqueness in our system when b ≤ 0.615. Although we will not use these two results, we believe that to start studying them helps to a better understanding of our approach. 4.1. The van der Pol system. Consider the Van der Pol system ẋ = y, Due to the expression of the above family of differential equations, in order to apply Proposition 4.1, it is natural to start considering functions of the form For this type of functions, the corresponding M is a polynomial of degree 2 in y, with coefficients being functions of x. In particular the coefficient of Next, fixing f 2 = 6, and imposing to the coefficient of y to be zero we obtain that f 0 (x) = 6x 2 + c, for any constant c. Finally, taking c = b 2 (3b 2 − 4), we arrive to It is easy to see that for b ∈ (0, 2/ √ 3) ≈ (0, 1.15), M b (x, y) > 0. Notice that V b (x, y) = 0 is quadratic in y and so is not difficult to see that it has at most one oval, see Figure 8 for b = 1. Then we can apply Proposition 4.1 to prove the uniqueness and hyperbolicity of the limit cycle for these values of b. We remark that taking a more suitable polynomial Dulac function, it is possible to prove the uniqueness of the limit cycle for all values of b, see [5, p. 105]. We have only included this explanation as a first step towards the construction of a suitable rational Dulac function for our system (2). (2) with b ≤ 651/1000. By making some modifications to the function V b given by (18), we get an appropriate function for system (2). Consider System where P 19 is a polynomial of degree 19. By using for instance the Sturm method, we prove that the smallest positive root of △ 2 (V b ) is greater than 0.85. Therefore by Proposition 5.7 we know that for b ∈ (0, 0.85] the algebraic curve V b (x, y) = 0 has no singular points and therefore the set {V b (x, y) = 0} ⊂ R 2 is a finite disjoint union of ovals and smooth curves diffeomorphic to open intervals. By applying Proposition 4. In Subsection 5.5 of Appendix II we prove that M b does not vanish on R 2 for b ∈ (0, 0.651]. Then by Remark 4.2 all the ovals of {V b (x, y) = 0} must surround the origin, which is the unique critical point of the system. Since the straight line x = 0 has at most two points on the algebraic curve V b (x, y) = 0, it can have at most one closed oval surrounding the origin. Then by Proposition 4.1 it follows the uniqueness, stability and hyperbolicity of the limit cycle of system (2) for these values of the parameter b. (2) with b ≤ 817/1000. The hyperbola xy + 1 = 0 will play an important role in the study of this case. We first prove a preliminary result. Lemma 4.3. Consider system (2). System (I) For b = 1 the hyperbola xy + 1 = 0 is without contact for its flow. In particular its periodic orbits never cut it. (II) For b = 1 the hyperbola xy + 1 = 0 is invariant for its flow and the system has not periodic orbits. Proof. Define F (x, y) = xy + 1 and set X = (P, Simple computations give that for x = 0, Therefore (I) follows and we have also proved that when b = 1, the hyperbola is invariant by the flow. Let us prove that the system has no limit cycle. Recall that the origin is repeller. Therefore if we prove that any periodic orbit Γ of the system is also repeller we will have proved that there is no limit cycle. 
This will follow if we show that where γ(t) := (x(t), y(t)) is the time parametrization of Γ and T = T (Γ) its period. To prove (21) notice that the divergence of X can be written as div(X) = 3K + 2x 2 + 1 − 3xy. Then, Observe that from (20) we have that as we wanted to see. Proof. Based on the function V b used in the Subsection 4.2 we consider the function We have added the non-vanishing denominator to increase a little bit the range of values for which Proposition 4.1 works. Indeed, it can be seen that the above function, but without the denominator, is good for showing that the system has at most one limit cycle for b ≤ 0.811. To study the algebraic curve V b (x, y) = 0 we proceed like in the previous subsection. The double discriminant introduced in Appendix II is where P 152 is a polynomial of degree 152. It can be seen that the smallest positive root of △ 2 ( V b ) is greater than 0.88. Therefore by Proposition 5.7 we know that for b ∈ (0, 0.88] this algebraic curve has no singular points. Hence the set {V b (x, y) = 0} ⊂ R 2 is a finite disjoint union of ovals and smooth curves diffeomorphic to open intervals. The function that we have to study in order to apply Proposition 4.1 is where N b (x, y) is given in (33) of Subsection 5.6. The denominator of M b is positive for all (x, y) ∈ R 2 . By Lemma 4.3 we know that the limit cycles of the system must lay in the open region Ω = R 2 ∩ {xy + 1 > 0}. In Subsection 5.6 of Appendix II we will prove that N b does not change sign on the region Ω and if it vanishes it is only at some isolated points. Notice also that the set { V b (x, y) = 0} cuts the y-axis at most in two points, therefore by the previous results and arguing as in Subsection 4.2, we know that it has at most one oval and that when it exists it must surround the origin. Therefore we are under the hypotheses of Proposition 4.1, taking U = Ω, and the uniqueness and hyperbolicity of the limit cycle follows. (3) and (4) of Theorem 2.1, respectively. That is, we use algebraic approximations of S i and S ′ i , for i = 1, 2, 3. As usual for knowing when a vector field X is without contact with a curve of the form y = ψ(x) we have to control the sign of In this section we will repeatedly compute this function when ψ(x) is either ϕ i (x), φ i (x) or modifications of these functions. We prove the following result. Moreover it has an odd number of limit cycles, taking into account their multiplicities. (II) For b = 817/1000 the configuration of its separatrices is like (v) in Figure 3. Moreover it has an even number of limit cycles, taking into account their multiplicities. Proof. (I) Consider the two functions x 3 , which are the corresponding expressions in the plane (x, y) of the first and second approximation of the separatrix S 1 . If This implies that the separatrix S 1 in the (x, y)-plane and close to −∞ is below the graphic of ϕ 1 (x). Moreover This inequality implies that the separatrix S 1 in the plane (x, y) cannot intersect the graphic of ϕ 1 (x) for x < 0, see Figure 9. Now, we consider the third approximation to the separatrices S 2 and S 3 , that is we consider the first three terms in (3). It is given by the graph of the function Let us prove that when b ∈ (0, 2/3), the graphs of ϕ 1 (x) and φ 3 (x) intersect at a unique point, (x 0 , y 0 ) with x 0 < 0 and y 0 > 0. For this is sufficient to show that the function (ϕ 1 − φ 3 )(x) has a unique zero at some x 0 < 0. By using the Sturm method we obtain that P 22 (b 2 ) has exactly four real zeros. 
By Bolzano theorem the positive ones belong to the intervals (0.7904, 0.7905) and (2.6, 2.7). If we fix b 0 ≤ 79/100 then b 0 < 2/3 and moreover according to previous paragraph the graphics of ϕ 1 (x) and φ 3 (x) intersect at a unique point (x 0 , y 0 ) with x 0 < 0 and y 0 > 0. Furthermore, . Therefore the vector field associated to (2) on these curves is the one showed in Figure 10.(a). From Figure 10.(a) it is clear that the separatrix S 1 cannot intersect the set Moreover, since the separatrix S 2 forms an hyperbolic sector together with S 3 we obtain that S 1 cannot be asymptotic to the line x = b 0 . Hence we must have the situation showed in Figure 10.(b). We know that the origin is a source and from the symmetry of system (2) we conclude that for b ≤ 0.79 the system has an odd number of limit cycles (taking into account multiplicities) and the phase portrait is the one showed in Figure 11. (II) We start proving the result when b = b 0 := 89/100 because the method that we use is the same that for studying the case b = 817/1000, but the computations are easier. Recall that we want to prove that the configuration of separatrices is like (v) in Figure 3. That the number of limit cycles must be even (taking into account multiplicities) is then a simply consequence of the Poincaré-Bendixson Theorem, because the origin is a source. Figure 11. For 0 < b ≤ 0.79, system (2) has at least one limit cycle and phase portrait (i) of Figure 1 . We consider the approximation of eight order to S 2 and S 3 given by the graph of the function φ 8 (x). By using again the Sturm method it is easy to see that N φ 8 (x) < 0 for x ∈ (b 0 , x 0 ), where x 0 = 1.924 is a left approximation to the root of the function φ 8 (x), and N φ 8 (x) > 0 for x ∈ (x 1 , b 0 ), where x 1 = −2.022 is a right approximation to the root of the function N φ 8 (x). That is, we have the situation shown in Figure 12.(a). Now, we consider the functionφ 8 At this point, the idea is to show that S 2 intersects the x-axis at a pointx, with −x 2 <x < 0. For proving this, we utilize the Padé approximants method, see [3]. Recall that given a function f (x), its Padé approximant Pd [n,m] (f )(x, x 0 ) of order (n, m) at a point x 0 , or simply Pd [n,m] (f )(x) when x 0 = 0, is a rational function of the form F n (x)/G m (x), where F n and G m are polynomials of degrees n and m, respectively, and such that Consider the Padé approximant Pd [3,3] (φ 8 ). It satisfies that Pd [3,3] (φ 8 )(0) = φ 8 (0) and by the Sturm method it can be seen that there exists x 3 < 0 such that Pd [3,3] (φ 8 )(x 3 ) = 0, Pd [3,3] (φ 8 ) is positive and increasing on the interval (x 3 , 0) and a left approximation to x 3 is −1.595. Moreover it is easy to see that N Pd [3,3] 0). Therefore S 2 cannot intersect neither the graph of y = Pd [3,3] . Hence S 2 intersects the x-axis in a point x contained in the interval (x 3 , 0). This implies that −x 2 <x < 0 as we wanted to see, because −x 2 < x 3 . Hence the behavior of the separatrices is like Figure 12.(b). See also Figure 13. When b 0 = 817/1000 we follow the same ideas. For this case we consider the functions φ 16 (x) andφ 16 (x) = φ 16 (x) − 1/(9b 3 ). Recall that the graphic of φ 16 (x) is the sixteenth order approximation to S 2 and S 3 . It is not difficult to prove that Nφ 16 > 0 on the interval (b 0 , x 2 ), with x 2 = 1.6421 and since the line x = x 2 is transversal to X for y > 0, S 3 intersects the x-axis at a pointx > x 2 . 
Also we have that N_{φ_16} > 0 on the interval (−3/100, b_0), and using the Padé approximant Pd_{[5,1]}(φ_16)(x, −3/100) we obtain that S_2 intersects the x-axis at a point x̃ ∈ (x_3, 0) with x_3 > −1.638. This implies that −x̃_2 < x̃ < 0, as in the case b = 0.89. Hence we have the same situation as in Figure 13.

Remark 5.2. As shown in the proof of Theorem 1.1, the values 0.79 and 0.817, obtained in the previous proposition, provide a lower and an upper bound for b*. We have tried to shrink the interval where b* lies using higher order approximations of the separatrices, but we have not been able to diminish its size.

Appendix I: The Descartes method
Given a real polynomial P(x) = a_n x^n + · · · + a_1 x + a_0 and a real interval I = (α, β) such that P(α)P(β) ≠ 0, there are two well-known methods for knowing the number of real roots of P in I: the Descartes rule and the Sturm method. Theoretically, when all the a_i ∈ Q and α, β ∈ Q, the Sturm approach solves the problem completely. If all the roots of P are simple it is possible to associate to it a sequence of n + 1 polynomials, the so-called Sturm sequence, and knowing the signs of this sequence evaluated at α and β we obtain the exact number of real roots in the interval. If P has multiple roots it suffices to start with P/gcd(P, P′), see [23, Sec. 5.6]. Nevertheless, when the rational numbers have big numerators and denominators and n is also big, computers do not have enough capacity to perform the computations needed to get the Sturm sequence. On the other hand, the Descartes rule is not so powerful, but a careful use of it, in the spirit of the bisection method, can often solve the problem.

To recall the Descartes rule we need to introduce some notation. Given an ordered list of real numbers [b_0, b_1, . . . , b_{n−1}, b_n] we will say that it has C changes of sign if the following holds: denote by [c_0, c_1, . . . , c_{m−1}, c_m], m ≤ n, the new list obtained from the previous one after removing the zeros and without changing the order of the remaining terms. Consider the m non-zero numbers δ_i := c_i c_{i+1}, i = 0, . . . , m − 1. Then C is the number of negative δ_i.

Theorem 5.3 (Descartes rule). Let C be the number of changes of sign of the list of ordered numbers [a_0, a_1, a_2, . . . , a_{n−1}, a_n]. Then the number of positive zeros of the polynomial P(x) = a_n x^n + · · · + a_1 x + a_0, counted with their multiplicities, is C − 2k, for some k ∈ N ∪ {0}.

In order to apply the Descartes rule to arbitrary open intervals we introduce the following definition:

Definition 5.5. Given a real polynomial P(x) and a real interval (α, β) we construct a new polynomial
$N_\alpha^\beta(P)(x) := (x+1)^{\deg P}\, P\!\left(\frac{\beta x+\alpha}{x+1}\right)$.
We will call N_α^β(P) the normalized version of P with respect to (α, β). Notice that the number of real roots of P(x) in the interval (α, β) is equal to the number of real roots of N_α^β(P)(x) in (0, ∞).

The method suggested in [12] consists in writing $(\alpha,\beta)=\bigcup_{i=1}^{k}(\alpha_i,\alpha_{i+1})$, with α = α_1 < α_2 < · · · < α_k < α_{k+1} = β, in such a way that on each (α_i, α_{i+1}) it is possible to apply Corollary 5.4 to the normalized version of the polynomial. Although there is no systematic way of searching for a suitable decomposition, we will see that a careful use of these ideas has been good enough to study the number and localization of the roots of a huge polynomial of degree 965, see Subsection 5.6 in Appendix II.
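A minimal sketch of this root-counting strategy (illustrative only: the toy polynomial and the intervals are not the ones from the paper, and sympy is an assumed dependency) is the following:

```python
from sympy import symbols, Poly, Rational, cancel

x = symbols('x')

def normalized(P, alpha, beta):
    # N_alpha^beta(P): positive real roots of the result correspond to the roots
    # of P in (alpha, beta), via the change of variable x -> (beta*x + alpha)/(x + 1).
    n = Poly(P, x).degree()
    return cancel((x + 1) ** n * P.subs(x, (beta * x + alpha) / (x + 1)))

def sign_changes(P):
    # Number C of sign changes of the coefficient list (zeros removed).
    # By Descartes' rule the number of positive roots is C - 2k with k >= 0,
    # so C = 0 or C = 1 determines the root count exactly.
    coeffs = [c for c in Poly(P, x).all_coeffs() if c != 0]
    return sum(1 for a, b in zip(coeffs, coeffs[1:]) if a * b < 0)

# Toy example: P has roots 1/4 and 3/4; locate them with respect to the point 1/2.
P = (x - Rational(1, 4)) * (x - Rational(3, 4))
for a, b in [(Rational(0), Rational(1, 2)), (Rational(1, 2), Rational(1))]:
    print(f"sign changes on ({a}, {b}):", sign_changes(normalized(P, a, b)))
```

On each subinterval where the normalized polynomial shows 0 or 1 sign changes the Descartes count is exact, which is how the decomposition α_1 < · · · < α_{k+1} is exploited in practice.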
Appendix II: A method for controlling the sign of polynomials in two variables The main result of this appendix is a new method for controlling the sign of families of polynomials with two variables. As a starting point we prove a simple result for one-parameter families of polynomials in one variable. Let G b (x) be a one-parametric family of polynomials. As usual, we write △ x (P ) to denote the discriminant of a polynomial P (x) = a n x n + · · · + a 1 x + a 0 , that is, where Res(P, P ′ ) is the resultant of P and P ′ . Proof. The key point of the proof is that the roots (real and complex) of G b depend continuously of b, because g n (b) = 0. Notice that hypotheses (iii) and (iv) prevent that moving b some root enters in Ω either from infinity or from the boundary of Ω, respectively. On the other hand if moving b some real roots appear from C, they do appear trough a double real root that is detected by the vanishing of △ x (G b ). Since by item (ii), △ x (G b ) = 0 no real root appears in this way. Hence, for all b ∈ I, the number of real roots of any G b is the same. Since by item (i) for b = b 0 , G b 0 > 0 on Ω, the same holds for all b ∈ I. To state the corresponding result for families of polynomials with two variables inspired in the above lemma, see Proposition 5.12, we need to prove some results about the iterated discriminants (to replace hypothesis (ii) of the lemma) and to recall how to study the infinity of planar curves (to replace hypothesis (iii)). F (x, y) be a complex polynomial on C 2 . We write F as F (x, y) = a n y n + a n−1 y n−1 + a n−2 y n−2 + . . . + a 1 y + a 0 , By using the Sylvester matrix S defined above, we have that where S(i | j) means the matrix obtained from S by removing the i-th row and the j-th column. Notice that the elements of the last row of S(2n − 1 | 2n − 1) are only 0, a 0 and a 1 . Therefore, developing the determinant of this matrix from this row we get that det(S(2n − 1 | 2n − 1)) = xQ(x), for some polynomial Q(x). Hence, by using (25), we get that det S = x 2 P (x) with P (x) another polynomial. This implies that △ y (F ) has a double zero at x = 0 and hence △ 2 y,x (F ) = 0. Analogously we can prove that △ x (F ) has a double zero at y = 0 and hence △ 2 x,y (F ) = 0. Corollary 5.8. Consider a one-parameter family of polynomials F b (x, y), depending also polynomially on b. The values of b such that the algebraic curve F b (x, y) = 0 has some singular point in C 2 have to be zeros of the polynomial . By simplicity we will also call the polynomial △ 2 (F b ), double discriminant of the family F b (x, y). As far as we know the above necessary condition for detecting algebraic curves with singular points is new. Remark 5.9. (i) Notice that if in Corollary 5.8, instead of imposing that for b ∈ I, The converse of the Proposition 5.7 is not true. For instance if we consider the polynomial F (x, y) = x 3 y 3 + x + 1 then △ 2 y,x (F ) = △ 2 x,y (F ) = 0, however F x (x, y) = 3x 2 y 3 + 1 and F y (x, y) = 3x 3 y 2 hence {F (x, y) = 0} does not have singular points. Algebraic curves at infinity. Let be a polynomial on R 2 of degree n. We denote by F (x, y, z) = z n F 0 (x, y) + z n−1 F 1 (x, y) + · · · + F n (x, y) its homogenization in RP 2 . For studying F (x, y, z) in RP 2 we can use its expressions in the three canonical charts of We denote by F 1 (x, z) and F 2 (y, z) the expressions of the function F in the planes {(x, z)} and {(y, z)}, respectively. Therefore F 1 (x, z) = F (x, 1, z) and F 2 (y, z) = F (1, y, z). 
Let [x * : y * : z * ] ∈ RP 2 be a point of { F = 0}. If z * = 0, then [x * : y * : z * ] corresponds to a point in R 2 , otherwise it is said that [x * : y * : 0] is a point of F at infinity. Notice that the points at infinity of F correspond to the points [x * : y * : 0] where (x * , y * ) = (0, 0) is a solution of the homogeneous part of degree n of F , H n (F (x, y)) = F n (x, y), that is F n (x * , y * ) = 0. Equivalently, these are the zeros of F 1 (x, 0) and F 2 (y, 0). In other words, [x * : y * : 0] is a point at infinity of F if and only if x * /y * is a zero of F 1 (x, 0) = F n (x, 1) or y * /x * is a zero of F 2 (y, 0) = F n (1, y). Let Ω ⊂ R 2 be an unbounded open subset with boundary ∂Ω formed by finitely many algebraic curves. It is clear that this subset can be extended to RP 2 . We will call the adherence of this extensionΩ. When a point at infinity of F is also inΩ, for short we will say that is a point at infinite which is also in Ω. Isolated points of families of algebraic curves. To state our main result we need explicit conditions to check when a point of a real algebraic curve G(x, y) = 0 is isolated. Recall that it is said that a point p ∈ R 2 on the curve is isolated if there exists an open neighborhood U of p, such that Clearly isolated points are singular points of the curve. Next result provides an useful criterion to deal with this question. Lemma 5.10. Let G(x, y) be a real polynomial. Assume that (0, 0) ∈ {G(x, y) = 0} and that there are natural numbers p, q and m, with gcd(p, q) = 1, and a polynomial G 0 satisfying G 0 (ε p X, ε q Y ) = ε m G 0 (X, Y ), and such that for all ε > 0, for some polynomial function G 1 . If the only real solution of G 0 (X, Y ) = 0 is (X, Y ) = (0, 0), then the origin is an isolated point of G(x, y) = 0. Proof. Assume without loss of generality that G 0 ≥ 0. We start proving that K := {(x, y) ∈ R 2 : G 0 (x, y) = 1} is a compact set. Clearly it is closed, so it suffices to prove that it is bounded. Since G 0 is a quasi-homogeneous polynomial we know that there exists a natural number m 0 such that m = m 0 pq and G 0 (x, y) = P m 0 (x q , y p ), where P m 0 is a real homogeneous polynomial of degree m 0 . The fact that the only real solution of the equation G 0 (x, y) = 0 is x = y = 0 implies that P m 0 has not linear factors when we decompose it as a product of real irreducible factors. Hence m 0 is even and P m 0 (x, y) As a consequence, Assume, to arrive to a contradiction, that K is unbounded. Therefore it should exist a sequence {(x n , y n )}, tending to infinity, and such that G 0 (x n , y n ) = 1. But this is impossible because the conditions B 2 i − 4A i C i < 0, i = 1, . . . , m 0 /2, imply that all the terms A i x 2q n + B i x q n y p n + C i y 2p n in (26) go to infinity. So K is compact. Let us prove that (0, 0) is an isolated point of {(x, y) ∈ R 2 : G(x, y) = 0}. Assume, to arrive to a contradiction, that it is not. Therefore there exists a sequence of points {(x n , y n )}, tending to 0 and such that G(x n , y n ) = 0 for all n ∈ N. Consider G 0 (x n , y n ) =: (g n ) m > 0. It is clear that lim n→∞ (g n ) m = 0. Write (x n , y n ) = ((g n ) p u n , (g n ) q v n ). Notice that (g n ) m = G 0 (x n , y n ) = G 0 (g p n u n , g q n v n ) = (g n ) m G 0 (u n , v n ). Then G 0 (u n , v n ) = 1 and (u n , v n ) ∈ K, for all n ∈ N. Therefore, taking a subsequence if necessary, we can assume that We have that 0 = G(x n , y n ) = (g n ) m + (g n ) m+1 G 1 (u n , v n , g n ). 
Dividing by (g n ) m we obtain that 1 = 0 + g n G 1 (u n , v n , g n ), and passing to the limit we get that 1 = 0 which gives the desired contradiction. Notice that to prove that lim n→∞ g n G 1 (u n , v n , g n ) = 0 we need to know that the sequence {(u n , v n )} remains bounded and this fact is a consequence of (27). We remark that the suitable values p, q and m and the function G 0 appearing in the statement of Lemma 5.10 are usually found by using the Newton diagram associated to G. We also need to introduce a new related concept for families of curves. Consider a one-parameter family of algebraic curves G b (x, y) = 0, b ∈ I, also depending polynomially of b. Let (x 0 , y 0 ) ∈ R 2 be an isolated point of G b (x, y) = 0 for all b ∈ I, we will say that (x 0 , y 0 ) is uniformly isolated for the family G b (x, y) = 0, b ∈ I if for each b ∈ I there exist neighborhoods V ⊂ I and W ⊂ R 2 , of b and (x 0 , y 0 ) respectively, such that for all b ∈ V, Next example shows a one-parameter family of curves that has the origin isolated for all b ∈ R but it is not uniformly isolated for b ∈ I, with 0 ∈ I, It is clear that the origin is an isolated point of {G b (x, y) = 0} for all b ∈ R, but there is no open neighborhood W of (0, 0), such that (28) holds for any b in a neighborhood of b = 0. Next result is a version of Lemma 5.10 for one-parameter families. In its proof we will use some periodic functions introduced by Lyapunov in his study of the stability of degenerate critical points, see [15]. Let us recall them. Proposition 5.11. Let G b (x, y) be a family of real polynomials which also depends polynomially on b. Assume that (0, 0) ∈ {G b (x, y) = 0} and that there are natural numbers p, q and m, with gcd(p, q) = 1, and a polynomial G 0 b satisfying , and such that for all ε > 0, Proof. Assume without loss of generality that G 0 b ≥ 0. Let us write the function G b (x, y) using the so-called generalized polar coordinates, x = ρ p Cs(ϕ), y = ρ q Sn(ϕ), for ρ ∈ R + . Using the same notation that in the proof of Lemma 5.10, with the obvious modifications, we know from (26) that Notice that, the fact that for all b ∈ R, the origin of (29) is isolated simply follows plotting the zero level set of G b . Alternatively, we can apply Lemma 5.10 with p = 1, q = 1 and m = 2 to prove that the origin is isolated when b = 0 and with p = q = 1 and m = 4 when b = 0. In any case, Proposition 5.11 can not be used. 5.4. The method for controlling the sign. By hypothesis (i), J = ∅ because b 0 ∈ J. Consider nowb = sup J. We want to prove thatb ∈ ∂I. If this is true, arguing similarly with inf J the result will follow. We will prove the result by contradiction. So assume thatb ∈ I. Notice that if Fb(x, y) takes positive and negative values on Ω, by continuity this would happen for any b near enough tob. This is in contradiction with the fact thatb is the supremum of J. Therefore, either Fb(x, y) ≥ 0 or Fb(x, y) > 0 in Ω. In the first case it is clear that a point (x 0 , y 0 ) where Fb(x 0 , y 0 ) = 0 has to be a singular point of the curve {Fb(x, y) = 0}. Therefore, by Corollary 5.8, △ 2 (Fb) = 0 which is in contradiction with (ii). In the second case it should exist a sequence of real numbers {b n }, with b n ↓b, and a sequence of points {(x n , y n )} ∈ Ω such that lim n→∞ F bn (x n , y n ) = 0. If the sequence is bounded, renaming it if necessary, we arrive to a convergent sequence. Call (x,ȳ) ∈ Ω its limit, where Ω denotes the adherence of Ω. Then Fb(x,ȳ) = 0. 
By hypothesis (iv), the point (x,ȳ) ∈ ∂Ω and we also know that Fb(x, y) > 0 on Ω. Therefore we have a contradiction and the sequence {(x n , y n )} must be unbounded. This unbounded sequence can be considered in the projective space RP 2 . Then this sequence must converge to a point p of Fb(x, y) = 0 at infinity, which is also in U. Since by hypothesis (iii) this point is uniformly isolated, there exists a neighborhood V ofb and an open neighborhood W of p such that this point is the only real point in RP 2 of the homogenization of F b (x, y) = 0. This is in contradiction with the fact F bn (x n , y n ) = 0 for all n, and the result follows. 5.5. Control of the sign of (19). In this subsection we will prove by using Proposition 5.12, that for b ∈ (0, 0.6512), the function M b given in (19) is positive on Ω = R 2 . To check hypothesis (ii) we compute the double discriminant of M b and we obtain that △ 2 x,y (M b ) is a polynomial in b of degree 1028, of the following form where P i are polynomials of degree i with rational coefficients. By using the Sturm method we localize the real roots of each factor of △ 2 x,y (M b ) and we obtain that in the interval (0, 0.6512) none of them has real roots. In fact P 32 (b 2 ) has a root in (0.6513, 0.6514) and that is the reason for which we can not increase more the value of b. Therefore △ 2 x,y (M b ) = 0 for all b ∈ (0, 0.6512). Finally we have to check hypothesis (iii). Notice that in this case ∂Ω = ∅ and so (iv) follows directly. The zeros at infinity are given by the directions For |b| < 0.7275 it has only the non-trivial solutions x = 0 and y = 0. The homogenization of M b is and hypothesis (iii) is equivalent to prove that (0, 0) is an uniformly isolated singularity for M 1 b (x, z) = M b (x, 1, z) and that (0, 0) is also an uniformly isolated singularity for Hence, The discriminant with respect to X of the homogeneous polynomial T (X, W ) : Since its smallest positive root is greater than 0.673 it holds for b ∈ (0, 673) that T (X, W ) = 0 if and only if (X, W ) = (0, 0). Therefore by Proposition 5.11 the point (0, 0) is an uniformly isolated point of the curve M 1 b (x, z) = 0, for these values of b. For the other point, since we have that and the result follows for b ∈ (0, 2/3) ≈ (0, 0.816), by applying again the same proposition. So, we have shown that for b ∈ (0, 0.6512) all the hypotheses of the Proposition 5.12 hold. Therefore we have proved that for b ∈ (0, 0.651], M b (x, y) > 0 for all (x, y) ∈ R 2 . 5.6. Control of the sign of (23). The numerator of the function M b given in (23) is a polynomial of the following form We will prove that N b ≥ 0 on Ω := {(x, y) : xy + 1 > 0} for all b ∈ (0, 0.817] and if it vanishes this only happens at some isolated points. We will use again Proposition 5.12. Notice that ∂Ω = {(x, y) : xy + 1 = 0}. It is not difficult to verify that {N b (x, y) = 0} ∩ {xy + 1 = 0} = ∅ for b ∈ (0, 0.8171), see Figure 14. It suffices to see that for these values of b, and x = 0, the one variable function N b (x, 1/x), never vanishes. We skip the details. Therefore hypothesis (iv) is satisfied. For proving that hypothesis (ii) of Proposition 5.12 holds we compute the double discriminant △ 2 y,x (N b ). It is an even polynomial in b, of degree 21852, of the following form b 7566 (3b 2 − 4)(159b 4 − 380b 2 + 225) 2 (P 71 (b 2 )) 2 (P 386 (b 2 )) 4 (P 587 (b 2 )) 6 (P 965 (b 2 )) 2 , (34) where P i are polynomials of degree i with rational coefficients. 
By using the Sturm method it is easy to see that its first 4 factors do not have real roots in (0, 0.8171). We replace b 2 = t in the next three polynomials to reduce their degrees and we obtain P 1 (t) := P 386 (t), P 2 (t) := P 587 (t), and P 3 (t) := P 965 (t). It suffices to study their number of real roots in (0, 0.6678], because 0.6678 > (0.8171) 2 . Our computers have no enough capacity to get their Sturm sequences. Therefore we will use the Descartes approach as it is explained in Appendix I. We consider first the polynomial P 1 (t). Its normalized version N 0.68 0 (P 1 ) has all their coefficients positive. Therefore P 1 (t) has no real roots in (0, 0.68) as we wanted to see. Applying the Descartes rule to the normalized versions of P 2 (t), N 0.561 0 (P 2 ), N 0.811 0.561 (P 2 ) and N 0.812 0.562 (P 2 ), we obtain that the number of zeros in the intervals (0, 0.561), (0.561, 0.811) and (0.562, 0.812) is 0, 1 and 0 respectively. That is, there is only one root of P 2 (t) in (0, 0.812), it is simple and it belongs to (0.561, 0.562). Refining this interval with Bolzano Theorem we prove that the root is in the interval (0.5617, 0.5618). Finally to study P 3 (t) we consider N 11/20 0 (P 3 ), N 7/12 11/20 (P 3 ) and N 52/75 7/12 (P 3 ). By Descartes rule we obtain that the number of zeros of P 3 in the corresponding intervals is 0, 1 and 1 or 3, respectively. By Bolzano Theorem we can localize more precisely these zeros and prove that in the last interval there are exactly 3 zeros. So we have proved that the polynomial P 3 has exactly 4 zeros in the interval (0, 52/75) ≈ (0, 0.693), and each one of them is contained in one of the following intervals In brief, for t ∈ (0, 0.6678] the double discriminant △ y,x (N b ) only vanishes at two points t = t 1 and t = t 2 with t 1 ∈ (0.5614, 0.5615) and t 2 ∈ (0.5617, 0.5618). Therefore we are under the hypothesis (ii) of Proposition 5.12 for b belonging to each of the intervals (0, b 1 ), (b 1 , b 2 ) and (b 2 , 0.8171), where To ensure that on each interval we are under the hypotheses (i) of the proposition we prove that N b does not vanish on Ω for one value of b in each of the above three intervals. We take 1 2 ∈ (0, b 1 ), 7494 10000 ∈ (b 1 , b 2 ), and 3 4 ∈ (b 2 , 0.8171). We study with detail the case b = 1/2. The other two cases can be treated similarly and we skip the details. So we have to study on Ω the sign of the function We consider N 1/2 as a polynomial in x with coefficients in R[y] and we apply Lemma 5.6 with Ω y = (−1/y, ∞) when y > 0 and Ω 0 = (−∞, ∞). Notice that for the symmetry of the function there is no need to study the zone y < 0 because N 1/2 (−x, −y) = N 1/2 (x, y). We introduce the following notation S y (x) := N 1/2 (x, y). We prove the following facts: (i) If we write S y (x) = 1 i=1 0s i (y)x i , then s 10 (y) = k(1 + 3y 2 ) for some k ∈ Q + . Therefore s 10 (y) > 0 for all y ∈ R. Thus, by Lemma 5.6, the function N 1/2 is positive on (x, y) ∈ Ω, as we wanted to see. In fact, its level curves are like the ones showed in Figure 14. The straight lines y = y 1 and y = y 2 correspond to the lower and upper tangents to the oval contained in the second quadrant. To be under all the hypotheses of Proposition 5.12 it only remains to study the function N b at infinity. We denote by N b (x, y, z) its homogenization in RP 2 and by N 1 b (x, z) and N 2 b (y, z) the expressions of the function N b in the planes {(x, z)} and {(y, z)}, respectively. 
Since H_12(N_b) = 90 b^36 x^8 y^2 [3x^2 + y^2], the only non-trivial solutions of H_12(N_b) = 0 are x = 0 and y = 0. Hence these directions give rise to two points of N_b at infinity which are also in the region Ω. They correspond to the points (0, 0) of the algebraic curves N^1_b(x, z) = 0 and N^2_b(y, z) = 0. We have to prove that both points are uniformly isolated. For studying the first one we denote by R(X, Z) the homogeneous polynomial accompanying ε^8 and we obtain that △_X(R(X, Z)) = Z^56 b^150 (P_71(b^2))^2, for some polynomial P_71 of degree 71 with integer coefficients. Since the smallest positive root of this polynomial is greater than 0.92, we can easily prove that for b < 0.92, R(X, Z) = 0 if and only if X = Z = 0. Therefore we can use Proposition 5.11 again and prove that (0, 0) is a uniformly isolated point of the curve N^1_b(x, z) = 0 for these values of b. So we can apply Proposition 5.12 on each one of the open intervals introduced above to prove that for b ∈ (0, 0.817] \ {b_1, b_2} it holds that N_b(x, y) > 0 for all (x, y) in Ω. By continuity, for the two values b ∈ {b_1, b_2}, we obtain that N_b(x, y) ≥ 0. Since △_y(N_b) ≢ 0, either it is always positive or it vanishes only at some isolated points, as we wanted to prove. It can be seen that for b > b̃ ≈ 0.81722, N_b(x, y) changes sign on Ω because an oval appears in the set {N_b(x, y) = 0}. The value b̃^2 ≈ 0.6678492 corresponds to the root of P_3 in the interval (0.6678, 0.6679) that appeared in the proof as a root of the double discriminant.
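To illustrate the iterated (double) discriminant computation on which Appendix II relies, here is a minimal sketch (a toy one-parameter family, not the actual M_b or N_b of the paper; sympy is an assumed dependency):

```python
from sympy import symbols, discriminant, factor

x, y, b = symbols('x y b')

# Toy one-parameter family of algebraic curves (illustrative only).
F = x**2 + y**2 - 1 + b*x*y**2

# Double discriminant: eliminate y first, then x; the result is a polynomial in b.
D_y = discriminant(F, y)      # a polynomial in x and b
D2 = discriminant(D_y, x)     # the double discriminant, a polynomial in b

# Following Corollary 5.8, parameter values at which the curve F_b = 0 acquires a
# singular point must be among the zeros of the double discriminant, so b-values
# avoiding those zeros give curves without singular points.
print(factor(D2))
```

For the actual families M_b and N_b this construction produces the huge polynomials whose factors appear in (34), which is precisely why the Descartes-bisection procedure of Appendix I is needed to locate their real roots.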
15,983.8
2012-02-09T00:00:00.000
[ "Mathematics" ]
Polymorphic transition of tin under shock wave compression: Experimental results

In this work, the β-bct polymorphic transition in tin is investigated by means of plate impact experiments. The Sn target surface is observed in a partially released state obtained thanks to a transparent lithium fluoride (LiF) anvil. We report measurements of both interface velocity and temperature, obtained using Photon Doppler Velocimetry and an IR optical pyrometer, on tin shock-loaded from 8 to 16 GPa. We show that the Mabire model EOS associated with the SCG plasticity model provides an overall good estimate of the velocity profiles. However, depending on the shock amplitude, its prediction of the temperature profile may be less satisfactory, hence underlining the need for future improvements in the description of phase transition kinetics.

Introduction
During the last ten years, breakthroughs have been achieved in modeling the phase transitions of metals under shock loading. For example, the multiphase EOS developed by C. Mabire [1] has been successfully applied to the reproduction of the phase change in tin. However, the capabilities and limitations of this kind of model remain partially unknown, in particular due to the lack of reliable reference temperature data. This paper presents experimental results on tin from 8 to 16 GPa, around the polymorphic transition, under dynamic loading. Interface velocity and temperature measurements at the Sn/LiF interface show that tin transforms from the β phase to the bct phase. Additionally, this paper presents a comparison between the numerical and experimental interface velocity and interface temperature, taking into account heat transfer phenomena.

Experiments
The tin used in these experiments had a purity of 99.9 % and a density of 7.287 g/cm³. The target material was polished to present a surface of high optical quality (roughness R_a < 20 nm). To improve our understanding of the polymorphic transition phenomenon and to investigate the phase diagram, both interface velocity and temperature measurements are performed. The experimental measurements must be very reliable to be compared with our multiphase EOS. The PDV diagnostic yields the velocity with satisfying precision (< 1%), but deducing a dependable temperature from the signals provided by a high-speed two-wavelength infrared optical pyrometer is still complex, particularly at these low temperatures.

Temperature measurement theory
High-speed multi-wavelength infrared optical pyrometry is the most effective diagnostic to achieve low-temperature measurements below 1000 K [2,3]. Although the method to collect radiance is well known, measurement at these low temperatures is complicated by the low amplitude of the radiated energy: any potential background light must be avoided or kept under control. The technical challenge of this measurement is to deduce an accurate interface temperature from the collected radiance, which depends on the evaluation of the dynamic emissivity. Because the emissivity of the thermally radiating shocked surface is still difficult to measure simultaneously [4,5], a way to overcome this lack of knowledge is to artificially increase the dynamic emissivity of the material up to an apparent value as close as possible to 1 (the blackbody emissivity) [6]. The detected radiance is then amplified, the range of dynamic emissivity is restricted, and so the interface temperature becomes more accurate.
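To make the emissivity problem concrete, the following sketch (illustrative numbers only: the wavelength, emissivity bounds and radiance are assumptions, not the calibration of the pyrometer used here) inverts Planck's law for a single channel while sweeping the emissivity between its static value and 1, which yields the kind of temperature bracket discussed in the next subsection:

```python
import numpy as np
from scipy.optimize import brentq

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def grey_body_radiance(T, lam, eps):
    """Spectral radiance (W m^-3 sr^-1) of a grey body of emissivity eps at wavelength lam (m)."""
    return eps * (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

def temperature_bracket(L_measured, lam, eps_static):
    """Invert Planck's law for emissivities between eps_static and 1 (blackbody).
    The true surface temperature lies between the two returned values."""
    solutions = []
    for eps in (1.0, eps_static):   # eps = 1 gives the lowest temperature estimate
        solutions.append(brentq(lambda T: grey_body_radiance(T, lam, eps) - L_measured,
                                200.0, 3000.0))
    return min(solutions), max(solutions)

# Illustrative (assumed) values: a 3.5 um channel, static emissivity 0.15, and a measured
# radiance corresponding to a ~600 K surface with an (unknown) dynamic emissivity of 0.4.
lam = 3.5e-6
L_measured = grey_body_radiance(600.0, lam, 0.4)
print(temperature_bracket(L_measured, lam, eps_static=0.15))
```

The closer the (unknown) dynamic emissivity is pushed towards 1, for instance with an emissive layer, the narrower this bracket becomes.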
Using an emissive layer at interface To measure low temperatures (<800 K) under dynamic loading, a multi-wavelength pyrometer is the common diagnostic. Its principle relies on collecting the light emitted by the material under shock loading with infrared InSb detectors, which convert it into a voltage signal at several wavelengths (details of the pyrometer can be found in [6]). In our experiments, the radiance comes from the surface of the material through a LiF anvil window that prevents the full release of the material. Thanks to a calibration, the conversion between measured voltages and interface radiances is determined; however, inferring an interface temperature is not straightforward due to the lack of knowledge about the dynamic emissivity. The method [7] consists in: - constraining the possible variation of the emissivity between two extreme values, its static emissivity and 1 (the emissivity is assumed to increase under loading due to degradation of the surface), - and applying these values to the different wavelength channels of the pyrometer, which yields an uncertainty range in which the true temperature lies. Generally the shortest wavelength (channel 2 in this study), which is the least sensitive to the emissivity, gives the most restricted temperature range. The other wavelengths help to evaluate the evolution of the emissivity. A way to narrow the emissivity bounds is to artificially increase the material emissivity up to an apparent value as close as possible to that of a blackbody by applying an emissive layer. The range of dynamic emissivity is then restricted, and so is the uncertainty on the interface temperature [6]. Inferring the material temperature Besides the difficulty in obtaining an accurate temperature measurement, accessing the material temperature requires an understanding of the heat transfer phenomena at the interface. Between two adjacent materials under dynamic loading, only thermal conduction provides heat transfer. From Fourier's law, the interface and material temperatures can be linked through the thermal properties of each material [8]: T_int = (α T_1 + T_2)/(α + 1), with α = √(k_1 ρ_1 C_V1 / (k_2 ρ_2 C_V2)), where k is the thermal conductivity, ρ the density and C_V the specific heat, and T_1, T_2 and T_int are the temperatures of the two adjacent materials and of the interface. Considering an interface with a glue layer, the α value estimated from the static properties is well above 18 whatever the metallic material, so the interface and material temperatures are almost identical, showing the advantage of a glue layer due to its insulating properties. At a complex interface, and particularly in the presence of both an emissive layer and a glue layer, the thermal conductivity of the emissive layer must be high enough and its thickness thin enough to enable efficient heat exchange, so that the temperature of the emissive layer remains as close as possible to that of the material. The glue layer, for its part, allows the interface temperature to be similar to the temperature of the emissive layer and hence to that of the material. With the use of a thin ReSi2 layer at the interface and thanks to its favourable thermal-mechanical properties, the measured temperature is close to that of the material, which is the value to be identified; its uncertainties are also significantly reduced, so it can be used to compare our results to a calculation.
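As a rough illustration of the interface relation above, the following sketch evaluates the contact temperature from the thermal effusivities of the two adjacent materials; the thermal properties and temperatures are order-of-magnitude assumptions, not the values used in the study.

```python
import math

def effusivity(k, rho, c_v):
    """Thermal effusivity e = sqrt(k * rho * C_V), in J m^-2 K^-1 s^-1/2."""
    return math.sqrt(k * rho * c_v)

def interface_temperature(T1, T2, e1, e2):
    """Contact temperature of two semi-infinite media initially at T1 and T2."""
    return (e1 * T1 + e2 * T2) / (e1 + e2)

# Illustrative, order-of-magnitude properties (not the values used in the paper)
e_sn = effusivity(k=60.0, rho=7287.0, c_v=230.0)    # shocked tin
e_glue = effusivity(k=0.2, rho=1100.0, c_v=1500.0)  # epoxy-like glue layer

alpha = e_sn / e_glue
T_int = interface_temperature(T1=600.0, T2=293.0, e1=e_sn, e2=e_glue)
print(f"alpha = {alpha:.1f}")          # >> 1: insulating glue, so T_int stays close to the tin temperature
print(f"T_interface = {T_int:.0f} K")  # close to the assumed 600 K material temperature
```

With a large effusivity ratio the interface temperature is dominated by the metal, which is the argument made above for reading the material temperature through the glue and emissive layers.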
Experimental configuration Six plate impact experiments (Fig. 1) were performed to observe the behaviour of tin [9] under shock wave compression and release around its β-bct polymorphic transition, at different impact velocities (Table 1). The effect of an emissive layer (EL) on the temperature measurement at the Sn/LiF interface has been studied to explore the role of this layer: the increase of the emissivity and the effect of thermal conduction. Experimental results The temperature and velocity profiles are presented in Fig. 2. The interface velocity profiles clearly indicate a direct β-bct polymorphic transition consistent with the expected compression path. For shots Sn02 and Sn03, three distinct compression waves appear, typical of the transition: the first corresponds to the elastic precursor, the second to the plastic wave of the β phase and the third to the plastic wave of the bct phase. During the release, a singularity is noticed, displaying the reverse transition from bct to β. The Sn01 experiment presents a single shock wave in compression: the impact stress is too high to observe the double wave. During the release, the reverse transition is observable, indicated by the presence of a singularity. For experiments Sn04 and Sn05, which were chosen on the β-bct boundary, no polymorphic transition is observed. In the last experiment, Sn06, the tin stays in the β phase only, so no distinctive feature indicates a polymorphic transition. The temperature profiles at the interface are presented in Fig. 2. The temperature at the interface between tin and anvil is described for each channel of the pyrometer with a constant emissivity equal to the static emissivity of the tin, or equal to the static emissivity of the EL when an EL is present at the interface. As for the velocity profiles, the temperature shapes show distinct features as a function of the impact stress, consistent with the compression path. A temperature plateau is observed during the supported shock except for experiments Sn02 and Sn04. The reverse transition is also clearly noticeable during the release: a change of slope appears, possibly due to a variation of the emissivity. However, the double structure of the direct polymorphic transition is not precisely detectable, which might be due to the response time of the pyrometer. As noticed on the temperature of each channel around the double structure of the transition for Sn02 and Sn04, inferring the temperature from the collected radiance with a constant emissivity is not satisfactory: the emissivity of the tin seems to change during its phase transition, so the temperatures of the two channels present different behaviours, which could indicate the presence of a mixture of the two phases or a degradation of the surface roughness. These signals cannot be used to deduce an accurate result without difficulty. To overcome the possible variation of the emissivity, an emissive layer was incorporated at the interface in experiments Sn03 and Sn05, allowing a profile that is easy to interpret and a temperature determined with satisfactory precision: the presence of a plateau emphasizes the role of an EL, which seems to avoid the effects of the β-bct transition on the surface roughness or on the variation of emissivity. The velocity and temperature profiles reveal coherent singularities during loading and unloading, indicating the polymorphic transition of tin.
Analysis of the polymorphic transition from the β phase to the bct phase To describe the phase diagram of tin, the β and bct phases are each represented by a Mie-Grüneisen equation of state, with parameters determined in a previous study to reproduce static and dynamic data [1]. The phase boundary is determined by the equality of the Gibbs free energies. The singularities along the loading and unloading paths are illustrated in Fig. 3 in the temperature-strain plane. The Hugoniot states, the interface states and the singularities during the release are represented respectively by squares, triangles and circles obtained from the experimental data. The double wave structure during loading noticed in experiments Sn02, Sn03, Sn04 and Sn05 reveals a hysteresis of the transition from the β phase to the bct phase, because the Hugoniot states (square points) lie beyond the static boundary. Similarly, the reverse polymorphic transition from bct to β during the release, displayed in the Sn01, Sn02, Sn03, Sn04 and Sn05 temperature and velocity profiles, clearly indicates a hysteresis, because these singularities (circle points) are above the transition boundary and decrease as a function of stress. For the Sn06 experiment, the temperature and velocity values show that tin remains in the solid β phase during the whole loading and release. This is supported by the absence of any singularity in the velocity profile. Nevertheless, during the release, the temperature profile shows a change of slope which might suggest a reverse transition. Simulations of the experiments were performed with the 1-D Lagrangian code Unidim, including thermal conduction modelling and the multi-phase equation of state of tin developed by Mabire [1]. The plasticity model chosen for the tin is the SCG model [10]. The comparisons between experiments and calculations are illustrated in Fig. 4, Fig. 5 and Fig. 6, taking into account the hypothesis of a hysteresis during the direct and reverse transitions. Concerning the velocity profiles of all experiments (Fig. 4), the calculations accurately reproduce the direct β to bct transition during loading and the reverse transition during unloading: the double structure of the shock wave and the singularity in the release are correctly displayed. In Fig. 5, the experimental and numerical temperatures for the Sn01 experiment indicate the double structure during loading and a reverse transition of tin occurring during release at about 415 K, and the two temperatures are similar. For the Sn06 experiment in Fig. 5, even if the level of the plateau is correctly reproduced, the experimental data seem to suggest a bct-β transition not indicated in the calculations. Due to the difficulty of deducing a precise temperature at the Sn/LiF interface around the β-bct transition during shock wave compression, only the temperatures obtained with an emissive layer are compared with the calculated data in Fig. 6.
The calculations correctly reproduce the shape. As noticed for the experimental and numerical temperatures of Sn03 and Sn05, a softened ramp is observed in the presence of an EL, suggesting a thermal conduction effect before thermal equilibrium is reached. This illustrates the limitation of an EL, which allows an accurate experimental temperature to be obtained but prevents the β-bct transition from being displayed during loading. Concerning the calculated level, it seems to be overestimated, demonstrating an unsatisfactory calculation around the transition due to the simplistic phase kinetics model described by the hysteresis. Conclusion Experiments on tin from 8 to 15 GPa were performed to investigate its polymorphic solid-solid transition under dynamic loading. For the first time, both velocity and temperature measurements have been performed at the interface, allowing us to improve our understanding of the mixture phenomenon. Temperature measurements with an emissive layer have been achieved with sufficient precision to be compared to a model of the phase transition. Velocity and temperature profiles display the direct and reverse transitions, showing coherent singularities. These experimental results clearly reveal the existence of a hysteresis for the solid-solid transition during loading and release, already studied by Mabire. The Mabire EOS associated with the SCG plasticity model provides an overall good estimate of the velocity profiles and their singularities. However, depending on the shock amplitude, its prediction of the temperature profile may be less satisfactory around the double structure of the direct β-bct transition. Hence, this underlines the limit of a simple hysteresis model and the need for future improvements in the description of phase kinetics. Fig. 2. Interface temperature and velocity diagrams for the impact experiments on tin. Fig. 3. Thermodynamic path of tin for the impacts Sn01, Sn02, Sn04 and Sn06 with experimental data plotted on the phase diagram. Fig. 5. Experimental and numerical comparisons of the temperature profile for Sn01 and Sn06 without an emissive layer. Fig. 6. Experimental and numerical comparisons of the temperature profile around the double structure of the transition for Sn03 and Sn05 with an emissive layer. Table 1. Parameters for the impact experiments on tin.
3,360
2012-01-01T00:00:00.000
[ "Materials Science", "Physics" ]
The Future of NMR Metabolomics in Cancer Therapy: Towards Personalizing Treatment and Developing Targeted Drugs? There has been a recent shift in how cancers are defined, where tumors are no longer simply classified by their tissue origin, but also by their molecular characteristics. Furthermore, personalized medicine has become a popular term and it could start to play an important role in future medical care. However, today, a “one size fits all” approach is still the most common form of cancer treatment. In this mini-review paper, we report on the role of nuclear magnetic resonance (NMR) metabolomics in drug development and in personalized medicine. NMR spectroscopy has successfully been used to evaluate current and potential therapies, both single-agents and combination therapies, to analyze toxicology, optimal dose, resistance, sensitivity, and biological mechanisms. It can also provide biological insight on tumor subtypes and their different responses to drugs, and indicate which patients are most likely to experience off-target effects and predict characteristics for treatment efficacy. Identifying pre-treatment metabolic profiles that correlate to these events could significantly improve how we view and treat tumors. We also briefly discuss several targeted cancer drugs that have been studied by metabolomics. We conclude that NMR technology provides a key platform in metabolomics that is well-positioned to play a crucial role in realizing the ultimate goal of better tailored cancer medicine. Introduction Metabolomics, the analysis of the complete set of metabolites in a defined biological compartment, is a relatively novel approach, yet it is predicted to rapidly become a standard tool in the biotechnology and pharmacological sectors. The global metabolomics market was recently estimated to reach over 860 million US dollars by the year 2017 [1]. Metabolomics is starting to gain broader acceptance as the technology advances and as the scientific literature on the topic matures. To date, nuclear magnetic resonance (NMR) metabolomics has already been successfully applied to study various cancers such as ovarian [2], breast [3], pancreatic [4], oral [5], esophageal [6], lung [7], prostate [8], bladder [9], and colorectal malignancies [10,11]. It is clear that cancer patients present with metabolic profiles that are different from those of healthy controls and patients with benign diseases [2,4,5,6,7,9]. Moreover, the site [12], the stage [13], and the location [10] of the tumors have been shown to further alter the metabolome. It is well known that cancer cells have an altered metabolism, most often mentioned in the context of the Warburg effect [14]. However, the characteristic increase in glycolysis is of a complex nature, which was recently addressed using multiple metabolomics approaches [15,16]. The identification of specific cancer biomarkers has been predicted to be one of the catalysts that will lead to faster growth and expansion of the field. A simple blood test providing metabolomic biomarkers would be considerably cheaper than genome sequencing, or a complete proteome analysis, but it may be able to fulfill the same aims: earlier detection of cancer and to provide information that can aid in the choice of an optimal cancer treatment. To date, there are no FDA approved metabolomics tests for cancer, however metabolomics is currently being used by the FDA for biomarker discovery [17]. There has been a recent shift in how cancers are being viewed and treated. 
Currently, tumors are defined not only by where they are located (e.g., colon, breast, and brain) but also by their molecular characteristics. The presence of mutations affecting hormone receptors and oncogenes, such as the HER-2 receptors in breast tumors and K-RAS in colorectal tumors, has started to play a part when determining treatment plans [18,19]. However, the majority of cancers currently lack such markers and the present markers are far from absolute. There is considerable heterogeneity within the current definitions, exemplified by the fact that patients who are given an identical diagnosis react differently to the same therapy and have different outcomes. Important cancer subtypes are currently ignored by routine protocols, but tumor heterogeneity is being addressed with the development of targeted therapeutics. These drugs represent a new type of therapeutic agent designed to provide increased tumor specificity, higher efficiency, and fewer side effects [20]. Metabolomics can facilitate the shift from the dominating "one-size-fits-all" approach to a more tailored type of cancer medicine by identifying subgroups of patients that will benefit from a specific drug, as well as by identifying patients that are likely to experience toxicity or develop resistance. Metabolomics tools can identify new biological targets, find new uses for drugs already on the market, and can further be applied in drug development studies to provide insight into the biological mechanisms of action of a drug and its off-target effects. For example, NMR metabolomics has already been used to evaluate the efficiency of both radiation and chemotherapy [21,22]. NMR spectroscopy is one of the three main methods currently being used for metabolomics studies, alongside gas chromatography and liquid chromatography coupled with mass spectrometry (GC-MS, LC-MS). In this review we discuss the role of NMR metabolomics in drug development and personalized medicine, as we present what has been reported to date and touch on some basic aspects of NMR metabolomics. Since this review forms part of a special issue, "NMR-based Metabolomics and Its Application", MS-based studies will not be discussed, even though they have also made important contributions to the field of cancer metabolomics. Here, we aim to provide metabolomics researchers and cancer researchers with insight into the advantages of NMR metabolomics as we survey the current status of the field. Nuclear Magnetic Resonance Spectroscopy Metabolic fluxes are highly dynamic, changing according to a broad range of factors. The complexity and constant changes require consistency in the measurements in order to avoid additional variance (for further discussion, see below). NMR spectroscopy is a highly reproducible tool and currently one of the major players in the field of metabolomics. The samples are never in direct contact with the equipment, resulting in minimal contamination between samples. Analytical variation is considerably lower than the biological variation, as highlighted by the multi-site COMET study by Lindon et al. [23]. Each sample can, moreover, be re-run with only minor changes in results. Another part of the reproducibility lies in the minimal requirements for sample preparation. For biofluids, samples are close to their native state and variability due to sample preparation is kept to a minimum. Urine is easily obtainable, inexpensive, and currently one of the most commonly used biofluids in metabolomics studies [24].
Despite having low protein content, it produces complex spectra and many peaks are left unidentified. Metabolic profiles of serum and plasma are easier to interpret and they are almost fully assigned, but this will often require removal of larger molecules such as proteins [25][26][27]. Native serum can also be used, and provides valuable information on the content of lipids and lipoproteins [28]. However, the presence of these high molecular weight biomolecules results in broad signals that overlap with the peaks of the smaller metabolites. While it is possible to identify the metabolites in such spectra, filtration is needed, in our experience, for reliable and reproducible quantification. A 3-kDa microcentrifuge filter will usually remove all lipids and proteins, resulting in NMR spectra with flat baselines and well-defined, quantifiable peaks of the small-molecule metabolites, as has been described [29]. Such spectra can be quantitatively analyzed using a database of known metabolite spectra, such as those found in the Chenomx software, for example [30]. The broad protein and lipid particle peaks in NMR spectra of serum and plasma can also be removed during spectral acquisition by the application of special NMR pulse sequences such as the Carr-Purcell-Meiboom-Gill (CPMG) spin-echo experiment [31,32]. However, such experiments can alter the spectral intensities measured for metabolites and hence they need to be used judiciously. In addition, J-resolved 2D NMR spectra can be deployed to provide improved spectral resolution in crowded regions of serum, plasma, or urine 1H NMR spectra [33][34][35][36], but these experiments have yet to find wide application in large-scale clinical metabolomics studies. NMR detects molecules based on their magnetic properties. Multiple nuclei can be used for NMR analysis, including 1H, 31P, and 19F with a natural abundance of 100%, and isotopes such as 13C and 15N that require stable isotope labeling because of their low natural abundance (1.1% and 0.4%, respectively). 1H NMR is the most widely used in the field; it produces high-resolution spectra with good signal because of the inherent sensitivity of the 1H nucleus and its high abundance in biological systems. In this review, the main focus will be on the applications of 1H NMR metabolomics, and 1H NMR will henceforth simply be referred to as NMR. It is interesting to note that in a clinical context the term magnetic resonance spectroscopy (MRS) is often used interchangeably with NMR, mostly to avoid the use of the word nuclear, which can introduce concerns for those unfamiliar with the principles of the technique. The NMR instrumentation has undergone major improvements in recent years. High-field, ultra-shielded magnets equipped with cryoprobes are now available, offering higher sensitivity and smaller magnetic stray fields. The latter is important as it makes it easier to introduce such modern instruments in hospital settings. Biofluids, such as the aforementioned serum, plasma, and urine, are most commonly used for analysis, but the use of cerebrospinal fluid, saliva, synovial fluid, and fecal water has also been reported in a clinical context [37][38][39][40].
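As an aside on the quantification step described earlier in this section (integration or database fitting of filtered serum spectra), a minimal generic sketch is given below; it scales the area of a well-resolved metabolite peak against an internal standard of known concentration. The chemical-shift windows, proton counts, standard concentration and the synthetic spectrum are illustrative assumptions, and this is not the Chenomx fitting procedure.

```python
import numpy as np

def integrate_region(ppm, intensity, low, high):
    """Numerically integrate the spectrum between two chemical shifts (ppm)."""
    mask = (ppm >= low) & (ppm <= high)
    return np.trapz(intensity[mask], ppm[mask])

def quantify(ppm, intensity, peak_window, n_protons,
             std_window, std_conc_mM, std_protons=9):
    """Concentration of a metabolite relative to an internal standard (e.g. DSS, 9 protons)."""
    peak_area = integrate_region(ppm, intensity, *peak_window)
    std_area = integrate_region(ppm, intensity, *std_window)
    # Peak areas scale with concentration times the number of contributing protons
    return std_conc_mM * (peak_area / n_protons) / (std_area / std_protons)

# Synthetic placeholder spectrum: a lactate-like peak near 1.33 ppm and a DSS-like peak at 0 ppm
ppm = np.linspace(-0.5, 10.0, 65536)
intensity = (np.exp(-((ppm - 1.33) / 0.005) ** 2) * 3.0 +
             np.exp(-(ppm / 0.005) ** 2) * 1.5)

# Hypothetical example: lactate (3 protons in the methyl signal) against 0.5 mM DSS
lactate_mM = quantify(ppm, intensity, peak_window=(1.31, 1.35), n_protons=3,
                      std_window=(-0.05, 0.05), std_conc_mM=0.5)
print(f"Estimated lactate concentration: {lactate_mM:.2f} mM")
```

In practice, overlapping resonances make full spectral fitting against reference line shapes preferable, which is why filtration and database-driven software are emphasized above.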
For studies of intact tissues (e.g., biopsy material), high-resolution magic angle spinning (HR-MAS) can be used to achieve spectra with a quality that is comparable to those of extracted molecules [41]. As the name indicates, in HR-MAS the higher resolution is achieved by spinning the samples at a specific angle (the "magic angle") [41]. In a regular high-resolution NMR experiment, tissues would give rise to broad peaks, with increased overlap of signals and with peaks disappearing into the baseline. HR-MAS also makes it possible to avoid extraction procedures that otherwise would affect the metabolic composition. On the other hand, changes in the metabolites are hard to avoid during the HR-MAS experiment, as the enzymes in the tissues are still active. Using samples from patient-based metabolomics studies is likely to be more representative of the processes occurring in the human body, when compared to the analysis of cultured cell lines. However, even far-progressed tumors rarely constitute more than 1% of the total body weight; consequently, it is unlikely that all changes observed in body fluids are due to the cancer itself, as there will also be contributions from the immune response. Cell lines are less influenced by external factors and are in some cases also preferable when testing new drugs or drug combinations. Such studies may thus be a crucial first step in drug development and personalized cancer medicine, but the results should eventually be validated in studies of cancer patients. NMR Metabolomics NMR is a well-established tool in protein structural, protein-ligand binding, and protein-protein interaction studies [42][43][44]. It is the foundation for structure-based drug design studies and it has been used to provide insight into drug-target interactions [45,46]. In recent years NMR has been utilized to analyze the metabolome. The human metabolome represents the ongoing processes in our body, including homeostasis maintenance such as energy metabolism and dynamic fluxes such as the response to disease. It is highly influenced by various factors such as diet, gender, and age, as well as body composition [47] and intake of drugs, as discussed below. Because of its high sensitivity and rapid response to changes in the environment, the metabolome is often claimed to best reflect the phenotype, when compared to other "omes" such as the genome and the proteome [48]. This complexity requires reliable and reproducible experimental tools as well as carefully thought-out study designs to obtain meaningful results. The rapid changes within the metabolome in response to therapeutic agents and disease are an advantage. The presence of disease or a toxic response can often be detected before clinical symptoms manifest, offering possibilities for earlier diagnosis or the prevention of side effects. Drug response and disease progression can, moreover, be monitored over time in longitudinal studies. Metabolomics is objective in nature and non- or minimally invasive when using biofluids. The main drawback of NMR is its relatively low sensitivity, necessitating a minimum abundance of metabolites in the micromolar range; thus, larger sample sizes are required than for mass spectrometry. While GC-MS and LC-MS offer higher sensitivity, issues concerning peak identification, quantitation, and reproducibility have not yet been fully addressed. The capabilities and limitations of GC-MS are discussed in a recent review by Koek et al., who also propose how to optimize experimental procedures, validation, data processing and analysis [49]. The potential of NMR and MS metabolomics was rapidly identified by Griffin and Shockcor as providing a useful approach to study cancer cells, and by Nicholson and colleagues to evaluate drug toxicity [50,51].
A complete listing of metabolomics approaches is presented in the review paper by Gowda et al., who also report on the types of cancers that have been investigated [52]. For an overview of the applications of NMR metabolomics and the important part it can play in tumor characterization, the reader should consult the article by Bathen and colleagues [53]. A recent review on systems biology by Dunn et al. can also be consulted, which describes the use of NMR and MS to evaluate the effects and health aspects of diet and drugs in mammals [54]. In the following, it is important to note that this review relies on the biological interpretations presented by the authors of the articles herein cited. Consequently, we encourage the reader to refer to the original articles for specific results and discussions of their respective biological impact. Targeted Therapeutics The group of targeted cancer therapeutics includes agents with widely different molecular characteristics. Small-molecule drugs and monoclonal antibodies dominate over the less common proteasome inhibitors and retinoids. Targeted cancer drugs can furthermore be grouped according to their molecular targets. FDA-approved inhibitors of signal transduction pathways, growth factor receptors, and apoptosis have been studied by metabolomics approaches, as presented in Table 1 and as described briefly below. One of the studies applied MS metabolomics and will not be discussed [55]. Approved drugs belonging to the other drug classes, consisting of anti-angiogenic agents, monoclonal antibodies, promoters of the immune system, and regulators of gene expression and other cellular functions, have to the best of our knowledge not yet been studied by metabolomics approaches. A complete listing of all FDA-approved targeted cancer therapeutics can be found on the homepage of the National Cancer Institute [20]. Metabolomics studies investigating potential new agents will be discussed in Section 2 below. Tamoxifen citrate (Nolvadex) is one of the best known cancer therapeutics, and has been administered to treat tumors of the breast since 1977 [56]. It has been approved to reduce breast cancer incidence in the high-risk population, to prevent recurrence and invasiveness of breast tumors, and to increase survival in ER-positive patients [56]. NMR metabolomics has been used to study the impact on the liver after long-term exposure to Tamoxifen [57]. In this study, female Fischer 344 rats were given the estrogen antagonist Tamoxifen, the synthetic estrogen Mestranol, or Phenobarbital, which acts on the central nervous system, for six months, after which they were sacrificed and their livers were removed. Both polar and lipid extracts from the livers were analyzed, showing higher concentrations of fatty acids in the Tamoxifen-treated animals. Higher levels of succinate, acetate, and formate were detected in both Tamoxifen- and Mestranol-treated rats. The authors concluded that these changes were likely due to the estrogen receptor (ER) activity of the two drugs [57]. Tamoxifen is also approved to treat breast tumors to avoid metastasis [56]. Metastatic breast cancers are difficult to diagnose and invasive biopsies are often needed for clinical evaluation. In a recent pilot study, sera from over 500 patients with metastatic breast cancers were analyzed to investigate the potential of Lapatinib treatment [58].
Serum was collected pre-treatment and at multiple times during treatment with Paclitaxel, which was administered in combination with either Lapatinib or placebo. No correlation could be found between the metabolic profiles and toxicity or other outcomes when all patients were included or when intra-individual comparisons were made. However, for HER-2 positive patients given the combination treatment, overall survival (OS) and time to progression (TTP), marking the time between randomization and progression of breast cancer or death, could be predicted by the metabolic profiles of sera taken at nine weeks. Larger TTP values were correlated with lower concentrations of phenylalanine and glutamate and higher concentrations of glucose. The authors suggested that patients with higher sensitivity to Lapatinib and Paclitaxel treatment may be identified in the future by using an NMR metabolomics approach [58]. However, these findings need further validation in order to draw more detailed biological conclusions and to realize such a test. Imatinib (Gleevec) is considered one of the biggest successes in the area of targeted cancer drugs. The agent targets the tyrosine kinase of the BCR-ABL fusion protein, which is present in the majority of cases of chronic myeloid leukemia [59]. Several follow-up studies have been conducted since the first clinical trial was performed in 1998, supporting the initial promising results [60][61][62]. A complete cytogenetic response in up to 87% of patients has been reported, with 5- and 6-year overall survival rates of 89% and 88%, respectively, and with no additional side effects [60,61]. The drug is also used to treat gastrointestinal stromal tumors. The metabolome of Gleevec-resistant cells was studied by Dewar et al. and will be further described below [63]. Lastly, Bortezomib (Velcade) is a proteasome inhibitor that is used as second- or third-line therapy for multiple myeloma and mantle cell lymphoma. Its synergistic effects with Belinostat, investigated by NMR metabolomics, will be described further below. Drug Discovery and Development There is a continuous drive to develop more targeted drugs. However, drug discovery is an expensive and time-consuming process. Up to 10,000 compounds might be investigated initially in order to develop one single drug, a process that typically takes up to 15 years [66]. Any development of side effects will slow down the process, and every drug that fails will add to the overall cost of drug development. Hence, there is a need for tools that can provide mechanistic information about putative new drugs and their potential off-target effects. Unraveling the biological mechanism of a drug can, moreover, provide some direction as to how a drug or its use can be improved. As previously mentioned, the identification of tumor-specific metabolic patterns can lead to new molecular targets. Metabolomics approaches can be used to evaluate drugs that are already in use. By having a deeper understanding of how a drug works, we can identify new combinations of drugs that have higher potency and/or lower toxicity, as well as identify diseases that may respond to an unconventional drug. Such a strategy could lead to new and better use of the drugs that have already been developed. Evaluating Toxicity Amongst the very first applications of metabolomics were investigations of toxic effects of drugs and studies of their potential mechanism of action in animal models [67,68].
In the field of oncology, avoiding toxic effects is crucial, as drug toxicity presents a serious limitation on the efficacy of chemotherapy [69]. Adverse side effects appear when drugs become distributed to other, healthy parts of the body where they exert a negative impact. This results in patient suffering, a lower quality of life, and, in severe cases, possibly termination of the treatment. To limit toxic effects, lower doses are often used, resulting in sub-optimal results [70]. Another way to decrease side effects is to make a drug more specific, so that the effect is exhibited primarily within the tumor. Such higher tumor specificity can be achieved by using locoregional drugs, or prodrugs, that only become active once on site. 13C and 1H NMR were used by Sorg et al. to evaluate the anticancer potential of prodrugs of glycoconjugated agents [71]. In this study, NMR spectroscopy was primarily used to assign the structure of the newly synthesized drugs. Naser-Hijazi et al. used 19F NMR to quantify the prodrug 5-fluoro-2'-deoxyuridine and its metabolites in hepatocellular sarcoma tissue after exposure to the fluorine-containing cytotoxic agent [72]. The optimal dose and appropriate infusion times were determined based on the level of the degradation products as well as the tumor volume. Interestingly, low levels of the end product, α-fluoro-β-alanine, were found in the tumor. This finding was interpreted as an indication of the drug's release from the liver and its re-entry into the blood stream and the tumor cells [72]. Capecitabine, another prodrug of 5-fluorouracil, was evaluated by Backshall and colleagues in order to assess the cytotoxicity in patients with inoperable colorectal tumors [73]. The primary aim of this study was to identify subpopulations with a predisposition for side effects. Toxicity was predicted by connecting pre-treatment metabolic profiles to toxic events that were experienced post-treatment. The high-toxicity group was characterized by higher concentrations of choline-containing phospholipids, polyunsaturated fatty acids, and other lipids of low-density lipoproteins, whereas tyrosine and an unassigned peak at 7.2-7.3 ppm were found to be higher in the group that experienced no or low toxicity (Figure 1). It was proposed that the extra lipids bind and interfere with proteins involved in drug metabolism when present at higher concentrations. However, it was also stated that the lipid profile might be reflective of an ongoing inflammatory response; hence further experiments are needed in order to confirm the proposed mechanisms. The changes in the metabolic profiles appeared to be gradual, allowing for the study of the onset and progression of Capecitabine toxicity. Weight, but not BMI, age, or gender, was identified as a confounder for toxicity grade. Targeting the lipidome pre-treatment was suggested as one way to reduce the level of toxicity [73]. Analyzing multiple biofluids [29], or using multiple metabolomics approaches, will likely provide more reliable results, validating one set of data with the other. Metabolomics approaches can in the same manner also be used alongside other targeted or untargeted "omics" techniques, acting in a complementary fashion. A growing area of importance within the "omics" and systems biology field is combining metabolomics with genome-wide association studies (mGWAS). In such work one attempts to connect metabolic patterns in responders and non-responders to specific genetic traits, as recently reviewed [74].
Another example of combining methods of a different nature is presented in an article by Wang et al. using urine, plasma and liver samples [75]. In this study, NMR spectroscopy, histopathological assessments, and current biomarkers were used to investigate the unknown mechanism of liver toxicity caused by the anti-angiogenic drug Z24. Time-dependent changes were observed in the metabolomics data, providing information about the initial response, progression and recovery after Z24 treatment. The most pronounced changes were seen in urine, with increased levels of citrate, succinate, acetate, and 2-oxo-glutamate and decreased levels of trimethylamine-N-oxide and creatinine. Plasma and liver levels of glucose and choline compounds were decreased post Z24 treatment. Polar and lipid fractions of the liver extracts showed higher levels of lactate and glutamine, and of triglycerides, respectively. These observations led the authors to propose that the toxic response might result from the impairment of mitochondrial functions, ultimately resulting in cell death [75]. It is important to acknowledge that the overall aim of the therapy may potentially affect the choice of dosage and treatment. If the aim is to cure, more pronounced adverse effects are usually tolerated in order to achieve the treatment goal. However, if the aim is to provide palliative care, only lower levels of adverse effects and patient suffering will be accepted. Evaluating Resistance and Sensitivity As previously mentioned, Gleevec represents one of the major recent advances in targeted cancer therapies. However, as noted for many other anti-cancer drugs, some cells develop resistance against the treatment. Dewar et al. studied and compared the metabolic profiles of two chronic myelogenous leukemia cell lines, with one cell line showing increased tolerance to Gleevec after having been exposed to the drug over time [63]. Both cell extracts ("fingerprints") and cell culture media ("footprints") were studied. The most pronounced detectable difference between the two cell lines was reported to be the total concentration of creatine and creatine-phosphate, with the resistant cell line showing increased levels. High-performance liquid chromatography (HPLC) experiments confirmed the higher content of creatine-containing compounds in resistant cells and could further identify that creatine-phosphate dominated the creatine pool, in contrast to the responding cells, which presented a 1:1 ratio. The higher conversion of creatine to creatine-phosphate could also be monitored by 31P NMR. Wei et al. recently performed a study evaluating NMR and LC-MS metabolomics to predict the response to chemotherapy in breast cancer patients [76]. Serum samples were collected prior to treatment, which consisted of chemotherapy followed by surgery. When comparing non-responders to partial and complete responders, isoleucine, glutamine and threonine (NMR) and linolenic acid (LC-MS) were found to correlate significantly with treatment response. Unexpectedly, the outcome could not be related to any differences in ER, HER-2 or progesterone receptor (PR) status, but larger sample sizes than the current number of patients (n = 28) would be needed to further evaluate subgroups [76]. Pre-treatment metabolic profiles of human glioma cells have also been reported to indicate sensitivity versus resistance to chemotherapy [77].
Being able to predict drug response would not only make it possible to offer responders the appropriate drug as first-line treatment, but also to provide non-responders with an alternative treatment at an earlier stage. Treatment Response to Different Drugs The effect of a drug on cell metabolism is difficult to predict, even when the type of anti-cancerous effect of the drug is known. Triba et al. investigated the differences in metabolic profiles following treatment with either the chemotherapeutic agent doxorubicin or the calcium-chelating agent BP7033, the latter only having recently been shown to induce apoptosis [78]. Both treatments were found to reduce cell growth to a comparable extent in the murine melanoma cells that were used for evaluation. It was hypothesized that BP7033 was achieving apoptosis by forming calcium complexes. Interestingly, the cell growth inhibition of BP7033 was not of a calcium-chelating nature. Control cells and the two treated cell populations formed three distinct groups in the multivariate scatter plot, indicating strong differences in the spectral data. Doxorubicin was concluded to act on neutral lipid metabolism and influenced the levels of inositol and lysine. The drug was further suggested to induce apoptosis, an event that correlated with decreased levels of acetate, glutamine, and alanine and an increased signal at 1.30 ppm, most likely corresponding to the methylene content of fatty acids, according to Triba and colleagues. BP7033-treated cells had high levels of glutamine and altered phospholipid metabolism, displaying strong signals originating from glycerophosphocholine and phosphocholine. These findings were suggested to provide a foundation for future in vivo studies concerning drug mechanism; they also act as a reminder of how two drugs can achieve similar results (i.e., growth inhibition) through different mechanisms [78]. Treatment Response in Different Cell Lines Bayet-Robert et al. applied two-dimensional (2D) HR-MAS NMR to evaluate drug toxicity and drug profile in four human cancer cell lines (HepG2, 143B, MCF7, and PC3) and in non-malignant fibroblasts [79]. The cell lines were phenotyped pre-treatment and presented with common attributes such as increased levels of choline-containing compounds and sulfur derivatives and decreased levels of free amino acids, such as glutamine, leucine, asparagine, alanine, arginine, and lysine (decreased in all four cell lines). The metabolic profile of the HepG2 cell line, representing the most differentiated cells, was also the most distinct, with higher levels of total fatty acids, no increase in phosphocholine (PC), and a slight increase in lactate levels. Chemotherapy resulted in few global effects, such as increased fatty acids (143B and MCF7) and an increase in PC (HepG2 and 143B); however, most of the changes were cell line-specific. The results highlight that the differences between cell lines are profound and will affect how a given drug exerts its effect, and that this aspect needs to be considered when conducting metabolomics studies [79]. A representative 2D 1H total correlation spectroscopy (TOCSY) spectrum can be found in Figure 3. Tiziani et al. treated three leukemia cell lines with Bezafibrate (BEZ) and Medroxyprogesterone (MPA), two unconventional drugs for the treatment of leukemia [80]. These two drugs are known to exhibit subtype-dependent effects, with mechanisms of action that are not fully understood.
One of the cell lines was chosen because it is known to respond to the combination treatment by promoting differentiation, whereas the other two cell lines had demonstrated increased apoptosis in prior studies. An increase in the level of reactive oxygen species (ROS) was observed as a global effect. Metabolic alterations could be detected after 24 hours of treatment; however, these effects were smaller than the intrinsic metabolic differences between the cell types, which produced strong clustering in an unsupervised scatter plot (Figure 4, top), supporting the results from Bayet-Robert et al. [79]. Interestingly, cells treated with single agents could easily be distinguished from those treated with the combination therapy in an unsupervised model (Figure 4, bottom). The combination of the drugs gave more pronounced effects than the agents alone, which the authors concluded to be mainly due to a stronger impact on TCA cycle intermediates and the levels of free amino acids. TCA cycle perturbations have previously been connected to ROS production and oxidative stress [81]. To test this hypothesis, all three cell lines were treated with hydrogen peroxide (H2O2). A coinciding accumulation of succinate and decreased levels of alpha-ketoglutarate were observed both after combination treatment and after exposure to H2O2, supporting the notion that oxidative stress plays a major role in the mechanism of action of the drugs. There were also indications of the conversion of pyruvate to malonate via acetate and oxaloacetate after drug or H2O2 treatment [80]. Hepatocellular and pancreatic cell lines were used in a similar manner to uncover the post-treatment phenotype of two anti-proliferative drugs with completed phase II and III clinical trials, respectively [65,82,83]. Belinostat and Bortezomib were investigated as a combination therapy, to simultaneously inhibit histone deacetylases (HDACs) and proteasomes. Two hepatocellular and two pancreatic cell lines were evaluated post-treatment by immunoblotting and by 1H, 13C, and 31P NMR. HDAC inhibition was more pronounced following combination therapy than after treatment with Belinostat as a single agent. The metabolic signature of the synergistic effect was interpreted as reflecting a decrease in proteasome activity, in the form of a high abundance of free amino acids and antioxidants, a response that has been coupled to decreased migration and proliferation and increased apoptosis. Both agents induced apoptosis alone, but the synergistic effect resulted in changes of up to 16- and 38-fold for the pancreatic and liver cell lines, respectively. The synergistic effects of the two drugs also led to stronger anti-proliferative effects, demonstrated by Combination Index (CI) values < 1 and by achieving IC50 values at lower concentrations. Immunoblotting data were stated to provide overall support for the mechanisms of action of the two drugs. There is a great need for new therapeutic strategies for liver and pancreatic tumors. Both cancers have shown resistance to standard chemotherapy and there has been very limited success in increasing overall survival in liver cancer patients with current treatments [65]. The two drugs have previously been shown to exhibit toxic effects, leading to the termination of a phase II trial for relapsed, refractory multiple myeloma [84]. Hence, these results illustrate a need for a better understanding of the synergistic effects.
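The Combination Index values mentioned above (CI < 1 indicating synergy) are commonly obtained from the Chou-Talalay relation. A minimal sketch is given below, assuming the simplest (mutually exclusive) form of the index and a median-effect description of each single agent; all numerical parameters are hypothetical.

```python
def dose_for_effect(fa, dm, m):
    """Median-effect equation: dose of a single agent giving affected fraction fa.
    dm is the median-effect dose (IC50), m the sigmoidicity coefficient."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, dm1, m1, dm2, m2):
    """Chou-Talalay CI for a combination (d1, d2) producing affected fraction fa.
    CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism."""
    dx1 = dose_for_effect(fa, dm1, m1)   # dose of drug 1 alone needed for the same effect
    dx2 = dose_for_effect(fa, dm2, m2)   # dose of drug 2 alone needed for the same effect
    return d1 / dx1 + d2 / dx2

# Hypothetical single-agent parameters and a combination giving 50% growth inhibition
ci = combination_index(fa=0.5,
                       d1=0.2, d2=1.0,   # concentrations used in the combination (uM)
                       dm1=1.0, m1=1.2,  # drug 1: assumed IC50 = 1.0 uM
                       dm2=4.0, m2=1.0)  # drug 2: assumed IC50 = 4.0 uM
print(f"CI = {ci:.2f} -> {'synergy' if ci < 1 else 'no synergy'}")
```

In practice, CI is evaluated over a range of affected fractions from the fitted dose-response curves rather than at a single effect level.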
The metabolic link to apoptotic response has also been investigated by Pan et al. [85]. In their work, two neural tumor and two glioma cell lines were exposed to the chemotherapeutic agent cisplatin. One glioma and one neural tumor cell line were found to have higher susceptibility to the drug, resulting in nuclear condensation and fragmentation, indicating apoptosis. In responding cells, but not in non-responders, cell death was shown to be associated with increased levels of lipids, uridine diphospho-N-acetylglucosamine (UDP-GlcNAc) and uridine diphospho-N-acetylgalactosamine (UDP-GalNAc). This response was suggested to be due to an increased glucose intake, a decrease in the utilization of these compounds, and a mechanism to attract macrophages in order to remove the dead or dying cells. Concentrations of the glycosylated uridine metabolites and the enzymes involved are known to be altered by a broad range of stresses, and have been suggested to play a part in tumorigenesis and metastasis [86,87]. In this work, the two compounds UDP-GlcNAc and UDP-GalNAc were found to connect cisplatin treatment specifically to cell death in brain tumor cells [85], an observation that should be confirmed in future studies to evaluate its clinical usefulness as a potential indicator of response. Evaluating Dose Response The dosage of a drug is coupled to the efficiency of a treatment as well as to the development of side effects. Docetaxel, a cytotoxic agent acting on microtubules, has been found to have a dose-dependent effect, showing perturbations in mitosis and necrosis at low doses, and increased levels of cell cycle arrest and apoptosis as the dose increases. Bayet-Robert et al. investigated the metabolic response induced in breast cancer (MCF7) cells after exposure to a low and a high dose [88]. As hypothesized, a dose-dependent response was observed, along with variations in the levels of 40% of the identified metabolites. The cytotoxic effects of the high dose correlated with an accumulation of polyunsaturated fatty acids and a suggested increase in the activity of glutathione S-transferase, correlating with the depletion of the precursor glutamate. In contrast, the low dose resulted in high levels of homocysteine, which was interpreted as an indication of enzyme inhibition. Higher levels of myo-inositol, probably related to an increased production of phosphatidylinositol, were furthermore observed for cells given the lower dose. The drug response was moreover seen to consist of two phases: an initial response with higher levels of alanine, acetate and polyamines, and a delayed response characterized by an increase in the levels of phospholipids. Both doses led to an accumulation of phosphocholines [88]. Clearly, these metabolomics data provided insight into the metabolic processes that were affected, and the study also identified when they occurred. Evaluating Unconventional Therapies Bayet-Robert et al. also evaluated the potential cytotoxic effects of three marine natural products (MNPs) on MCF7 breast cancer cells [89]. All three products resulted in cell death, but presented different mechanisms of action. Treatment with Kahalalide F, a compound currently undergoing phase II clinical trials, led to swelling of the cells and to non-apoptotic cell death. Metabolomics data had in the past indicated an impact on lipid membranes, here supported by an accumulation of polyunsaturated fatty acids, phospholipids and total fatty acid content.
The pyrrole alkaloid Lamellarin D led to the accumulation of metabolites involved in the malate-aspartate shuttle, such as glutamate, aspartate, ethanol and lactate. A blockage of the electron transport chain located in the inner membrane of the mitochondria was suggested as a plausible mechanism. The third compound, Ascididemin, strongly induced apoptosis and led to an increase of gluconate, citrate, alanine, and phosphoethanolamine, which led the authors to propose a perturbation of citrate metabolism. A decrease in DNA was observed in cell lines treated with all three MNPs [89]. Metabolomics approaches have also been used to evaluate traditional Chinese medicine [90] and antibacterial agents [91]. In such traditional medicine, the scientific basis is often weak, limiting its use in larger populations. By applying metabolomics and other objective techniques to study individual components, it would be possible to evaluate such compounds and potentially gain a deeper understanding of traditional and alternative medicine. Survival and Outcome Overall survival (OS) is used to determine and explain prognosis and to develop a treatment plan. For colorectal patients, OS is predicted from the presence or absence of K-RAS mutations, blood cell counts and Eastern Cooperative Oncology Group (ECOG) performance status, as well as from the serum levels of certain proteins such as lactate dehydrogenase and the metabolite bilirubin. Bertini et al. collected serum samples from 153 metastatic colorectal patients and 139 healthy controls [92]. For patients, blood was collected prior to the initiation of third-line treatment, consisting of a combination of Cetuximab and Irinotecan. Cancer patients and healthy controls were clearly distinguished from each other by multivariate statistical analysis of the spectral data (Figure 5a). The cancer patients presented with metabolic alterations interpreted as perturbations in energy metabolism, with variations in citrate, alanine, and pyruvate concentrations, and a more pronounced inflammatory response characterized by the N-acetyl group of glycoproteins and the -CH2-COOR group of lipids. The two latter signals were furthermore correlated with short OS. Creatine, valine, and lipid signals further contributed to the strong separation between patients with low versus high values of OS (Figure 5). Intriguingly, the statistical analysis did not find K-RAS or ECOG status to be reliable predictors of OS, in contrast to the metabolomics data and the serum marker C-reactive protein (CRP). This study was part of a phase II trial studying the effects of the Cetuximab/Irinotecan combination treatment [92]. Figure 5. Scatter plots of PLS-DA of the validation set based on training-set models for (left) patients with metastatic colorectal cancer (dots) and healthy participants (triangles), and (right) the short overall survival (OS) group (dots) and the long OS group (triangles). Adapted from [92]. Patient outcome after chemotherapy has also been found to correlate with the abundance of carnitine and acetylcarnitine in multiple myeloma patients [93]. In the study by Lodi et al., paired serum and urine samples were used to identify metabolic patterns correlating with disease progression, which resulted in the detection of a difference in the metabolome of patients who relapsed and patients who went into remission. Current markers for multiple myeloma can predict outcome on a population level; this study aimed to identify markers useful for individual patients [93].
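Separations such as those shown in Figure 5 are typically obtained by PLS-DA on binned spectra. A minimal sketch of such an analysis is given below, using PLSRegression with a dummy-coded class vector as one common way of implementing PLS-DA; the data are randomly generated placeholders and this is not the exact workflow used in [92].

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 150 samples x 200 spectral bins; y = 1 for patients, 0 for controls
X = rng.normal(size=(150, 200))
y = rng.integers(0, 2, size=150)
X[y == 1, :10] += 0.8  # inject a small class-dependent difference for illustration

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# PLS-DA: regress the dummy-coded class membership on the spectra with two latent variables
pls = PLSRegression(n_components=2)
pls.fit(X_train, y_train)

scores_test = pls.transform(X_test)                    # coordinates for a scatter plot like Fig. 5
y_pred = (pls.predict(X_test).ravel() > 0.5).astype(int)
accuracy = (y_pred == y_test).mean()
print(f"Classification accuracy on the held-out set: {accuracy:.2f}")
```

For clinical data, proper cross-validation and permutation testing of the model are essential to avoid the over-optimistic separations that supervised projections can otherwise produce.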
Identifying Subtypes Contradictory results in OS have been reported for breast cancer patients treated with the anti-angiogenic drug Bevacizumab (Avastin), which resulted in revoking the FDA approval of the drug for the treatment of breast cancer [94]. It was proposed by Borgan et al. that the reported differences in drug response were due to the great diversity of tumors that had been studied, and that the utility of Bevacizumab could only be demonstrated after distinguishing responders from non-responders [95]. It was hypothesized that specific subtypes of breast tumors were more susceptible to the drug and that several metabolic characteristics correlating with drug response could be identified. Two xenograft models of basal- and luminal-like breast cancers were used and exposed to either chemotherapy (Doxorubicin) alone, or in combination with Bevacizumab, which is often used to enhance the potency of other drugs in treating several types of cancer. The basal-like breast cancer xenografts responded to the combination therapy whereas the luminal-like counterpart did not, a difference partly ascribed to the adverse effects on the levels of glycerophosphocholine (GPC). The decrease in GPC in responding cells was seen both following treatment with the single agents alone and with the combination therapy. The increased GPC levels in luminal-like cells were observed along with increased concentrations of phosphocholine as well as total choline, and decreased levels of taurine. The same research group had previously investigated the impact on tyrosine kinases and other parts of the proteome using the same xenograft models and therapeutic agents. Those results showed patterns similar to the transcriptomic data presented in this article. As hypothesized, certain subgroups of breast cancers were shown to benefit more from Bevacizumab and potentially other anti-angiogenic drugs [95]. Case Reports Metabolomics has been proposed as a tool to evaluate treatments on an individual level and to compare tumor and healthy specimens from the same patient. However, the literature on this topic is still very sparse. Two examples will be given to illustrate the concept, even though the methodology goes a little beyond the focus of this mini-review on NMR metabolomics. Abaffy et al. compared the metabolome of malignant melanoma tissue with a sample of healthy nearby skin taken from a 49-year-old male [96]. The aim of the study was to identify possible biomarkers that could be used for early detection of skin cancer. Untargeted GC-MS detected nine metabolites to be elevated in the cancerous tissue, including dodecane, nonanal, 1,3,5-trimethylbenzene, and 1-hexadecanol. Moreover, 23 metabolites were present in the malignant tissue only. These included decane, undecane, 4-methyldecane, ethylene oxide, isopropyl palmitate, and bis(2-ethylhexyl) phthalate. The strongest candidate, 1-hexadecanol, was increased 35-fold. All mentioned metabolites have previously been reported to be increased in melanoma cells [96]. Vriens et al. evaluated glucose metabolism and vessel perfusion and permeability in two male patients with metastatic colon cancer undergoing Bevacizumab treatment as part of their cancer therapy [97]. Measurements were taken at baseline, after three cycles of therapy and at a late stage (9-12 cycles). Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) revealed signs of normalization of tumor vasculature and a reduction in glucose turnover.
Despite treatment, both patients died during the study having an OS comparable and slightly higher to the mean survival rate [97]. Discussion and Conclusions NMR metabolomics has been used to evaluate drugs and tumor characteristics in order to help develop new agents and to maximize the beneficial effects of drugs that are currently on the market. NMR is a key method in metabolomics and the technology continues to improve, allowing NMR spectroscopy to remain involved in future drug research and development, facilitating the movement toward personalized medicine. To this date, only a limited number of approved anticancer drugs have been investigated by metabolomics methods. However, over time we believe that metabolomics approaches will come to play a prominent role in future clinical studies when developing new drugs. Acknowledging that unique molecular characteristics will determine how a drug is tolerated and how this in turn influences the outcome of the patient is of utmost importance. Better specific molecular targeted drugs and suitable stratification of patients are two examples of how cancer treatment can become better tailored towards the tumor and patient. To be able to quickly identify the best drug (or drug combination) for a specific patient should lead to a more efficient treatment, reduced patient suffering and enhanced health-economical benefits. Furthermore, if such a first line treatment is successful, subsequent second and third-line treatments can perhaps be avoided in many cases. In closing, it is important to reiterate that metabolomics of biofluids has the advantage of being relatively cheap and that it is relatively non-invasive or at least minimally invasive. The obtaining of blood and urine samples is already a well-accepted procedure in common clinical practice. Identifying subgroups of patients that might benefit from a certain drug by taking a simple blood sample would not only benefit the patient but would also provide health-economical advantages. In order to realize such a test, many more appropriately designed metabolomics studies are needed, investigating the metabolic differences in responders and non-responders, in patients not experiencing toxic effects versus those who do and lastly, to identify those groups of patients where combination therapies might be advantageous. The biology of a drug is sometimes poorly understood, which currently leads to a less than optimal usage, but NMR (as well as GC-MS and LC-MS) metabolomics can provide much better insight. In a similar manner can metabolomics studies lead to a better understanding of tumor biology, as exemplified by the recent identification of potential new potential disease biomarkers such as the metabolites that have been discussed in this review. Ultimately a better understanding of the specific drug targets could lead to a next generation of rationally-designed new drugs. In conclusion, NMR is a reliable and highly reproducible experimental tool that has already proven to be extremely valuable in metabolomics studies. It allows for the detection of a broad range of polar soluble metabolites, while being nondestructive and keeping the sample in a close to native state, thus having a minimal impact on its metabolic composition. We suggest that NMR spectroscopy will continue to play an important role in metabolomics studies and in future cancer medicine.
10,322.2
2013-05-17T00:00:00.000
[ "Chemistry", "Medicine" ]
The Relationship between Free Volume and Cooperative Rearrangement: From the Temperature-Dependent Neutron Total Scattering Experiment of Polystyrene Although many theories have been proposed to describe the nature of glass formation, its microscopic picture is still missing. Here, by a combination of neutron scattering and molecular dynamics simulation, we present the temperature-dependent atomic structure variation of polystyrene at glass formation, the free volume and the cooperative rearrangement. When it is close to glass formation, the polymer is confined in tubes, whose diameter is the main chain–main chain distance, in a "static cage" formed by its neighbors. This definition can not only account for the kinetic-pathway dependence of the Williams-Landel-Ferry (WLF) free volume, but can also be tested in a set of six polymers. However, the free volume which would allow a monomer to move cannot be found in any frame of its real-space image. Monomers, thus, have to move cooperatively to get out of the cage. During glass formation, dynamic heterogeneity develops, and string-like cooperative rearrangement regions (CRRs) grow over a long range of time and length scales. All of these CRRs tend to walk through loose "static cages". Our observation unifies the concepts of free volume and cooperative rearrangement. The former is a statistical average leading to a polydisperse "static cage" formation, while a loose "static cage" provides the way that CRRs move. Introduction Vogel first proposed the concept of free volume in 1921 [1]. In theory, it looks easy to understand, i.e., a polymer can move only when it has the space to do so. However, the free volume in a polymer melt has never been measured directly, and different models have to be chosen to estimate the occupied volume [2][3][4][5][6]. Before comparing the free volumes from different groups, their distinct definitions have to be clarified. The relationship between the "void" measured by techniques such as positron annihilation lifetime spectroscopy, gas absorption and birefringence measurements and the free volume is hard to judge [7]. In scattering methods, the only experimental observation which may be related to the molecular packing density, and possibly the "free volume", has been the small change in the scattering function, S(q), at scattering vectors (q) in the range of the monomer dimension [8]; in simulations, some groups have tried to use thermodynamic approaches to calculate the free volume [9][10][11][12]. As a result, the Williams-Landel-Ferry (WLF) theory is often believed to be phenomenological [13,14]. In contrast to this free volume approach, Adam and Gibbs proposed that the polymer could still move if it did so in a cooperative way at low temperature [15]. Their starting physical idea is that the relaxation dynamics at low temperature are the result of a sequence of individual events in which a subregion of the system relaxes to a new local configuration [8]. Confocal microscopy can be used to directly track the 2D and 3D dynamics of colloidal particles in supercooled fluids [16]. Cooperative rearrangement regions (CRRs) and heterogeneous dynamics are observed in both repulsive and attractive glasses. CRRs are observed to be string-like in repulsive glasses and compact in attractive glasses [17]. Therefore, a microscopic picture of glass dynamics is very important to clarify the problem. Richet et al. summarized the nature and history of glass in a recent Encyclopedia [18].
Additionally, scientists have tried almost all available methods to study glass dynamics. For small molecules, such as solvents and metal alloys, Pair Distribution Function (PDF) analysis with X-ray or neutron diffraction can reveal the atomic structural changes. However, it cannot be used in the amorphous polymer field because of the limitation of the observation range [19]. The microscopic image of glass formation in the polymer field is still missing. Neither real- nor reciprocal-space observation methods can directly see the monomer movement. Real-space observation methods, such as confocal microscopy, have a resolution hundreds of times larger than the size of a monomer; thus, the monomer cannot be seen. Reciprocal-space methods, such as X-ray photon correlation spectroscopy, are actually used to observe probe motion; the size of the probe is also hundreds of times larger than that of the monomer, and adding the probe would affect the dynamics [20]. As a result, we still do not have an intuitive image of free volume, and we do not know whether polymers rely on free volume or on CRR motion during glass formation either. Developments in Neutron Total Scattering techniques facilitate continuous structural measurements covering length scales from 0.01 angstrom to 10 nanometers. Based on the fact that a deuterated polymer has the same atomic structure [21] but a different neutron contrast from its hydrogenated counterpart, the new instruments NIMROD and NOVA, when combined with the deuterium labelling technique and molecular dynamics (MD) simulation, allow us to visualize the most probable all-atom positions in a disordered polymer. In a previous study, we carried out a series of neutron total scattering measurements (three polystyrene (PS) homopolymers with the same molecular weight and molecular weight distribution, i.e., perdeuterated PS-d8, phenyl-deuterated PS-d5 and hydrogenous PS-h8, and three of their binary blends) and corresponding MD simulations at different temperatures during glass formation. When the Fourier transforms of the real-space MD simulation are in agreement with all of those scattering profiles at different temperatures, as well as with the neutron scattering of backbone-deuterated PS-d3 and X-ray results in the literature [22,23], the real-space MD images represent the most probable all-atom positions in PS [24]. In this manuscript, we further assume that the MD images can be regarded as frames of a film, demonstrating both the statics and the dynamics of glass formation at different length and time scales. The temperature dependence of free volume and cooperative rearrangement can then be revealed experimentally. The manuscript consists of three parts. In the first part, we obtain a "static cage" structure from the neutron profiles of PS and propose an equation to predict the fractional free volume in the WLF equation; in the second part, coarse-grained simulations are conducted at longer length and time scales at different temperatures based on the force fields of the all-atom simulation. Then, general microscopic images with CRRs and dynamic heterogeneity can be seen without monomer details. Finally, a microscopic picture of glass formation is proposed. We believe that dynamic slowing down induces glass formation; free volume, CRRs, dynamic heterogeneity and the α/β split can all be linked by it. In polymer melts at temperatures away from glass formation, thermal fluctuation enables neighboring segments around a polymer chain to move freely.
The decrease in temperature lowers the amplitude of thermal fluctuation, leading to a decrease in free volume. As a consequence, both the static and dynamic cages form; at glass formation, polymer chains are confined in tubes whose average diameter is the main chain-main chain distance in the "static cage", although the molecular weight of the PS in this study is much lower than its entanglement molecular weight. The fractional free volume is defined by the ratio of the volume outside of the tube to the "static cage" size. Here, the fractional free volume that enters the WLF equation is a statistical average, and the number of monomers in a "static cage" is polydisperse, i.e., some of the "static cages" are crowded and some of them are loose. However, a spherical free volume which would enable a monomer to move cannot be found in any frame of the real-space images. Monomers, thus, have to move cooperatively, for example as a string, to come out of the cage. Dynamic heterogeneity develops over a long range of time and length scales. Additionally, faster strings always move in loose "static cages". Material Samples for total scattering were prepared by classic anionic polymerization of styrene and deuterated styrene, purchased from Sigma-Aldrich (Shanghai, China) and Reer Technology Ltd (Shanghai, China), with sec-butyllithium as the initiator in benzene at 25 °C [24]. Three homopolymers, PS-d8 (fully deuterated), PS-d5 (phenyl-deuterated) and PS-h8 (hydrogenated), with almost the same molecular weight (Mw ≈ 8900 g/mol) and molecular weight distribution (Mw/Mn ≈ 1.1), were synthesized. Sample Preparation and Neutron Total Scattering Neutron total scattering experiments were carried out on the NIMROD diffractometer at the ISIS Pulsed Neutron Source (STFC Rutherford Appleton Laboratory, Didcot, UK) and the NOVA diffractometer at the Materials and Life Science Experimental Facility (MLF) at J-PARC (Tokai, Japan) [25,26]. A simultaneous scattering vector (Q) range of 0.02-50 Å−1 was achieved. The scattering measurements were performed on 24 × 24 × 1 mm³ samples in null-scattering Ti/Zr flat-plate cells or aluminum cells with windows about 1 mm thick. The samples were heated and monitored by two heaters and two thermometers. The sample temperature was first kept at 453 K for 20 min to eliminate thermal history. Then, it was decreased to the measurement temperatures at a cooling rate of 1 K/min. Neutron total scattering was conducted at 453 K, 438 K, 423 K, 405 K, 393 K, 358 K, 343 K and 328 K, respectively. The measurement time for each sample at each temperature was from 2 to 4 h, depending on the hydrogen content. Because of the low cooling rate, there should be no hysteresis in the results [27]. Empty-cell backgrounds and a 3 mm thick vanadium plate calibration standard were also measured for an equivalent amount of time. The raw scattering data were corrected for instrument and sample-holder backgrounds, attenuation and multiple scattering using the instrument-specific software Gudrun [28]. The reduced scattering was then normalized against the known scattering of the vanadium calibration standard and converted to the microscopic differential scattering cross-section ∂σ(Q)/∂Ω vs. Q for total scattering analysis. Molecular Dynamics Simulations The all-atom simulation (AA simulation) system contains 47 PS chains of length 88 at the same temperatures as our experiments.
Fully atomistic simulations were carried out with the software package Gromacs-2016.5 under isothermal-isobaric (NPT) conditions at 1 bar using the Nosé-Hoover thermostat (coupling time 0.2 ps) and the Parrinello-Rahman barostat (coupling time 1.0 ps) [29]. An integration time step of 1 fs was used. The nonbonded interaction cutoff was rc = 1.0 nm. Additionally, the force field for PS was the all-atom Optimized Potentials for Liquid Simulations (OPLS-AA) [30][31][32]. More details can be found in our previous work [18]. The coarse-grained (CG) model was constructed by the structure-based iterative Boltzmann inversion (IBI) method [26]. This method assumes that the total potential of the system, U_CG, consists of two parts, a bonded (U_CG-bonded) and a nonbonded (U_CG-nonbonded) part. The bonded potentials were approximated by the potentials of mean force of the CG degrees of freedom (bond lengths (r), angles (θ) and dihedral torsions (ϕ)). The independent bonded potentials, assuming these potentials are uncorrelated, are usually given by simple Boltzmann inversion of the corresponding distributions, where P_CG(r, θ, ϕ) is the distribution function of bond lengths (r), bond angles (θ) and torsion angles (ϕ). For the nonbonded interaction between the CG beads, the potential of mean force can be used as a first guess in an iterative refinement, where g_target(r) is the target radial distribution function (RDF) from the reference atomistic simulation. Then, the potential is modified according to the difference between the calculated and target RDFs, and the procedure is iterated until the two RDFs match, where g_i^calculated(r) is the RDF calculated with the potential U_i^CG(r) in the ith iteration. At last, to correct the pressure, a linear perturbation is added to the potential, where A is a small constant. This linear modification of the potential and the structure-based iterations of Equation (3) were performed concurrently until the target pressure was obtained. More details can be found in Ref. [26]. In our coarse-grained model, one CG bead represented one PS monomer and was centered on the corresponding center of mass. All CG simulations in this work were carried out with the HOOMD-blue package [33,34]. The Nosé-Hoover thermostat (coupling time 0.18 ps) was used. The nonbonded interactions were truncated beyond 1.3 nm with a neighbor-list cut-off of 1.4 nm. The time step was set to 5 fs. Here, 453 K was set as T = 1.0. Additionally, glass formed at T = 0.4, whose corresponding real temperature was 181.2 K. The cooling rate for the CG simulation was 0.02 K/ps. There are three Tg values in this manuscript. The first Tg was measured by neutron total scattering from the temperature-dependent isothermal expansion factor; the other two, in the AA and CG simulations, were obtained from the inflection point of the temperature-dependent density curves. The Tg from neutron scattering was the same as that in the AA simulation (it was also the same as the DSC measurement). However, the Tg in the CG simulation was much lower than the other two. The reason can be attributed to the construction method of the CG model: it is a structure-based methodology without dynamic correction. The loss of degrees of freedom accelerates the dynamic behavior of the system, so Tg decreases in the CG model in comparison with the AA model. Because CG modelling is still a reliable way to extend the length and time scales, and is widely used in simulations in the polymer field [35], we investigated the dynamics of the system by CG simulation.
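The four equations of the IBI scheme did not survive extraction. As a hedged reconstruction consistent with the verbal definitions above (Boltzmann inversion of the bonded distributions, an RDF-based first guess with iterative correction, and a linear pressure correction), and assuming the standard forms of the method, they would read approximately as follows, where k_B is the Boltzmann constant and r_cut the nonbonded cutoff:

```latex
% Hedged reconstruction of the standard IBI relations; not copied from the source.
\begin{align}
  U^{\mathrm{CG}}_{\mathrm{bonded}}(r,\theta,\varphi)
     &= -k_{\mathrm B}T\,\ln P_{\mathrm{CG}}(r,\theta,\varphi), \tag{1}\\
  U^{\mathrm{CG}}_{0}(r) &= -k_{\mathrm B}T\,\ln g_{\mathrm{target}}(r), \tag{2}\\
  U^{\mathrm{CG}}_{i+1}(r) &= U^{\mathrm{CG}}_{i}(r)
     + k_{\mathrm B}T\,\ln\frac{g^{\mathrm{calculated}}_{i}(r)}{g_{\mathrm{target}}(r)}, \tag{3}\\
  \Delta U(r) &= A\left(1-\frac{r}{r_{\mathrm{cut}}}\right). \tag{4}
\end{align}
```

The iterative update labelled (3) here matches the sentence above that refers to "the structure-based iterations of Equation (3)"; the remaining numbering is an assumption.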
Fourier Transforms of MD Simulations Neutron total scattering profiles of six samples (PS-d8, PS-d5, PS-h8, 50 mol% PS-d8/50 mol% PS-h8, 50 mol% PS-d8/50 mol% PS-d5 and 50 mol% PS-d5/50 mol% PS-h8), as shown in Figure 1b and Figure S1, were compared with the Fourier Transforms of the molecular dynamics simulations. The sizes of the simulation boxes were about 90 Å. According to the Periodic Boundary Conditions (PBCs), the smallest accessible scattering vector was about 2π/90 = 0.07 Å−1. Scattering profiles with scattering vectors lower than 0.07 Å−1 had to be calculated without the PBCs. Define Cooperative Rearrangement Regions There were two steps to categorize the CRRs in a polymer system. First, choose the 10% fast monomers. The time interval was the one at which χ4 reached its maximum. During this time interval, the mean squared displacement (MSD) of each monomer could be derived. Then, we chose the monomers with the 10% largest MSD as the 10% fast ones. Second, decide their cooperativity. Delaunay triangulation was used first to identify the nearest neighbors of each fast monomer, and fast monomers in the same tetrahedron were put in the same group. Then, a cut-off inter-monomer distance of 6.5 Å was set to find out whether every two fast monomers in the same group were adjacent. We further judged whether every two monomers in the same group were on the same chain or whether the angle between their velocities was less than 45°. Two monomers needed to meet at least one of these requirements to be counted as moving cooperatively. Finally, groups that shared an intersection were merged. The algorithm was implemented in Python 3. The mirror coordinates from the coarse-grained MD simulation were used.
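A minimal sketch of the grouping step just described is given below. It is not the authors' Python 3 code; the input arrays and helper names are assumptions, and scipy's Delaunay triangulation plus a union-find over qualifying pairs stand in for the tetrahedron-based grouping and merging.

```python
# Hedged sketch of the CRR grouping step described above (not the authors' code).
# Inputs are assumed: positions/velocities of the pre-selected 10% fastest monomers
# and their chain indices; scipy.spatial.Delaunay supplies neighbor candidates.
import numpy as np
from scipy.spatial import Delaunay

def group_crrs(pos, vel, chain_id, cutoff=6.5, max_angle_deg=45.0):
    n = len(pos)
    parent = list(range(n))                      # union-find over fast monomers

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    tri = Delaunay(pos)                          # candidate neighbor pairs from tetrahedra
    pairs = set()
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a < b:
                    pairs.add((a, b))

    cos_max = np.cos(np.radians(max_angle_deg))
    for a, b in pairs:                           # adjacency + cooperativity tests
        if np.linalg.norm(pos[a] - pos[b]) > cutoff:
            continue
        same_chain = chain_id[a] == chain_id[b]
        ua = vel[a] / np.linalg.norm(vel[a])
        ub = vel[b] / np.linalg.norm(vel[b])
        similar_direction = np.dot(ua, ub) >= cos_max
        if same_chain or similar_direction:      # at least one requirement must hold
            union(a, b)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i) # merge groups sharing members
    return list(groups.values())

# toy usage with random data (the real system would be 47 chains x 88 monomers)
rng = np.random.default_rng(0)
pos = rng.uniform(0, 90, (400, 3)); vel = rng.normal(size=(400, 3))
chain_id = rng.integers(0, 47, 400)
crrs = group_crrs(pos, vel, chain_id)
print(len(crrs), "CRR candidates; largest has", max(map(len, crrs)), "monomers")
```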
From Neutron Total Scattering to 3D Most Probable All-Atom Structure of PS The 3D most probable atomic structure of PS is given in Figure 1a. Its temperature-dependent Fourier Transforms for PS-d8 (Figure 1b), PS-d5, PS-d3 and their binary blends (Figure S1) were all consistent with the corresponding neutron total scattering curves. Therefore, all frames of the MD images represented the 3D most probable all-atom structure of PS. When it was higher than the glass formation temperature, the PS melt was still an ergodic system. One simulation frame represented a spatial average, which had the same value as the time average of the PS chains; thus, all of the MD images could be regarded as frames of a film, which showed the dynamics of glass formation. The first peak, q1, was from the segment-segment interaction (Figure 1b). The segment-segment distance (2π/q1) decreased from ~10.0 Å at 450 K to ~9.7 Å at 393 K, indicating the formation and shrinkage of the "static cage". It was a composition of the main chain-main chain (MC-MC), main chain-phenyl (MC-PR) and phenyl-phenyl (PR-PR) interactions (Figure 1c). Because of its amorphous nature, the q1 peak was broad. The negative contributions from the main chain-phenyl (MC-PR) and phenyl-phenyl (PR-PR) interactions in the q1 range made the main chain carbon-main chain carbon (MC-MC) distance, 9.5 Å, smaller than 2π/q1 at 393 K. "Static cage" structures can be directly seen from the combination of radial and number distributions (Figure 2). There were 47 PS chains inside the 3D box, and each chain had 88 monomers. Therefore, there were only 94 end groups, which had more freedom to move; most of the monomers (97.7%) were in the middle of the chains. End groups were only confined on one side (Figure 2a). Their first peak in g(r) extended to 3.5 Å, where n(r) showed that there was only one neighbor monomer from its own chain; it went to r = 7.5 Å when n(r) = 8; the "static cage" size for the chain end was 12.0 Å (g(r = 12.0) = 1), which, on average, contained about 41.9 monomers. On the other hand, monomers in the middle of the chain were confined on both sides (Figure 2b). Their first peak in g(r) extended to 3.1 Å, where n(r) showed there were two neighbors from their own chain; it also continued to r = 7.5 Å when n(r) = 8; after that distance, the monomer number became statistically identical whether the center monomer was at the chain end or in the middle of the chain. The "static cage" size for repeating units in the middle of the chains was 13.7 Å (g(r = 13.7) = 1), which, on average, contained about 62.9 monomers. The shape of g(r) did not change with the decrease in temperature; only the "static cage" shrank. When it came close to glass formation, PS chains would be confined inside the tube formed by their neighbors.
Although the internal motions still existed, they did not affect the viscosity of the melt. There were three things to note here. The first was that the numbers of monomers and chains inside the "static cage" were polydisperse. On average, 1 monomer was confined by 62 neighbor monomers and 5 neighbor chains for a monomer in the middle of a chain, and by 38 neighbor monomers and 5 neighbor chains for a chain end. Therefore, some of the "static cages" were more crowded, and some of them were looser (Figure 3). The second was that the "static cage" is different from the "dynamic cage" in the Mode Coupling Theory (MCT). It is generally believed that the size of the "dynamic cage" is significantly smaller than the typical inter-colloid distance [8], and its duration increases quickly with decreasing temperature [36]. The final point was that the MC-MC distance could be obtained from the Fourier Transform of Figure 2b, and it was included in Figure 1c (see Figure S2; we used this method to calculate the MC-MC distance in other polymer systems thereafter). From a 3D Most Probable All-Atom Structure of PS to a Generalized Equation of Excess Free Volume In a typical polydisperse static cage, PS chains are confined in the tubes formed by their neighbors when glass forms. For the "static cage" with the average number of monomers (n_cage), the excess free volume (V_free,exs) is the volume outside of the tubes in the "static cage". The fractional excess free volume then follows (Equation (5)), where V_total is the volume of the system; V_apparent is the volume inside the tubes, which does not contribute to viscosity; V_cage is the average size of the "static cage"; π/q_MC-MC,Tg is the tube radius when segment motion freezes in the "static cage", which can be monitored by X-ray or neutron diffraction directly; and L_contour(n) is the contour length of the polymer chain with n repeating units. If the system is close to equilibrium, the average volume of the static cage is the mass of the monomers in the cage divided by the macroscopic density, i.e., V_cage = m_cage/ρ(Tg) = n_cage·m_monomer/(N_A·ρ(Tg)), where m_monomer is the molar mass of the monomer, N_A is the Avogadro constant and ρ(Tg) is the density at glass formation. Then, Equation (5) becomes Equation (6). Equation (6) partly explains why the free volume depends on the thermal history. Tg here is measured by neutron scattering, i.e., from the temperature-dependent isothermal expansion factor [18], so it can be compared with the DSC result.
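Equations (5) and (6) did not survive extraction. A hedged reconstruction, following the verbal definition above and the statement in the abstract that the fractional free volume is the ratio of the volume outside the tube to the "static cage" size, with the tube treated as a cylinder of radius π/q_MC-MC,Tg and length L_contour(n_cage), would be:

```latex
% Hedged reconstruction of Equations (5)-(6); the exact published form may differ.
\begin{align}
  f_{\mathrm{free,exs}}
    &= \frac{V_{\mathrm{free,exs}}}{V_{\mathrm{cage}}}
     = \frac{V_{\mathrm{cage}}-V_{\mathrm{apparent}}}{V_{\mathrm{cage}}},
  \qquad
  V_{\mathrm{apparent}}
     = \pi\left(\frac{\pi}{q_{\mathrm{MC\text{-}MC},T_g}}\right)^{2}
       L_{\mathrm{contour}}(n_{\mathrm{cage}}), \tag{5}\\[4pt]
  f_{\mathrm{free,exs}}
    &= 1-\frac{\pi\left(\pi/q_{\mathrm{MC\text{-}MC},T_g}\right)^{2}
               L_{\mathrm{contour}}(n_{\mathrm{cage}})\,N_{\mathrm A}\,\rho(T_g)}
              {n_{\mathrm{cage}}\,m_{\mathrm{monomer}}}. \tag{6}
\end{align}
```

The second line simply substitutes V_cage = n_cage·m_monomer/(N_A·ρ(Tg)) from the text into the first.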
When we changed the temperature or pressure, we mainly changed V_cage. Glass formed when V_cage was so small that the out-of-cage motion was slow enough to fall outside the measurement range of the instrument. For a polymer with a vinyl backbone, L_contour(n), the contour length of a chain with n repeating units, can be further simplified (Equation (7)), where L and θ are the bond length and bond angle, respectively. Historically, the temperature dependence of the free volume was derived from the combination of the Doolittle and WLF equations (Equation (8)), where α = (1/V)(∂V/∂T)_P is the thermal expansion factor of a melt or glass. Because all of the parameters in Equations (6) and (7) can be calculated, we could compare their results with the literature directly. In Figure 4, we analyzed the data for a set of six polymers and compared them with WLF results in the literature (the calculation results are listed in Tables 1 and S1). The symbols are the fractional free volumes for the six polymers at glass formation, and the solid lines are the trends of the free volume over temperature. The dashed lines are the WLF results in the literature. To our knowledge, this is a good estimate of the WLF free volume [6]. Because there are few reports of either computer simulations or scattering experiments with those parameters, and their measurement temperatures were away from the glass formation temperature, we can only give limited results and examples here. Prof. Sanditov et al. carried out some research on the "fluctuation volume" during glass formation, and they defined the "fluctuation volume" as the volume for the delocalization of active atoms [37]. In the future, we will carry out more neutron scattering experiments, combined with MD simulations, to verify our model. From the Slow Down Dynamics in the Cage to Heterogeneity How the monomer moves during glass formation is an important question. Because Equation (6) was derived from the static scattering measurement, it still cannot explain the effect of aging [41]. Whether the monomer moves via free volume or cooperative rearrangement depends on whether we can find the exact free volume in the MD images. First, we tried to use probe spheres with different radii to probe every MD image. Here, two traditional ways were used to define the unoccupied volume in the system (see Figure S3), and we tried to find the connections between the unoccupied volume and the free volume. However, as shown in Figure S4, we could not find any free volume capable of accommodating a monomer, whether we used the hard-sphere model or soft interaction potentials. Therefore, we had to turn to cooperative rearrangement. To observe dynamic heterogeneity, the simulation time had to be longer than the duration for a monomer to escape out of the cage. Therefore, a coarse-grained model based on the force fields of the all-atom simulation had to be adopted. The all-atom and CG models represented the same polystyrene because they had the same interaction potentials. The structures of the CG model at T = 1.0 came from the structures of the all-atom model at 453 K. Although the CG model could only give relative results, it is reliable and widely used to analyze the dynamical characteristics of the system. In the CG model, glass formed at T = 0.4. Table 1. The temperature-dependent fractional excess free volume when it is close to glass formation.
Calculation results according to both our model (Equations (6)-(8)) and the WLF equation in the literature (if α_G could not be found in the literature, only α_L was used). Columns: Polymer; Tg (K); fractional excess free volume at Tg (%) calculated from Equations (6)-(8); WLF results from the literature. Footnotes: a, see Ref. [13]; b, see Ref. [38]; c, see Ref. [39]; d, see Ref. [40]. Figure 5a shows the mean square displacements (<rmsd²>) of monomers at different temperatures. Leporini and co-workers defined the cage time (β relaxation) to be on the order of 1 ps [42,43]. Betancourt et al. also simulated the α/β split of a coarse-grained polymer melt and observed an α/β relaxation time of ~1.4 ps [44]. Similar trends were evident here. When the temperature was much higher than glass formation (T = 0.8), <rmsd²> continued to increase linearly with time. At T = 0.45, <rmsd²> first reached a plateau, as each monomer explored the dynamic cage created by its neighbors. Then, at longer times, <rmsd²> grew with cage rearrangement [13]. The development of the α/β split could also be seen from the temperature dependence of the incoherent intermediate scattering function (Figure 5b). The incoherent intermediate scattering function, Fincoh(q0, t), was evaluated at q0, where 2π/q0 is the main chain-main chain distance. Figure 5b shows a similar tendency of the α/β split. Some researchers showed that heterogeneity develops in the system as it cools down, which causes the violation of the Stokes-Einstein relation [45][46][47][48]. Note that polymer relaxation occurs as a multi-scale hierarchical process involving cooperative molecular motion. Through the Zwanzig-Mori-Akcasu formalism [49], the solution of the time-position correlation function reduces to an eigenvalue problem. However, because no one knows how to derive the independent eigenvalues here, we only discuss the diffusive motion of monomers hereafter. To quantify dynamic heterogeneity, the four-point susceptibility, χ4, was calculated [50]. First, we defined the two-time self-correlation function Q2(a, Δt), where a is a preselected length scale to be probed, Δri² is the mean square displacement of monomer i in time Δt, N is the number of polymer chains (N = 47) and n is the degree of polymerization (n = 88) in the simulation box. Then, χ4 is the temporal fluctuation of Q2. Therefore, χ4 is linked to the number of monomers participating in a correlated rearrangement.
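The explicit definitions of Q2 and χ4 were lost in extraction. As an illustration only, the sketch below uses one commonly adopted choice, a Gaussian overlap of width a with χ4 taken as the scaled variance of Q2 over time origins; the exact functional form used by the authors is not reproduced here, and the data are synthetic.

```python
# Hedged sketch: a commonly used definition of the self-overlap Q2 and the
# four-point susceptibility chi4 (the authors' exact form is not reproduced here).
import numpy as np

def q2_and_chi4(dr2_frames, a):
    """dr2_frames: array (n_origins, n_monomers) of squared displacements
    accumulated over the lag time Delta t, one row per time origin.
    a: probed length scale."""
    q2_per_origin = np.exp(-dr2_frames / (2.0 * a**2)).mean(axis=1)  # Q2 for each origin
    n_monomers = dr2_frames.shape[1]
    q2_mean = q2_per_origin.mean()
    chi4 = n_monomers * q2_per_origin.var()      # N * (<Q2^2> - <Q2>^2)
    return q2_mean, chi4

# toy example: 47 chains x 88 monomers, random squared displacements
rng = np.random.default_rng(0)
dr2 = rng.exponential(scale=1.0, size=(200, 47 * 88))
print(q2_and_chi4(dr2, a=1.0))
```

Scanning the lag time Δt and recording χ4 gives the peak whose position defines the time interval used for the 10% fast-monomer selection described earlier.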
To directly see the size and shape of the dynamic heterogeneity, we followed the methods in the literature [16,17]. When it was close to glass formation, monomers moved and rearranged to get out of the cage, and the distinction between the different CRRs could only hold over a finite duration. Here, monomers with the 10% largest displacements over a given time (0.99 µs at T = 0.45 and 4.60 µs at T = 0.42, respectively) were defined as mobile (Figures 7a and 7b). All of the CRRs are string-like [51]. At T = 0.45, almost all of the monomers in a string-like CRR belong to the same polymer chain (Figure 7c). More than 36% of the chain ends belong to CRRs, indicating their importance to chain mobility. It can also be seen that most of the fast monomers moved along their backbone; we can, thus, conclude that their interaction was transmitted by the backbone of the Gaussian coil, and most of the fast monomers were crawling along it. This also supports the tube model in Equations (6) and (7). Zou et al. used X-ray tomography to study the packing of granular polymer chains, and they found that the suppression of pair-wise contacts between monomers that did not share a bond provided the rigidity [52]. The further decrease in temperature to T = 0.42 led some of the CRRs from different chains to "synchronize" with each other (Figure 7d). The moving directions of strings from different polymer chains "synchronized" over a longer time (4.6 µs), forming larger CRRs. Only 19% of the chain ends belonged to CRRs, showing that their mobility became similar to that of monomers in the middle of the chain because of the dynamic slowing down. We characterized the nature of the string-like CRRs qualitatively at different temperatures (Figure 8). The size distribution of CRRs was polydisperse, P(Nc) ~ Nc^(−ν), with ν = 2.10 at T = 0.45 and 1.74 at T = 0.42 (Figure 8a). Donati et al. simulated spatial correlations of mobility and immobility in a glass-forming Lennard-Jones liquid [53]. Their result showed that larger CRRs dominate the relaxation process when ν < 3 [13,42]. The decrease in ν from T = 0.45 to T = 0.42 shows that larger string-like CRRs with a longer relaxation time dominated the dynamic slowing down closer to glass formation. Note that there were only 88 monomers in one polymer chain, and some of the CRRs had more than 100 monomers (Figure 8a).
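For the size statistics just quoted, the exponent ν in P(Nc) ~ Nc^(−ν) can be estimated from a list of CRR sizes. The sketch below is a generic log-log least-squares fit on synthetic data, not the authors' analysis; the binning choices are assumptions.

```python
# Illustrative sketch (synthetic data): estimating the exponent nu in
# P(N_c) ~ N_c^(-nu) from a list of CRR sizes via a log-log least-squares fit.
import numpy as np

def fit_power_law_exponent(sizes, n_bins=20):
    sizes = np.asarray(sizes, dtype=float)
    bins = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), n_bins)
    hist, edges = np.histogram(sizes, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centers
    mask = hist > 0
    slope, _ = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)
    return -slope                                   # nu

# draw synthetic CRR sizes from an approximate power law with nu ~ 2 (x_min = 1)
rng = np.random.default_rng(1)
u = rng.random(5000)
sizes = (1.0 - u) ** (-1.0 / (2.0 - 1.0))           # inverse-CDF sampling
print("fitted nu:", round(fit_power_law_exponent(sizes), 2))
```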
CRRs can, thus, propagate to larger length and time scales with the decrease in aging temperature and the increase in aging time. A CRR had an average of 2.5 adjacent neighbors at T = 0.45 and 3.0 neighbors at T = 0.42 (Figure 8b), reflecting its string-like nature. This can further be seen from its fractal dimension (Figure 8c). Relationship between "Static Cage" and "Dynamic Cage" Our concept of the "static cage" looks different from the "dynamic cage" in MCT. The former decreases its size, and the latter increases its duration, with the decrease in temperature at glass formation. The "static cage" and "dynamic cage" can be united as consequences of the dynamic slowing down. With the decrease in thermal fluctuation, both of them form and confine the movement of monomers. The "static cage" is a statistical average. We could, thus, calculate the WLF fractional free volume from the ratio between the occupied volume in the cage and the cage size qualitatively. In fact, because polymer glass is amorphous, the number of monomers inside the "static cage" must be polydisperse. Some of the cages are crowded, and some of them are loose. Thermal fluctuation balances them dynamically. On the other side, the monomer moves cooperatively to escape from the dynamic cage. Figure S6 counts the monomer number distributions in the "static cage" around fast CRR monomers and around all monomers at different temperatures. It indicates that the string-like fast monomers prefer to move inside loose "static cages" during the characteristic lag time. Therefore, the decrease in thermal fluctuation leads to the formation of a polydisperse "static cage", while the latter "provides" the dynamic pathway of CRRs.
Conclusions In the present work, we experimentally observed the temperature-dependent atomic structure of a PS melt, and examined whether PS moves via free volume or cooperative rearrangement during glass formation. A simple equation was posed to calculate the fractional excess free volume in the polymer system. The key parameter in the equation, π/q_MC-MC,Tg, can be derived from a combination of scattering experiments and MD simulations. Additionally, its calculated result can be verified against the measurement result of the WLF equation directly. However, this excess free volume is just a statistical average; a free volume that can accommodate a monomer can never be found in any frame of its real-space image. Therefore, free volume is a statistical average, and monomers, thus, have to move cooperatively to escape out of the cage. String-like cooperative rearrangement regions develop their lengths and durations when it is close to glass formation. We believe that both free volume and dynamic heterogeneity are consequences of the dynamic slowing down. The former is a statistical average. It leads to the formation of the polydisperse "static cage". Some of the "static cages" are crowded, and some of them are loose. The polydisperse "static cage" defines the way that CRRs move. The dynamic balance between the crowded and loose "static cages" may reflect the amplitude of dynamic heterogeneity. Our work provides a universal microscopic picture of how a polymer moves during glass formation. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/polym13183042/s1, Figure S1: Neutron total scattering and MD simulations; Figure S2: Main chain-main chain distances in reciprocal space from both the decomposition of the q1 peak and the Fourier Transform of g(r) in Figure 2; Table S1: Parameters to calculate Equation (5) for six polymers; Figure S3: Real-space image of the unoccupied volume at 393 K, 423 K and 453 K; Figure S4: The temperature-dependent unoccupied volumes with different probe sizes; Figure S5: Snapshots of another way to color the fast monomers in CRRs; Figure S6: Monomer number distribution in the "static cage" around all monomers and the fastest 10% at different temperatures. Data Availability Statement: The data that support the findings of this study are available within the article and its supplementary material.
9,487.8
2021-09-01T00:00:00.000
[ "Materials Science", "Physics" ]
Mammalian Abasic Site Base Excision Repair Base excision repair (BER) is one of the cellular defense mechanisms repairing damage to nucleoside 5′-monophosphate residues in genomic DNA. This repair pathway is initiated by spontaneous or enzymatic N-glycosidic bond cleavage creating an abasic or apurinic-apyrimidinic (AP) site in double-stranded DNA. Class II AP endonuclease, deoxyribose phosphate (dRP) lyase, DNA synthesis, and DNA ligase activities complete repair of the AP site. In mammalian cell nuclear extract, BER can be mediated by a macromolecular complex containing DNA polymerase β (β-pol) and DNA ligase I. These two enzymes are capable of contributing the latter three of the four BER enzymatic activities. In the present study, we found that AP site BER can be reconstituted in vitro using the following purified human proteins: AP endonuclease, β-pol, and DNA ligase I. Examination of the individual enzymatic steps in BER allowed us to identify an ordered reaction pathway: subsequent to 5′ "nicking" of the AP site-containing DNA strand by AP endonuclease, β-pol performs DNA synthesis prior to removal of the 5′-dRP moiety in the gap. Removal of the dRP flap is strictly required for DNA ligase I to seal the resulting nick. Additionally, the catalytic rates of the reconstituted BER system and of the individual enzymatic activities were measured. The reconstituted BER system performs repair of AP site DNA at a rate that is slower than the respective rates of AP endonuclease, DNA synthesis, and ligation, suggesting that these steps are not rate-determining in the overall reconstituted BER system. Instead, the rate-limiting step in the reconstituted system was found to be removal of dRP (i.e. dRP lyase), catalyzed by the amino-terminal domain of β-pol. This work is the first to measure the rate of BER in an in vitro reaction. The potential significance of the dRP-containing intermediate in the regulation of BER is discussed. Base excision repair (BER) pathways are employed to repair damaged or modified bases in DNA. Because similar BER pathways are found in prokaryotic and eukaryotic cells, the extensive knowledge about prokaryotic BER has facilitated studies of this repair mechanism in mammalian cells. BER has been examined in vitro with crude extracts from Escherichia coli, Saccharomyces cerevisiae, Xenopus laevis oocytes, bovine testis, and various mammalian cells (1)(2)(3)(4)(5) and reconstituted using purified proteins from both prokaryotes and eukaryotes (1, 5-8). Mammalian cells can repair abasic sites, an intermediate of BER, using at least two distinct pathways: one involving single-nucleotide gap filling by DNA polymerase β ("simple" BER) and an "alternate" pathway that involves proliferating cell nuclear antigen (PCNA). In this latter pathway, gap-filling DNA synthesis appears to be catalyzed by DNA polymerase δ or ε and results in a repair patch of 2-6 nucleotides (9). In addition, Klungland and Lindahl (10) have described a BER pathway that repairs reduced AP sites. This pathway also generates a repair patch 2-6 nucleotides in length, but in this case gap-filling DNA synthesis could be performed by DNA polymerase β (β-pol) or δ. Like the pathway described above (9), this BER pathway was stimulated by PCNA (10). A working model for the simple BER pathway is outlined as follows (for review see Refs. 11 and 12).
The glycosidic bond linking the damaged base and deoxyribose is cleaved either spontaneously or by a DNA glycosylase activity removing the inappropriate base to generate an abasic or AP site in double-stranded DNA. The phosphodiester backbone of the AP site is cleaved 5′ to the sugar moiety by AP endonuclease, leaving a 3′-hydroxyl group and a deoxyribose phosphate (dRP) group at the 5′ terminus. Excision of the deoxyribose phosphate group is catalyzed by 2-deoxyribose-5-phosphate lyase, an activity that is intrinsic to the amino-terminal 8-kDa domain of β-pol. The β-pol dRP lyase activity functions via β-elimination (13) and produces a single-nucleotide gap with a 3′-hydroxyl and 5′-phosphate at the gap margins. DNA polymerase β then fills the single-nucleotide gap, and a DNA ligase seals the resulting nick. The identity of the DNA ligase that completes the simple BER pathway in mammalian cells is unresolved. There is genetic and biochemical evidence implicating the products of both the LIG1 and LIG3 genes in BER. Cell lines deficient in either DNA ligase I or DNA ligase III activity are hypersensitive to DNA alkylating agents (14,15), and extracts from these cell lines are defective in BER (16,17). Furthermore, protein-protein interactions between β-pol and XRCC1, the protein partner of DNA ligase III, and between β-pol and DNA ligase I have been characterized (7,18,19). Recently, we described the partial purification of a BER-proficient multiprotein complex from bovine testis nuclear extracts (19). DNA polymerase β and DNA ligase I were identified as components of this complex, but no other ligases were present. Further studies have determined the stoichiometry and thermodynamic properties of this interaction and revealed that stable complex formation between DNA ligase I and β-pol is mediated through the noncatalytic amino-terminal domain of DNA ligase I and the 8-kDa amino-terminal domain of β-pol (20). Together, these results support the notion that a complex of β-pol and DNA ligase I catalyzes the latter steps of simple BER. To define the influence of this and other putative protein-protein interactions on the catalytic activities of enzymes that participate in simple BER, we reconstituted BER of a DNA substrate containing an AP site with three purified human enzymes: AP endonuclease, DNA polymerase β, and DNA ligase I. By characterizing isolated, individual reactions within the BER pathway, we determined the rate-limiting step. Because the overall repair of the AP site occurred at a rate similar to that of dRP removal (the dRP lyase step), we suggest that β-pol dRP lyase activity could determine the choice between the simple and alternate BER pathways. The DNA substrate for the dRP lyase functional assay was a 49-base pair (bp) fragment constructed by annealing two oligodeoxyribonucleotides (Operon Technologies, Inc., Alameda, CA) to introduce a G-U base pair at position 21: 5′-AGCTACCATGCCTGCACGAAUTAAGCAATTCGTAATCATGGTCATAGCT-3′ and 3′-TCGATGGTACGGACGTGCTTGATTCGTTAAGCATTAGTACCAGTATCGA-5′. The uracil-containing strand was labeled at the 3′-end for the dRP lyase assay as described (21). Annealing: Lyophilized oligodeoxyribonucleotides were resuspended in 10 mM Tris-HCl, pH 7.4, and 1 mM EDTA, and the concentrations were determined from their UV absorbance at 260 nm.
Complementary oligodeoxyribonucleotides or template primers were annealed by heating a solution of 10 µM template with an equivalent concentration of oligomers and primer to 90°C for 3 min and incubating the solution for an additional 15 min at 50-60°C, followed by slow cooling to room temperature. Recombinant Human Enzymes: Human recombinant β-pol was overexpressed from plasmid pWL-11 and purified as described (22). Oligonucleotide site-directed mutagenesis was performed essentially as described previously (23). Human AP endonuclease was expressed in E. coli strain BL21/DE3pLysS from pXC53 carrying the HAP gene and purified as reported (24). An amino-terminal 84-residue deletion mutant of uracil DNA-glycosylase (UDG) that retains glycosylase activity was overexpressed in E. coli and purified to apparent homogeneity using the protocol described (25). Recombinant human DNA ligase I was overexpressed in baculovirus-infected cells and purified as described (26). In Vitro Base Excision Repair Assays: The simple base excision repair pathway was reconstituted using the recombinant human enzymes under the following conditions. The reaction mixture (10 µl) contained 50 mM Hepes, pH 7.5, 2 mM dithiothreitol, 0.2 mM EDTA, 100 µg/ml bovine serum albumin, 10% glycerol, 4 mM ATP, 1 µM 51-bp substrate with a uracil at position 22, and 0.3 µM [α-32P]dCTP. The enzyme mixture was assembled by mixing 10 nM each of UDG, AP endonuclease, and β-pol with 100 nM DNA ligase I at 37°C for 5 min. Bovine serum albumin replaced DNA ligase I or other constituent enzymes in the reactions that were assembled in the absence of an individual enzyme. The assembled enzyme mixture was incubated with DNA substrate at 37°C for 30 min. To follow the reaction time course, the assembled reaction was incubated in the absence of MgCl2 for 5 min at 37°C. During the incubation period, uracil is removed by UDG, creating an AP site. To initiate further repair, 5 mM MgCl2 (essential for AP endonuclease and β-pol) was added, and the reaction mixture was incubated at 37°C. After various periods, aliquots were removed, and the reaction was terminated by adding an equal volume of gel loading buffer (40 mM EDTA, 80% formamide, 0.02% bromphenol blue, and 0.02% xylene cyanol). After 2 min at 95°C, the reaction products were separated by electrophoresis in a 15% polyacrylamide gel containing 7 M urea in 89 mM Tris-HCl, 89 mM boric acid, and 2 mM EDTA, pH 8.8. The gel was dried, and the reaction products were visualized by autoradiography. The assay reactions described in Figs. 2 and 3 (25 µl) contained the same buffer as above. The rate of reconstituted BER was measured using a 51-bp DNA substrate (1 µM) that was first treated with UDG (10 nM) in the absence of MgCl2 for 5 min at 37°C. Subsequently, 10 nM AP endonuclease, 10 nM β-pol, 100 nM DNA ligase I, 4 mM ATP, 10 mM MgCl2, and 10 µM [α-32P]dCTP were added simultaneously, and aliquots were removed at the indicated times and quenched with 10 µl of 0.25 M EDTA. The samples were mixed with an equal volume of gel loading buffer, and the products were separated in 12% polyacrylamide denaturing gels. Individual bands were measured using a PhosphorImager (Molecular Dynamics), and the concentration of DNA present in the 51-mer product band was calculated from [α-32P]dCTP standards included in the gels. The ligase assay was performed in the same manner, except that after UDG treatment, the DNA was incubated for 10 min with AP endonuclease, β-pol, and [α-32P]dCTP.
DNA ligase I (10 nM) and 4 mM ATP were simultaneously added, aliquots were removed at various time intervals and quenched in EDTA, and the products were separated on denaturing gels. The DNA synthesis assays (β-pol) (see Fig. 2) were performed under the same conditions on a DNA substrate possessing a single-nucleotide gap in place of uracil in the 51-bp substrate. No other BER proteins were present in the assay. Quenched products were dried onto DEAE cellulose filters (DE-81, Whatman, Inc.), and the unincorporated [α-32P]dCTP was removed by rinsing the filters four times in 0.3 M ammonium formate, pH 8.0, and rinsing twice in 95% ethanol. Incorporated dCMP was quantified by scintillation counting. The DNA synthesis assay was repeated in Fig. 3, except that the substrate was the 51-residue uracil-containing substrate pretreated with UDG, AP endonuclease, and then ± β-pol (in the absence of dCTP). [α-32P]dCTP (10 µM) and DNA ligase I (100 nM) were then added simultaneously, the reactions were quenched at various times, and the amount of DNA synthesis was measured using a portion of the reactions in the filter assay described above. Portions of the same reactions were then separated on a 12% polyacrylamide denaturing gel, and the amount of the 51-residue product that incorporated [32P]dCMP was quantified. The dRP lyase and dRP lyase/DNA gap-filling synthesis assay (see Fig. 4) was performed under the following conditions. The standard reaction mixture (10 µl) contained 50 mM Hepes, pH 7.5, 2 mM dithiothreitol, 4 mM ATP, 10 nM 32P-labeled DNA, 10 nM each of UDG, AP endonuclease, and wild-type β-pol or a K72A mutant of β-pol, and 100 nM DNA ligase I. The reaction was initiated by adding 10 mM MgCl2 and 1.33 µM [α-32P]dCTP and incubated at 37°C for 1, 5, or 10 min. The reactions were terminated with EDTA, and the product was stabilized by addition of NaBH4 to a final concentration of 340 mM and incubated for 30 min at 0°C. The stabilized DNA product was recovered by ethanol precipitation in the presence of 0.1 µg/ml tRNA and resuspended in 10 µl of gel loading buffer. After incubation at 75°C for 2 min, the reaction products were separated by electrophoresis in a 20% polyacrylamide gel and visualized by autoradiography. Reconstitution of Uracil-initiated Base Excision Repair Using Purified Human Proteins: The two substrates in our in vitro BER system were [α-32P]dCTP and a 51-bp duplex DNA with dUMP at position 22 in one strand (Fig. 1A). The labeled in vitro BER DNA products are the 51-residue molecule with [32P]dCMP at position 22 and/or the unligated 22-residue intermediate. In the experiment shown in Fig. 1B, the BER system was reconstituted with four purified human enzymes (UDG, AP endonuclease, β-pol, and DNA ligase I). The reaction mixture was incubated for 30 min, after which time the 51-residue product and a very small amount of the unligated 22-residue intermediate had accumulated (lane 1). The ratio of 51-mer to 22-mer observed here was similar to that observed earlier with bovine tissue and mouse cell line nuclear extracts (5,27). The enzyme and substrate requirements for in vitro BER were further investigated. UDG, AP endonuclease, β-pol, and DNA were each required for radiolabeled product formation (Fig. 1B, lanes 2-4 and 7). In the absence of DNA ligase I, accumulation of the unligated 22-residue intermediate was observed (lane 5). With all the enzymes present, the incorporation of [32P]dCMP into BER products was examined as a function of time (Fig. 1C).
The 22-mer intermediate accumulated modestly before being ligated to the downstream oligomer. Conditions for Kinetic Studies of Base Excision Repair in Vitro-To identify potential regulatory steps in BER, we compared time courses of the reconstituted reaction and the individual steps in the reaction pathway at 37°C. These kinetic studies were performed at the enzyme concentrations described in Fig. 1 (10 nM each UDG, AP endonuclease, and β-pol and 100 nM DNA ligase I) using saturating substrate concentrations (1 μM DNA, 10 μM dCTP, 4 mM ATP) to allow multiple catalytic turnovers. The decision to use a higher concentration of DNA ligase I than the other enzymes was based on preliminary results indicating that the unligated intermediate was accumulating during BER reactions performed using 10 nM DNA ligase I (data not shown). Because the persistence of this intermediate obscured study of enzymatic steps preceding the ligase step in this pathway (see below) and did not reflect results obtained with tissue nuclear extract (5), the higher DNA ligase I concentration was used throughout this work to facilitate study of the other steps. The time course of the reconstituted BER reaction revealed a slight lag phase preceding a linear steady-state rate (Fig. 2A). Extrapolation of the linear phase to the abscissa suggested a lag of approximately 5 s. These results are consistent with a model in which the concentration of an intermediate initially accumulates in the reaction mixture. After this lag, the overall base excision reaction proceeded at a velocity of 0.6 nM s⁻¹ (Fig. 2A). The fact that there was a lag and that the rate of the overall reaction after the lag was relatively slow suggested that an intermediate accumulates and limits the overall reaction. Identification of the Rate-determining Steps for AP Site BER in Vitro-To delineate which BER activities were rate-determining for product formation, we compared the rates of the various enzymatic activities (Table I). Note that the velocity measured for the overall BER reaction was 0.6 nM s⁻¹. The first activity measured was the rate of DNA synthesis. With a single-nucleotide gapped DNA substrate, we found that β-pol formed the 22-residue product at a reaction velocity of 4.5 nM s⁻¹ (Fig. 2B). Addition of DNA ligase I had a slight stimulatory effect on this DNA synthesis reaction rate (data not shown). Thus, the rate of the β-pol-mediated gap-filling reaction on a gapped DNA substrate was faster than the rate of the overall BER reaction and was not rate-limiting. The presence of a dRP group at the 5′ margin of the gap did not affect this rate (see Fig. 3A). The velocity of the DNA ligase I step under conditions of the reconstituted system was then measured and found to be 4 nM s⁻¹. Because the rate of ligation is faster than the rate of the reconstituted BER system (at high ligase concentration), one of the earlier intermediates must be accumulating during the BER reaction in vitro. Reducing the ligase concentration to 10 nM would result in a rate lower (0.4 nM s⁻¹) than the rate measured for the reconstituted BER reaction, confirming the observation noted above that DNA ligase I activity can limit the rate of BER when present at the lower concentration. From Table I, the rate of in vitro BER can be compared with the velocity of each of the purified enzymes under the conditions of the in vitro BER system. The results discussed above establish that DNA synthesis was not rate-limiting.
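As a rough illustration of the lag-phase analysis just described (fitting the linear steady-state portion of the product-versus-time course, taking its slope as the velocity and its extrapolated intercept on the time axis as the apparent lag), the following sketch uses invented time points shaped to mimic the reported ~0.6 nM s⁻¹ velocity and ~5 s lag; the cutoff separating the lag from the steady state is likewise an assumption, not a value from the paper.

import numpy as np

# Invented time course (s, nM) chosen to resemble the reported behavior
time_s = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0])
product_nM = np.array([0.0, 0.6, 3.0, 9.0, 15.0, 24.0, 33.0])

steady = time_s >= 10.0                          # assume the lag is over by ~10 s
slope, intercept = np.polyfit(time_s[steady], product_nM[steady], 1)

velocity_nM_per_s = slope                        # steady-state velocity
apparent_lag_s = -intercept / slope              # x-intercept of the linear phase
print(f"velocity ~ {velocity_nM_per_s:.2f} nM/s, lag ~ {apparent_lag_s:.1f} s")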
As noted in Table I, AP endonuclease was also found to possess high catalytic efficiency and was not rate-limiting. Consistent with this suggestion, we found that preincision of the AP site-containing DNA did not alter the velocity of overall BER (data not shown). Instead, the rate of 51-mer product formation for in vitro BER was almost identical to that of the β-pol dRP lyase activity (Table I). This suggests that the dRP lyase step is rate-limiting during in vitro BER and that the dRP-containing intermediate accumulates in the BER pathway. These data are paralleled by kcat/Km determinations (Table I) that identify the dRP lyase step as the least efficient. Significantly, these data also indicate that despite several potentially beneficial protein-protein interactions in the reconstituted BER system, the overall BER rate is not faster than the slowest individual enzymatic activity. The hypothesis that removal of the dRP moiety is the rate-determining step in the in vitro BER system was tested directly by measuring the velocity of polymerase-dependent nucleotide incorporation in the presence or absence of a dRP moiety. The presence or absence of the dRP group did not influence DNA synthesis by β-pol (velocity = 4 nM s⁻¹, Fig. 3A). This was similar to the DNA synthesis velocity measured for β-pol in the absence of other BER proteins and was higher than the velocity of the dRP lyase itself (Table I). When a sample from the same reaction mixture was analyzed by gel electrophoresis, the rate of 51-mer product formation was found to be lower for the reaction in which the dRP group had not been enzymatically removed prior to DNA synthesis. In fact, the velocity of 51-mer BER product formation in the reaction performed with substrate containing the dRP group (0.8 nM s⁻¹, Fig. 3B) coincided with the velocity measured for the dRP lyase and was similar to the value measured for the overall BER reaction (Table I).
FIG. 2. …, and ATP (4 mM) were mixed with the substrate simultaneously, and the rate of formation of the 51-nucleotide product was visualized on a denaturing polyacrylamide gel. BER product formation in the 25-μl reaction mixture is indicated on the ordinate. The observed velocity for BER = 0.6 nM s⁻¹. B, the rate of DNA synthesis by β-pol was measured on the 51-nucleotide substrate (300 nM) containing a single-nucleotide gap at an enzyme concentration of 10 nM. Substrate with incorporated [32P]dCTP (10 μM) was quantitated using the filter binding assay described under "Experimental Procedures." The initial velocity was 4.5 nM s⁻¹. C, the velocity of the ligase step was measured on the 51-bp substrate containing a G-U mismatch (1 μM) that was preincubated in the presence of UDG, AP endonuclease, β-pol (10 nM each), and [α-32P]dCTP (10 μM) to create an internally labeled nicked substrate. The rate of formation of the 51-mer ligation product was then quantified in the presence of DNA ligase I (10 nM) and ATP (4 mM). The observed velocity was 0.4 nM s⁻¹.
FIG. 3. Measurement of the DNA synthesis and dRP lyase velocities of β-pol on a BER intermediate substrate. The 51-nucleotide substrate containing a uracil at position 22 was treated with UDG and AP endonuclease to create a single-base gap bearing a dRP moiety at the 5′-phosphate in the gap. A portion of this substrate was then treated with the dRP lyase activity of β-pol (in the absence of dCTP) to remove the deoxyribose phosphate group from the DNA.
These enzymatic steps created the substrate intermediates present immediately before and after the dRP lyase activity of β-pol. A, DNA synthesis velocities on both substrates (dRP present (open circles) versus dRP group absent (closed circles)) were measured using the filter assay described under "Experimental Procedures." The radioactivity present in the 22- and 51-nucleotide product bands was plotted on the ordinate. The observed velocities were similar (4.0 versus 3.9 nM s⁻¹, respectively), indicating that the dRP moiety does not affect DNA synthesis by β-pol. B, the velocity of combined DNA synthesis, dRP lyase activity, and DNA ligase I activity was measured by loading the reactions used in A on a 12% polyacrylamide denaturing gel. The radioactivity present in the 51-nucleotide product bands was then measured by scintillation counting, and the product formed was plotted on the ordinate. For the substrate bearing the dRP group (open circles), the observed velocity was 0.8 nM s⁻¹, and for the substrate in which the dRP group was enzymatically removed (closed circles), the velocity was 2.7 nM s⁻¹.
This result clearly indicates that the dRP lyase step is a rate-determining step in the in vitro base excision repair system. The rate of product formation observed for substrate lacking the dRP moiety was approximately 3.5-fold higher (velocity = 3 nM s⁻¹) and was similar to the velocity of the DNA synthesis and DNA ligation steps. Sequence of Steps in the Reconstituted BER System-The sequence of the steps following incision by AP endonuclease was investigated using both [α-32P]dCTP and 3′-32P-labeled DNA to simultaneously measure gap filling and dRP lyase activity, respectively, in the same reaction mixture. dRP lyase activity was measured by the difference between the 3′ end-labeled substrate and product strands (Fig. 4, bands 2 and 3, respectively). The 29-residue product migrates slightly faster than the 29-residue substrate. The product of DNA synthesis is a 22-residue DNA strand that is well separated from the 3′ end-labeled 29-residue molecules. Gap-filling DNA synthesis was almost complete after 1 min and was not influenced by the presence of the dRP flap (Fig. 4, lanes 4 and 13). This provides further evidence that the dRP group in the gap does not inhibit the DNA synthesis step. Interestingly, the dRP lyase activity was slightly stimulated (approximately 2-fold) by the presence of DNA synthesis. However, the rate of dRP lyase activity in the presence of DNA synthesis remained slower than the rate of DNA synthesis. To further confirm that gap-filling synthesis could proceed in the presence of the dRP flap, we conducted reactions with a β-pol mutant, K72A, that is dRP lyase-deficient (<10% of wild-type activity) but DNA synthesis-proficient (28). In the BER system, the K72A enzyme was able to perform gap filling at the same rate as the wild-type enzyme (Fig. 4, lanes 4-6 and lanes 13-15), despite the dRP group being attached at the 5′ margin of the nick. These results indicate that β-pol gap filling can occur before β-pol-dependent removal of the dRP flap. It is also interesting to consider the DNA ligation step. As shown in Fig. 4 (lanes 16-18), DNA ligase I sealed the gap-filled and dRP-cleaved molecule (band 3) in the BER reaction. Removal of the dRP group was required for DNA ligase I to function, because the gap-filled, dRP flap-containing molecule was not ligated (band 2, lanes 7-9).
Taken together, these results indicate that the predominant pathway for AP site BER is AP endonuclease cleavage followed by DNA synthesis; a slow removal of the dRP moiety then occurs that limits overall BER. The resulting nicked DNA is then ligated. DISCUSSION A wide variety of exogenous and endogenous chemicals damage DNA bases, initiating their removal by the numerous damage-specific DNA glycosylases present in cells (for review, see Ref. 29). Spontaneous cleavage of the N-glycosidic bond linking both undamaged and damaged nitrogenous bases to the deoxyribose sugar moiety generates additional abasic sites, estimated to total between 2,000 and 10,000 per day per human cell (30). These damaged sites must be repaired in a timely manner, because such noncoding lesions represent gaps in the templates used by polymerases, increasing the likelihood of mutation and aberrant RNA transcripts (31-33). The base excision repair systems in the cell are believed to constitute the primary mechanism for the repair of this form of DNA damage. This work demonstrates that base excision repair of an abasic site can be reconstituted in vitro using three human enzymes: AP endonuclease, DNA polymerase β, and DNA ligase I. This is the same repair system that was studied by Nicholl et al. (8) and is similar to the pathway described by Kubota et al. (7) containing AP endonuclease and β-pol but that utilizes a DNA ligase III-XRCC1 complex in place of DNA ligase I. The possibility that the DNA ligase III-XRCC1 complex can take the place of DNA ligase I in the simple BER pathway was not directly examined in this work, so a role for DNA ligase III in BER is not excluded here. However, a BER-proficient complex containing β-pol and DNA ligase I, but not DNA ligase III, has been partially purified from bovine testis nuclear extract (19). Based on genetic and biochemical analysis of DNA ligase-deficient mammalian cell lines (14-17), it is possible that DNA ligase I and DNA ligase III-XRCC1 participate in distinct BER pathways whose in vivo substrate specificity remains to be elucidated. The simple BER pathway characterized in this study is distinct from the PCNA-dependent alternate BER pathways, components of which also play a role in semiconservative DNA replication and nucleotide excision repair (9, 34, 35).
FIG. 4. dRP lyase and DNA gap-filling synthesis. The 49-bp oligonucleotide duplex DNA (10 nM) labeled with [32P]ddAMP at the 3′-end was incubated with either wild-type β-pol or a K72A mutant, and the dRP lyase and DNA synthesis reactions were carried out as described under "Experimental Procedures." The reaction mixture was incubated at 37°C, and aliquots were withdrawn at 1, 5, and 10 min. At the end of the reaction, the DNA products were stabilized by addition of NaBH4 (340 mM) and incubated for 30 min at 0°C. The stabilized DNA products were recovered and separated by electrophoresis in a 20% polyacrylamide gel containing 8 M urea. A photograph of an autoradiogram is shown. Lanes 1-3 and 10-12, dRP lyase activity; lanes 4-7 and 13-15, dRP lyase and gap-filling synthesis without DNA ligase (Lig.) I; lanes 7-9 and 16-18, dRP lyase, gap-filling DNA synthesis, and ligation with K72A and wild-type β-pol, respectively. The arrows 1, 2, 3, and 4 refer to the positions of the 51-mer BER product, the DNA substrate for dRP lyase, the dRP lyase product, and the 22-mer DNA synthesis products, respectively. + denotes added; − denotes omitted.
Our interest in base excision repair stems from studies of mammalian β-pol, which had been proposed to be a DNA polymerase active in repairing short (1-6 nucleotide) gaps in DNA (5). The identification of interactions between β-pol and AP endonuclease (36) and between β-pol and DNA ligase I (19, 20), all of which are components of a BER-proficient complex, strongly supports the notion that the latter steps of simple BER are catalyzed by the sequential actions of these enzymes. Experiments by various groups (9, 27, 37) have shown that simple base excision repair is the predominant type of base excision repair used by human cells and mouse fibroblast cells; therefore, it appears likely that the enzymes and reactions studied here constitute the predominant base excision repair pathway operating in human cells. Importantly, this work measured the catalytic rate of the overall reconstituted BER reaction as well as individual steps, allowing identification of the dRP lyase step as the activity likely to be regulating this pathway. This is the first measurement of the rate of mammalian base excision repair in an in vitro reaction and, to our knowledge, the first rate determination of any mammalian DNA repair system. Within the reconstituted BER system, the dRP lyase activity was found to be the rate-determining step, because the velocity of this step (0.75 nM s⁻¹) was similar to the velocity measured for the overall BER reaction (Table I). The rate of DNA ligase I (4 nM s⁻¹) would be expected to be partially rate-limiting at lower DNA ligase I concentrations (e.g. 10 nM). DNA synthesis was rapid (velocity = 4.5 nM s⁻¹) and not rate-limiting. An additional finding was that the AP endonuclease product (dRP-containing intermediate) did not limit DNA synthesis. Thus, gap filling was not disrupted when the dRP moiety was still bound to the 5′-phosphate in the gap (see Figs. 3 and 4). Instead, the presence of a dRP group in the gap was found to inhibit DNA ligase I activity (see Fig. 4). After removal of the dRP, the rate of BER was similar to the rate of DNA synthesis and ligation (see Fig. 3). DNA ligase I was unable to seal the nick while the dRP flap was present, presumably because the flap interfered with ligation or DNA ligase binding. Alternatively, binding by β-pol at the gap may prohibit DNA ligase I from binding to the 5′-phosphate. Although it is known that the 8-kDa domain of β-pol possesses the dRP lyase active site and interacts with DNA ligase I (20, 28, 38), the precise molecular mechanism of this interaction has not been elucidated. Noting that the dRP lyase step is rate-determining in the BER system and that the dRP group must be removed prior to ligation, it is clear that dRP removal plays a significant functional role in the regulation of base excision repair. These observations also allow us to propose the following order for the AP site base excision repair enzymatic activities: AP endonuclease, DNA synthesis, dRP lyase activity, and then ligation (Fig. 5). The identification of the dRP lyase step as a rate-determining step in simple base excision repair now permits a closer examination of the pathways capable of repairing AP sites in DNA. Data obtained from prokaryotic and eukaryotic systems (9, 10, 39-41) suggest that at least two classes of base excision repair pathway may operate in cells, including human cells: the simple, β-pol-mediated pathway described here, as well as alternate, PCNA-dependent pathways that can utilize DNA polymerase β, δ, and/or ε.
It seems plausible that the choice of pathway would be linked to the status of the dRP group. Should the dRP be processed quickly by the dRP lyase activity of β-pol, the simple BER mechanism would likely complete the repair of the gap. If the dRP group were to persist in the DNA, however, it seems possible that the components of an alternate base excision repair pathway might bind to the dRP flap (with or without β-pol bound at the site) and complete the repair event. Thus, the status of the dRP group might function as a "switch" between the simple and alternate repair pathways. Although both pathways are known to occur in competition with each other in human cells (9), the signal that initiates each alternate BER pathway on gapped DNA has not yet been characterized.
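One way to restate the rate-limiting-step argument running through the Results and Discussion is that, at steady state, a strictly sequential pathway cannot turn over faster than its slowest step. The sketch below simply compares the step velocities quoted in the text and reports the minimum; the AP endonuclease value is a placeholder (the text describes it only as highly efficient), and the min-rate comparison is a simplification rather than a kinetic model of the pathway.

# Approximate step velocities quoted in the text (nM/s)
step_velocity_nM_per_s = {
    "AP endonuclease incision": 20.0,   # placeholder: described only as fast
    "beta-pol gap-filling synthesis": 4.5,
    "beta-pol dRP lyase": 0.75,
    "DNA ligase I (100 nM)": 4.0,
}

limiting_step = min(step_velocity_nM_per_s, key=step_velocity_nM_per_s.get)
# In a strictly sequential pathway the steady-state flux cannot exceed this value;
# compare with the observed overall BER velocity of ~0.6 nM/s.
print(limiting_step, step_velocity_nM_per_s[limiting_step])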
7,299.6
1998-08-14T00:00:00.000
[ "Biology", "Chemistry" ]
ENGLISH USE IN THE ADVERTISEMENT TEXTS OF HOTEL 99 JEMBER Advertisement is one of short functional texts used to gain customers’ attention. Its visual portray determines whether this persuasive passage will successfully grab the interest of readers and direct them to finally use the product or ignore it. Batchia and Ritchie (2006: 19) proposes that “English is important vehicle for global advertisement”. Additionally, Piller (2003: 17) states that “English is the most commonly used language in advertising especially in non-English speaking countries or multilingual country.” These two intriguing ideas of using English in advertisement direct the advertiser of Hotel 99 Jember to insert some English Code. Through its official Instagram (hotel.99.jember) the advertiser publicly publishes some postings about the hotel. Those postings attract the researchers’ curiosity to know why English is existing there and what the benefits of English insertions to the hotel are. Laying on Zohreh and Monireh’s finding (2013: 87), this investigation is conducted to match the four influencing reasons (gaining attention, persuasion, prestige, and technology) affecting the advertiser to put English in the advertisement texts. Interview is chosen as the way to investigate the advertiser’s reasons. Further, customers’ responses are also the case examined in this research. By having customers’ responses, it is worth to know whether English use effectively influences them to choose the hotel or not. Finally, this research is not only beneficial to the researchers to deeply study about English use but also for the hotel to thoroughly know the effect of their advertisement texts in Instagram. INTRODUCTION English use is now becoming attractive to Indonesian people. Onishi (2010) found an Indonesian family who sent their children to private school where English was the main language of instruction which brought them to speak English more fluently than Indonesian official language and its primary reason was the social standing. For the same reason, Sadtono's study (2013: 46) revealed that Indonesian people thought that English would lift their social status up into middle or upper class. Further, Mutisari and Mali (2017: 100) discovered that "…the participants tended to have very positive perceptions of English as a means to reach out to the international communities and to support the communicative functions of the Indonesian language." All findings show that people seem proud to possess highly social status when they are able to reach their outside world and English is the tool. The phenomena contribute the permeation of English use into Indonesian language, including the use of English in advertisement. Advertisement is one of ways to communicate to the consumers. It is beneficial either for the company or the consumers. The company may get a shortcut to promote their products, and the consumers may get more information about the various products useful for them. By seeing its mutually reciprocal corresponding, an advertisement is better designed ravishingly. It should lure the consumers for it will increase revenues. One of ways to bring it into existence is the use of English. English as international language will color the advertisement so that it can attract consumers' attention. As it is mentioned by Piller (2013: 12) that English is a common language used in advertisement by people whose native tongue is not English. 
Additionally, Batchia and Ritchie (2006: 19) stated that English is important for global advertisement since it can be accessed by anyone around the globe. Mutiara's research (2014) on English use in advertisement of language center in Indonesia showed that English use gave big effect toward the customers' choice although this was not the main reason. English was said to represent modernity and prestige. This article discusses the use of English in advertisement texts of Hotel 99 Jember. Hotel 99 Jember is a newly built hotel located at Jalan Darmawangsa 99 Jubung Sukorambi Jember. It was firstly opened on September 9, 2018. This hotel is considered as belonging to melati class with facilities: 55 rooms, air conditioning, internet Hi-Speed Wifi, praying room, and resto café. Most guests are local people who work as salesmen, travelers, business men, and government officials. As this hotel is still new, the promotion is highly spread. The promotion is widely expanded through some social media platforms such as traveloka, pegipegi, and instagram. Farther, despite their local guests, the hotel displays the advertisements texts with some English codes inserted. Hence this investigation is focusing on the reasons why the advertisers insert some English codes and on the s' response towards the use of English codes in the advertisement texts. RESEARCH METHOD This data of this research are taken from advertisement texts as the English codes are listed; from interview script to know the reasons why the advertisement texts consist of English codes; and from questionnaires distributed to guests to know the effects of English use. Advertisement texts are taken from the official instagram of Hotel 99 Jember (https://www.instagram.com/hotel.99.jember/). Interview is done to match whether Zohreh and Monireh's finding (2013: 87) on four reasons (gaining attention, persuasion, prestige, and technology) are also being the reasons why the advertiser inserts English codes. Therefore, these following questions are set. A. The English Codes in Advertisement Texts Based on the analysis on the 20 advertisement texts in Instagram, these are the English codes found. There are some codes are repeated. No. English Among 18 English codes found on the advertisement texts, the two highest numbers of occurrence are Booking and Book now. Those two English codes are the terms that are very common in reserving room in a hotel. The word book means "to arrange a place on a flight, a room in a hotel, a ticket for an event, etc. for a particular time in the future" (https://dictionary.cambridge.org/dictionary/english/book). In most of the advertisement texts, the advertiser invites the readers to stay in the hotel by using a specific request utterance book with its various word formation (booking, book now, and also booking now). Other English codes that are frequently used are the addressing terms for guests. The hotel addresses them with varied forms: guys (5), traveler/s (3), vibes (1), good people (1), backpacker (1), and you (1). All addressing terms show the friendliness of the hotel because all terms are commonly used informally. Nordquist (2019) explains that "terms of address may be formal (Doctor, The Honorable, His Excellence) or informal (honey, dear, you). Formal terms of address are often used to recognize academic or professional accomplishments, while informal terms of address are often used to show affection or closeness." This indicates that the hotel will comfort the guest as a family. 
Further, some English codes used are pointing out the common time to stay in hotel, namely Friday (which is the last weekdays) and weekend. The word Friday seems to be the reminder that holiday or weekend will be coming. Travelers or backpackers usually spend their weekend by going somewhere. The hotel offers the comfortable place to spend their night. This offering sounds decent since it is accompanied by the statement we serve you better. The other statement in the advertisement text that pampers the guests is start from 80k/night. The price of the hotel is quite affordable, so travelers and backpackers will not be broke. Finally, English codes can be a useful instrument to picturing the hotel. Hotel 99 Jember is a new hotel which offers the comfort of home and gives the warm welcomes. B. The Reasons of English Use in Advertisement Texts Further, this research is also aimed at revealing the reasons why the advertiser inserts some English codes. This purpose is achieved by conducting interview. The result of interview strongly fortifies Zohreh and Monireh's finding. The advertiser mentions all 4 reasons. Here is the answer of the first question on the interview (Menurut pendapat Anda, apakah alasan menggunakan bahasa Inggris di teks iklan untuk menarik perhatian tamu? -Do you think that the use of English in advertisement texts is for attracting the guests' attention?). "Iya benar. Beberapa perusahaan yang aku pegang semua iklannya aku selipkan bahasa Inggris. Karena jika kita punya sebuah usaha apa saja target pasarnya adalah universal. Apa lagi ini yang di jual adalah hotel dan alat promosinya adalah Instagram disana semua orang bisa melihatnya dengan sangat mudah. Jadi, bahasa Inggris sangat membantu untuk menarik perhatian tamu tidak hanya dari tamu lokal tapi ada juga dari Instanbul dan Kuala Lumpur kemaren yang memberikan respon iklan di Instagram kemaren. Jadi saya memang sengaja menyelipkan bahasa Inggris pada teks iklan, ya untuk mengikuti jaman juga sih. Kalau anak muda bilang sih biar hits. Jadi mereka akan tertarik untuk menginap di Hotel 99 Jember khususnya para backpackers." ("Yes, it is right. I am handling some companies and I intentionally insert English words in all the advertisements because whatever the businesses are we should set them universally targeted. Moreover, this advertisement sells hotel's services and the tool to promote is Instagram. All people can access it very easily. Therefore, the English use is very helpful to attract the attention of the guests who are not only from local area but also overseas. Yesterday, 2 overseas people from Istanbul and Kuala Lumpur gave their response on our Instagram. Hence, I intentionally insert English words in the advertisement to follow the development of era or the young people commonly mention it with the term "hits". So, they will be interested in staying in the Hotel 99 Jember especially backpackers.") The result of interview above obviously presents the intention of English code use. It is purposely used for attracting the readers' attention. The advertiser's answer implies that his expectation of the advertisement meets the purpose. It does not merely attract local guests but also overseas. Two overseas people gave response toward their advertisement in Instagram. The next question is "Menurut pendapat Anda, apakah alasan menggunakan bahasa Inggris di teks iklan untuk meyakinkan tamu?" -Do you think that the use of English in advertisement texts is for persuading guests? The following is the advertiser's answer. 
"Iya, benar. Jadi selain saya menulis teks iklan dengan menyelipkan bahasa Inggris seperti "we serve you better" saya juga memasang highlight di Instagram yang isinya price list dan fasilitas Hotel 99 Jember. ("Yes, it is right. I do not only write the advertisement texts by inserting English words such as "we serve you better" but also I make a highlight in Instagram @hotel.99.jember. The content is price list and facilities which are offered. Here, it gets many responses from the customers. There are many direct messages asking about the correct price (does the price really start from 80k/night?) Therefore, my reason for using English is to make them sure to the facilities which are offered by the hotel and of course to influence them to stay in the Hotel 99 Jember.") The advertiser confirms that English use in advertisement is for persuading the readers. This is in line with Gerristen's proposition (2007: 315) that inserting English words in the advertisement would have an additional persuasive effect. Some readers were influenced by the advertisement texts. They contacted the hotel to confirm which means that the goal of advertising the hotel is achieved. The following question is "Menurut pendapat Anda, apakah alasan menggunakan bahasa Inggris untuk menambah nilai Hotel 99 Jember?" -Do you think that the use of English in advertisement texts gives more values to the hotel? Here is the answer. ("Yes. It is exactly correct. When I was at my first time in Jember in 2013, all café's menu were still using Indonesian but now I hardly see cafe's menu in Jember using Indonesian. Most of them are using English. In 2013, kopi hitam cost two thousand rupiahs. Now the name changes into "black coffee" and the price is IDR 5k. Then, koptail costs 10k. Besides, it seems great when we are hanging out in the brother's cafe although the taste of coffee is actually same with the other coffee shops. So, it becomes the interesting phenomenon, isn't it? Along with the booming era of creative industries, the English use raises the prestige. It even works to promote hotel. This English use is very useful to add value or prestige of Hotel 99 Jember although this hotel is still new and it has been existing less than a year.") Gomez (2010:52) clarifies that when the advertisers use English words in the advertisement, they reach two final goals: to increase the value of prestige to the product or the brand name advertised and to enhance the advertisement's ability to draw attention. The result of the interview shows that the advertiser's answer meets the first goal of Gomez namely to increase the value of prestige. The advertiser explains that the use of English codes gives more impact on economic value. Comparing price on the menus using different languages. C. The Reasons of English Use in Advertisement Texts The results of distributing questionnaire to the thirty guests of Hotel 99 Jember show that there are nine out of thirty guests who know the information about Hotel 99 Jember from Instagram. Then, the total of guests who know Hotel 99 Jember from Traveloka and Pegipegi are also nine people. Meanwhile, the guests who know the information of Hotel 99 Jember from Google Maps are four people. Lastly, the total guests who know Hotel 99 Jember from the other information are eight people. They get the information from their friends, telephone to the hotel and from brochure. 
Then, in the second question the results presents the total guests who are interested in staying in the Hotel 99 Jember because of the occurrence of advertisement in Instagram are seven people. Then, the guests who are interested in staying in the Hotel 99 Jember because of the location of the hotel are three people. Last, the number of the guest who are interested in staying in the Hotel 99 Jember because of the price and the facilities which are offered by the hotel are twenty people. In the question number 3, it is answered by the guests who choose poin A (Instagram) in the question number 2. Thus, the result shows that the total of guests who are aware of the English use in advertisement texts which are posted in Instagram are five people. Meanwhile, the guests who are not aware towards the English use in advertisement texts posted in Instagram are two guests. Next, in the question number 4, it is also answered by the guests who choose point A (Instagram) in the question number 2. The results show that the number of guests who are attracted to the English use in the advertisement texts so that they choose Hotel 99 Jember to stay are two guests. Meanwhile, the guests who are not attracted to the English use in the advertisement texts are five people. Then, in the question number 5 it is answered by the guests who choose point B (Location) in the question number 2. The result shows that there are two guests who are attracted to the hotel because of the location of the hotel which is strategic. Then, there is one guest who is not attracted to the hotel because the guest argues that the hotel's location is not strategic . In the next question, it is a question number 6. It is answered by the guests who choose point C (Price and Facilities) in the question number 2. The result shows that there are twenty guests who agree that the price to stay in the Hotel 99 Jember is affordable. These are the big number which represents the hotel's price is cheap enough to the guests. Next, in the last question (question number 7), It is also answered by the guests who choose point C (Price and Facilities) in the question number 2. The result shows that there are eighteen guests who agree that the facilities which are served by the hotel are satisfying. Meanwhile, there are two guests who disagree that the facilities which are served by the hotel are satisfying. This becomes correction to the hotel to correct the facilities. It can be concluded that the occurrence of English code mixing in the advertisement texts of which are posted in the official Instagram @hotel.99.jember does not attract the guests' attention. It is proven by the results that although there are nine guests out of thirty guests who know Hotel 99 Jember from Instagram, the guests who are aware of the English use in advertisement texts which are posted in Instagram are five out of thirty people. Then, there are only two out of thirty people who are interested in choosing Hotel 99 Jember to stay because of the English use in the advertisement texts. Most of the guests are interested in staying in the Hotel 99 Jember because of the price and the facilities which are offered by the hotel. It shows that the number of guests who are interested in staying in the Hotel 99 Jember because of the affordable price are twenty people. Further, the number of guests who are interested in staying in the Hotel 99 Jember because of the satisfied facilities are eighteen people. Those are the big number to reach. 
However, there are still two guests who are disappointed to the facilities of Hotel 99 Jember. This condition becomes a correction to the hotel. Thus, the hotel should repair the facilities both in tools services and also staff services. Last, from the results of distributing the questionnaires, it is a good idea to maintain the hotel's services which get positive responses from the guests. Besides, by these questionnaires, the hotel knows in what part that this hotel needs to be repaired. As it is mentioned before that there are still some guests who are not satisfied with the hotel's facilities. It becomes the home work for the hotel. If the hotel improves the facilities, the guests may choose Hotel 99 Jember to stay again. Besides, it is possible that the guests who are satisfied to Hotel 99 Jember will recommend Hotel 99 Jember to their families or friends. If it happens, the hotel's tagline that "Best Guest House in Jember" is really proven. Additionally, although this hotel is still new and it is a Melati class hotel, the English use in the advertisement texts is a good beginning. Since this hotel is promoted with using some English words, the audience who visit the official Instagram @hotel.99.jember are not only from local people but also from other countries such as Turkey, Malaysia and Singapore. So, English is used to contact people around the world. It is stated by the advertiser who controls Instagram. On the other hand, the result of the questionnaires shows that there are some guests who are aware of the English use in the advertisement texts. Moreover, there is a guest who is interested in staying in the Hotel 99 Jember because of the English use in the advertisement texts. It means that the English use in the advertisement texts is interesting. Besides, when this hotel is developing in the five or ten years later, it is possible to make this hotel into Star class hotel. Therefore, the English use is very useful to add the value of this hotel. D. Prestige The third reason for using English code mixing is prestige. Martin (2002: 375) states that "English serves as a sign of modernity, technological superiority and prestige." By this statement, the occurrence of English is functioned to add an extra value of the advertisement. This extra value is called as prestige. It is used to promote all products such as fashion, make up, food and kind of services such as property, education, health and absolutely hotel services. Moreover, Gomez (2010:52) clarifies that when the advertisers use English words in the advertisement, they reach two final goals. The first is to increase the value of prestige to the product or the brand name advertised. The second is to enhance the advertisement's ability to draw attention. Those statements above are in line with the result of interview. The manager, the advertiser and the front office of Hotel 99 Jember also agree that the reason to use English is to add a value of the Hotel 99 Jember. They state that although this hotel is still new, the English use in the advertisement texts helps them to exist as other hotels in Jember. E. Technology Technology is a domain in advertising where English words are used frequently. Nowadays, technology is used as tool to promote advertisement through internet and social media such as Instagram, Twitter, Facebook, WhatsApp etc. The occurrence of internet and social media can be accessed by people around the world. By this condition, the advertiser should use English to contact lots of people. 
Kelly and Holmes (2005: 125) state that "the domains where English words appear most frequently in the advertisement texts which caused by international market, fashion and advanced technology." It means that along with the development of technology, the English use cannot be inevitable. The result of interview also shows that the manager, advertiser and the front office staff state that the English use in advertisement texts is influenced by the development of technology. Moreover, the advertisement is promoted in Instagram. F. Other Reasons a. Efficient Words The manager of Hotel 99 Jember adds that the reason for using English code in the advertisement texts is because English word is a simple language that can cover some Indonesian word. He states that "Penggunaan bahasa Inggris di iklan itu sangat tepat karena bahasa Inggris itu singkat dan mengena. Komposisi iklan yang bagus didukung oleh desain yang bagus dan kalimat yang singkat, padat tapi mengena. Nah bahasa Inggris ini sangat tepat seperti istilah booking, kata ini simple tapi sangat mengena". ("The English use in the advertisement texts is absolutely appropriate because English is simple and persuasive. The composition of the good advertisement is supported by great design and the simple word but persuasive. Here, English is the right choice such as the term "booking". It is simple but persuasive.") From the statement above the other reason why it should use English in the advertisement texts because English is efficient to use. b. Going Public The manager of Hotel 99 Jember mentions that the other reason for using English code in the advertisement texts is because Hotel 99 Jember wants to go public. Moreover, this advertisement is posted in Instagram which can be accessed by all people around the world. He states that "selain itu, alasan untuk menggunakan bahasa Inggris itu untuk go public, agar Hotel 99 Jember dikenal banyak orang. Meskipun hotel ini masih baru tapi kita mau show up bahwa kita ada di kota Jember." ("Besides, the reason for using English in the advertisement texts is to go public, to make this hotel famous to many people. Although this hotel is still new, we want to show up that we are ready to serve in Jember city.") From this statement, it is clearly shown that the English is used to make this hotel go public and exist among many hotels in Jember. c. Showing Identity The advertiser states that "saya sengaja menggunakan bahasa Inggris karena saya dari sastra Inggris. Selain itu mau menunjukkan bahwa yang bikin iklan ini anak muda, makanya bahasanya bahasa Ingris, keren gitu.kan?" ("I intentionally use English in the advertisement texts because I am from English department. Besides, I want to show that the advertisement is created by young people. So, the language is English. It is interesting, isn't it?") From this statement, it shows that the reason for using English is to show the identity of the advertiser. It is because he is still young and he is from English department. Thus, he inserts some English codes to create the advertisement very interesting and attracting the readers. d. Marketing Strategy The advertiser also states that the other reason for using English in the advertisement texts is marketing strategy need. There are many businesses and entrepreneurships which are using online market as a media of promotion. Therefore, it is important to create marketing strategy to compete with the others by inserting English codes in the advertisement texts. 
He states "akhir-akhir ini banyak sekali bisnis yang menggunakan pasar online. Jadi harus ada strategi marketing yang digunakan agar bisa bersaing dengan yang lain salah satunya dengan menggunakan bahasa Inggris ini." ("Nowadays, there are many businesses which are using online market. Thus, it should have marketing strategy to compete with the others by using English in this advertisement text.") e. Long Term Investment The advertiser states that "alasan utama saya menggunakan bahasa Inggris di teks iklan Hotel 99 Jember adalah untuk investasi jangka panjang. Tidak masalah sekarang Hotel 99 Jember masih berkelas Melati karena masih baru berdiri. Tapi sangat mungkin jika 5 atau 10 tahun ke depan Hotel 99 Jember menjadi hotel berbintang seperti Dafam, Royal atau mungkin Aston. Jadi nanti akan menjadi kebanggan, karena sejak awal berdiri hotel ini sudah menggunakan bahasa Inggris untuk promosi." ("The main reason for using English in the advertisement texts is for long term investment. It is okay now that Hotel 99 Jember is still Melati class because this hotel is still newly established. However, it is very possible that in five or ten years later this hotel will be in the same class as Dafam Lotus, Royal Hotel or maybe Aston Hotel. If it happens, it becomes a pride for Hotel 99 Jember because since this hotel is firstly established, English is used to promote.") From that statement, it clearly shows that the reason for using English in the advertisement is a long term investment. It is because in doing business we should set them in one or two decade later. In other words, we should think long term investment. By these findings, it can be concluded that the reasons of Hotel 99 Jember for using English code mixing in the advertisement texts are various. Although English codes which are inserted in the advertisement texts do not give significant effects, it is a good idea to still keep using English as the reasons are closely related to the purpose of building the hotel, namely being well-known and five-star hotel. So far, it has been proven that the number of guests is increasing. In the first month the hotel was visited by ten guests but now all rooms are fully booked every day. Moreover, the front office said there is double or triple check-in in one room. Although the hotel has fifty five rooms but the guest book reaches sixty guests. Therefore, it is important to maintain this achievement. CONCLUSION This study discusses about the analysis of English code mixing phenomenon that occurs on the advertisement texts of Hotel 99 Jember. Theory of code mixing from Muysken (2000) and the proposition of reasons for using English code mixing from Zohreh and Monireh (2013) are applied in this study. It is used to analyse the type of code mixing used, and to find out the reasons for using English code mixing. Based on the analysis in this study, the results show that there are three types of code mixing used on the advertisement texts of Hotel 99 Jember. They are insertion, alternation and congruent lexicalization types. Those types are found in twenty advertisement texts which are analyzed. It means the theory from Muysken (2000) works on this study. The number of insertion type used is eighteen. In this type, the English words used are dominantly a command word namely booking. Then, the number of alternation type is twenty three. This type becomes the dominant type which appears from the twenty advertisement texts of Hotel 99 Jember. 
In some advertisement texts, the process of alternation appears twice or more. The persuasive words namely book now and booking now are mostly used in this type. In the last type, congruent lexicalization occurs six times in six advertisement texts. It is the smallest number compared the other types. The kind of English words used in this type is some greeting clauses such as hello travellers and happy weekend guys. Furthermore, the data from the questionnaire shows that there are five out of thirty guests who are aware of the English use in the advertisement texts. Moreover, there are two guests who are interested in staying in the Hotel 99 Jember because of the English use in the advertisement texts. It means the English use in the advertisement texts is interesting although it does not give significant effect to the attention of the guests in choosing Hotel 99 Jember to stay. Most of the guests are interested to stay in the Hotel 99 Jember because of the price and the facilities which are offered by the hotel. Therefore, from the results of distributing the questionnaires, it is a good idea to maintain the hotel's services which get positive responses from the guests. The guests are satisfied to Hotel 99 Jember. They will recommend Hotel 99 Jember to their families or friends. From the further analysis, the results of interview show that the proposition of Zohreh and Monireh (2013:87) who mention the reasons for using English code mixing in the advertisement texts (attracting attention, persuasion, prestige and technology) work on this study. However, the other result of interview shows that the other reasons exist as the reasons of using English code mixing. They are efficient words, go public, showing identity, marketing strategy and long term investment. These are stated by the interviewees. As the conclusion, there are various reasons for using English code mixing on the advertisement texts of Hotel 99 Jember. It is a good beginning because English is used to promote since this hotel is newly established. Although this hotel is Melati class, it is possible for this hotel to develop into Star class hotel in the future. Therefore,
6,923
2019-12-12T00:00:00.000
[ "Business", "Linguistics" ]
High School Physics Teacher's Profile in Teaching for Improving Student's Energy Literacy
1 Pusat Pengembangan dan Pemberdayaan Pendidik dan Tenaga Kependidikan IPA
2 Program Studi Pendidikan IPA Sekolah Pascasarjana UPI
This research was conducted to describe the profile of high school physics teachers in teaching that develops students' energy literacy. The method used in the research is descriptive-analytic, using questionnaires distributed to high school physics teachers in district/city MGMPs. The data cover understanding of energy literacy, implementation of energy literacy in daily life, ability to develop learning, obstacles, and training needs. Based on the results of the data analysis, it can be concluded that high school physics teachers have not yet understood the concept of energy literacy, although they do apply it in daily life. Physics learning has not yet developed students' energy literacy. Therefore, it is necessary to develop a blended training mode within the MGMP. INTRODUCTION Energy use has a major impact on people's living standards and on every major economic sector. The latest issues in climate change mitigation demand attention to energy efficiency and energy use reduction to achieve sustainable economic growth (Chen et al., 2013). Indonesia is both a major source and a major user of energy. The problems facing Indonesia in the energy sector are dwindling fossil energy reserves, limited community access to energy, and alternative energy development constrained by technological mastery and low financing (BPPT, 2016). Strategies for dealing with energy issues include increasing the capacity of fossil fuel exploration, utilizing new and renewable energy resources, and shifting energy consumption patterns in various sectors (BPPT, 2016). No less important are the awareness and participation of the community in addressing energy problems (Akitsu et al., 2017). People are expected to have energy literacy (Cetin & Nisanci, 2010). Research on energy literacy has been widely conducted over the past two decades. Energy literacy is an educational endeavor that helps pave the way for a safer energy future by empowering individuals to choose appropriate energy-related behaviors throughout their daily lives (DeWaters & Powers, 2011). A framework for developing an energy literacy instrument has been established covering three outcome components, namely cognitive, affective, and behavioral (DeWaters & Powers, 2011). Contextualized instruments can assess the energy literacy of junior (SMP) and senior (SMA) high school students in a multidimensional way (Chen et al., 2015). Computer-based tests can reach a wider population (Chen et al., 2014). Energy and climate literacy should be combined and ideally incorporated into the school curriculum (McCaffrey, 2015). Energy literacy can be achieved through the education sector (Bamisile et al., 2016; Hendrickson et al., 2014; Singhirunnusorn et al., 2011). Schools in Indonesia are expected to contribute to the energy literacy of students as well as the community. This hope is in line with the literacy movement programmed by the Indonesian government through the Ministry of Education and Culture. In addition, much of today's learning merely transfers knowledge to students and is still centered on the teacher. As a result, students do not get the experience needed to understand concepts fully (Sukardiyono & Rosana, 2017).
Energy literacy is most closely related to the sciences: physics, chemistry, and biology (DOE, 2017). In terms of the attitude, knowledge, and skill competencies of the 2013 curriculum, high school physics subjects have the potential to grow and develop energy literacy (Yusup, 2017). The role of the high school physics teacher is therefore important in teaching physics in a way that cultivates energy literacy. The questions to be answered in this paper are: 1. How well do high school physics teachers understand the concept of energy literacy in learning? 2. How far has physics learning been implemented to develop students' energy literacy? 3. What training program fits the teachers' needs? METHODS The instrument was developed in questionnaire form referring to Table 1. The discussion focuses on the teachers' claims in responding to the questionnaire. Respondents were selected randomly by distributing an online questionnaire to high school physics teachers in district/city MGMPs. A total of 196 physics teachers responded to the questionnaire, from NAD, Jambi, Lampung, North Sumatra, West Sumatra, South Sumatra, Riau Islands, Riau, Kep. Bangka Belitung, Banten, DKI Jakarta, West Java, Central Java, East Java, DIY, Bali, West Kalimantan, South Kalimantan, South Sulawesi, and West Papua. Quantitative analysis was conducted to describe teachers' perceptions of energy literacy and their contribution to physics learning, from planning through implementation and assessment of learning. Understanding the concept of energy literacy in learning Understanding the concept of energy literacy in this study involves: explaining the concept of energy literacy, applying energy literacy in everyday life, determining high school physics content related to energy literacy, and identifying potential local environments (contexts) that can be used as a source of learning about energy literacy. Based on the responses to the distributed questionnaire, the data obtained are presented in Table 2. From the data it can be seen that high school physics teachers do not comprehensively know the concept of energy literacy, although in practice they have applied energy literacy in everyday life. As members of the community this condition is good, but as teachers who are obliged to instill energy literacy they must also master its theory. The respondents reported no difficulty in determining the physics content and identifying the environmental potential to be used. However, such claims need to be compared with their claims about the development of their learning. Implementation of Physics Learning The ability of physics teachers to implement physics learning that grows energy literacy can be seen from the indicators of part B in Table 1. Based on the responses to the distributed questionnaire, the data obtained are presented in Table 3. Table 3 shows that high school physics teachers still have difficulty in planning and implementing learning that fosters students' energy literacy. Teacher Training Needs A training needs analysis (TNA) is obtained by identifying the difficulties that teachers face in implementing the learning as well as the desired form of training. Based on the responses to the distributed questionnaire, the data were obtained as follows. Most respondents claim to have difficulties in integrating energy literacy content in physics learning, in the form of: a. Lack of reference: 70% b.
The form of training desired by the respondents is presented in Table 4. Based on the data in Table 4, it can be concluded that training should (1) not be implemented on weekdays but only on MGMP days, (2) use a combined/blended mode of face-to-face and online sessions, (3) last at most 4 JP per day, and (4) use varied methods and include assignments. The subjects of training required by high school physics teachers are presented in Table 5. To strengthen understanding, there needs to be an independent assignment (84). The subjects of this training are the materials they did not learn in the preservice program while studying at LPTK. In addition, the material chosen by respondents is material that has not been covered in in-service programs, whether training programs by the ministry of education or by other institutions. CONCLUSION Physics learning has an opportunity to contribute to the development of students' energy literacy. Teachers are expected to play an optimal role, so that their knowledge, attitudes, and behavior can be an example for students in developing energy literacy. In addition, the competence of teachers in implementing physics learning that integrates energy literacy needs to be improved through various programs. Professional communities such as the MGMP are an appropriate forum for improving the professionalism of teachers. Training programs should be synchronized with the MGMP program so as not to undermine the teacher's primary obligation as an educator in the school. The training mode that teachers are interested in is blended training that incorporates a face-to-face mode on the MGMP day followed by an online mode on weekdays.
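The descriptive analysis reported above amounts to tabulating respondent counts and percentages for each questionnaire option. A minimal sketch of that tabulation is shown below; the response list is invented for illustration and does not reproduce the survey data behind Tables 2-5.

from collections import Counter

# Invented responses; each entry is one teacher's preferred training mode
responses = [
    "blended", "blended", "online", "face-to-face", "blended",
    "blended", "online", "blended", "face-to-face", "blended",
]

counts = Counter(responses)
total = len(responses)
for option, n in counts.most_common():
    print(f"{option}: {n} teachers ({100 * n / total:.0f}%)")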
1,860.6
2018-04-17T00:00:00.000
[ "Education", "Physics" ]
A Ratio-cum-Dual to Ratio Estimator of Population Variance Using Qualitative Auxiliary Information Under Simple Random Sampling In this paper we propose a class of ratio-cum-dual to ratio estimators for estimating the population variance of the variable under study, using known values of some population parameters of an auxiliary variable which is available in the form of an attribute. The expressions for the bias and mean squared error of the proposed estimators have been derived up to the first order of approximation. A comparison has been made with some well-known estimators of population variance available in the literature when auxiliary information is in qualitative form. It is shown that the proposed estimator is better than the existing estimators under the optimum condition. For illustration, an empirical study has been carried out. INTRODUCTION The use of auxiliary information increases the precision of an estimator when the study variable Y is highly correlated with the auxiliary variable X. When the variable under study y is highly positively correlated with the auxiliary variable x, ratio-type estimators are used to estimate the population parameter, and product estimators are used when the variable under study y is highly negatively correlated with the auxiliary variable x. But there are situations when information on the auxiliary variable is not available in quantitative form; in practice, information regarding the population proportion possessing a certain attribute ψ, which is highly correlated with the study variable Y, is easily available (see Jhajj et al.). For example, (i) Y may be the use of drugs and ψ the gender; (ii) Y may be the production of a crop and ψ a particular variety; (iii) Y may be the amount of milk produced and ψ a particular breed of cow; (iv) Y may be the yield of a wheat crop and ψ a particular variety of wheat (see Shabbir and Gupta). Let there be N units in the population. Let (y_i, ψ_i), i = 1, 2, ..., N, be the observed values of the study variable Y and the auxiliary attribute ψ for the i-th unit of the population. Further, we assume that ψ_i = 1 if the i-th unit possesses the particular characteristic and ψ_i = 0 if it does not, i = 1, 2, ..., N. The ratio-type estimator t_R for the population variance of the study variable, based on the Isaki (1983) estimator, is defined as follows. The mean squared error (MSE) of the estimator t_R, up to the first order of approximation, is then given. A dual to ratio-type estimator t_d for the population variance of the study variable y is defined analogously. PROPOSED ESTIMATOR Motivated by Sharma & Tailor, we propose a ratio-cum-dual to ratio type estimator t_Rd for the population variance of the study variable, where α is a suitably chosen constant to be determined such that the MSE of the estimator t_Rd is minimum. For α = 1 the estimator t_Rd reduces to the estimator t_R, and for α = 0 it reduces to the estimator t_d. Thus the above estimators are particular cases of the proposed estimator t_Rd.
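To make the construction concrete, the following is a minimal numerical sketch of the three estimators, assuming the standard forms t_R = s_y^2 (S_ψ^2 / s_ψ^2) for the ratio-type estimator, t_d = s_y^2 (N S_ψ^2 − n s_ψ^2) / ((N − n) S_ψ^2) for the dual-to-ratio-type estimator, and the convex combination t_Rd = α t_R + (1 − α) t_d for the proposed class; these forms are assumptions consistent with the stated α = 1 and α = 0 limits, and the exact expressions in the paper may differ.

import numpy as np

def sample_variances(y, psi):
    # Unbiased sample variances of the study variable and the auxiliary attribute.
    return np.var(y, ddof=1), np.var(psi, ddof=1)

def ratio_cum_dual_estimator(y_s, psi_s, S2_psi, N, alpha):
    # Ratio-cum-dual to ratio estimator of the population variance S_y^2.
    # y_s, psi_s : sampled values of Y and of the attribute (0/1)
    # S2_psi     : known population variance of the attribute
    # N          : population size
    # alpha      : mixing constant (alpha=1 gives t_R, alpha=0 gives t_d)
    n = len(y_s)
    s2_y, s2_psi = sample_variances(y_s, psi_s)
    t_R = s2_y * S2_psi / s2_psi                                  # ratio-type estimator
    t_d = s2_y * (N * S2_psi - n * s2_psi) / ((N - n) * S2_psi)   # dual-to-ratio-type estimator (assumed form)
    return alpha * t_R + (1 - alpha) * t_d

# Small Monte Carlo check of the empirical mean squared error for a few values of alpha.
rng = np.random.default_rng(0)
N, n = 2000, 100
psi = (rng.random(N) < 0.4).astype(float)          # auxiliary attribute
y = 5 + 3 * psi + rng.normal(0, 1, N)              # study variable correlated with the attribute
S2_y, S2_psi = np.var(y, ddof=1), np.var(psi, ddof=1)

for alpha in (0.0, 0.5, 1.0):
    errs = []
    for _ in range(2000):
        idx = rng.choice(N, size=n, replace=False)  # simple random sampling without replacement
        t = ratio_cum_dual_estimator(y[idx], psi[idx], S2_psi, N, alpha)
        errs.append((t - S2_y) ** 2)
    print(f"alpha={alpha:.1f}  empirical MSE={np.mean(errs):.4f}")

In such a simulation, intermediate values of α typically give a lower empirical MSE than either extreme, which is the behaviour the optimum-α result formalizes.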
In order to study the large-sample properties of the proposed family of estimators, we expand the estimator in terms of the relative deviations of s_y^2 and s_ψ^2 from S_y^2 and S_ψ^2 (Eq. 2.2). Expanding the terms on the right-hand side of (2.2), taking expectations, and subtracting S_y^2 on both sides, we obtain the bias of t_Rd up to the first order of approximation (Eq. 2.3). From equation (2.3) and taking expectations on both sides of (2.5), we get the MSE of t_Rd up to the first order of approximation, and the MSE of the estimator t_Rd is minimized for the optimum value of α. EFFICIENCY COMPARISON Let P and p denote the proportions of units in the population and in the sample, respectively, possessing the attribute ψ, and let a simple random sample of size n be drawn from this population without replacement, with sample values (y_i, ψ_i). The variance of the usual estimator t_0 = s_y^2 is compared with the MSE of the proposed estimator, and the minimum MSE of t_Rd is compared to the variance of the linear regression estimator of the population variance. To analyse the performance of the various estimators of the population variance S_y^2 of the study variable y, we have considered the following two data sets: Data [Source: Mukhopadhyay, page 44] Y: Household size. ψ: Whether the household has an agricultural loan.
1,026.2
2013-03-02T00:00:00.000
[ "Mathematics" ]
Network Flow-Based Refinement for Multilevel Hypergraph Partitioning We present a refinement framework for multilevel hypergraph partitioning that uses max-flow computations on pairs of blocks to improve the solution quality of a $k$-way partition. The framework generalizes the flow-based improvement algorithm of KaFFPa from graphs to hypergraphs and is integrated into the hypergraph partitioner KaHyPar. By reducing the size of hypergraph flow networks, improving the flow model used in KaFFPa, and developing techniques to improve the running time of our algorithm, we obtain a partitioner that computes the best solutions for a wide range of benchmark hypergraphs from different application areas while still having a running time comparable to that of hMetis. Introduction Given an undirected hypergraph H = (V, E), the k-way hypergraph partitioning problem is to partition the vertex set into k disjoint blocks of bounded size (at most 1 + ε times the average block size) such that an objective function involving the cut hyperedges is minimized. Hypergraph partitioning (HGP) has many important applications in practice such as scientific computing [12] or VLSI design [43]. VLSI design in particular is a field where small improvements can lead to significant savings [56]. It is well known that HGP is NP-hard [38], which is why practical applications mostly use heuristic multilevel algorithms [11,13,25,26]. These algorithms successively contract the hypergraph to obtain a hierarchy of smaller, structurally similar hypergraphs. After applying an initial partitioning algorithm to the smallest hypergraph, contraction is undone and, at each level, a local search method is used to improve the partitioning induced by the coarser level. All state-of-the-art HGP algorithms [2,4,7,16,28,31,32,33,48,51,52,54] either use variations of the Kernighan-Lin (KL) [34,49] or the Fiduccia-Mattheyses (FM) heuristic [19,46], or simpler greedy algorithms [32,33] for local search. These heuristics move vertices between blocks in descending order of improvements in the optimization objective (gain) and are known to be prone to get stuck in local optima when used directly on the input hypergraph [33]. The multilevel paradigm helps to some extent, since it allows a more global view on the problem on the coarse levels and a very fine-grained view on the fine levels of the multilevel hierarchy. However, the performance of move-based approaches still degrades on difficult instances. A net is called a cut net if λ(e) > 1. Given a k-way partition Π of H, the quotient graph Q := (Π, {(V_i, V_j) | ∃e ∈ E : {V_i, V_j} ⊆ Λ(e)}) contains an edge between each pair of adjacent blocks. The k-way hypergraph partitioning problem is to find an ε-balanced k-way partition Π of a hypergraph H that minimizes an objective function over the cut nets for some ε. Several objective functions exist in the literature [5,38]. The most commonly used cost functions are the cut-net metric cut(Π) := Σ_{e ∈ E'} ω(e) and the connectivity metric (λ − 1)(Π) := Σ_{e ∈ E'} (λ(e) − 1) ω(e) [1], where E' is the set of all cut nets [17]. In this paper, we use the (λ − 1)-metric. Optimizing both objective functions is known to be NP-hard [38]. Hypergraphs can be represented as bipartite graphs [29]. In the following, we use nodes and edges when referring to graphs and vertices and nets when referring to hypergraphs. In the bipartite graph G* = (V ∪ E, F) the vertices and nets of H form the node set and for each net e ∈ I(v), we add an edge (e, v) to G*.
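A small example may help make the two objective functions concrete. The following Python sketch assumes a minimal, hypothetical hypergraph representation (each net as a list of vertex ids plus a parallel list of weights) and computes both metrics for a given k-way partition; it illustrates the definitions above and is not code from the paper.

def cut_metrics(nets, weights, partition):
    # Compute cut(Pi) and (lambda-1)(Pi) for a k-way partition.
    # nets      : list of nets, each net is an iterable of vertex ids
    # weights   : list of net weights omega(e), aligned with nets
    # partition : dict mapping vertex id -> block id
    cut_net_metric = 0
    connectivity_metric = 0
    for net, omega in zip(nets, weights):
        blocks = {partition[v] for v in net}   # Lambda(e): blocks touched by net e
        lam = len(blocks)                      # lambda(e)
        if lam > 1:                            # e is a cut net
            cut_net_metric += omega
            connectivity_metric += (lam - 1) * omega
    return cut_net_metric, connectivity_metric

# Example: 4 vertices, 3 nets, 2 blocks.
nets = [[0, 1, 2], [2, 3], [0, 3]]
weights = [1, 1, 2]
partition = {0: 0, 1: 0, 2: 0, 3: 1}
print(cut_metrics(nets, weights, partition))   # both metrics count the two cut nets

For bipartitions the two metrics coincide; they only differ for k > 2, where a net spanning several blocks is penalized proportionally to its connectivity.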
The edge set F is thus defined as F := {(e, v) | e ∈ E, v ∈ e}. Each net in E therefore corresponds to a star in G*. Let G = (V, E, c, ω) be a weighted directed graph. We use the same notation as for hypergraphs to refer to node weights c, edge weights ω, and node degrees d(v). Furthermore Γ(u) := {v : (u, v) ∈ E} denotes the neighbors of node u. A path P = v_1, ..., v_k is a sequence of nodes, such that each pair of consecutive nodes is connected by an edge. A strongly connected component C ⊆ V is a set of nodes such that for each u, v ∈ C there exists a path from u to v. A topological ordering is a linear ordering ≺ of V such that every directed edge (u, v) ∈ E implies u ≺ v in the ordering. A set of nodes B ⊆ V is called a closed set iff there are no outgoing edges leaving B, i.e., if the conditions u ∈ B and (u, v) ∈ E imply v ∈ B. A subset S ⊂ V is called a node separator if its removal divides G into two disconnected components. A flow network N is a directed graph with two distinguished nodes s and t (source and sink) in which every edge has a non-negative capacity c(e); an (s, t)-flow f assigns a value to every edge such that no edge carries more flow than its capacity and flow is conserved at all nodes other than s and t, and the graph N_f containing an edge for every positive residual capacity is the residual network. An (s, t)-cut (or cut) is a bipartition (S, V \ S) of a flow network N with s ∈ S ⊂ V and t ∈ V \ S. The capacity of an (s, t)-cut is defined as Σ_{e ∈ E'} c(e), where E' = {(u, v) ∈ E : u ∈ S, v ∈ V \ S}. The max-flow min-cut theorem states that the value |f| of a maximum flow is equal to the capacity of a minimum cut separating s and t [21]. Flows on Hypergraphs. While flow-based approaches have not yet been considered as refinement algorithms for multilevel HGP, several works deal with flow-based hypergraph min-cut computation. The problem of finding minimum (s, t)-cuts in hypergraphs was first considered by Lawler [36], who showed that it can be reduced to computing maximum flows in directed graphs. Hu and Moerder [29] present an augmenting path algorithm to compute a minimum-weight vertex separator on the star-expansion of the hypergraph. Their vertex-capacitated network can also be transformed into an edge-capacitated network using a transformation due to Lawler [37]. Yang and Wong [57] use repeated, incremental max-flow min-cut computations on the Lawler network [36] to find ε-balanced hypergraph bipartitions. Solution quality and running time of this algorithm are improved by Lillis and Cheng [39] by introducing advanced heuristics to select source and sink nodes. Furthermore, they present a preflow-based [22] min-cut algorithm that implicitly operates on the star-expanded hypergraph. Pistorius and Minoux [45] generalize the algorithm of Edmonds and Karp [18] to hypergraphs by labeling both vertices and nets. Liu and Wong [40] simplify Lawler's hypergraph flow network [36] by explicitly distinguishing between graph edges and hyperedges with three or more pins. This approach significantly reduces the size of flow networks derived from VLSI hypergraphs, since most of the nets in a circuit are graph edges. Note that the above-mentioned approaches to model hypergraphs as flow networks for max-flow min-cut computations do not contradict the negative results of Ihler et al. [30], who show that, in general, there does not exist an edge-weighted graph that correctly represents the min-cut properties of the corresponding hypergraph H = (V, E). Flow-Based Graph Partitioning. Flow-based refinement algorithms for graph partitioning include Improve [6] and MQI [35], which improve expansion or conductance of bipartitions. MQI also yields a small improvement when used as a post-processing technique on hypergraph bipartitions initially computed by hMetis [35].
FlowCutter [24] uses an approach similar to Yang and Wong [57] to compute graph bisections that are Pareto-optimal in regard to cut size and balance. Sanders and Schulz [47] present a flow-based refinement framework for their direct k-way graph partitioner KaFFPa. The algorithm works on pairs of adjacent blocks and constructs flow problems such that each min-cut in the flow network is a feasible solution in regard to the original partitioning problem. KaHyPar. Since our algorithm is integrated into the KaHyPar framework, we briefly review its core components. While traditional multilevel HGP algorithms contract matchings or clusterings and therefore work with a coarsening hierarchy of O(log n) levels, KaHyPar instantiates the multilevel paradigm in the extreme n-level version, removing only a single vertex between two levels. After coarsening, a portfolio of simple algorithms is used to create an initial partition of the coarsest hypergraph. During uncoarsening, strong localized local search heuristics based on the FM algorithm [19,46] are used to refine the solution. Our work builds on KaHyPar-CA [28], which is a direct k-way partitioning algorithm for optimizing the (λ − 1)-metric. It uses an improved coarsening scheme that incorporates global information about the community structure of the hypergraph into the coarsening process. The Flow-Based Improvement Framework of KaFFPa We discuss the framework of Sanders and Schulz [47] in greater detail, since our work makes use of the techniques proposed by the authors. For simplicity, we assume k = 2. The techniques can be applied on a k-way partition by repeatedly executing the algorithm on pairs of adjacent blocks. To schedule these refinements, the authors propose an active block scheduling algorithm, which schedules blocks as long as their participation in a pairwise refinement step results in some changes in the k-way partition. An ε-balanced bipartition of a graph G = (V, E, c, ω) is improved with flow computations as follows. The basic idea is to construct a flow network N based on the induced subgraph G[B], where B ⊆ V is a set of nodes around the cut of G. The size of B is controlled by an imbalance factor ε′ := αε, where α is a scaling parameter that is chosen adaptively depending on the result of the min-cut computation. If the heuristic found an ε-balanced partition using ε′, the cut is accepted and α is increased to min(2α, α′), where α′ is a predefined upper bound. Otherwise it is decreased to max(α/2, 1). This scheme continues until a maximal number of rounds is reached or a feasible partition that did not improve the cut is found. All border nodes δB ∩ V_1 are connected to the source s and all border nodes δB ∩ V_2 to the sink t using directed edges with an edge weight of ∞. By connecting s and t to the respective border nodes, it is ensured that edges incident to border nodes, but not contained in G[B], cannot become cut edges. For α = 1, the size of B thus ensures that the flow network N has the cut property, i.e., each (s, t)-min-cut in N yields an ε-balanced partition of G with a possibly smaller cut. For larger values of α, this does not have to be the case. After computing a max-flow in N, the algorithm tries to find a min-cut with better balance. This is done by exploiting the fact that one (s, t)-max-flow contains information about all (s, t)-min-cuts [44]. More precisely, the algorithm uses the 1-1 correspondence between (s, t)-min-cuts and closed sets containing s in the Picard-Queyranne-DAG D_s,t of the residual graph N_f [44].
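The adaptive scaling scheme just described can be summarized as a small control loop. The following Python sketch is a hedged illustration only: the callables cut_value, build_flow_problem, min_cut and is_feasible are placeholders for the actual KaFFPa machinery and must be supplied by the caller.

def adaptive_flow_refinement(partition, eps, alpha_max, cut_value,
                             build_flow_problem, min_cut, is_feasible,
                             max_rounds=8):
    # Adaptive scaling of the B-corridor size used for flow-based refinement.
    alpha = alpha_max
    best_cut = cut_value(partition)
    for _ in range(max_rounds):
        eps_prime = alpha * eps                       # enlarged imbalance controls |B|
        candidate = min_cut(build_flow_problem(partition, eps_prime))
        if is_feasible(candidate, eps):               # eps-balanced w.r.t. the original constraint
            new_cut = cut_value(candidate)
            if new_cut >= best_cut:                   # feasible but no improvement: stop
                break
            partition, best_cut = candidate, new_cut  # accept the improved cut
            alpha = min(2 * alpha, alpha_max)         # and grow the corridor again
        else:
            alpha = max(alpha / 2, 1)                 # infeasible: shrink the corridor
    return partition

The design intent of the scheme is visible here: large corridors are tried first because they offer more freedom, and the corridor is only shrunk when the resulting min-cut violates the original balance constraint.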
First, D_s,t is constructed by contracting each strongly connected component of the residual graph. Then the following heuristic (called most balanced minimum cuts) is repeated several times using different random seeds. Closed node sets containing s are computed by sweeping through the nodes of D_s,t in reverse topological order (e.g. computed using a randomized DFS). Each closed set induces a differently balanced min-cut and the one with the best balance (with respect to the original balance constraint) is used as resulting bipartition. Hypergraph Max-Flow Min-Cut Refinement In the following, we generalize the flow-based refinement algorithm of KaFFPa to hypergraph partitioning. Hypergraph Flow Networks The Liu-Wong Network [40]. Given a hypergraph H = (V, E, c, ω) and two distinct nodes s and t, an (s, t)-min-cut can be computed by finding a minimum-capacity cut in the following flow network N: For each multi-pin net e ∈ E with |e| ≥ 3, add two bridging nodes e′ and e″ to N and a bridging edge (e′, e″) with capacity c(e′, e″) = ω(e). For each pin p ∈ e, add two edges (p, e′) and (e″, p) with capacity ∞. For each two-pin net e = (u, v) ∈ E, add two bridging edges (u, v) and (v, u) with capacity ω(e). The flow network of Lawler [36] does not distinguish between two-pin and multi-pin nets. This increases the size of the network by two vertices and three edges per two-pin net. Figure 1 shows an example of the Lawler and Liu-Wong hypergraph flow networks as well as of our network described in the following paragraph. Removing Low Degree Hypernodes. We further decrease the size of the network by using the observation that the problem of finding an (s, t)-min-cut of H can be reduced to finding a minimum-weight (s, t)-vertex-separator in the star-expansion, where the capacity of each star-node is the weight of the corresponding net and all other nodes (corresponding to vertices in H) have infinite capacity [29]. Since the separator has to be a subset of the star-nodes, it is possible to replace any infinite-capacity node by adding a clique between all adjacent star-nodes without affecting the separator. The key observation now is that an infinite-capacity node v with degree d(v) induces 2d(v) infinite-capacity edges in the Lawler network [36], while a clique between the adjacent star-nodes induces d(v)(d(v) − 1) edges, which is no larger for d(v) ≤ 3. Thus we can reduce the number of nodes and edges of the Liu-Wong network as follows. Before applying the transformation on the star-expansion of H, we remove all infinite-capacity nodes v corresponding to hypernodes with d(v) ≤ 3 that are not incident to any two-pin nets and add a clique between all star-nodes adjacent to v. In case v was a source or sink node, we create a multi-source multi-sink problem by adding all adjacent star-nodes to the set of sources resp. sinks [20]. Reconstructing Min-Cuts. After computing an (s, t)-max-flow in the Lawler or Liu-Wong network, an (s, t)-min-cut of H can be computed by a BFS in the residual graph starting from s. Let S be the set of nodes corresponding to vertices of H reached by the BFS. Then (S, V \ S) is an (s, t)-min-cut. Since our network does not contain low degree hypernodes, we use the following lemma to compute an (s, t)-min-cut of H: if a node v corresponding to a vertex of H is contained in S, then the residual graph N_f contains a path from s to a bridging node e″ of a net e ∈ I(v). Proof. Since v ∈ S, there has to be some path s ⇝ v in N_f. By definition of the flow network, this path can either be of the form P_1 = s, ..., e″, v or P_2 = s, ...
, e′, v for some bridging nodes e′, e″ corresponding to nets e ∈ I(v). In the former case we are done, since e″ ∈ P_1. In the latter case the existence of the edge (e′, v) ∈ E_f implies that there is a positive flow f(v, e′) > 0 over the edge (v, e′) ∈ E. Due to flow conservation at v, there exists at least one net in I(v) whose outgoing bridging node sends positive flow to v; the corresponding reverse edge is contained in N_f and extends the path to that bridging node. Furthermore this allows us to search for more balanced min-cuts using the Picard-Queyranne-DAG of N_f as described in Section 2.3. By the definition of closed sets it follows that if a bridging node e″ is contained in a closed set C, then all nodes v ∈ Γ(e″) (which correspond to vertices of H) are also contained in C. Thus we can use the respective bridging nodes e″ as representatives of removed low degree hypernodes. Constructing the Hypergraph Flow Problem In the following, we distinguish between the internal border nodes and the external border nodes of the B-corridor around the cut, and between internal nets, external nets, and border nets of the corresponding subhypergraph H_B. It has to hold that cut(Π_f) ≤ cut(Π_2). While external nets are not affected by a max-flow computation, the max-flow min-cut theorem [21] ensures the cut property for all internal nets. Border nets however require special attention. Since a border net e is only partially contained in H_B, it will remain connected to the blocks of its external border nodes in H. In case external border nodes connect e to both V_i and V_j, it will remain a cut net in H even if it is removed from the cut-set in Π_f. It is therefore necessary to "encode" information about external border nodes into the flow problem. The KaFFPa Model and its Limitations. In KaFFPa, this is done by directly connecting internal border nodes to s and t. This approach can also be used for hypergraphs. In the hypergraph flow problem F_G, the source s is connected to all internal border nodes of block V_i and the sink t to all internal border nodes of block V_j, so that none of these nodes can change its block in any (s, t)-min-cut. This limitation becomes increasingly relevant for hypergraphs with large nets as well as for partitioning problems with small imbalance ε, since large nets are likely to be only partially contained in H_B and tight balance constraints enforce small B-corridors. While the former is a problem only for HGP, the latter also applies to GP. A more flexible Model. We propose a more general model that allows an (s, t)-max-flow computation to also cut through border nets by exploiting the structure of hypergraph flow networks. Instead of directly connecting s and t to internal border nodes and thus preventing all min-cuts in which these nodes switch blocks, we conceptually extend H_B to contain all external border nodes and all border nets. The key insight now is that by using the flow network of this extended subhypergraph and connecting s resp. t to the external border nodes we get a flow problem that does not lock any node v ∈ V_B in its block, since none of these nodes is directly connected to either s or t. Due to the max-flow min-cut theorem [21], this flow problem furthermore has the cut property, since all border nets of H_B are now internal nets of the extended subhypergraph. Implementation Details Since KaHyPar is an n-level partitioner, its FM-based local search algorithms are executed each time a vertex is uncontracted. To prevent expensive recalculations, it therefore uses a cache to maintain the gain values of FM moves throughout the n-level hierarchy [2]. In order to combine our flow-based refinement with FM local search, we not only perform the moves induced by the max-flow min-cut computation but also update the FM gain cache accordingly.
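As a concrete illustration of the hypergraph flow networks described in Section 3.1, the following sketch builds a Liu-Wong-style network with the networkx library (a bridging node pair and an ω(e)-capacity bridging edge for every net with three or more pins, direct opposite edges for two-pin nets). It deliberately omits the removal of low-degree hypernodes and the handling of border nets, so it illustrates the basic construction rather than the network actually used in KaHyPar-MF.

import math
import networkx as nx

def liu_wong_network(nets, weights):
    # Build a Liu-Wong-style flow network for a hypergraph.
    # nets    : list of nets, each an iterable of vertex ids
    # weights : list of net weights omega(e)
    g = nx.DiGraph()
    inf = math.inf
    for e, (net, omega) in enumerate(zip(nets, weights)):
        net = list(net)
        if len(net) == 2:                       # two-pin net: two opposite bridging edges
            u, v = net
            g.add_edge(u, v, capacity=omega)
            g.add_edge(v, u, capacity=omega)
        else:                                   # multi-pin net: bridging nodes e' and e''
            e_in, e_out = ("in", e), ("out", e)
            g.add_edge(e_in, e_out, capacity=omega)
            for p in net:
                g.add_edge(p, e_in, capacity=inf)
                g.add_edge(e_out, p, capacity=inf)
    return g

# Minimal usage example: an (s, t)-min-cut separating vertices 0 and 3.
nets = [[0, 1, 2], [1, 2, 3], [0, 3]]
weights = [1, 2, 1]
g = liu_wong_network(nets, weights)
cut_value, _ = nx.minimum_cut(g, 0, 3)
print(cut_value)   # equals the weight of the cheapest set of nets separating 0 from 3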
Since it is not feasible to execute our algorithm on every level of the n-level hierarchy, we use an exponentially spaced approach that performs flow-based refinements after uncontracting i = 2^j vertices for j ∈ N+. This way, the algorithm is executed more often on smaller flow problems than on larger ones. To further improve the running time, we introduce the following speedup techniques: S1: We modify active block scheduling such that after the first round the algorithm is only executed on a pair of blocks if at least one execution using these blocks improved connectivity or imbalance of the partition on previous levels. S2: For all levels except the finest level: Skip flow-based refinement if the cut between two adjacent blocks is less than ten. S3: Stop resizing the corridor B if the current (s, t)-cut did not improve the previously best solution. Experimental Evaluation We implemented the max-flow min-cut refinement algorithm in the n-level hypergraph partitioning framework KaHyPar (Karlsruhe Hypergraph Partitioning). The code is written in C++ and compiled using g++-5.2 with flags -O3 -march=native. The latest version of the framework is called KaHyPar-CA [28]. We refer to our new algorithm as KaHyPar-MF. Both versions use the default configuration for community-aware direct k-way partitioning. Instances. All experiments use hypergraphs from the benchmark set of Heuer and Schlag [28], which contains 488 hypergraphs derived from four benchmark sets: the ISPD98 VLSI Circuit Benchmark Suite [3], the DAC 2012 Routability-Driven Placement Contest [55], the University of Florida Sparse Matrix Collection [15], and the international SAT Competition 2014 [9]. Sparse matrices are translated into hypergraphs using the row-net model [12], i.e., each row is treated as a net and each column as a vertex. SAT instances are converted to three different representations: For literal hypergraphs, each boolean literal is mapped to one vertex and each clause constitutes a net [43], while in the primal model each variable is represented by a vertex and each clause is represented by a net. In the dual model the opposite is the case [41]. All hypergraphs have unit vertex and net weights. Table 1 gives an overview about the different benchmark sets used in the experiments. The full benchmark set is referred to as set A. We furthermore use the representative subset of 165 hypergraphs proposed in [28] (set B) and a smaller subset consisting of 25 hypergraphs (set C), which is used to devise the final configuration of KaHyPar-MF. Basic properties of set C can be found in Table 10 in Appendix C. Unless mentioned otherwise, all hypergraphs are partitioned into k ∈ {2, 4, 8, 16, 32, 64, 128} blocks with ε = 0.03. For each value of k, a k-way partition is considered to be one test instance, resulting in a total of 175 instances for set C, 1155 instances for set B and 3416 instances for set A. Furthermore we use 15 graphs from [42] to compare our flow model F_H to the KaFFPa [47] model F_G. Table 11 in Appendix C summarizes the basic properties of these graphs, which constitute set D. We compare KaHyPar-MF to the recursive bisection variant (hMetis-R) and the direct k-way variant (hMetis-K) of hMetis [32,33], and to PaToH 3.2 [12]. These HGP libraries were chosen because they provide the best solution quality [2,28]. The partitioning results of these tools are already available from http://algo2.iti.kit.edu/schlag/sea2017/.
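The exponentially spaced execution schedule and the small-cut check of speedup heuristic S2 described at the beginning of this section can be sketched in a few lines; the function below is an illustration of that control logic only, not the exact code used in KaHyPar-MF.

def should_run_flow_refinement(num_uncontractions, is_finest_level, cut_between_blocks):
    # Trigger flow-based refinement after 2^j uncontractions (j = 1, 2, ...),
    # and skip it on coarser levels when the cut between the two blocks is
    # small (speedup heuristic S2).
    i = num_uncontractions
    is_power_of_two = i >= 2 and (i & (i - 1)) == 0
    if not is_power_of_two:
        return False
    if not is_finest_level and cut_between_blocks < 10:
        return False
    return True

# The schedule fires at i = 2, 4, 8, 16, ..., so small flow problems are
# solved more often than large ones.
print([i for i in range(1, 40) if should_run_flow_refinement(i, True, 42)])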
For each partitioner except PaToH the results summarize ten repetitions with different seeds for each test instance and report the arithmetic mean of the computed cut and running time as well as the best cut found. Since PaToH ignores the random seed if configured to use the quality preset, the results contain both the result of single run of the quality preset (PaToH-Q) and the average over ten repetitions using the default configuration (PaToH-D). Each partitioner had a time limit of eight hours per test instance. We use the same number of repetitions and the same time limit for our experiments with KaHyPar-MF. In the following, we use the geometric mean when averaging over different instances in order to give every instance a comparable influence on the final result. In order to compare the algorithms in terms of solution quality, we perform a more detailed analysis using improvement plots. For each algorithm, these plots relate the minimum connectivity of KaHyPar-MF to the minimum connectivity produced by the corresponding algorithm on a per-instance basis. For each algorithm, these ratios are sorted in decreasing order. The plots use a cube root scale for the y-axis to reduce right skewness [14] and show the improvement of KaHyPar-MF in percent (i.e., 1 − (KaHyPar-MF/algorithm)) on the y-axis. A value below zero indicates that the partition of KaHyPar-MF was worse than the partition produced by the corresponding algorithm, while a value above zero indicates that KaHypar-MF performed better than the algorithm in question. A value of zero implies that the partitions of both algorithms had the same solution quality. Values above one correspond to infeasible solutions that violated the balance constraint. In order to include instances with a cut of zero into the results, we set the corresponding cut values to one for ratio computations. Evaluating Flow Networks, Models, and Algorithms Flow Networks and Algorithms. To analyze the effects of the different hypergraph flow networks we compute five bipartitions for each hypergraph of set B with KaHyPar-CA using different seeds. Statistics of the hypergraphs are shown in Table 2. The bipartitions are then used to generate hypergraph flow networks for a corridor of size |B| = 25 000 hypernodes around the cut. primal and literal SAT instances are the largest in terms of both numbers of nodes and edges. High average vertex degree combined with low average net sizes leads to subhypergraphs H B containing many small nets, which then induce many nodes and (infinite-capacity) edges in N L . Dual instances with low average degree and large average net size on the other hand lead to smaller flow networks. For VLSI instances (DAC, ISPD) both average degree and average net sizes are low, while for SPM hypergraphs the opposite is the case. This explains why SPM flow networks have significantly more edges, despite the number of nodes being comparable in both classes. As expected, the Lawler-Network N L induces the biggest flow problems. Looking at the Liu-Wong network N W , we can see that distinguishing between graph edges and nets with |e| ≥ 3 pins has an effect for all hypergraphs with many small nets (i.e., DAC, ISPD, Primal, Literal). While this technique alone does not improve dual SAT instances, we see that the combination of the Liu-Wong approach and our removal of low degree hypernodes in N Our reduces the size of the networks for all instance classes except SPM. 
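The per-instance quality comparison and the averaging described above are straightforward to reproduce. The following sketch (with made-up connectivity values) computes the improvement of KaHyPar-MF over another algorithm as 1 − (KaHyPar-MF/algorithm) in percent, treats zero cuts as one as described in the text, and shows the geometric mean used to give every instance a comparable influence.

import numpy as np

def improvement_percent(kahypar_mf, other):
    # Per-instance improvement of KaHyPar-MF over another algorithm, in percent.
    # Cut/connectivity values of zero are set to one before forming the ratio.
    a = np.maximum(np.asarray(kahypar_mf, dtype=float), 1.0)
    b = np.maximum(np.asarray(other, dtype=float), 1.0)
    return 100.0 * (1.0 - a / b)

def geometric_mean(values):
    # Geometric mean of strictly positive values.
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(values))))

# Made-up minimum connectivity values for five instances.
mf    = [100, 250, 0, 4000, 87]
other = [110, 240, 3, 4400, 87]
print(np.round(improvement_percent(mf, other), 2))   # positive = KaHyPar-MF better
print(round(geometric_mean([1.2, 0.9, 1.05]), 4))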
Both techniques only have a limited effect on these instances, since both hypernode degrees and net sizes are large on average. Since our flow problems are based on B-corridor induced subhypergraphs, N 1 Our additionally models single-pin border nets more efficiently as described in Section 3.2. This further reduces the network sizes significantly. As expected, the reduction in numbers of nodes and edges is most pronounced for hypergraphs with low average net sizes because these instances are likely to contain many single-pin border nets. To further see how these reductions in network size translate to improved running times of max-flow algorithms, we use these networks to create flow problems using our flow model F H and compute min-cuts using two highly tuned max-flow algorithms, namely the BK algorithm [10] and the incremental breadth-first search (IBFS) algorithm [23]. These algorithms were chosen because they performed best in preliminary experiments [27]. We then compare the speedups of these algorithms when executed on N W , N Our , and N 1 Our to the execution on the Lawler network N L . As can be seen in Figure 3 (bottom) both algorithms benefit from improved network models and the speedups directly correlate with the reductions in network size. While N W significantly reduces the running times for Primal and Literal instances, N Our additionally leads to a speedup for Dual instances. By additionally considering single-pin border nets, N 1 Our results in an average speedup between 1.52 and 2.21 (except for SPM instances). Since IBFS outperformed the BK algorithm in [27], we use N 1 Our and IBFS in all following experiments. Flow Models. We now compare the flow model F G of KaFFPa to our advanced model F H described in Section 3.2. The experiments summarized in Table 3 were performed using sets C and D. To focus on the impact of the models on solution quality, we deactivated KaHyPar's FM local search algorithms and only use flow-based refinement without the most balanced minimum cut heuristic. The results confirm our hypothesis that F G restricts the space of possible solutions. For all flow problem sizes and all imbalances tested, F H yields better solution quality. As expected, the effects are most pronounced for small flow problems and small imbalances where many vertices are likely to be border nodes. Since these nodes are locked inside their respective block in F G , they prevent all non-cut border nets from becoming part of the cut-set. Our model, on the other hand, allows all min-cuts that yield a feasible solution for the original partitioning problem. The fact that this effect also occurs for the graphs of set D indicates that our model can also be effective for traditional graph partitioning. All following experiments are performed using F H . Configuring the Algorithm We now evaluate different configurations of the max-flow min-cut based refinement framework on set C. In the following, KaHyPar-CA [28] is used as a reference. Since it neither uses (F)lows nor the (M)ost balanced minimum cut heuristic and only relies on the (FM) algorithm for local search, it is referred to as (-F,-M,+FM). This basic configuration is then successively extended with specific components. The results of our experiments are summarized in Table 4 for increasing scaling parameter α . The table furthermore includes a configuration Constant128.
In this configuration all components are enabled (+F,+M,+FM) and we perform flow-based refinements every 128 uncontractions. While this configuration is slow, it is used as a reference point for the quality achievable using flow-based refinement. The results indicate that only using flows (+F,-M,-FM) as refinement technique is inferior to localized FM local search in regard to both running time and solution quality. Although the quality improves with increasing flow problem size (i.e., increasing α ), the average connectivity is still worse than the reference configuration. Enabling the most balanced minimum cut heuristic improves partitioning quality. Configuration (+F,+M,-FM) performs better than the basic configuration for α ≥ 8. By combining flows with the FM algorithm (+F,-M,+FM) we get a configuration that improves upon the baseline configuration even for small flow problems. However, comparing this variant with (+F,+M,-FM) for α = 16, we see that using large flow problems together with the most balanced minimum cut heuristic yields solutions of comparable quality. Enabling all components (+F,+M,+FM) and using large flow problems performs best. Furthermore we see that enabling FM local search slightly improves the running time for α ≥ 8. This can be explained by the fact that the FM algorithm already produces good cuts between the blocks such that fewer rounds of pairwise flow refinements are necessary to further improve the solution. Comparing configuration (+F,+M,+FM) with Constant128 shows that performing flows more often further improves solution quality at the cost of slowing down the algorithm by more than an order of magnitude. In all further experiments, we therefore use configuration (+F,+M,+FM) with α = 16 for KaHyPar-MF. This configuration also performed best in the effectiveness tests presented in Appendix A. While this configuration performs better than KaHyPar-CA, its running time is still more than a factor of 3 higher. We therefore perform additional experiments on set B and successively enable the speedup heuristics described in Section 3.3. The results are summarized in Table 5. Only executing pairwise flow refinements on blocks that lead to an improvement on previous levels (S1) reduces the running time of flow-based refinement by a factor of 1.27, while skipping flows in case of small cuts (S2) results in a further speedup of 1.19. By additionally stopping the resizing of the flow problem as early as possible (S3), we decrease the running time of flow-based improvement by a factor of 2 in total, while still computing solutions of comparable quality. Thus in the comparisons with other systems, all heuristics are enabled. Comparison with other Systems Finally, we compare KaHyPar-MF to different state-of-the-art hypergraph partitioners on the full benchmark set. We exclude the same 194 out of 3416 instances as in [28] because either PaToH-Q could not allocate enough memory or other partitioners did not finish in time. The excluded instances are shown in Table 12 in Appendix D. Note that KaHyPar-MF did not lead to any further exclusions. The following comparison is therefore based on the remaining 3222 instances. As can be seen in Figure Comparing the best solutions of all systems simultaneously, KaHyPar-MF produced the best partitions for 2427 of the 3222 instances. It is followed by hMetis-R (678), KaHyPar-CA (388), hMetis-K (352), PaToH-D (154), and PaToH-Q (146). 
Note that for some instances multiple partitioners computed the same best solution and that we disqualified infeasible solutions that violated the balance constraint. Figure 5 shows that KaHyPar-MF also performs best for different values of k and that pairwise flow refinements are an effective strategy to improve k-way partitions. As can be seen in Table 6, the improvement over KaHyPar-CA is most pronounced for hypergraphs derived from matrices of web graphs and social networks and dual SAT instances. While the former are difficult to partition due to skewed degree and net size distributions, the latter are difficult because they contain many large nets. Finally, Table 9 compares the running times of all partitioners. By using simplified flow networks, highly tuned flow algorithms and several techniques to speed up the flow-based refinement framework, KaHyPar-MF is less than a factor of two slower than KaHyPar-CA and still achieves a running time comparable to that of hMetis. Conclusion We generalize the flow-based refinement framework of KaFFPa [47] from graph to hypergraph partitioning. We reduce the size of Liu and Wong's hypergraph flow network [40] by removing low degree hypernodes and exploiting the fact that our flow problems are built on subhypergraphs of the input hypergraph. Furthermore we identify shortcomings of the KaFFPa [47] approach that restrict the search space of feasible solutions significantly and introduce an advanced model that overcomes these limitations by exploiting the structure of hypergraph flow networks. Lastly, we present techniques to improve the running time of the flow-based refinement framework by a factor of 2 without affecting solution quality. The resulting hypergraph partitioner KaHyPar-MF performs better than all competing algorithms on all instance classes of a large benchmark set and still has a running time comparable to that of hMetis. Since our flow problem formulation yields significantly better solutions for both hypergraphs and graphs than the KaFFPa [47] approach, future work includes the integration of our flow model into KaFFPa and the evaluation in the context of a high quality graph partitioner. Furthermore an approach similar to Yang and Wong [57] could be used as an alternative to the most balanced minimum cut heuristic and adaptive B-corridor resizing. We also plan to extend our framework to optimize other objective functions such as cut or sum of external degrees. A Effectiveness Tests To evaluate the effectiveness of our configurations presented in Section 4.2 we give each configuration the same time to compute a partition. For each instance (hypergraph, k), we execute each configuration once and note the largest running time t_H,k. Then each configuration gets time 3t_H,k to compute a partition (i.e., we take the best partition out of several repeated runs). Whenever a new run would exceed the time budget 3t_H,k, we perform it with a certain probability such that the expected running time is 3t_H,k. The results of this procedure, which was initially proposed in [47], are presented in Table 8. We see that the combinations of flow-based refinement and FM local search perform better than repeated executions of the baseline configuration (-F,-M,+FM). The most effective configuration is (+F,+M,+FM) with α = 16, which was chosen as the default configuration for KaHyPar-MF.
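The time-budget procedure of the effectiveness tests can be written down compactly. The following is a minimal sketch under the stated rules (one mandatory run per configuration, a budget of three times the slowest first run, and a probabilistic final run); run_config is a placeholder for an actual partitioner invocation, and using the previous run's time as the estimate for the next run is an assumption made for the sketch.

import random

def effectiveness_test(configs, run_config, budget_factor=3.0, seed=0):
    # Give every configuration the same expected time budget and keep its best result.
    # configs    : list of configuration identifiers
    # run_config : callable(config) -> (quality, running_time)
    rng = random.Random(seed)
    first = {c: run_config(c) for c in configs}            # one mandatory run per configuration
    budget = budget_factor * max(t for _, t in first.values())
    best = {}
    for c in configs:
        quality, last_t = first[c]
        spent = last_t
        while True:
            if spent + last_t > budget:
                # the next run would overshoot the budget: perform it only with a
                # probability chosen so that the expected total time stays at the budget
                p = max(0.0, (budget - spent) / last_t)
                if rng.random() >= p:
                    break
            q, last_t = run_config(c)
            quality = min(quality, q)                      # smaller connectivity is better
            spent += last_t
        best[c] = quality
    return best

# Demo with a fake runner: each configuration has a fixed cost and noisy quality.
def fake_run(config, rng=random.Random(42)):
    cost = {"baseline": 1.0, "flows": 2.5}[config]
    return rng.uniform(90, 110) * (0.9 if config == "flows" else 1.0), cost

print(effectiveness_test(["baseline", "flows"], fake_run))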
8,190
2018-02-10T00:00:00.000
[ "Computer Science" ]
Suitability of modelled and remotely sensed essential climate variables for monitoring Euro-Mediterranean droughts. Two new remotely sensed leaf area index (LAI) and surface soil moisture (SSM) satellite-derived products are compared with two sets of simulations of the ORganizing Carbon and Hydrology In Dynamic EcosystEms (ORCHIDEE) and Interactions between Soil, Biosphere and Atmosphere, CO2-reactive (ISBA-A-gs) land surface models. We analyse the interannual variability over the period 1991-2008. The leaf onset and the length of the vegetation growing period (LGP) are derived from both the satellite-derived LAI and the modelled LAI. The LGP values produced by the photosynthesis-driven phenology model of ISBA-A-gs are closer to the satellite-derived LGP than those produced by ORCHIDEE. Introduction The Global Climate Observing System (GCOS) has defined a list of atmospheric, oceanic and terrestrial essential climate variables (ECVs) which can be monitored at a global scale from satellites. Terrestrial ECV products consisting of long time series are needed to evaluate the impact of climate change on environment and human activities. They are highly relevant to the requirements of the Intergovernmental Panel on Climate Change (IPCC). New ECV products are now available and they can be used to characterize extreme events, such as droughts. Soil moisture is a key ECV in hydrological and agricultural processes. It constrains plant transpiration and photosynthesis (Seneviratne et al., 2010) and is one of the limiting factors of vegetation development and growth (Champagne et al., 2012), especially in water-limited regions such as the Mediterranean zone, from spring to autumn. Microwave remote sensing observations can be related to surface soil moisture (SSM) rather than to root-zone soil moisture, as the sensing depth is limited to the first centimetres of the soil surface (Wagner et al., 1999; Kerr et al., 2007). Land Surface Models (LSMs) are generally able to provide soil moisture simulations over multiple depths, depending upon their structure, i.e. bucket models vs. more complex vertically discretized soil water diffusion schemes (Dirmeyer et al., 1999; Georgakakos and Carpenter, 2006). Their outputs are affected by uncertainties in the atmospheric forcing, model physics and parameters. However, previous studies showed the usefulness of using simulated SSM as a benchmark to intercompare independent satellite-derived SSM estimates, and Albergel et al. (2013a) used hindcast SSM simulations to provide an independent check on the quality of remotely sensed SSM over time. Conversely, remotely sensed SSM can be used to benchmark hindcast SSM simulations derived from two independent modelling platforms (Albergel et al., 2013b). Leaf area index (LAI) is one of the terrestrial ECVs related to the vegetation growth and senescence. Monitoring LAI is essential for assessing the vegetation trends in the climate change context, and for developing applications in agriculture, environment, carbon fluxes and climate monitoring. LAI is expressed in m² m⁻² and is defined as the total one-sided area of photosynthetic tissue per unit horizontal ground area. The LAI seasonal cycle can be monitored at a global scale using medium-resolution optical satellite sensors (Myneni et al., 2002; Baret et al., 2007, 2013; Weiss et al., 2007).
Another way to provide LAI over large areas and over long periods of time is to use generic LSMs, such as Interactions between Soil, Biosphere and Atmosphere, CO2-reactive (ISBA-A-gs) (Calvet et al., 1998; Gibelin et al., 2006) or ORganizing Carbon and Hydrology In Dynamic EcosystEms (ORCHIDEE) (Krinner et al., 2005). The direct validation of climate data records, based on in situ observations, is not easy at a continental scale, as in situ observations are limited in space and time. Therefore, indirect validation plays a key role. The comparison of ECV products derived from satellite observations with ECV products derived from LSM hindcast simulations is particularly useful. Inconsistencies between two independent products permit detecting shortcomings and improving the next versions of the products. The Mediterranean Basin will probably be affected by climate change to a large extent (Gibelin and Déqué, 2003; Planton et al., 2012). Over Europe and Mediterranean areas, the annual mean temperature of the air is likely to increase more than the global mean (IPCC assessment, 2007). In most Mediterranean regions, this trend would be associated with a decrease in annual precipitation (Christensen et al., 2007). In this context, it is important to build monitoring systems of the land surface variables over this region, able to describe extreme climatic events such as droughts and to analyse their severity with respect to past droughts. This study was performed in the framework of the HYMEX (Hydrological cycle in the Mediterranean EXperiment) initiative (HYMEX White Book, 2008; Drobinski et al., 2009a, b, 2010), with the aim of investigating the interannual variability of LAI and SSM ECV products over the Euro-Mediterranean area. While an attempt was made in a previous work to simulate the hydrological droughts over the Euro-Mediterranean area, this study focuses on the monitoring of agricultural droughts and complements the joint evaluation of the ORCHIDEE and ISBA-A-gs land surface models performed by Lafont et al. (2012) over France using satellite-derived LAI. An 18 yr time period (1991-2008) is considered, against an 8 yr period (2000-2007) in Lafont et al. (2012). Using the modelling framework implemented by Szczypta et al. (2012), we compare ISBA-A-gs and ORCHIDEE simulations of LAI, and we evaluate new homogenized remotely sensed LAI and SSM data sets. The satellite-derived SSM is compared with ISBA-A-gs simulations of SSM, as ORCHIDEE has no explicit representation of this quantity. The capacity of the two models to represent the interannual variability of the vegetation growth and the impact of extreme events such as the 2003 heat wave is assessed. Finally, the synergy between SSM and LAI is investigated using the satellite products and the ISBA-A-gs model. The data, including the leaf onset and the length of the vegetation growing period (LGP) derived from the observed and simulated LAI, are first described. Then, anomalies of the detrended LAI are compared over the 1991-2008 period with a focus on the 2003 western European drought (Rebetez et al., 2006; Vidal et al., 2010). Lastly, we investigate to what extent SSM observations can be used to predict mean anomalous vegetation state conditions in the current growing season. The interannual SSM variability, resulting from satellite observations and LSM simulations, is used as an indicator able to anticipate LAI anomalies during key periods.
Data and methods In this study, several data sets (either model simulations, atmospheric variables, or satellite-derived products) were produced or collected over the Euro-Mediterranean area. In order to force the two LSM simulations of SSM and LAI (Sect. 2.1), the ERA-Interim surface atmospheric variables (Simmons et al., 2010) are used. The ERA-Interim data are available on a 0.5° × 0.5° grid and the LSM simulations use the same grid. The 1991-2008 18 yr period is considered, as in Szczypta et al. (2012). During this period, SSM products from both active (ERS-1/2, ASCAT) and passive (SSM/I, TMI, AMSR-E) microwave sensors are available and can be combined (Sect. 2.2), together with LAI products (Sect. 2.3). In order to compare the LSM simulations with the satellite products, the latter are aggregated on the same 0.5° × 0.5° grid using linear interpolation and averaging techniques.
Table 1. Main differences between the ORCHIDEE and ISBA-A-gs models.
Photosynthesis model. ORCHIDEE: Farquhar et al. (1980) for C3 plants; Collatz et al. (1992) for C4 plants. ISBA-A-gs: Goudriaan et al. (1985), modified by Jacobs et al. (1996).
Impact of drought on photosynthesis parameters (response to root-zone soil moisture). ORCHIDEE: linear response of Vc,max (McMurtrie et al., 1990). ISBA-A-gs: log response of gm; linear response of the maximum saturation deficit for herbaceous vegetation (Calvet, 2000); linear response of the scaled maximum intercellular CO2 concentration for woody vegetation (Calvet et al., 2004); drought-avoiding response for C3 crops and needleleaf forests; drought-tolerant response for C4 crops, grasslands and broadleaf forests.
Soil moisture profile. ORCHIDEE: no explicit representation of SSM; two-layer soil model in which the depth of the layers evolves through time in response to "top-to-bottom" filling due to precipitation and drying due to evapotranspiration (Ducoudré et al., 1993). ISBA-A-gs: explicit representation of SSM (0-1 cm top soil layer); three-layer force-restore model (Boone et al., 1999; Deardorff, 1977, 1978).
Phenology. ORCHIDEE: LAImax is prescribed; LAImin is prognostic; growing degree days (leaf onset model trained using satellite NDVI data; Botta et al., 2000). ISBA-A-gs: LAImax is prognostic; LAImin is prescribed; photosynthesis-driven plant growth and mortality.
Models Although the generic ISBA-A-gs and ORCHIDEE LSMs share the same general structure, based on the description of the main biophysical processes, they were developed independently and differ in the way photosynthesis, transpiration, and phenology are represented. The main differences between the two models are summarized in Table 1. More details about the differences between the two models can be found in Lafont et al. (2012). ISBA-A-gs ISBA-A-gs is a CO2-responsive LSM (Calvet et al., 1998, 2004; Gibelin et al., 2006), simulating the diurnal cycle of carbon and water vapour fluxes, together with LAI and soil moisture evolution. The soil hydrology is represented by three layers: a skin surface layer 1 cm thick, a bulk root-zone reservoir, and a deep soil layer (Boone et al., 1999) contributing to evaporation through capillary rise. Over the Euro-Mediterranean area, the rooting depth varies from 0.5-1.5 m for grasslands, to 2.0-2.5 m for broadleaf forests. The model includes an original representation of the impact of drought on photosynthesis (Calvet, 2000; Calvet et al., 2004). The version of the model used in this study corresponds to the "NIT" simulations performed by Szczypta et al. (2012).
This version interactively calculates the leaf biomass and LAI, using a plant growth model (Calvet et al., 1998; Calvet and Soussana, 2001) driven by photosynthesis. In contrast to ORCHIDEE, no GDD-based phenology model is used in ISBA-A-gs, as the vegetation growth and senescence are entirely driven by photosynthesis. The leaf biomass is supplied with the carbon assimilated by photosynthesis, and decreased by a turnover and a respiration term. Turnover is increased by a deficit in photosynthesis. The leaf onset is triggered by sufficient photosynthesis levels and a minimum LAI value is prescribed (LAImin in Table 1). The maximum annual value of LAI is prognostic, i.e. it is predicted by the model. Gibelin et al. (2006) and Brut et al. (2009) showed that ISBA-A-gs provides reasonable LAI values at regional and global scales under various environmental conditions. The model has also been shown to be usable to assess the interannual variability of fodder and cereal crop production over regions of France. The ISBA-A-gs LSM is embedded into the SURFEX modelling platform, and the simulations performed in this study correspond to SURFEX version 6.2 runs. ORCHIDEE ORCHIDEE (Krinner et al., 2005) is a process-based terrestrial biosphere model designed to simulate energy, water and carbon fluxes of ecosystems and is based on three submodules: (1) SECHIBA (Schématisation des Echanges Hydriques à l'Interface Biosphère-Atmosphère) is a land surface energy and water balance model (Ducoudré et al., 1993), (2) STOMATE (Saclay Toulouse Orsay Model for the Analysis of Terrestrial Ecosystems) is a land carbon cycle model (Friedlingstein et al., 1999; Ruimy et al., 1996; Botta et al., 2000), and (3) LPJ (Lund-Potsdam-Jena) is a dynamic model of long-term vegetation dynamics including competition and disturbances (Sitch et al., 2003). ORCHIDEE uses a phenology model based on growing degree days (GDDs) for leaf onset. The parameters of the GDD model were calibrated by Botta et al. (2000) using remotely sensed NDVI observations. The LAI cycle simulated by ORCHIDEE is characterized by a dormancy phase, a sharp increase of LAI over a few days at the leaf onset, and a more gradual growth governed by photosynthesis, until a predefined maximum LAI value has been reached (LAImax in Table 1). Note that the prescribed LAImax is not necessarily reached in a simulation over a grid cell. The senescence phase presents an exponential decline of LAI. The leaf offset depends on leaf life span and climatic parameters. The ORCHIDEE 1.9.5.1 tag was used to perform these simulations. Only the ORCHIDEE LAI variable is used since the simple bucket soil hydrology of this version of ORCHIDEE has no explicit representation of SSM (Table 1). An attempt was made by Rebel et al. (2012) to compare the soil moisture simulated by ORCHIDEE with the AMSR-E SSM product. They concluded that the shallow soil moisture estimates they derived from the ORCHIDEE simulations were not an explicit representation of SSM and could not be compared with the AMSR-E SSM product. Instead, they compared the AMSR-E SSM with the root-zone soil moisture simulated by ORCHIDEE, and they observed that the satellite-derived SSM had a much faster reaction time and a much shorter characteristic lag time than the simulations.
This can be explained by the shallow penetration depth (< 5 cm) of the C-band microwave signal measured by AMSR-E, which is not representative of deep soil layers. Design of the simulations In this study, the two models use the same spatial distribution of vegetation types, based on the ECOCLIMAP-II database of ecosystems and model parameters, over the area 11° W-62° E, 25° N-75° N (Fig. 1) covering the Mediterranean Basin, northern Europe, Scandinavia and part of Russia. Further, ISBA-A-gs and ORCHIDEE are driven by the same atmospheric forcing, the ERA-Interim global ECMWF atmospheric reanalysis (projected onto a 0.5° × 0.5° grid). ERA-Interim tends to underestimate precipitation, as observed over France by Szczypta et al. (2011) and over the Euro-Mediterranean area by Szczypta et al. (2012). In the latter study, the monthly Global Precipitation Climatology Centre (GPCC) precipitation product was used to bias-correct the 3-hourly ERA-Interim precipitation estimates over the whole Euro-Mediterranean area. The resulting 3-hourly precipitation was indirectly validated using river discharge simulations and observations. The two models are driven by the 3-hourly atmospheric variables from the bias-corrected ERA-Interim and perform half-hourly simulations of the surface fluxes, of soil moisture and of surface temperature, together with daily LAI simulations. Irrigation is not represented. The daily LAI values are produced for each plant functional type (PFT) present in the grid cell. Similarly, daily mean SSM values are produced for each PFT. The grid-cell simulated LAI (SSM) is the average of the PFT-dependent LAI (SSM) multiplied by the fractional area of each PFT. The model runs are performed at a spatial resolution of 0.5° × 0.5°, over the ECOCLIMAP-II Euro-Mediterranean area, corresponding to 11° W-62° E, 25° N-75° N. The fractional coverage of the various PFTs is provided by ECOCLIMAP-II at a spatial resolution of 1 km, aggregated at a spatial resolution of 0.5°, and the two models account for the subgrid variability by simulating separate LAI values for each surface type present in the grid cell. ISBA-A-gs simulates separate SSM values for each surface type present in the grid cell. Figure 2 shows the spatial distribution of the dominant vegetation types over the studied domain. ESA-CCI surface soil moisture The European Space Agency Climate Change Initiative (ESA-CCI) project dedicated to soil moisture has produced a global 32-year SSM time series described in Liu et al. (2011, 2012). The ESA-CCI SSM product is today the only multidecadal SSM data set derived from satellite observations. The daily data are available on a 0.25° grid and can be downloaded from http://www.esa-soilmoisture-cci.org/. Several SSM products based on either active or passive single satellite microwave sensors were combined to build a blended harmonized time series of SSM at the global scale from 1978 to 2010: scatterometer-based products from ERS-1/2 (July 1991-May 2006) and ASCAT (2007-2010), and radiometer-based products from SMMR (November 1978-August 1987), SSM/I (July 1987-2007), TMI (until 2008), and AMSR-E (July 2002-2010). The method used to combine the different data sets is described in detail by Liu et al. (2011, 2012) and takes advantage of the assets of both passive and active systems. In most of the Euro-Mediterranean area, active microwave products are used. The passive microwave products mainly cover North Africa. In some parts of the area (e.g.
in Spain), the average of both active and passive microwave products is used (see Fig. 14 in Liu et al., 2012). It must be noted that the sensing depth of microwave remote sensing observations is limited to the first centimetres of the soil surface. The ESA-CCI data set was used by Dorigo et al. (2012) to analyse trends in SSM, while Muñoz et al. (2014) showed its strong connection with vegetation development. Loew et al. (2013) have assessed this product and showed that the agreement with other soil moisture data sets from modelling studies as well as with rainfall data is generally good. The ESA-CCI SSM temporal and spatial coverage is much better after 1990 than before but is limited at high latitudes due to snow cover and frozen soil conditions. GEOV1 LAI The European Copernicus Global Land Service provides a global LAI product in near-real-time called GEOV1 (Baret et al., 2013). This product was extensively validated and benchmarked with pre-existing satellite-derived LAI products using an ensemble of ground observations at 30 sites in Europe, Africa and North America. It must be noted that this direct validation does not completely address the seasonality of LAI as, for a given site, LAI observations are available at only one or very few dates. It was found that the GEOV1 LAI correlates very well with in situ observations (r² = 0.81), with a root mean square error of 0.74 m² m⁻². The GEOV1 scores are better than those obtained by other products such as MODIS c5, CYCLOPES v3.1 and GLOBCARBON v2. A 32 yr LAI time series based on the GEOV1 algorithm was produced by the GEOLAND-2 project. Ten-daily data are available from 1981 to the present and can be downloaded at http://land.copernicus.eu/global/. For the period before 1999, the AVHRR Long Term Data Record (LTDR) reflectances (Vermote et al., 2009) are used to generate the LAI product at a spatial resolution of 5 km. From 1999 onward, the SPOT-VGT reflectances are used to generate the LAI product at a spatial resolution of 1 km. The harmonized time series is produced by neural networks trained to produce consistent estimates of LAI from the reflectance measured by different sensors (Verger et al., 2008). Surface soil moisture In this study, we focus on the seasonal and interannual variability of SSM after removing the trends from both satellite-derived and simulated time series. The detrended time series at a given location and for a given 10-daily period of the year is obtained by subtracting the least-squares-fit straight line. The same 10-daily periods as for the GEOV1 LAI product are used. Hereafter, this quantity is referred to as SSMd, for both satellite observations and model simulations. In order to characterize the day-to-day variability of SSMd, scaled anomalies are calculated as follows. For each SSMd estimate at day j, a period F is defined, with F = [j − 17 d, j + 17 d]. If at least five measurements are available in this period of time, the average SSMd value and the standard deviation are first calculated. Then, the scaled anomaly Ano_SSM is computed as Ano_SSM(j) = (SSMd(j) − mean_F(SSMd)) / stdev_F(SSMd), where mean_F and stdev_F denote the average and standard deviation over the period F. This procedure is applied to the ESA-CCI SSM observations and to the ISBA-A-gs SSM simulations. Leaf area index Three metrics are calculated to characterize LAI seasonal and interannual variability: the leaf onset, the leaf offset and the monthly (or 10-daily) scaled anomaly, for both satellite observations and model simulations. The LGP is defined as the period of time between the leaf onset and the leaf offset of a given annual cycle.
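Before turning to the LAI metrics in detail, the scaled SSM anomaly just described can be written down compactly. This is a small sketch, assuming a daily SSMd series stored as a pandas Series indexed by date; the ±17-day window and the minimum of five valid values follow the description above.

import numpy as np
import pandas as pd

def scaled_ssm_anomaly(ssmd, window_days=17, min_obs=5):
    # Scaled anomaly of detrended surface soil moisture (SSMd).
    # For each day j, the anomaly is (SSMd(j) - mean_F) / std_F, where the
    # statistics are computed over F = [j - 17 d, j + 17 d] and at least
    # min_obs valid values are required.
    full = ssmd.asfreq("D")                       # align on a regular daily grid
    window = 2 * window_days + 1
    roll = full.rolling(window, center=True, min_periods=min_obs)
    ano = (full - roll.mean()) / roll.std()
    return ano.reindex(ssmd.index)

# Tiny synthetic example: a seasonal cycle plus noise over two years.
dates = pd.date_range("2003-01-01", "2004-12-31", freq="D")
doy = dates.dayofyear.to_numpy()
ssmd = pd.Series(0.25 + 0.1 * np.cos(2 * np.pi * doy / 365.25)
                 + 0.02 * np.random.default_rng(1).normal(size=len(dates)), index=dates)
print(scaled_ssm_anomaly(ssmd).describe())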
The leaf onset (respectively, offset) is determined as the 10-daily period when the departure of LAI from its minimum annual value becomes higher (respectively, lower) than 40 % of the amplitude of the annual cycle (Gibelin et al., 2006;Brut et al., 2009). This method is sufficiently robust to be applied to both deciduous and non-perennial vegetation, and to evergreen vegetation presenting a sufficiently marked annual cycle of LAI. have shown that the neural network algorithm used to produce GEOV1 was successful in reducing the saturation of optical signal for dense vegetation (i.e. at high LAI values). Since the saturation effect is the main obstacle to the derivation of LGP from LAI or other vegetation satellite-derived products, it can be assumed that the GEOV1-derived LGP values are reliable. The interannual variability of LAI for various seasons is represented by monthly or 10-daily scaled anomalies defined as where DLAI(i, yr) represents the difference between LAI for a particular month (i ranging from 1 to 12) or 10-day period (i ranging from 1 to 36) of year (yr) and its average interannual value, and stdev(DLAI(i,:)) is the standard deviation of DLAI for a particular month or 10-day period. This procedure is applied to the GEOV1 observations and to the ORCHIDEE and ISBA-A-gs LAI simulations. In the case of GEOV1, in order to cope with shortcomings in the harmonization of satellite-derived products, the calculation of DLAI is made separately for the 1991-1998 AVHRR and for the 1999-2008 SPOT-VGT periods. It was checked that the resulting time series have a zero mean and present no trend. Finally, the annual coefficient of variation (ACV) is computed as the ratio of the standard deviation of the mean annual LAI to the long-term mean annual LAI, over the 1991-2008 period. ACV characterizes the relative interannual variability of LAI. Correlation scores In this study, the Pearson correlation coefficient (r) is used. Squared correlation coefficient (r 2 ) plots are used when all the corresponding r values are greater than or equal to zero. When r presents negative values, r is plotted instead of r 2 . Leaf area index vs. surface soil moisture In order to assess to what extent LAI anomalies are related to the SSMd anomalies observed a few 10-day periods ahead, the Pearson correlation coefficient between 18 SSMd values (one value per year over the 1991-2008 period) and 18 DLAI values is calculated on a 10-daily basis. For each considered 10-day period, SSMd is compared to DLAI values at the same period, and to hindcast DLAI values obtained 10 days, 20 days, 30 days, 40 days and 50 days later, from March to August. Preliminary tests based on the satellitederived products showed that significant correlations were mainly obtained over cropland areas. An explanation is that LAI is more representative of the biomass production for annual crops than for managed grasslands or natural vegetation, or that natural vegetation in water-restricted areas is better adapted to changing water variability than crops. Therefore, the correlation coefficients are computed for the grid cells with more than 50 % of croplands (according to the ECOCLIMAP-II land cover data). The scores are calculated with hindcast SSMd and DLAI for 10-daily time lags derived from either (1) the SSM and LAI simulated by the ISBA-A-gs LSM or (2) the ESA-CCI SSM and GEOV1 LAI products. 
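As a concrete illustration of the detrending and SSMd anomaly scaling described above, the following sketch removes a least-squares linear trend from an interannual series and computes the scaled anomaly over the ±17-day window with the five-observation minimum. This is a minimal NumPy sketch, not code from the study: the function names are invented, and the scaled anomaly is assumed to be the departure from the window mean divided by the window standard deviation (the usual definition of a scaled anomaly; the exact expression is Eq. 6 of the cited work).

import numpy as np

def detrend(series):
    """Remove the least-squares linear trend from an interannual series
    (one value per year for a given 10-day period); NaNs are ignored."""
    series = np.asarray(series, dtype=float)
    t = np.arange(series.size, dtype=float)
    ok = ~np.isnan(series)
    if ok.sum() < 2:
        return series - np.nanmean(series)
    slope, intercept = np.polyfit(t[ok], series[ok], 1)
    return series - (slope * t + intercept)

def scaled_ssm_anomaly(ssmd, half_window=17, min_obs=5):
    """Scaled anomaly Ano_SSM for each day j over F = [j-17 d, j+17 d];
    NaN where fewer than `min_obs` observations are available."""
    ssmd = np.asarray(ssmd, dtype=float)
    ano = np.full(ssmd.size, np.nan)
    for j in range(ssmd.size):
        window = ssmd[max(0, j - half_window): j + half_window + 1]
        window = window[~np.isnan(window)]
        if np.isnan(ssmd[j]) or window.size < min_obs or window.std() == 0:
            continue
        ano[j] = (ssmd[j] - window.mean()) / window.std()
    return ano

The same routine can be applied to the ESA-CCI observations and to the ISBA-A-gs simulations, so that both series are scaled in the same way before comparison.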
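The phenology and interannual-variability metrics, and the lagged SSMd-DLAI correlation used for the predictability analysis, can be sketched in the same way. Again, this is an illustrative sketch with invented function names and simplifying assumptions (regular sampling with 36 ten-day periods per year, no gap handling, and the separate detrending and AVHRR/SPOT-VGT split omitted); it follows the 40 % amplitude threshold for onset and offset, the scaled DLAI anomaly, the ACV, and the Pearson correlation at lags of 0 to 5 ten-day periods described in the text.

import numpy as np
from scipy.stats import pearsonr

def leaf_onset_offset(annual_lai):
    """Onset and offset (indices of 10-day periods) of one annual LAI cycle:
    first/last period where LAI exceeds its annual minimum by more than
    40 % of the annual amplitude; LGP = offset - onset."""
    lai = np.asarray(annual_lai, dtype=float)
    threshold = np.nanmin(lai) + 0.4 * (np.nanmax(lai) - np.nanmin(lai))
    above = np.where(lai > threshold)[0]
    if above.size == 0:
        return None, None            # no marked annual cycle
    return above[0], above[-1]

def scaled_lai_anomaly(lai):
    """Scaled anomaly DLAI(i, yr) / stdev(DLAI(i, :)) from a (years x periods)
    array of monthly or 10-daily LAI values."""
    lai = np.asarray(lai, dtype=float)
    dlai = lai - np.nanmean(lai, axis=0, keepdims=True)  # departure from interannual mean
    std = np.nanstd(dlai, axis=0, keepdims=True)
    std[std == 0] = np.nan
    return dlai / std

def annual_coefficient_of_variation(lai):
    """ACV: standard deviation of the mean annual LAI divided by the
    long-term mean annual LAI."""
    mean_annual = np.nanmean(np.asarray(lai, dtype=float), axis=1)
    return np.nanstd(mean_annual) / np.nanmean(mean_annual)

def best_lag(ssmd_years, dlai_years, period, max_lag=5):
    """Correlate the interannual SSMd series at a given 10-day period with
    DLAI observed 0 to `max_lag` periods later; return the lag giving the
    highest r^2 together with that r^2 and the associated p value."""
    best = (None, -np.inf, None)
    for lag in range(max_lag + 1):
        if period + lag >= dlai_years.shape[1]:
            break
        x, y = ssmd_years[:, period], dlai_years[:, period + lag]
        ok = ~np.isnan(x) & ~np.isnan(y)
        if ok.sum() < 3:
            continue
        r, p = pearsonr(x[ok], y[ok])
        if r * r > best[1]:
            best = (lag, r * r, p)
    return best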
Figure 3 shows the absolute (original SSMd data) and anomaly (Ano SSM ) correlation between the ISBA-A-gs SSM simulations and the ESA-CCI SSM product for the 1991-2008 period. In general, good absolute positive correlations are observed over all the sub-regions of Fig. 1. The best anomaly correlations are observed over the croplands of the Ukraine and southern Russia. However, negative correlations are observed in mountainous areas of the Mediterranean Basin, in southern Turkey (Taurus Mountains) and in western Iran (Zagros Mountains). In order to understand the negative absolute correlations in Fig. 3, we plotted (Fig. 4) the same figure as Fig. 3, except for the 2003-2008 period over which the AMSR-E product is available, using either the ESA-CCI blended (active/passive) product or the original AMSR-E product. While the results obtained with the blended product are similar to Fig. 3 over the whole domain and those obtained with AMSR-E are similar to Fig. 3 over the Mediterranean Basin, the negative correlations are not observed in the AMSR-E product. Over Northern Europe and Russia-Scandinavia, the correlations obtained for AMSR-E are lower than with the blended product. This shows that the blending technique used by Liu et al. (2012) is appropriate, apart from mountainous areas in southern Turkey and in western Iran where the active product is used, whereas the passive product is more relevant in these regions. Although the extreme 2003 year has more weight in the time series considered in Fig. 4, Fig. 3 and the top sub-figures of Fig. 4 are similar over western Europe. This shows that the consistency between ESA-CCI and ISBA-A-gs SSM is preserved during contrasting climatic conditions. Figure 5 compares the absolute and anomaly correlations r 2 of the blended product and of AMSR-E over the 2003-2008 period. Higher values are generally observed for the blended product. The AMSR-E product is more consistent with the ISBA-A-gs simulations than the blended product over 24 % of the grid cells for the absolute correlations, and over 17 % of the grid cells for the anomaly correlations. Simulated and observed phenology Figures 6 and 7 present leaf onset and LGP maps derived from the modelled LAI and from the GEOV1 LAI. Consistent leaf onset features (Fig. 6) are observed across satellite and model products: while the vegetation growing cycle may start at wintertime in some areas of the Mediterranean Basin (e.g. North Africa, southern Spain), the leaf onset occurs later in northern Europe (from February to July) and even later in Russia-Scandinavia (from April to August). In contrast to leaf onset, results are quite different from one data set to another for LGP (Fig. 7). In general, the two models tend to overestimate LGP. However, the LGP values produced by the photosynthesis-driven phenology model of ISBA-A-gs are closer to the satellite-derived LAI LGP than those produced by ORCHIDEE. On average, ORCHIDEE gives relatively high LGP values (180 ± 28 day), compared to ISBA-A-gs and GEOV1 (138 ± 41 day and 124 ± 44 day, respectively). The largest LGP differences between GEOV1 and ISBA-Ags are obtained in the Iberian Peninsula and over Russia-Scandinavia, where GEOV1 observes longer and shorter vegetation cycles, respectively. Figure 8 presents the differences of the two LSM simulations in leaf onset dates and LGP values (in days). It illustrates the overestimation of LGP in northern Europe by the two LSMs, and in other regions by ORCHIDEE. 
Figure 9 shows the simulated and observed average annual cycle of LAI for the three regions indicated in Fig. 1. It appears clearly that GEOV1 tends to produce shorter growing seasons than the other products, apart from the Mediterranean Basin where the GEOV1 and ISBA-A-gs annual cycles of LAI are similar. In Russia-Scandinavia, the end of the growing period in ISBA-A-gs presents a delay of about one month. This delay is not associated with a marked delay in the leaf onset (Fig. 6). This contradiction is related to the very low LAI value of ISBA-A-gs in wintertime. The prescribed minimum LAI value (LAImin in Table 1) is lower than the GEOV1 observations in wintertime and this bias has an impact on the leaf onset calculation. If LAImin was unbiased, the maximum LAI would probably be reached earlier. On the other hand, the prescribed maximum LAI value in ORCHIDEE is higher than the observations, especially in the Mediterranean Basin. On average, the prognostic LAImin of ORCHIDEE is higher than for the other products. Figure 9 shows that the ORCHIDEE delay in the leaf onset over northern Europe and Russia-Scandinavia is caused by minimum LAI values reached in March (one to two months after GEOV1) and maximum LAI values reached one month after GEOV1 (in July for northern Europe and in August for Russia-Scandinavia). Representation of the interannual variability of LAI In order to assess the interannual variability across seasons, 10-daily Ano LAI values were put end to end to constitute anomaly time series for each of the three LAI products (GEOV1, ISBA-A-gs, ORCHIDEE). Figure 10 presents maps of the Pearson correlation coefficient between the simulated LAI anomalies and the observed ones. Overall, ISBA-A-gs is better correlated with GEOV1 than ORCHIDEE (on average, r = 0.44 over the considered area, against r = 0.35 for ORCHIDEE) and slightly better scores are obtained by the two models over croplands (r = 0.48 and 0.36, respectively). Similar results are obtained considering either median or mean r values. The best correlations (r > 0.6) are obtained over the Iberian Peninsula, North Africa, southern Russia, and eastern Turkey. At high latitudes (northern Russia-Scandinavia), the year-to-year changes in LAI are not represented well by the two models. In these areas, the vegetation generally consists of evergreen forests presenting little seasonal and interannual variability in LAI. Moreover, up to 50 % of the remotely sensed reflectances are missing, mainly due to the snow cover, clouds, high sun and view zenith angles. Figure 11 presents the relative interannual variability of LAI, i.e. the ACV indicator defined in Sect. 2.4.2. Figure 11 shows that ACV is generally higher for ISBA-A-gs than for GEOV1, except for Scandinavia and northern Russia. Conversely, ACV is generally lower for ORCHIDEE than for GEOV1, except for croplands of Ukraine and southern Russia. In these areas the ORCHIDEE mean annual LAI is extremely variable (ACV values close to 50 % are observed), and this variability is more pronounced than in the GEOV1 observations (ACV values are generally below 25 %). The 2003 drought in western Europe The 2003 year was marked, in Europe, by two climatic events which had a significant impact on the vegetation growth. The first one was a wintertime and springtime cold wave, which affected the growth of cereal crops in Ukraine and in southern Russia (USDA, 2003;Vetter et al., 2008). 
The second one was a summertime heat wave following a long spring drought, which triggered an agricultural drought over western and central Europe Reichstein et al., 2006;Vetter et al., 2008;Vidal et al., 2010). Figure 12 shows the observed and simulated monthly Ano LAI values from May to October 2003. Negative values correspond to a LAI deficit. In May and June, the impact of the cold wave in eastern Europe is clearly visible in the GEOV1 satellite observations. In the same period, the impact of the heat wave appears in western and central regions of France. At summertime, the impact of drought on LAI spreads towards southeastern France and central Europe and tends to gradually disappear in October. The LSM LAI anomalies show patterns that match the two climatic anomalies (drought in western and east-southern Europe; cold winter and spring in northern European Russia) but tend to maintain the agricultural drought too long in comparison to GEOV1. The Ano LAI values derived from the simulations of the two models remain markedly negative in October 2003, while the observations show that a recovery of the vegetation LAI has occurred, especially in the Mediterranean Basin area. Figure 13 presents the time lag for which the best correlation between SSMd and DLAI is obtained (see Sect. 2.4.4), for the second 10-day period of May, June and July. For a large proportion of the cropland area (75 %, 92 %, 94 % in May, June, July, respectively) significant correlations (p value < 0.01) are obtained with the model. A much lower proportion is obtained with the satellite data (1 %, 5 %, 14 %, respectively). For the three months, the average time lag of the model ranges between 16 and 20 days, and the average time lag of satellite-derived products ranges between 18 and 34 days. In April (not shown) nearly no correlation is found with the satellite data, while 45 % of the cropland area presents significant correlation for the model, with an average time lag of 34 days. Representation of soil moisture In the two LSMs considered in this study, soil moisture impacts the LAI seasonality and interannual variability. The interannual variability of the simulated LAI is often driven by changes in the soil moisture availability, which for the soil models of the versions of ORCHIDEE and ISBA-A-gs used in this study results from rather simple parameterizations. In particular, the ability of distinct root layers to take up water and to interact with a detailed soil moisture profile is not represented. Therefore, while the difficulty in representing the modelled LAI interannual variability, as illustrated in Sects. 3.3 and 3.4, can be partly explained by shortcomings in the phenology and leaf biomass parameterizations, another factor is the inadequate simulation of root-zone soil moisture. For example, the difficulty in simulating the vegetation recovery in the Mediterranean Basin in October 2003 (Fig. 12) can be explained by shortcomings in the representation of the soil moisture profile and by the fact that Mediterranean vegetation is rather well adapted to drought with mechanisms of "emergency" stomatal closure (Reichstein et al., 2003) that prevent leaf damage and cavitation. In addition, many European tree and shrub species have deep roots and can access ground water to alleviate drought stress. The soil hydrology component of the ISBA-A-gs simulations performed in this study is based on the force-restore model. The root zone is described as a single thick soil layer with a uniform root profile. 
After the drought, this moisture reservoir is empty, and the first precipitation events have little impact on the bulk soil moisture stress function influencing photosynthesis and plant growth. In the real world, the high root density in the top soil layer permits a more rapid response of the vegetation growth to rainfall events. The implementation of a soil multilayer diffusion scheme in ISBA-A-gs (Boone et al., 2000; Decharme et al., 2011) is expected to improve the simulation of vegetation regrowth. Similar developments are performed in the ORCHIDEE model following de Polcher (1998) and d'Orgeval et al. (2008). Moreover, LSM simulations are affected by large uncertainties in the maximum available water capacity (MaxAWC). The MaxAWC value depends on both soil (e.g. soil density, soil depth) and vegetation (e.g. rooting depth, shape of the root profile, capacity to extract water from the soil in dry conditions) characteristics. It was shown over France that MaxAWC drives to a large extent the interannual variability of the cereal and forage biomass production simulated by ISBA-A-gs and that agricultural yield statistics can be used to retrieve these MaxAWC values. It is likely that the correlation maps of Fig. 10 could be improved by adjusting MaxAWC. In ISBA-A-gs, LAImax is a prognostic quantity related to the annual biomass production, especially for crops. Therefore, LAImax values derived from the GEOV1 LAI data could be used to retrieve MaxAWC, or at least to better constrain this parameter together with additional soil characteristic information and a better soil model.
Figure 13. Predictability of LAI 10-daily differences from SSM over croplands from May to July, based on detrended (top) ISBA-A-gs simulations and (bottom) satellite-derived products (GEOV1 LAI and ESA-CCI SSM). The colour dots correspond to the four time lags providing the highest squared correlation coefficient (r2) for the predicted LAI anomaly over the 1991-2008 period. The results are given for the second 10-day period of each month at grid cells presenting significant LAI anomaly estimates (p value < 0.01).
Representation of LAI Apart from indirectly adjusting MaxAWC (see above), the GEOV1 LAI could help improve the phenology of the two models. In ISBA-A-gs, the LAImin parameter could easily be adapted to better match the observations before the leaf onset. In particular, LAImin is mostly underestimated over grasslands (not shown). Improving the whole plant growth cycle is not easy, as the ISBA-A-gs phenology is driven by photosynthesis and, therefore, depends on all the factors impacting photosynthesis, including the absorption of solar radiation by the vegetation canopy. For example, preliminary tests using a new short-wave radiative transfer scheme within the vegetation canopy indicate that this new parameterization tends to slightly reduce the LGP value (results not shown). Regarding ORCHIDEE, this study revealed a number of shortcomings in the phenology parameterization. The LGP values were generally overestimated (Fig. 7) and the senescence model for grasses was deficient at northern latitudes, with a much too long growing season ending at the beginning of the following year (Fig. 9). A new version is being developed, in which the phenological parameters are optimized using both in situ and satellite observations. The in situ data are derived from the FLUXNET database (Baldocchi et al., 2008). For boreal and temperate PFTs, the leaf life span parameter is systematically reduced, leading to a shorter LGP (see e.g.
Kuppel et al., 2012). A new phenological model for crop senescence involving a GDD threshold, described in Bondeau et al. (2007) and evaluated in Maignan et al. (2011), results in much shorter LGP values for crops. Finally, a temperature threshold is activated in order to improve the simulation of the senescence of grasslands. Can LAI anomalies be anticipated using SSM? The biomass accumulated at a given date is the result of past carbon uptake through photosynthesis, and in waterlimited regions it depends on past soil moisture conditions. For example, using the ISBA-A-gs model over the Puy-de-Dôme area in the centre of France, found a very good squared correlation coefficient values (r 2 = 0.64) between the simulated root-zone soil moisture in May (July) and the simulated annual cereal (managed grassland) biomass production. To some extent, SSM can be used as a proxy for soil moisture available for plant transpiration and LAI can be used as a proxy for biomass. In water-limited areas, the annual biomass production of rainfed crops and natural vegetation depends on soil moisture (among other factors) at critical periods on the year. The differences in predictability of LAI shown in Fig. 13 may be due to shortcomings in both observations and simulations. Significant correlations with the satellite data are only observed in homogeneous cropland plains, such as in southern Russia, especially in July. The accuracy of satellitederived LAI and SSM products is affected by heterogeneities and by topography. This may explain why the synergy between the two variables only appears in rather uniform landscapes, while the modelled variables are more easily comparable in various conditions. The ISBA-A-gs simulations present weaknesses related to the representation of the soil moisture profile (Sect. 4.1). In particular, the force-restore representation of SSM tends to enhance the coupling between SSM and the root-zone soil moisture (and hence to LAI through the plant water stress). Parrens et al. (2014) showed that the decoupling between the surface soil layers and the deepest layers in dry conditions can be simulated by using a multilayer soil model. Apart from these uncertainties, the main reason for the differences in predictability of LAI is probably that the satellite-derived LAI and SSM are completely independent while deterministic interactions between the two variables are simulated by the model. From benchmarking to data assimilation The direct validation of long time series of satellite-derived ECV products is not easy, as in situ observations are limited in space and time . Therefore, indirect validation based on the comparison with independent products (e.g. products derived from model simulations) has a key role to play (Albergel et al., 2013a). In this study, the new ESA-CCI SSM product and the new GEOV1 LAI product were compared with LSM simulations. Hindcast simulations can be used to validate satellite-derived ECV products (Sect. 3.1) and conversely, the latter can be used to detect problems in the models (Sect. 4.2). The results presented in Sect. 3.1 suggest that SSM simulations could be used to improve the blending of the active and passive microwave products. The most advanced indirect validation technique consists in integrating the products into a LSM using a data assimilation scheme. The obtained reanalysis accounts for the synergies of the various upstream products and provides statistics which can be used to monitor the quality of the assimilated observations. Barbu et al. 
(2011Barbu et al. ( , 2014 have developed a Land Data Assimilation System over France (LDAS-France) using the multi-patch ISBA-A-gs LSM and a simplified extended Kalman filter. The LDAS-France assimilates GEOV1 data together with ASCAT SSM estimates and accounts for the synergies of the two upstream products. While the main objective of LDAS-France is to reduce the model uncertainties, the obtained reanalysis provides statistics which can be used to monitor the quality of the assimilated observations. The long-term LDAS statistics can be analysed in order to detect possible drifts in the quality of the products: innovations (observations vs. model forecast), residuals (observations vs. analysis) and increments (analysis vs. model forecast). This use of data assimilation techniques is facilitated by the flexibility of the vegetation-growth model of ISBA-A-gs, which is entirely photosynthesis driven. In contrast to ISBA-A-gs, ORCHIDEE uses phenological models for leaf onset and leaf offset and the LAI cannot be easily updated with observations. Instead, a Carbon Cycle Data Assimilation System (CCDAS) can be used to retrieve model parameters (Kaminski et al., 2012;Kato et al., 2013). Using this technique, Kuppel et al. (2012) have assimilated eddy-correlation flux measurements in ORCHIDEE at 12 temperate deciduous broadleaf sites. Before the assimilation, the model systematically overestimates LGP (by up to 1 month). The model inversion produces new values of three key parameters of the phenology model and shorter LGP values are obtained. Conclusions For the first time, the variability in time and space of LAI and SSM derived from new harmonized satellite-derived products (GEOV1 and ESA-CCI soil moisture, respectively) was analysed over the Euro-Mediterranean area for a 18-year period (1991-2008), using detrended time series. The explicit simulation of SSM by the ISBA-A-gs LSM permitted evaluating the seasonal and the day-to-day variability of the ESA-CCI SSM. The comparison generally showed a good agreement between the observed and the simulated SSM, and highlighted the regions where the ESA-CCI product could be improved by revising the procedure for blending the active and passive microwave products. ORCHIDEE and ISBA-Ags were used to assess the seasonal and interannual vegetation phenology derived from GEOV1. It appeared that the GEOV1 LAI product is not affected much by saturation and was able to generate a realistic phenology. It was shown that GEOV1 can be used to detect shortcomings in the LSMs. In general, the ISBA-A-gs LAI agreed better with GEOV1 than the ORCHIDEE LAI, for a number of metrics considered in this study: LGP, 10-daily Ano LAI , ACV. In contrast to OR-CHIDEE, the ISBA-A-gs plant phenology is entirely driven by photosynthesis and no degree-day phenology model is used. The advantage is that all the atmospheric variables influence LAI through photosynthesis. Also, the regional differences between ISBA-A-gs and the GEOV1 LAI can be handled through sequential data assimilation techniques able to integrate satellite-derived products into LSM simulations . As shown in the latter study, though the main purpose of data assimilation is to improve the model simulations, the difference between the simulated and the observed LAI and SSM can be used as a metric to monitor the quality of the observed time series. On the other hand, ISBA-A-gs is very sensitive to errors in the atmospheric variables, and bias-corrected atmospheric variables must be used (Szczypta et al., 2011). 
Finally, the use of SSM to predict LAI 10 to 30 days ahead was evaluated over cropland areas. Under certain conditions, the harmonized LAI and SSM observations used in this study give consistent results, and SSM anomalies can be used to some extent to predict LAI anomalies over uniform cropland regions. The combined use of satellite-derived products and models could help improve the characterization of agricultural droughts.
10,054.8
2013-11-06T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Adoption of computerized tomography perfusion imaging in the diagnosis of acute cerebral infarct under optimized deconvolution algorithm Objectives: To explore the significance of the hemodynamic parameters of Computerized Tomography Perfusion Imaging (CTPI) under the deconvolution optimization algorithm for the diagnosis and treatment of patients with acute cerebral infarct (ACI). Methods: A hundred and ten patients with ACI from December 2018 to September 2019 were selected for research, and CTPI was performed before and after Edaravone injection treatment. Then, the CTPI deconvolution algorithm based on the weighted adaptive (WA) total variation (TV) (WA-TV) optimization was constructed, which was compared with tensor total variation (TTV) and Motion-adaptive sparse parity (MASP). Brain Perfusion 4.0 was applied to obtain the relative time to peak (rTTP), the relative transit time of mean (rMTT), relative cerebral blood volume (rCBV), and relative cerebral blood flow (rCBF) of the core infarction area (CIA) and penumbra ischemic (PI). Results: In four parameters of rTTP, rMTT, rCBV, and CBF, the peak signal to noise ratio (PSNR) of the WA-TV algorithm was higher than the MSAP and TTV algorithms, while the Mean Square Error (MSE) and Mean Absolute Error (MAE) were lower than MSAP and TTV algorithms (P<0.05); the parameters of rCBV (71.56±9.87), rCBF (43.17±7.06) of the CIA before treatment were higher than PI (23.66±7.22; 18.37±3.99), rMTT (124.83±9.73) and rTTP (122.57±7.41) were lower than the PI (183.17±10.16); 150.74±9.74) (P<0.05). After treatment, the rCBV and rCBF of PI were higher than before treatment, and rMTT and rTTP were lower than before treatment (P<0.05), and there was no obvious difference in rCBV, rCBF, rMTT, and rTTP before and after treatment in the CIA (P>0.05). Conclusion: Compared with TTV and MASP, the WA-TV algorithm performs better in noise reduction and artifact reduction. The CTPI parameters of rCBV, rCBF, rMTT, and rTTP are all important indications for the diagnosis of PI and ACI. INTRODUCTION recombinant tissue-type plasminogen activator to treat patients with intravenous thrombolysis. Medications include mannitol, heparin, and Edaravone. [3][4][5] CT can check the lesions of patients with ACI, with low cost, fast examination, and non-invasiveness. CTPI can check the brain with contrast agent injection, [4][5][6][7] providing help for the clinical development of personalized thrombolytic therapy. Deconvolution can calculate various myocardial perfusion hemodynamic parameters, including myocardial blood flow and myocardial blood volume, to assess myocardial ischemia. [8][9][10] Hence, the CTPI deconvolution algorithm based on the WA-TV optimization was constructed to compare with the TTV and MASP algorithms. CTPI was performed before and after the Edaravone injection treatment on 110 cases with ACI, and the clinical value of CTPI was evaluated through comparison with the parameters of rTTP, rMTT, rCBV, and rCBF of the CIA and the PI. METHODS One hundred ten patients with ACI admitted to the hospital from December 2018 to September 2019 were selected and performed CTPI before and after Edaravone injection treatment. The study was approved by the medical ethics committee of the hospital, and the patients and their families understood the study and signed an informed consent form. A 128-slice spiral CT scanner of General Electric (GE) can examine patients. The contrast agent was Iopromide Injection 300 (Iodine Concentration 300mg/mL) of Bayer Pharmaceuticals. 
Here, a high-pressure syringe was adopted to inject patients with 50 mL of Iopromide into the elbow vein at a rate of 5 mL/s, and the patients are scanned from head to foot. A total of 20 cycles are scanned to obtain a time-dose-curve (TDC). The scanning parameters were tube voltage 125kv, tube current 110mA, layer thickness 6mm, matrix 512×512, and field of view 240×240mm. After the CT scan, the perfusion image was sent to the Brain Perfusion 4.0 perfusion analysis software for the perfusion imaging parameters, namely, CBV, CBF, mean transit time (MTT), and time to peak (TTP). Then, according to the size of the CIA and the PI zone, the relative ratio of the affected side and the uninfected side of the abnormal perfusion area was obtained by mirror line symmetry, including rTTP, rMTT, rCBV, and rCBF. Construction of deconvolution algorithm based on WA-TV optimization. First, the tracer dilution theory was introduced to construct a myocardial CT perfusion convolution model, that is, the arterial input function was used to convolve the blood flow scale function to obtain the tissue contrast agent concentration, the arterial input function was set as B, and the blood flow scaling function was set as G. Then, the convolutional discrete matrix of the region of interest of the entire organization can be expressed as follows. C BG = (1) In which, C was the concentration of tissue contrast agent, , N was the number of voxels in the region of interest. Then, the regularization method can obtain a stable and accurate blood flow scale function, as shown in equation (2). (2) Equation (2) was the myocardial CT perfusion convolution model, represented the regularization term, and � G represented the blood flow scaling function. Using regularization to reduce the noise of CT perfusion images was a commonly used method, but the traditional TV has obvious shortcomings, which can cause image texture results to be lost due to step artifacts. To solve this problem, the WA-TV regularization method was introduced, and it performed adaptive adjustments according to the local image intensity to ensure the completeness of the image edge detail information, which was set as WA-TV. The objective function was as follows. represented the prior term of WA-TV regularization, which can be defined as follows. δ represented the scale factor for adjusting the iteration intensity, which was sensitive to changes in local voxel intensity. Then, the Iterative Shrinkage-Thresholding Method was introduced to optimize the WA-TV regularization prior terms, which can be expressed as follows. represented the average value of ture X , and Q represented the number of pixels. Statistical Analysis: SPSS19.0, mean ± standard deviation (x± s), and the percentage (%) were adopted to process the count data. The comparison of PSNR, MSE, and MAE of WA-TV, MSAP, and TTV algorithms adopted t-test. The relative values of parameters in the PI and CIA before and after treatment were compared through variance. With P<0.05, the difference was statistically significant. RESULTS Table-I showed that the proportion of male patients was slightly higher than that of females; patients older than 50 years was the highest; patients with body mass index of 27-29.9 was the highest; and patients with diabetes, hypertension, and hyperlipidemia were higher than those without them, and the proportions of patients with or without a smoking history didn't differ much, neither did patients with or without coronary heart disease. 
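The relative (affected-to-contralateral) parameters described above can be obtained by mirroring the lesion region of interest across the brain midline and taking the ratio of the mean parameter values. The sketch below is an illustrative assumption rather than the authors' implementation: it assumes the midline coincides with the vertical axis of the parameter map, that a binary lesion mask is available, and the function names are invented.

import numpy as np

def relative_parameter(param_map, lesion_mask):
    """Ratio of the mean perfusion parameter (CBV, CBF, MTT, or TTP) in the
    lesion ROI to its mean in the mirror ROI on the unaffected side.
    The mirror ROI is obtained by flipping the lesion mask left-right,
    i.e. assuming the brain midline is the vertical axis of the image."""
    param_map = np.asarray(param_map, dtype=float)
    lesion_mask = np.asarray(lesion_mask, dtype=bool)
    mirror_mask = np.fliplr(lesion_mask)
    return param_map[lesion_mask].mean() / param_map[mirror_mask].mean()

# Example (hypothetical arrays): rCBV for a core infarction ROI
# r_cbv = relative_parameter(cbv_map, core_mask)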
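The three figures of merit used to compare WA-TV with the TTV and MASP algorithms (PSNR, MSE, and MAE) can be computed as follows. This is a generic sketch based on the standard definitions of these metrics, not on the authors' implementation; in particular, the choice of the peak value used in the PSNR (here the maximum of the reference map) is an assumption.

import numpy as np

def mse(reference, estimate):
    """Mean squared error between a reference parameter map and an estimate."""
    reference, estimate = np.asarray(reference, float), np.asarray(estimate, float)
    return np.mean((reference - estimate) ** 2)

def mae(reference, estimate):
    """Mean absolute error between a reference parameter map and an estimate."""
    return np.mean(np.abs(np.asarray(reference, float) - np.asarray(estimate, float)))

def psnr(reference, estimate, peak=None):
    """Peak signal-to-noise ratio in dB; the peak defaults to the maximum
    of the reference map."""
    err = mse(reference, estimate)
    if err == 0:
        return np.inf
    if peak is None:
        peak = np.max(reference)
    return 10.0 * np.log10(peak ** 2 / err)

A larger PSNR (and smaller MSE and MAE) for one reconstruction indicates a result closer to the reference map, which is the sense in which the algorithms are ranked in the Results section.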
Performance of the three algorithms under different CT perfusion parameters. In Fig. 1 & 2, for the parameters CBV, CBF, MTT, and TTP, the PSNR of the WA-TV algorithm was higher than that of the MSAP and TTV algorithms (P<0.05); the MSE and MAE of the WA-TV algorithm were lower than those of the MSAP and TTV algorithms (P<0.05). Comparison of relative values of PI and CIA parameters before and after treatment. As shown in Fig. 3 & 4, the rCBV and rCBF in PI after treatment were higher than before treatment, and rMTT and rTTP were lower than before treatment (P<0.05); the differences in rCBV, rCBF, rMTT, and rTTP before and after treatment in the CIA were not obvious (P>0.05). DISCUSSION The results showed that the WA-TV algorithm proposed in this study had a higher PSNR than the TTV and MASP algorithms after the CTPI images were processed. The PSNR index is a quantitative index used to evaluate the effect of image reconstruction. The larger the PSNR value, the closer the resolution of the reconstructed image to the original image. [11][12][13] Therefore, it suggested that the resolution of the CTPI image reconstructed by the WA-TV algorithm proposed in this study was higher, and the reconstruction result was significantly better than that of the TTV and MASP algorithms. Subsequently, the CTPI images processed by the WA-TV algorithm were used to compare and analyze the changes in the brain tissue of patients with ACI before and after treatment. The indicators used to evaluate the treatment effect included CBV, CBF, MTT, and TTP. CBV refers to the total amount of blood contained in the cerebral blood vessels (including cerebral arteries, arterioles, and capillaries) of a given brain region, which can reflect the damage of brain tissue. 14-15 CBF refers to the volume of blood passing through a given cross-section of the cerebral vasculature per unit time, and it has important guiding value for the clinical treatment of patients with acute cerebral infarction. 16,17 MTT means the average time for blood to pass through the vasculature of a specific brain area. 18,19 TTP refers to the time required for the contrast agent to reach its maximum concentration in the brain tissue. 20 The results of this study also revealed that, in the ischemic penumbra, rCBV and rCBF were much higher after treatment than before, while rMTT and rTTP were greatly lower. This showed that treatment can dramatically improve the perfusion state of ischemic brain tissue in patients with ACI, and reduce the damage to the brain tissue and nerve function of patients. CONCLUSION Here, the CTPI deconvolution algorithm was constructed based on the WA-TV optimization, its performance was compared with TTV and MASP, and CTPI was used to scan patients with ACI. Consequently, compared with TTV and MASP, the WA-TV algorithm achieves better noise reduction and artifact reduction. The CTPI parameters rCBV, rCBF, rMTT, and rTTP are all important indications for the diagnosis of PI and ACI.
2,205
2021-08-04T00:00:00.000
[ "Medicine", "Computer Science" ]
Affecting the Effectors: Regulation of Legionella pneumophila Effector Function by Metaeffectors Many bacterial pathogens utilize translocated virulence factors called effectors to successfully infect their host. Within the host cell, effector proteins facilitate pathogen replication through subversion of host cell targets and processes. Legionella pneumophila is a Gram-negative intracellular bacterial pathogen that relies on hundreds of translocated effectors to replicate within host phagocytes. Within this large arsenal of translocated effectors is a unique subset of effectors called metaeffectors, which target and regulate other effectors. At least one dozen metaeffectors are encoded by L. pneumophila; however, mechanisms by which they promote virulence are largely unknown. This review details current knowledge of L pneumophila metaeffector function, challenges associated with their identification, and potential avenues to reveal the contribution of metaeffectors to bacterial pathogenesis. Introduction Bacterial pathogens use a myriad of virulence strategies to parasitize eukaryotic hosts. A well-established virulence strategy is use of macromolecular secretion systems to translocate bacterial protein virulence factors, termed effector proteins, directly into infected host cells [1]. Legionella pneumophila is a natural intracellular pathogen of freshwater amoebae and the etiological agent of Legionnaires' Disease, a severe inflammatory pneumonia resulting from bacterial replication within alveolar macrophages. To replicate intracellularly, L. pneumophila employs a type IVB secretion system called Dot/Icm to translocate a massive arsenal of over 300 individual effector proteins into the host [2,3]. Collectively, L. pneumophila effectors facilitate biogenesis of the Legionella-containing vacuole (LCV), an endoplasmic reticulum-derived compartment that evades lysosomal fusion and serves as L. pneumophila's intracellular replicative niche. The status quo pertaining to bacterial effectors is that they specifically target host proteins and pathways. However, L. pneumophila encodes a family of effectors, termed metaeffectors, which function as "effectors of effectors" through targeting and regulating the function of other effectors. Metaeffectors contribute to L. pneumophila virulence and provide an additional mechanism by which bacteria regulate effector functions within host cells. Here, we discuss current knowledge pertaining to L. pneumophila metaeffectors and conclude with the importance of future investigation into these important virulence factors within both the Legionella genus and other bacterial pathogens. Identification and Function of L. pneumophila Metaeffectors The term "metaeffector" was coined a decade ago when Kubori and colleagues discovered that the effector LubX spatiotemporally regulates the effector SidH within L. pneumophila-infected host cells [4]. LubX contains two regions with similarity to eukaryotic U-box domains, and functions as E3 ubiquitin ligase within eukaryotic cells [5]. In conjunction with UbcH5a or UbcH5c E2 enzymes, LubX polyubiquitinates the host kinase, Clk1 [4,6]. However, LubX additionally co-opts E2 enzymes to ubiquitinylate its cognate effector, SidH, leading to its proteasomal degradation ( Figure 1) [4]. Like the majority of L. pneumophila effectors, genetic deletion of sidH has no discernable effect on intracellular replication within macrophages, and the function of SidH within host cells has yet to be elucidated [4,6]. 
However, SidH is a paralog of the L. pneumophila effector SdhA, which promotes L. pneumophila intracellular replication through maintenance of LCV integrity [4,7,8]. Thus, SidH may contribute to maintaining the integrity of the LCV during early infection. In a Drosophila melanogaster infection model, L. pneumophila ∆lubX mutants are hyper-lethal. However, loss of lubX results in decreased bacterial burden in flies compared to wild-type, ∆sidH and ∆sidH∆lubX L. pneumophila strains [4]. However, loss of lubX has no discernable effect on L. pneumophila replication within mouse bone marrow-derived macrophages, suggesting that SidH may be detrimental in the absence of LubX specifically in vivo. It would be valuable to reveal whether loss of LubX-mediated regulation of SidH is also deleterious to L. pneumophila replication in a mouse model of Legionnaires' Disease. Interestingly, LubX expression peaks when the cells are nearing the stationary phase; much later than the critical window for SidH degradation, suggesting that mediation of SidH toxicity may not be the apogee of LubX activity [4]. [4,6]. However, LubX additionally co-opts E2 enzymes to ubiquitinylate its cognate effector, SidH, leading to its proteasomal degradation ( Figure 1) [4]. Like the majority of L. pneumophila effectors, genetic deletion of sidH has no discernable effect on intracellular replication within macrophages, and the function of SidH within host cells has yet to be elucidated [4,6]. However, SidH is a paralog of the L. pneumophila effector SdhA, which promotes L. pneumophila intracellular replication through maintenance of LCV integrity [4,7,8]. Thus, SidH may contribute to maintaining the integrity of the LCV during early infection. In a Drosophila melanogaster infection model, L. pneumophila ∆lubX mutants are hyper-lethal. However, loss of lubX results in decreased bacterial burden in flies compared to wild-type, ∆sidH and ∆sidH∆lubX L. pneumophila strains [4]. However, loss of lubX has no discernable effect on L. pneumophila replication within mouse bone marrow-derived macrophages, suggesting that SidH may be detrimental in the absence of LubX specifically in vivo. It would be valuable to reveal whether loss of LubX-mediated regulation of SidH is also deleterious to L. pneumophila replication in a mouse model of Legionnaires' Disease. Interestingly, LubX expression peaks when the cells are nearing the stationary phase; much later than the critical window for SidH degradation, suggesting that mediation of SidH toxicity may not be the apogee of LubX activity [4]. Temporal regulation of effector translocation is likely important for other effectormetaeffector pairs. The metaeffector SidJ regulates the SidE family of effectors (SidE/ SdeABC) to facilitate biogenesis of the LCV (Figure 1) [9,10]. SidJ is one of very few effectors individually important for L. pneumophila intracellular replication [11]. While expression and translocation of the SidE effectors peaks during early infection, SidJ translocation increases gradually over the course of infection [10,11]. The SidE effectors are mono-ADP-ribosyltransferases that ligate ubiquitin to Rab GTPases independently of E1 and E2 enzymes [12][13][14]. SidJ is a calmodulin-dependent glutamylase that spatiotemporally regulates the SidE effectors by breaking phosphodiester bonds between ubiquitin-and SidE-modified substrates [14]. SidJ is a calmodulin-dependent glutamylase that temporally regulates the function of the SidE effectors [14][15][16][17]. 
SidJ polyglutamylates Glu860 of the SidE family effector SdeA, leading to its inactivation. In the absence of SidJ, SdeA fails to depart from the LCV surface, but robustly ubiquitinates several Rab and Rag GTPases ( Figure 1) [10,14]. While SidE effectors are important at early stages of infection, their prolonged activity is deleterious to L. pneumophila. Delayed translocation of SidJ relative to the SidE family enables precise temporal regulation of SidE effector function [10]. Timing of SidJ translocation is facilitated by an internal secretion signal, present in addition to its canonical C-terminal secretion signal [10]. Deletion of SidJ's internal secretion signal impairs L. pneumophila intracellular replication to the same extent as a loss-of-function mutation in sidJ [10]. The importance of temporal regulation of the SidE family of effectors by SidJ within host cells demonstrates the critical role of metaeffectors in the establishment of L. pneumophila's intracellular replicative niche. The metaeffector MesI (Lpg2505) was identified following high-throughput forward genetic screening for effector virulence phenotypes using transposon insertion sequencing (INSeq) [18]. L. pneumophila defective in mesI have a severe intracellular growth defect in both a natural amoebal host and mouse models of infection [18]. However, the virulence defect-associated absence of mesI is due solely to the activity of its cognate effector, SidI, since loss-of-function mutation in sidI rescues the growth defect of the ∆mesI mutant [18]. SidI is a cytotoxic effector that inhibits eukaryotic protein translation in vitro and contributes to activation of the heat shock response in L. pneumophila-infected cells [19]. We recently discovered that SidI possesses GDP-mannose-dependent glycosyl hydrolase activity and likely functions as a mannosyltransferase [20]. MesI is sufficient to abrogate both SidI-mediated toxicity and protein translation inhibition [18,20]. MesI binds SidI with nanomolar affinity and the interaction is characterized by a long half-life. MesI binds SidI on both N-and C-termini and does not impair interaction between SidI and its established binding partner, eEF1A ( Figure 1) [20,21]. Despite almost complete abrogation of SidI-mediated translation inhibition, MesI only mildly attenuates SidI glycosyl hydrolase activity, suggesting that MesI does not function to inhibit SidI activity [20]. Although the regions of MesI important for binding the termini of SidI have yet to be defined, the crystal structure of MesI revealed a tetratricopeptide repeat (TPR) segment in MesI's 6/7, 8/9, and 10/11 alpha-helices that form grooves predicted to play a role in SidI binding [21]. Whether the terminal regions of SidI bind to MesI through a large unilateral interface, or if multiple separate interaction sites exist on MesI is unknown. Whether MesI participates in SidI-mediated activation of the heat shock response is also unknown (Figure 1). Urbanus and colleagues recently executed the most comprehensive effector toxicity suppression screen to date, resulting in the discovery of 17 effector-suppression pairs, including nine putative metaeffectors [22]. The researchers used a high-throughput yeast toxicity assay to screen over 108,000 pairwise effector-effector genetic interactions [22]. This study revealed the plasticity of metaeffector activity. In some cases, metaeffectors directly inactivate their cognate effector. 
For example, LegL1 deactivates its cognate effector through steric hindrance of its active site. Other metaeffectors, such as LupA and LubX, enzymatically modify their cognate effectors LegC3 and SidH (see above), respectively ( Figure 1) [22]. LupA is a eukaryotic-like ubiquitin protease that catalyzes removal ubiquitin from LegC3 [22]. LegC3 is one of three L. pneumophila effectors that mimic eukaryotic Q-SNAREs to recruit vesicles coated with VAMP4 to the LCV [23,24]. How ubiquitiniylation influences LegC3 activity and the contribution of its regulation by its metaeffector LupA are both unknown. This screen not only identified novel metaeffector pairs, but also unveiled diversity in effector function and regulation. SidP is a phosphatidylinositol-3-phosphate (PI3P) phosphatase [25]. However, SidP's PI3P phosphatase activity is dispensable for binding and suppressing the toxicity of its cognate effector MavQ. The phosphatase activity of SidP resides within its N-terminal domain, and the C-terminal domain alone is sufficient for binding and regulation of MavQ. MavQ is a predicted phosphoinositide (PI) kinase, and together with SidP, likely regulates PI metabolism within host cells. Interestingly, SidP is toxic to yeast when expressed together with the effector Lem14; however, the role of Lem14 in SidP metaeffector activity and PIP metabolism has not been fully elucidated [22]. The putative role of MavQ as a PIP kinase, and the synergistic effects of SidP and Lem14 reveal a complex picture of effector regulation of host PIPs (Figure 1) [22]. LegA11 is a metaeffector of unknown function that binds and suppresses the toxicity of SidL. The N-terminal region of LegA11 contains ankyrin-repeats (PDB:4ZHB), which are canonically involved in protein-protein interactions [26,27]. Like SidI, SidL inhibits eukaryotic protein translation; however, SidL also inhibits actin polymerization when ectopically expressed in eukaryotic cells [28,29]. Aberrant organization of the actin cytoskeleton attenuates protein translation [30], but whether SidL-mediated translation inhibition is a consequence of impaired actin polymerization is unknown (Figure 1) [29,31]. The role of LegA11 in regulation of SidL function is unknown. Elucidating the mechanism by which LegA11 regulates SidL will likely shed light on SidL's function and the importance of its spatiotemporal regulation. The effector deamidases MavC and MvcA are both regulated by a single metaeffector, Lpg2149 (Figure 1). MavC and MvcA are functional antagonists that temporally regulate the activity of the host E2 enzyme, Ube2N. MavC catalyzes E1-independent monoubiquitination and inhibition of Ube2N [32]. However, prolonged inhibition of Ube2N is detrimental to L. pneumophila and is reversed through MvcA deubiquitination ( Figure 1) [33]. Lpg2149 binds and inhibits the deamidase activity of both MavC and MvcA; however, the biological significance of this inhibition and influence on temporal regulation of Ube2N ubiquitination are unknown. Further investigation is required to uncover the role of Lpg2149 in L. pneumophila virulence. Collectively, these studies underlie the importance of metaeffectors in spatiotemporal regulation of L. pneumophila effector function. What Makes a Metaeffector? Classification of an effector as a metaeffector is based on two criteria, (1) binding; and (2) regulation of a cognate effector(s). Several metaeffectors, including LubX and SidJ, coopt host proteins to regulate their cognate effectors. 
Moreover, LubX does not exclusively catalyze ubiquitination of SidH (see above), demonstrating the functional versatility of metaeffectors. Other metaeffectors, such as MesI, are able to regulate their cognate effectors in the absence of host components, but this does not preclude the involvement of host factors. A defining feature of metaeffectors is direct interaction with cognate effector proteins. However, other characteristics are shared amongst effector-metaeffector pairs. Structure In general, metaeffectors are smaller than their cognate effectors. This is a trend and not a rule, as several metaeffectors such as LegL1, LupA, and SidP are comparable in size to their targets (Table 1). It is also not uncommon for metaeffectors to contain interaction domains, such as the tetratricopeptide repeats (TPR) of MesI, ankyrin repeats of LegA11, or the leucine-rich repeats (LRR) of LegL1 [21,27]. These interaction domains are likely important for the interaction of effectors with their cognate effector. For example, the LRR of LegL1 forms a canonical horseshoe shape over RavJ's active site, causing steric hindrance [22]. The ankyrin repeats in LegA11 likely facilitates protein-protein interactions (see above) [26,27]. Thus, several metaeffectors possess canonical proteinprotein interaction motifs that are likely used to bind their cognate effector(s). Putative PIP Kinase 871 [23] a Protein size shown as number of amino acid residues; b Predicted activity determined using HHPred [34]. Proximity Metaeffectors are typically encoded in close proximity to their cognate effector within the genome [22]. However, some exceptions exist, since mavQ is not encoded in the vicinity of either sidP or lem14 [22]. Genomic analysis of 38 Legionella species revealed 143 effector pairs encoded in close proximity in at least two Legionella genomes. Nineteen of these effector pairs-including SidL-LegA11 and SidI-MesI-appear to have co-evolved; however, this number may be higher, as it only captures pairs found in multiple species and does not consider those unique to a single species [35]. Some effector pairs, such as SidL and LegA11, are always found in conjunction, while others, such as SidI and MesI, occasionally occur in solidarity [35]. sidL and legA11 represent the most highly co-evolved effector pair in the Legionella genus [35]. Relatively little is known about transcriptional regulation of effector-metaeffector gene expression. Interestingly, legA11 and sidL are encoded adjacent to each other, but on different strands of the chromosome and initiate in opposite directions. Elucidating the timing and quantity of effector and metaeffector gene expression can provide additional spatiotemporal insights into mechanisms of metaeffector-mediated regulation of effectors. While effector pairs are present across the Legionella genus [3], only L. pneumophila metaeffectors have been studied to date. Although all Legionella species studied to date replicate within an endoplasmic reticulum-derived LCV, whether species-specific differences affect metaeffector-effector regulation and function exist has yet to be elucidated. Concluding Remarks Although effectors are critical virulence factors for many Gram-negative bacterial pathogens, mechanisms by which effectors are regulated within host cells are poorly understood. Metaeffectors provide an additional layer of regulation and spatiotemporal fine-tuning of effector function. 
Although metaeffectors are currently unique to the Legionella genus, it is tempting to speculate that other pathogen virulence strategies involve metaeffectors. However, identification of metaeffectors is challenging, and relies on robust phenotypes resulting from effector dysregulation. Urbanus and colleagues conducted the most extensive effector-pair screen to date using a yeast expression model. However, other metaeffector-effector pairs may be incognito within this unnatural expression in the absence of a toxic effector phenotype [22]. Extreme functional redundancy within L. pneumophila's effector repertoire creates challenges, as deletion of a single effector rarely leads to a discernable phenotype [6,18]. MesI and SidJ are two of less than a dozen effectors that are individually important for L. pneumophila intracellular replication. Thus, metaeffectors play a major role in the virulence strategy of L. pneumophila, which emphasizes the importance of both effector interplay and functional regulation. Metaeffectors represent a noncanonical effector regulatory system that is likely not unique to L. pneumophila. Identification of metaeffector and metaeffector-like functions has been contingent on observable phenotypes, such as toxicity or intracellular replication; however, scrutiny of genomic organization of effector genes may lead to identification of additional metaeffectors encoded by other Legionella species and other bacterial pathogens. Further investigation will undoubtedly reveal additional mechanisms of effector regulation arising from host-pathogen co-evolution, and could provide a foundation for development of anti-virulence therapeutics.
3,754.8
2021-01-22T00:00:00.000
[ "Biology", "Medicine" ]
Gene regulatory networks inference using a multi-GPU exhaustive search algorithm Background Gene regulatory networks (GRN) inference is an important bioinformatics problem in which the gene interactions need to be deduced from gene expression data, such as microarray data. Feature selection methods can be applied to this problem. A feature selection technique is composed by two parts: a search algorithm and a criterion function. Among the search algorithms already proposed, there is the exhaustive search where the best feature subset is returned, although its computational complexity is unfeasible in almost all situations. The objective of this work is the development of a low cost parallel solution based on GPU architectures for exhaustive search with a viable cost-benefit. We use CUDA™, a general purpose parallel programming platform that allows the usage of NVIDIA® GPUs to solve complex problems in an efficient way. Results We developed a parallel algorithm for GRN inference based on multiple GPU cards and obtained encouraging speedups (order of hundreds), when assuming that each target gene has two multivariate predictors. Also, experiments using single and multiple GPUs were performed, indicating that the speedup grows almost linearly with the number of GPUs. Conclusion In this work, we present a proof of principle, showing that it is possible to parallelize the exhaustive search algorithm in GPUs with encouraging results. Although our focus in this paper is on the GRN inference problem, the exhaustive search technique based on GPU developed here can be applied (with minor adaptations) to other combinatorial problems. Background The cell is a complex system where its activity is controlled by gene regulatory networks [1]. The mRNA concentration produced by each gene indirectly reflects its expression level. These concentrations can be an indication of the biological state of the cell, since they represent the proteins synthesized by ribosomes [2]. Thus, the biological processes studies can be based on the analysis of mRNA concentrations (expression levels) of the genes. DNA microarrays [3], SAGE (Serial Analysis of Gene Expression) [4] and RNA-Seq [5] are among the most common techniques to measure the expression level of thousands of genes at the same time. A vast amount of transcriptome data has been provided by these large scale techniques, whose analysis requires efficient computational tools. In this context, the inference of gene regulatory networks (GRNs) aim to obtain the interactions among genes from gene expression data. Due to its relevance, several methods for GRN inference have been proposed, including Bayesian networks based [6,7], relevance networks [8], ARA-CNE (Algorithm for the Reconstruction of Accurate Cellular NEtworks) [9] CLR (Context Likelihood of Relatedness) [10], and SFFS-MCE (Sequential Floating Forward Selection -Mean Conditional Entropy) [11][12][13]. For reviews on this topic, the reader can be referred to [14][15][16][17][18][19]. Although many GRN inference methods are available, there are still challenges to overcome, such as noisy data, computational complexity and the curse of dimensionality (number of variables much larger than the number of available samples). Solutions based on highperformance computing are interesting when the objective is to infer GRNs with thousands of genes, although traditional platforms are expensive and difficult to maintain. 
In this context, GPU (Graphics Processing Unit) for general purpose computing (GPGPU) is an emergent technology which allows to perform high-performance computing with relatively low cost [20,21]. CUDA (Compute Unified Device Architecture) is a programming platform which provides a parallel programming model allowing the NVIDIA GPU architectures to perform efficient general purpose computing. The employment of GPUs to address the GRN inference problem is very recent though. Shi et al. proposed a parallelization scheme for GRN inference based on information-theoretic framework which involves matrices multiplication, optimizing the benefit obtained by applying GPU [22]. This method results in an approximation considering only pairwise relationships between genes, without taking into account the multivariate predictiveness nature of certain predictor genes with respect to the target genes. Here we present a GPU-based parallel exhaustive search algorithm, with mean conditional entropy as criterion function, for GRNs inference with two multivariate gene predictors per target gene. The gene network inference approach of the proposed algorithm is based on probabilistic gene networks [11], which displayed interesting results in obtaining the best predictor pairs for the considered target genes, given a data set with ternary values (-1,0,+1). We obtained speedups (six-core CPU was taken as reference) of 190 for ternary data samples and 260 for binary data samples when using 4 GPUs in networks with 8192 genes and almost linear increases in the speedup versus the number of GPUs. Consequently, using our algorithm, the exaustive search of predictor genes in GRNs can be performed in a reasonable amount of time. The present paper is an extended version of the paper "Accelerating gene regulatory networks inference through GPU/CUDA programming" [23]. The main improvements found in this manuscript include: i -an improved version of the algorithm that works on multiple GPUs, instead of a single GPU; ii -a more complete description of the model used to infer gene networks from temporal gene expression data (probabilistic gene networks); iii -novel experiments considering binary and ternary genes (instead of only binary) and adopting single and multiple GPUs (one, two and four). Identifying predictors by probabilistic gene networks using mutual information Expression profiles of predictor genes display relevant informative content (individual or in conjunction with other predictors) about the expression profile of a given target gene. Feature selection methods can be employed to find the subset of genes (predictors) presenting the largest information content about the target gene values. We adopted the probabilistic genetic network (PGN) approach [11][12][13] which follows the feature selection principle: for each target, a search for the subset of predictors that best describes the target behavior according to their expression signals is performed. Barrera et al. discusses this approach in the context of the analysis of dynamical expression signals of the Plasmodium falciparum (one of the agents of the malaria disease), providing interesting biological results [11]. This approach assumes that the temporal samples follow a first order Markov chain in which each target gene value in a given instant of time depends only on its predictor values at the previous instant of time. 
The transition function is homogeneous (it is the same for every time step), almost deterministic (from any given state, there is one preferential state to go to at the next time step) and conditionally independent. Lopes et al. [13] provide a comparative study involving this approach (using the Sequential Floating Forward Search as search algorithm) and methods like MRNET [8], ARACNE [9] and CLR [10]. This approach showed superior performance for retrieving multivariate predictors. The mean conditional entropy (MCE), indicating the average information content of the target gene given its predictors, was adopted as fitness function. Mutual information is a measure of dependence between variables that has been employed in many research fields such as image processing [24,25], physics [26] and bioinformatics [27,28]. The main advantage of mutual information compared to other similarity measures such as Pearson correlation is the capability to capture non-linear relationships between variables [28]. The exhaustive search that looks for all possible pairs of candidate predictors for each target was considered as search method. In fact, it is the only way to guarantee optimality in feature selection due to the intrinsically multivariate prediction, which may be present in biological systems [29]. Such phenomenon is related to the nesting effect that occurs when a greedy feature selection algorithm or other sub-optimal heuristics are applied. Once the exhaustive search is applied for all genes considered as target, the network is obtained. Search algorithm Given a set G of genes, the search algorithm identifies, for each target gene y ∈ G, the best subset X ⊆ G that predicts y according to a criterion function. The following algorithm performs an exhaustive search in order to identify the pairs (X, y): Algorithm 1 : ExhaustiveSearch 1: for each target gene y ∈ G do 2: for each predictor gene subset X ⊆ G do 3: compute the criterion function H of the prediction of y by X 4: end for 5: end for Criterion function The mean conditional entropy (MCE) was adopted as criterion function. The Shannon entropy [30] of a variable Y is defined as H(Y) = − Σ_y P(Y = y) log P(Y = y), where P(Y = y) is the probability of the variable Y being equal to y. The conditional entropy of Y given X = x is H(Y | X = x) = − Σ_y P(Y = y | X = x) log P(Y = y | X = x), where X is a feature vector and P(Y = y | X = x) is the conditional probability of Y being equal to y given the observation of an instance x ∈ X. The MCE is then the average of H(Y | X = x) over the observed instances x, weighted by P(X = x). GPU architecture and CUDA GPUs (Graphics Processing Units) are programmable graphics processors which, in combination with CPUs, can be used as a general purpose programming platform. They are optimized to perform vector operations and floating-point arithmetic, executing in SIMD (Single Instruction, Multiple Data) mode [31]. Each GPU has a set of Streaming Multiprocessors (SMs), each constituted by an array of processor cores, which are the logical-arithmetic units of the GPU, as shown in Figure 1(a). Each SM has a large number of registers, a small control unit and a small amount of shared memory, accessible from the threads executing on the SM. Graphical devices normally have a large amount of global memory, which is shared among the SMs. The latency for accessing this memory is high and, consequently, the shared memory is normally used as a user-controlled cache. CUDA (Compute Unified Device Architecture) is a platform that provides an extension to the C language that enables the usage of GPUs as a general purpose computing device. A compiler generates executable code for the GPU device from the provided CUDA code.
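To make the criterion function and the |X| = 2 pair search concrete, the sketch below computes the mean conditional entropy of a target gene given a pair of predictors from a ternary expression matrix and runs the brute-force search of Algorithm 1 on a CPU. It is a minimal single-threaded illustration of the quantities just defined, not the authors' CUDA implementation; the array layout and function names are our own.

```python
import numpy as np
from itertools import combinations

def mean_conditional_entropy(target, predictors):
    """H(Y | X): entropy of the target averaged over the observed joint
    states of the predictor tuple, estimated from sample counts."""
    s = len(target)
    states = {}
    for i in range(s):
        # group samples by the observed predictor state x
        states.setdefault(tuple(predictors[:, i]), []).append(target[i])
    mce = 0.0
    for x, ys in states.items():
        p_x = len(ys) / s                       # P(X = x)
        _, counts = np.unique(ys, return_counts=True)
        p_y = counts / counts.sum()             # P(Y = y | X = x)
        h_y_given_x = -np.sum(p_y * np.log2(p_y))
        mce += p_x * h_y_given_x                # sum over x of P(X = x) H(Y | X = x)
    return mce

def exhaustive_pair_search(expr, target_index):
    """Algorithm 1 restricted to |X| = 2: return the predictor pair that
    minimises H(target | pair)."""
    n, _ = expr.shape
    candidates = [g for g in range(n) if g != target_index]
    best = min(combinations(candidates, 2),
               key=lambda pair: mean_conditional_entropy(expr[target_index],
                                                         expr[list(pair)]))
    return best, mean_conditional_entropy(expr[target_index], expr[list(best)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expr = rng.integers(-1, 2, size=(16, 30))    # 16 genes x 30 ternary samples
    expr[0] = (expr[3] + expr[7]).clip(-1, 1)    # make gene 0 depend on genes 3 and 7
    print(exhaustive_pair_search(expr, 0))
```

Looping this search over every gene treated as a target yields the inferred network, which is exactly the computation the GPU mapping described next distributes across thread blocks.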
The programmer defines special functions called kernels, which are executed in the GPU. The user defines the number of threads to create, organizing them in thread blocks. The collection of blocks from a single kernel execution is called grid. Each thread block runs on a single SM, but multiple blocks can be assigned to the same SM in a time-shared way. The CUDA programming model is shown in Figure 1(a). A GPU/CUDA algorithm for GRNs inference The general concept of the parallel exhaustive search consists in distributing the fitness function computation along the SMs. The algorithm partitions the set of target genes T into k segments T 0 , T 1 , …, T k−1 and distributes these segments among the thread blocks. Each thread Figure 1 The CUDA platform. a) Architecture of a modern GPU containing a large global memory and a set of multiprocessors, each one with an array of floating-point processors, a small shared memory and a large number of registers. b) Hierarchical organization of CUDA threads in thread blocks and in kernel grids, where each thread block is assigned to a single multiprocessor. block is responsible for evaluating the criterion function for its assigned target genes T i for every pair of predictors from the set of genes P. Preliminary considerations and user settings Given G with n genes, the complexity of the exhaustive search is O(n |X| ) for each evaluated target, where |X| is the number of predictors. This occurs because for each target we must evaluate the entropy for every p-tuple of predictor genes. If every gene is used as target, we have a total complexity of O(n p+1 ). For larger values of |X|, this procedure becomes impractical for typical values of n (thousands in microarray experiments). In this way, the number of predictors was fixed in two (|X| = 2) to reduce the search space. From the biological point of view, this decision is reasonable since the average number of predictors in GRN is between 2 and 3 according to some previous studies [32]. Besides, in a typical microarray experiment, only dozens of samples are available, which leads to a weak statistical estimation if one considers subsets with 3 or more predictors per target [11]. Preprocessing Initially, the program reads the expression matrix from the disk and replicates it into two matrices T and P in the main memory. Matrices T and P represent the expressions of the target and candidate predictor genes, respectively. Each matrix has s lines, representing the experiment samples, and n columns, represent each gene. Figure 2(a) shows an example matrix with 4 genes and 7 samples. After loading the data into T and P into the main memory, the program allocates space and transfers the matrices to the GPU global memory. Local exhaustive search We consider that k blocks are started, denoted by Bl 0 , Bl 1 , …, Bl k−1 . The algorithm then partitions the set of target genes T into k segments T 0 , T 1 , …, T k−1 of size n/k and the set of predictor genes p into 2k segments P 0 , P 1 , …, P 2k−1 of size n/2k, as illustrated in Figure 3. Each thread block is responsible for evaluating the criterion function for its assigned target genes in T i for every pair of predictors from the set of genes P. Each thread evaluates the conditional entropies, for every pair of predictors, of a single target gene in T i . To evaluate the entropies, each thread block transfers to the shared memory of its SMs parts of tables T and P containing the set of target genes T i and two sets of predictors P j1 and P j2 . 
These data are transferred from the global memory in a coalesced way, which joins up to 32 individual memory reads into a single one, increasing the effective memory bandwidth. Algorithm 2 describes the exhaustive search procedure executed by each block. To evaluate the conditional entropies of a target gene in T_i for each pair of predictors in P_j1 and P_j2, the thread creates a table, shown in Figure 2(b). This table contains the number of times a gene in T_i assumed the values 0 or 1 for each combination of the predictor gene values, and the associated conditional entropy. The threads maintain this table in the registers of their associated SMs during the evaluation of the entropies, preventing expensive global memory accesses. Algorithm 2 : LocalExhaustiveSearch Require: segment T_i of target genes and segments P_j1 and P_j2 of predictor candidates 1: for each target t ∈ T_i do 2: for each pair (p1, p2) ∈ {P_j1 × P_j2} do 3: compute the power of (p1, p2) to predict t according to a criterion function H 4: end for 5: end for Global exhaustive search The global exhaustive search provides, for each thread block, all pairwise combinations of predictor subsets P_j1 and P_j2. With these permutations along the segments of P, each thread block can evaluate all predictor candidate pairs for every target gene in T_i, as described in Algorithm 3 and illustrated in Figure 4. Algorithm 3 : GlobalExhaustiveSearch Require: segment T_i of target genes and the predictor segments P_0, …, P_{2k−1} 1: transfer T_i to the shared memory 2: for j1 = 0 to 2k − 1 do 3: transfer P_j1 to the shared memory 4: for j2 = j1 to 2k − 1 do 5: transfer P_j2 to the shared memory 6: evaluate the entropy for every pair of predictors (p1, p2) ∈ {P_j1 × P_j2} 7: end for 8: end for As we will analyze in the next section, this algorithm reduces the number of transfers from global memory for each predictor gene from set P. Moreover, it transfers each target gene from set T only a single time, at the beginning of the algorithm. Besides reducing the number of global memory transfers for each gene, by dividing the tables into contiguous sets, we can perform coalesced transfers [31] from the global to the shared memory, further increasing the effective memory bandwidth. In this kind of transfer, up to w memory values are transferred as a single memory access. w is architecture dependent and has a value of 32 in the tested GPUs. Consequently, the algorithm works optimally for multiples of 32 genes, since the GPUs execute the threads in clusters (warps) of 32. Thus, the transfers between shared and global memories and the use of GPU cores are optimized. For different GRN sizes, dummy genes might be added to the GRN. Analysis of the algorithm Considering a single thread block, for each iteration of the outer loop of Algorithm 3, one segment of genes P_j1 is transferred from the memory. For the inner loop, there are 2k − j1 iterations for each value of j1, where on each iteration the segment P_j2 is transferred. Consequently, the number of segment transfers per thread block, considering the inner and outer loops, is Σ_{j1=0}^{2k−1} (2k − j1 + 1) = 2 + 3 + … + (2k + 1) = 2k^2 + 3k. We must add to this value the transfer of T_i at the beginning of the algorithm. Figure 4 (access rule of blocks to segments) illustrates the pattern: an arrow P_i → Bl_j indicates that block Bl_j accesses segment P_i, and arrows of the same color indicate accesses performed simultaneously. Considering that there are k thread blocks operating simultaneously and that there are n/2k genes per segment, the total number of gene transfers will be k × (n/2k) × (2k^2 + 3k + 1) = n × (k^2 + (3/2)k + 1/2). Consequently, we can see that the number of gene transfers from the global memory is O(n · k^2) and that each gene is transferred O(k^2) times.
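The transfer counts above are easy to sanity-check numerically. The snippet below evaluates the closed-form expressions under the same assumptions (2k predictor segments of n/2k genes each, k thread blocks); the function names and the example values of n and k are ours.

```python
def segment_transfers_per_block(k):
    """Outer loop of Algorithm 3: one transfer of P_j1 plus (2k - j1) inner
    transfers of P_j2, summed over j1 = 0 .. 2k-1."""
    return sum((2 * k - j1) + 1 for j1 in range(2 * k))     # equals 2k^2 + 3k

def total_gene_transfers(n, k):
    """k blocks, n/(2k) genes per predictor segment, plus the single transfer of T_i."""
    return k * (n // (2 * k)) * (segment_transfers_per_block(k) + 1)

n, k = 4096, 32
assert segment_transfers_per_block(k) == 2 * k**2 + 3 * k
assert total_gene_transfers(n, k) == n * (k**2 + 1.5 * k + 0.5)
print(total_gene_transfers(n, k))                 # O(n * k^2) transfers with the shared-memory scheme
print(round(n**3 / total_gene_transfers(n, k)))   # about 1.6e4, i.e. roughly (n/k)^2 times fewer than O(n^3)
```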
This means that by increasing the segment sizes, we have a smaller k and hence fewer transfers from the global memory. Also, if the shared memory were not used, the total number of gene transfers would be O(n^3), resulting in a memory load (n/k)^2 times higher. For n = 4096 predictors and segments of size 128, resulting in k = 32 blocks, the number of transfers without the segmentation would be 128 × 128 ≈ 1.6 × 10^4 times higher. This difference occurs because when a segment is transferred to the shared memory, the values for each gene from the segment are used multiple times. Multi-GPU algorithm In order to provide scalability for our method and improve its performance, we extended our inference algorithm to work with multiple GPUs. The general idea of the multi-GPU algorithm is to partition the set of target genes among the available GPUs and execute Algorithms 2 and 3 on each GPU. Consequently, each GPU is responsible for calculating the entropy of a subset of target genes. Suppose we have m GPUs denoted by C_0, C_1, …, C_{m−1}. The multi-GPU algorithm is described as follows: 1. Copy matrix P to the global memory of each GPU. Then, partition matrix T into m supersegments of size n/m, which we denote by T^0, T^1, …, T^{m−1} (we assume here that n is a multiple of m). Copy each supersegment T^i to the global memory of C_i, 0 ≤ i ≤ m − 1. 2. Launch the kernels with k/m thread blocks per GPU, where k is the total number of blocks. We denote by Bl^i_j the block Bl_j started on C_i, where 0 ≤ j ≤ k/m − 1 and 0 ≤ i ≤ m − 1. 3. Execute Algorithms 2 and 3 on each GPU, so that GPU C_i receives the blocks Bl^i_0, Bl^i_1, …, Bl^i_{k/m−1}, which operate on the segments T^i_0, T^i_1, …, T^i_{k/m−1}. Here T^i_j denotes the segment j belonging to supersegment i. 4. Copy the best predictor pairs for each target gene along with their corresponding entropy values from the GPU global memory to the CPU main memory. Figure 5 shows a schematic representation of the main characteristics of the multi-GPU algorithm. Implementation The implementation of the parallel exhaustive search algorithm was performed using CUDA. We applied all optimizations described in the algorithm description. The implementation source code can be obtained at https://sourceforge.net/p/inferencemgpu/. The CPU implementation, which we used to evaluate the speedups, utilizes OpenMP to enable the usage of all cores of the processor. OpenMP is an API (Application Program Interface) designed for implementing parallel algorithms on shared-memory multiprocessor architectures. We divided the target genes among the threads, resulting in good load-balancing among the cores.
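To make the target-gene partitioning of steps 1-3 concrete, the sketch below computes which target-gene indices each thread block on each GPU would be responsible for. It is a host-side illustration of the data layout only, not the CUDA kernels themselves; the helper name and example sizes are ours, chosen to match one of the configurations used in the experiments.

```python
def partition_targets(n, k, m):
    """Map every (gpu, block) pair to the slice of target-gene indices it processes.

    n : total number of genes, k : total number of thread blocks, m : number of GPUs.
    Supersegment i (size n/m) goes to GPU C_i and is split into k/m segments of
    n/k targets each, one segment per thread block."""
    assert n % m == 0 and k % m == 0 and n % k == 0
    super_size, seg_size = n // m, n // k
    plan = {}
    for gpu in range(m):
        for block in range(k // m):
            start = gpu * super_size + block * seg_size
            plan[(gpu, block)] = range(start, start + seg_size)
    return plan

# Example: 8192 genes, 64 blocks of 128 targets each, spread over 4 GPUs.
plan = partition_targets(n=8192, k=64, m=4)
print(plan[(0, 0)], plan[(3, 15)])   # first block on GPU 0, last block on GPU 3
```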
The binary data samples used in the experiments presented in this section were generated using the Artificial Gene Network (AGN) simulator [33], considering the Erdös-Rényi Boolean network model. This simulator allows control over the number of genes present in the network and the number of samples. We also performed experiments considering ternary data samples, i.e., genes can be underexpressed, normal or overexpressed (−1, 0, +1). For this set of experiments, we considered the Plasmodium falciparum database [34], the same database considered in [11], whose expression values were quantized into three values by applying a normal transform (Z-score). Execution times for binary samples We performed three experiments to evaluate the performance of our method, comparing the execution times of the CPU implementation with the GPU algorithm running on 1, 2 and 4 GPUs. We used datasets with 30 binary samples and GRNs with different sizes (1024, 2048, 4096, 8192). Tables 1, 2 and 3 show the average execution times for each experiment (3 executions for each experiment) considering 32, 64 and 128 target genes per block. For a fixed GRN size, a larger number of target genes per block results in shorter execution times. This happens because the higher the number of target genes processed per block, the higher the number of genes processed in parallel in each block, which leads to less traffic between shared and global memories. Experiments with targets/block > 128 were not performed, since the GPU shared memory cannot accommodate the corresponding segment lengths of the expression matrices T and P. The small amount of shared memory is an important restriction of the GPU architecture. We verified that, as we increase the number of targets per block, the execution time decreases as expected, since the total number of gene transfers from global memory is O(n · k^2) and a larger number of targets per block implies a smaller number of blocks k. The number of operations for evaluating the entropies, however, is the same regardless of the number of targets per block. This explains the almost linear gains in performance when increasing the block size from 32 to 128 targets per block. Execution times for ternary samples The same experiments performed for binary samples were also run considering 30 ternary samples (that is, each gene can assume three values). Tables 4 and 5 report the corresponding average execution times. Obtained speedups We also evaluated the speedups obtained with the GPU algorithm. We defined the speedup as the execution time spent by the multi-core algorithm parallelized on 6 CPU cores divided by the execution time spent by the GPU algorithm for the same instance of the problem. Figures 6(a), 6(b) and 6(c) show the speedup versus the number of genes considering 32, 64 and 128 target genes per block, respectively. These results consider both binary and ternary samples. The results show good speedups on networks with 2048 or more genes, especially when using two or four GPUs. For example, using four GPUs for networks of 8192 genes, we obtain speedups of approximately 55, 110 and 260 when using 32, 64 and 128 target genes per block, respectively, for the binary samples case. For ternary samples, the speedup behavior looks similar, with speedups of approximately 30, 40 and 185 for 32, 64 and 128 targets per block, respectively. Moreover, the speedup tends to increase with the number of genes, since in this case we use all the cores in the GPUs more effectively. The speedup obtained with the ternary samples was smaller than with the binary samples because each thread uses more state variables.
This results in a larger register utilization and, consequently, a smaller number of simultaneously executing threads. However, the obtained speedup is 185 when using 4 GPUs and 60 when using a single GPU. Regarding the usage of multiple GPUs, Figure 6(a) shows that, for the binary coding, there is no advantage in using two or four GPUs when we take 32 target genes per block and GRNs with 1024 genes. With ternary coding, the execution times with two and four GPUs were the same. A similar scenario occurs for networks with up to 2048 genes and 64 and 128 genes per block, as shown in Figure 6(b) and 6(c). This result can be explained considering the number of thread blocks required to represent all target genes. For instance, considering 32 targets per block, we need 32, 64 and 128 blocks for networks with 1024, 2048 and 4096 genes, respectively. In the experiments we used GPUs with 30 SMs, which can simultaneously execute a number of blocks multiple of 30. With 32 genes per block and 1024 genes, the speedups with 1, 2 and 4 GPUs were the same. In this case, it is clear that with one GPU, 2 SMs executed 2 blocks simultaneously, with the others executing a single block, without a performance penalty. With 2048 genes there are 64 blocks and there was a performance gain when using 2 or more GPUs. In this case, with one GPU some SMs had to execute three blocks, which could not be performed simultaneously. Consequently, it required almost twice the time when compared to the execution with 2 GPUs. Finally, for networks with larger number of genes, such as 8192 genes, the use of multiple GPUs provides important gains in the speedup. For instance, considering 128 genes per block using 4 GPUs (Figure 6(c)), the speedup was 2 times higher than the obtained by considering 2 GPUs and 3 times higher than the obtained by considering a single GPU when applying to binary samples. And in the ternary samples case, considering the same settings, the speedup obtained for 4 GPUs was 2 times higher than the obtained by considering 2 GPUs and 3.1 times higher than the obtained by considering a single GPU. Number of samples To evaluate the dependence of the runtime on the number of samples, we conducted an experiment using four GPUs and a network with 4096 genes. We varied the amount of samples and target genes per block. The results (see Figure 7) show that there is a linear dependence between the runtime and the number of samples for both binary and ternary samples cases. Conclusions In this paper we propose a multi-GPU algorithm that allows the inference of gene regulatory networks (GRNs) with multivariate predictions in significantly lower times than using multi-core CPUs. For instance, the inference of a GRN with 8192 genes, which took about two days in a six core CPU, was executed in less than 30 minutes using 1 GPU and about 10 minutes using 4 GPUs. The main contribution of the algorithm is to permit the execution of the exhaustive GRN inference method using large datasets in a reasonable time. Another important observation is that the proposed multi-GPU scheme is well scalable, since the speedups increased in an almost linear fashion with the employed number of GPUs. Such speedups results suggest that it is an efficient and low cost solution for researchers that need to infer GRNs of realistic sizes (order of thousands) from transcriptome data in a reasonable time, considering multivariate (N-to-1) relationships. 
Besides, this paper presents a proof of principle, showing that it is possible to parallelize the exhaustive search algorithm in GPUs with encouraging results. Although our focus was on the GRN inference problem, we developed an exhaustive search technique based on GPU which can be applied to other combinatorial problems with minor adaptations. As future work, the algorithm will be improved to work with predictor subsets with cardinality greater than 2, which allows to infer GRNs with more complex interactions. Such improvement requires new approaches for gene expression matrices division and for data traffic management between the global and shared memories. Also, we will also update the method to execute in clusters of heterogeneous GPUs, which will provide more performance, specially for inferences with larger networks and higher cardinalities.
The Discovery of the Faintest Known Milky Way Satellite Using UNIONS We present the discovery of Ursa Major III/UNIONS 1, the least luminous known satellite of the Milky Way, which is estimated to have an absolute V-band magnitude of +2.2^{+0.4}_{-0.3} mag, equivalent to a total stellar mass of 16^{+6}_{-5} M_⊙. Ursa Major III/UNIONS 1 was uncovered in the deep, wide-field Ultraviolet Near Infrared Optical Northern Survey (UNIONS) and is consistent with an old (τ > 11 Gyr), metal-poor ([Fe/H] ∼ −2.2) stellar population at a heliocentric distance of ∼10 kpc. Despite being compact (r_h = 3 ± 1 pc) and composed of few stars, we confirm the reality of Ursa Major III/UNIONS 1 with Keck II/DEIMOS follow-up spectroscopy and identify 11 radial velocity members, eight of which have full astrometric data from Gaia and are co-moving based on their proper motions. Based on these 11 radial velocity members, we derive an intrinsic velocity dispersion of 3.7^{+1.4}_{-1.0} km s^{-1}, but some caveats preclude this value from being interpreted as a direct indicator of the underlying gravitational potential at this time. Primarily, the exclusion of the largest velocity outlier from the member list drops the velocity dispersion to 1.9^{+1.4}_{-1.1} km s^{-1}, and the subsequent removal of an additional outlier star produces an unresolved velocity dispersion. While the presence of binary stars may be inflating the measurement, the possibility of a significant velocity dispersion makes Ursa Major III/UNIONS 1 a high-priority candidate for multi-epoch spectroscopic follow-up to deduce the true nature of this incredibly faint satellite. Classical globular clusters are typically bright and compact stellar systems, while dwarf galaxies are orders of magnitude more diffuse than globular clusters at similar magnitudes and cover a much broader range in characteristic size. Willman & Strader (2012) proposed that the key physical distinction between these two types of systems is that the dynamics of dwarf galaxies cannot be explained through a combination of baryonic processes and Newton's laws, while globular clusters can be explained in such a way. Therefore, in the framework of ΛCDM cosmology, dwarf galaxies are thought to lie at the center of their own dark matter halos. The faintest known dwarf galaxies (sometimes called ultra-faint dwarf galaxies or UFDs; Simon 2019) are observed to have dynamical masses (measured from stellar kinematics) many orders of magnitude larger than the mass implied by their total luminosity (dynamical mass-to-light (M/L) ratios ∼ 10^3 M_⊙/L_⊙). Dynamical analysis of globular clusters, on the other hand, shows that they do not have appreciable amounts of dark matter and are comprised solely of baryonic matter. Willman & Strader (2012) also suggested that a significant dispersion in the distribution of stellar metallicities could be used as a proxy for the presence of a dark matter halo. It is argued that the shallow potential wells of globular clusters are unable to retain the products of stellar feedback and thus form a stellar population of a single metallicity. In contrast, it is argued that dwarf galaxies can retain gas and have prolonged star formation histories, leading to self-enrichment. Significant metallicity dispersions have often been used to distinguish between dwarf galaxies and globular clusters in this way (e.g. Leaman 2012; Kirby et al. 2013; Li et al. 2022).
In parallel with the increase of dwarf galaxy discoveries, a number of faint Milky Way satellites of ambiguous nature have also been unearthed.These faint systems are typically small in physical extent (half-light radius, r h ≲ 15 pc), well within the virial radius of the Milky Way (heliocentric distance, D ⊙ ≲ 100 kpc), and faint (absolute V -band magnitude, M V ≳ −3 mag) 1 .Beyond these general observations, these tiny Milky Way satellites are still poorly understood for two reasons: (1) In these observed properties, they lie at the interface of dwarf galaxies and globular clusters, and (2) their internal dynamics and chemical properties are not well studied en masse. Diagnostics such as size (e.g.Balbinot et al. 2013;Conn et al. 2018), stellar mass segregation (e.g.Koposov et al. 2007;Kim et al. 2015), and comparison to the dwarf galaxy stellar mass-metallicity relation (e.g.Jerjen et al. 2018) have been used to argue that some of these systems are more likely to be ultra faint star clusters (i.e.lacking dark matter).However, neither the presence nor lack of a dark matter halo has been demonstrated conclusively for any one of these system. Dynamically confirming the nature of any one of these faint, ambiguous satellites could extend either the globular cluster or dwarf galaxy luminosity functions by up to a few orders of magnitude, and could extend the dwarf galaxy scale-length function by up to a factor of 10.Globular clusters are valuable for studying the evolution of the interstellar medium and stellar populations over cosmic time (e.g.Krumholz et al. 2019;Adamo et al. 2020) while dwarf galaxies have proven to be powerful probes of star formation (e.g.Bovill & Ricotti 2009), chemical enrichment (e.g.Ji et al. 2016aJi et al. ,b, 2023;;Hayes et al. 2023), and the nature of dark matter (e.g.Lovell et al. 2012;Wheeler et al. 2015;Bullock & Boylan-Kolchin 2017;Applebaum et al. 2021).The faintest and smallest dwarf MW satellites are particularly constraining.Their total number may be used to constrain alternative models, such as "warm" dark matter (Lovell et al. 2012), or "fuzzy" dark matter (Nadler et al. 2021).In addition, their characteristic densities place strong constraints on self-interacting dark matter models (Errani et al. 2022;Silverman et al. 2023).Additional studies of these faint, ambiguous systems, including radial velocities and metallicity measurements, will be needed to further understand individual satellites as well as how the characteristics of globular cluster and dwarf galaxy populations extend to such faint magnitudes and parsec-length scales. 
In this paper, we detail the discovery and characterization of Ursa Major III/UNIONS 1, the least luminous Milky Way satellite detected to date.Line-of-sight velocities of candidate member stars obtained through follow-up spectroscopic observations may imply a significant radial velocity dispersion, but repeat radial veloc-ity measurements are be needed to conclusively demonstrate whether dark matter is present in this system.We refer to this system as Ursa Major III/UNIONS 1 as its identity as a dwarf galaxy or star cluster is not clear at this time.In Section 2 we summarize the discovery dataset, detection of the system, and follow-up spectroscopy.In Section 3, we characterize the structural parameters of Ursa Major III/UNIONS 1, as well as its distance, luminosity, dynamics, and orbit.Finally, in Section 4, we discuss the classification of Ursa Major III/UNIONS 1, and summarize our results.together to support the Euclid space mission, providing robust ground-based ugriz photometry necessary for photometric redshifts that will be the main pillar of Euclid's science operations.However, UNIONS is a separate survey whose aim is to maximize the science returns of this powerful, deep, wide-field photometric dataset.UNIONS aims to deliver 5σ point source depths of 24.3, 25.2, 24.9, 24.3, 24.1 mag in ugriz which is roughly equivalent to the first year of observations expected from the Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory, making UNIONS the benchmark photometric survey in the northern skies for the coming decade. This work only utilizes the CFIS-r and Pan-STARRS-i band catalogs.The median 5σ point source depth in a 2 arcsecond ( ′′ ) aperture is 24.9 mag in r and 24.0 mag in i, although the Pan-STARRS-i depth will increase over time owing to the scanning-type observing strategy employed.Currently, the area common to the two datasets spans more than 3500 deg 2 across both the North Galac-tic Cap (NGC) and South Galactic Cap (SGC).These catalogs were cross-matched with a matching tolerance of 0.5 ′′ , though sources typically matched to better than 0.13 ′′ . The median image quality of the CFIS-r observations is an outstanding 0.69 ′′ , allowing us to perform star galaxy separation morphologically.We correct for galactic extinction using E(B − V ) values from (Schlegel et al. 1998) assuming the conversion factors given by (Schlafly & Finkbeiner 2011) for a reddening parameter of R V = 3.1.The CFHT r-band filter is not present in the Schlafly & Finkbeiner (2011) list, so we adopt the conversion factor for Dark Energy Camera (DECam) rband filter (Flaugher et al. 2015) given that their fullwidth at half-maximum (FWHM) are identical, and that the DECam-r centroid is shifted redwards from CFHT-r by only 2 nm.For the remainder of the manuscript, r will refer to CFIS-r and i will refer to Pan-STARRS-i unless otherwise specified. Detection Method UMa3/U1 was discovered as a spatially resolved overdensity of stars using a matched-filter approach.Variations on this general methodology have proven to be efficient and productive in discovering faint dwarf galaxies and faint star clusters in wide-field surveys (e.g.Koposov et al. 2007;Walsh et al. 2009;Bechtol et al. 2015;Drlica-Wagner et al. 2015;Koposov et al. 2015;Homma et al. 2018). Our particular implementation of the algorithm will be described here in brief, though we refer the reader to Smith et al. 
(2022) for a more detailed overview of the search method.A matched-filter seeks to isolate stars that belong to some particular stellar population, in order to create contrast in stellar density between the member stars of a putative satellite and the Milky Way stellar background.We start by selecting all stars in the UNIONS footprint that are consistent with a 12 Gyr, [Fe/H]= −2 PARSEC isochrone (Bressan et al. 2012), constructed from the CFHT-r and PanSTARRS-i bands, with the isochrone shifted to a test distance.Stars are then tangent plane projected, binned into 0.5 ′ × 0.5 ′ pixels, and smoothed with Gaussian kernels with FHWMs of 1.2, 2.4, and 4.8 ′ .The broad-scale mean and variance are then subtracted and divided out from the smoothed maps, respectively, to find the significance of each pixel with respect to the Milky Way background.All peaks in the significance map are then recorded for future examination, and this process is repeated for a series of heliocentric distances spaced roughly logarithmically from 10 kpc to 1 Mpc.Our match-filter algorithm has been successful in detecting previously known dwarf galaxies.It should be noted that dwarf galaxy detections are the main focus of this search, but extra-galactic star clusters are also typically old and metal-poor, so the matched-filter ought to pick them up as well.We have used the known dwarf galaxy population to do a first-pass assessment of the algorithm's efficiency.Within 1 Mpc, we have recovered, with high statistical significance, all known Local Group dwarf galaxies (including M31 dwarfs) that had been found in shallower surveys with matched-filter methods.We also detect other galaxies in the Local Universe out to 2 Mpc, and several Milky Way globular clusters. UMa3/U1 is one of the most prominent candidates without a previous association produced from our search, on par with the detection significance of some known UFDs in the UNIONS sky.UMa3/U1 is most prominently detected in the significance map produced by the 1.2 ′ smoothed, 10 kpc iteration at a statistical significance of 3.7σ above the background.For reference, Draco II is an ultra-faint satellite of the Milky Way whose true nature is unknown, but it has been estimated to be ∼ 20 kpc distant, and is detected at 2.8σ above the background in our own search.Figure 1 shows a color-magnitude diagram (CMD) using r, i photometry, demonstrating the detection of stars about UMa3/U1 that meet the matched-filter selection criteria.We also show an equivalent CMD of a reference field, which is an equal-area elliptical annulus, to indicate the expected level of source contamination in the detection. Keck/DEIMOS Spectroscopy We obtained spectroscopic data for 59 stars towards UMa3/U1 using the DEIMOS spectrograph (Faber et al. 
2003) on the Keck II 10-m telescope.Data were taken on April 23rd, 2023 using a single mutlislit mask in excellent observing conditions.10 targets were initially selected from membership analysis based on full astrometric data from Gaia (this selection is further explained in Section 3.3).The remaining 49 targets were selected from stars that were consistent with the observed stellar population of UMa3/U1 in r, i photometry which filled the slit mask around UMa3/U1.We used the 1200G grating that covers a wavelength range of 6400 − 9100 Å with the OG550 blocking filterwith the aim of measuring the Calcium II infrared triplet (CaT) absorption feature.Observations consisted of 3 × 20 minute exposures, or 3600s of total exposure time.Data were reduced to one dimensional wavelengthcalibrated spectra using the open-source python-based data reduction code PypeIt (Prochaska et al. 2020).PypeIt reduces the eight individual DEIMOS detectors as four mosaic images, where each red/blue pair of detectors is reduced independently.When reducing data for this paper, PypeIt's default heliocentric and flexure corrections are turned off and a linear flexure term is determined as part of the 1D data reductions below. Stellar radial velocities and Calcium Triplet equivalent widths (EWs) were measured using a preliminary version of the DMOST package (M.Geha et al., in prep).In brief, DMOST forward models the 1D stellar spectrum for each star from a given exposure with both a stellar template from the PHOENIX library and a telluric absorption spectrum from TelFit (Gullikson et al. 2014).The velocity is determined for each science exposure through an MCMC procedure constraining both the radial velocity of the target star as well as a wavelength shift of the telluric spectrum needed to correct for slit miscentering (see, e.g.Sohn et al. 2007).The final radial velocity for each star is derived through an inverse-variance weighted average of the velocity measurements from each exposure.The systematic error reported by the pipeline, derived from the reproducibility of velocity measurements across masks and validated against spectroscopic surveys, is ∼ 1 km s −1 (see M. Geha et al., in prep).Lastly, DMOST measures the equivalent width from the CaT by fitting a Gaussian-plus-Lorentzian model to the coadded spectrum (for stars at S/N > 15) or a Gaussian model (for stars below S/N < 15).We assume a 0.2 Angstrom systematic error on the total equivalent width determined from independent repeat measurements.Thanks to excellent observing conditions, we measured velocities and combined CaT equivalent widths (EW) for 31 of 59 targets and achieved spectra with a median signal-to-noise (S/N) per pixel ranging from 15 to 74 for the 10 suspected members. Distance and Stellar Population The matched-filter detection algorithm only searches for stellar populations with an age of 12 Gyr and metallicity of [Fe/H] = −2, and results of the search provide an initial estimate of the distance to UMa3/U1.We further refine these properties through visual inspection in an iterative process, where we make small systematic tweaks to the distance, metallicity, and age of the stellar population until a satisfactory model is settled upon.We also point out that UMa3/U1 has an extraordinarily low stellar mass (which will be explored further in Section 3.6) so this by-eye analysis is based on only a handful of stars spanning ∼ 6.5 mag on a color-magnitude diagram (CMD). 
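The only geometric step in this by-eye comparison is shifting the model isochrone to a trial heliocentric distance via the distance modulus. A minimal sketch of that shift is given below, assuming nothing beyond the standard distance-modulus relation; the isochrone arrays are placeholder values rather than actual PARSEC tables, and the helper names are ours.

```python
import numpy as np

def distance_modulus(d_kpc):
    """m - M = 5 log10(d / 10 pc), with the distance supplied in kpc."""
    return 5.0 * np.log10(d_kpc * 1000.0 / 10.0)

def shift_isochrone(abs_mag, colour, d_kpc):
    """Place an isochrone at a trial distance: colours are unchanged,
    absolute magnitudes become apparent magnitudes."""
    return colour, abs_mag + distance_modulus(d_kpc)

# Placeholder isochrone points (not real PARSEC values), shifted to trial distances.
abs_i = np.array([3.0, 4.0, 6.0, 8.0])       # hypothetical absolute i-band magnitudes
r_minus_i = np.array([0.2, 0.3, 0.5, 0.8])   # hypothetical r - i colours
for d in (9.0, 10.0, 11.0):
    _, app_i = shift_isochrone(abs_i, r_minus_i, d)
    print(d, "kpc -> distance modulus", round(distance_modulus(d), 2), "mag")
# At 10 kpc the shift is exactly 15.0 mag; moving to 9 or 11 kpc changes it by only ~0.2 mag.
```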
We initially set the distance to be 10 kpc and explored a grid of PARSEC isochrones, spanning 5 − 13 Gyr in age (τ ) in steps of 1 Gyr and −2.showing that all such isochrones approximate the data reasonably well. appears to be a good fit to the data.τ ≥ 11 Gyr is a good fit to the few stars at the Main-Sequence Turn Off (MSTO).Changing the age by a ∼ 1 Gyr results in minor changes in the shape of the isochrone, while changes in metallicity are negligible, so we adopt τ = 12 Gyr and [Fe/H] = −2.2 for all calculations and derivations going forward.We note, however, that [Fe/H] = −2.2 is the most metal-poor isochrone available in our set, so this isochrone may only be an upper limit on the true systemic metallicity. For the distance estimate, we prioritize fitting the MSTO to the four brightest stars due to their small photometric uncertainties, though the fainter stars appear to be slightly bluer on average than our bestfit isochrone, the most metal-poor in the set.While this could be explained by the large photometric uncertainties at this magnitude, we overlaid slightly more metal-poor MIST isochrones (Choi et al. 2016).We did not find a significantly better fit, and note that variance between isochrones from different databases is large enough that attempting to constrain the distance with greater accuracy may not be appropriate.We therefore adopt a 10% uncertainty on the distance estimate, giving 10 ± 1 kpc.Adjustments of 1 kpc were incrementally made to the distance of the isochrone, and this ap- pears to be appropriate.Figure 2 shows CMDs with all matched-filter-selected stars brighter than i 0 ∼ 23 mag, where PARSEC isochrones are plotted over top with small variations in distance and age.Unfortunately, we are unable to use either Tip of the Red Giant Branch (TRGB) or Horizontal Branch (HB) stars to further constrain the distance to UMa3/U1 as this system is so sparsely populated that no stars currently exist in these more highly evolved, shorter duration stages of the stellar lifecycle.Given the relative closeness of the system, along with the r saturation limits of r ∼ 17.5 mag, we also searched through the Third Data Release (DR3) from Gaia (Gaia Collaboration et al. 2016Collaboration et al. , 2021)), but a lack of bright stars in the direction of UMa3/U1 confirms that no TRGB or HB stars are present.Additionally, we searched the PS1 RR Lyrae (Sesar et al. 2017) and Gaia variability (Gaia Collaboration 2022) catalogs, but could not identify any RR Lyrae stars within a few arcminutes of UMa3/U1.We therefore cannot use the properties of these variable stars to further constrain the heliocentric distance. Structural Parameters We follow a procedure, which is based on the methodology laid out in Martin et al. (2008Martin et al. 
( , 2016a)), to estimate the structural parameters of UMa3/U1 assuming the distribution of member stars is well described by an elliptical, exponential radial surface density profile and a constant field contamination.The profile, ρ dwarf (r), is parameterized by the centroid of the profile (x 0 , y 0 ), the ellipticity ϵ (defined as ϵ = 1 − b/a where b/a is the minor-to-major-axis ratio of the model), the position angle of the major axis θ, (defined East of North), the half-light radius (which is the length of the semi-major axis r h ), and the number of stars N * in the system.The model is written as where r, the elliptical radius, is related to the projected sky coordinates (x, y) by We assume that the background stellar density is uniform, which is reasonable on the scale of arcminutes up to a degree or so.The background, Σ b , is calculated as follows: where n is the total number of stars in the field of view and A is the total area, normalizing the background density with respect to the selected region.We combine the elliptical, exponential surface density model with the uniform background term to construct our posterior distribution function, which we sample with the the afine-invariant Markov Chain Monte Carlo sampler emcee (Foreman-Mackey et al. 2013) to estimate the most likely structural parameters to describe UMa3/U1.We apply flat priors to all parameters, the bounds of which are given in Table 1. Prior to invoking this method, we select only those stars which are selected by the matched-filter, this time using the 12 Gyr, [Fe/H] = −2.2isochrone which was determined in Section 3.1.We apply an additional magnitude cut, retaining stars with i ≤ 23.5 mag to ensure incompleteness effects do not skew the parameter estimation. The emcee routines are initialized with 64 walkers, each running for 10,000 iterations with the first 2,000 iterations thrown out to account for burn-in.The program shows good convergence and the median value for each parameter, along with the 16% and 84% percentiles taken as uncertainties, are provided in Table 2 along with all other measured and derived properties.UMa3/U1 is compact, with a physical half-light radius (r h , semi-major axis of elliptical distribution) derived to be 0.9 +0.4 −0.3 ′ , or 3 ± 1 pc in physical units.c Systematic uncertainties were investigated by Lindegren et al. (2021). Despite its extreme dearth of giant stars, UMa3/U1 lies close enough that several stars are brighter than the approximate limiting magnitude (G ∼ 21 mag) of Gaia DR3.Using stars with full astrometric data (Lindegren et al. 2021), we estimate the systemic proper motion of UMa3/U1 and assign likelihoods for individual stars to be members of this faint stellar system. We follow the methodology of Jensen et al. (2023), which builds upon McConnachie & Venn (2020a,b), and we refer the reader to those papers for more details regarding the algorithm.Briefly, the method uses spatial, photometric, and astrometric information about each star, in conjunction with the structural parameters derived for some stellar system, to compute the likelihood that a given star is a member of a putative stellar sys-tem rather than a member of the Milky Way foreground stellar population.The Milky Way foreground prior distributions for each parameter are calculated empirically using a subset of stars detected by Gaia in a 2 deg circle about the location of the system, where the Gaia photometry was extinction-corrected following Gaia Collaboration et al. (2018). 
In this application to UMa3/U1, we use the structural parameters derived from UNIONS data in Section 3.2 and the stellar population estimates of τ = 12 Gyr and [Fe/H] = −2.2.The systemic proper motion of UMa3/U1 is estimated by the membership algorithm to be (µ α cosδ, µ δ ) = (−0.75± 0.09 (stat) ± 0.033 (sys), 1.15 ± 0.14 (stat) ± 0.033 (sys)) mas year −1 .This process also identifies 7 stars with P sat > 0.99 (where P sat is membership likelihood) and an additional star with P sat = 0.75.Jensen et al. (2023) have investigated the workings of this code, finding that P sat ∼ 0.2 actually corresponds to ∼ 50% probability of being a member (based on known members identified via radial velocities), so we find 8 high-likelihood member stars in total based on Gaia measurements.2 additional stars with marginal membership likelihoods (0.01 < P sat < 0.10) near to the centroid of UMa3/U1 were also targeted in our spectroscopic follow-up for further investigation. Membership As discussed in Section 2.3, 31 of 59 target spectra were successfully measured following Geha et al. (in prep.), producing heliocentric radial velocities (v ⊙ ) and combined CaT EWs (sum of all three CaT absorption features, Σ EW ).Traditionally, methods of inferring metallicity ([Fe/H]) from CaT EWs have been calibrated using red giant branch stars (Starkenburg et al. 2010;Carrera et al. 2013) its CMD, with this one star lying at the inflection point of the best fitting isochrone where the sub-giant branch (SGB) transitions to the red giant branch.We estimate this star to have log(g) = 3.51 using the best-fit isochrone, so while it has evolved off the MS, it is outside the range of log(g) for which [Fe/H] estimators have been calibrated.The calibration from Starkenburg et al. (2010) considers model spectra computed for RGBs with 0.5 ≤ log(g) ≤ 2.5 while Carrera et al. (2013) calibrated the [Fe/H] − CaT Σ EW relationship using RGBs that lay in the range 0.7 ≤ log(g) ≤ 3.0, where log(g) was derived from high resolution spectroscopy.However, we still are able to use the measured Σ EW to help with membership identification. 
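To illustrate the flavour of such a probabilistic membership assignment, the sketch below evaluates a toy two-component mixture in proper-motion space alone: a Gaussian centred on the systemic proper motion quoted above versus a Milky Way foreground term. This is a deliberate simplification of the Jensen et al. (2023) algorithm, which also folds in spatial and CMD information and an empirically built foreground model; the background density, satellite fraction, and example measurement below are placeholder numbers, not values from this work.

```python
import numpy as np

def membership_probability(pm, pm_err, sat_pm, sat_pm_err, bg_density, f_sat):
    """Toy two-component membership probability in proper-motion space only.

    pm, pm_err         : measured (pmra*, pmdec) and its uncertainties [mas/yr]
    sat_pm, sat_pm_err : systemic proper motion of the satellite and its uncertainty
    bg_density         : foreground proper-motion density evaluated at pm (placeholder)
    f_sat              : prior fraction of satellite stars in the field (placeholder)
    """
    var = np.asarray(pm_err) ** 2 + np.asarray(sat_pm_err) ** 2
    chi2 = np.sum((np.asarray(pm) - np.asarray(sat_pm)) ** 2 / var)
    sat_like = np.exp(-0.5 * chi2) / (2.0 * np.pi * np.sqrt(np.prod(var)))
    return f_sat * sat_like / (f_sat * sat_like + (1.0 - f_sat) * bg_density)

# Example: a star roughly co-moving with the reported systemic PM of (-0.75, +1.15) mas/yr.
print(membership_probability(pm=(-0.70, 1.05), pm_err=(0.2, 0.2),
                             sat_pm=(-0.75, 1.15), sat_pm_err=(0.09, 0.14),
                             bg_density=0.02, f_sat=0.01))
```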
Using a combination of v ⊙ , Σ EW , and membership likelihoods assigned from the algorithm described in Section 3.3, we isolate member stars.In Figure 3, we plot all potential member stars on-sky and on a CMD of extinction-corrected r, i photometry.The four brightest members are all in good agreement with the bestfit isochrone.In the range 20 ≤ G 0 ≤ 21 mag, there are several stars that are all roughly consistent with the isochrone, though there is some scatter.The eight sources marked with blue circles are those with high membership likelihoods (P sat ≥ 0.75) and the orange squares are three additional stars that are consistent with Gaia-identified members both spatially and in v ⊙ , but do not have full astrometric data, and could therefore not be identified by the membership algorithm.The red 'X' demarcates one of the marginal members (P sat = 0.02) noted in Section 3.3 and its formal membership will be discussed below (Section 3.4.1).All 8 member stars are seen to cluster in proper motion space (bottom-left panel of Figure 3), and we note that 5 of these members are tightly clustered at the systemic proper motion (see Table 2).The other 3 other mem-bers have large uncertainties, as they are the faintest of the members with measured proper motions, but are still consistent with the systemic proper motion (within 2 − 3σ).Additionally, the bottom-right panel of Figure 3 shows the Σ EW − v ⊙ plane of all stars observed with Keck/DEIMOS.The suspected members cluster at v ⊙ ∼ 90 km s −1 and Σ EW ∼ 1.5 Å, neatly separated from all other measured stars.Photometric and dynamical properties are listed for all likely members and the two confirmed non-members in Table 3. Marginal Members Here, we offer evidence to suggest that the two marginal members identified in Section 3.3 (0.01 < P sat < 0.10) are in truth not members of UMa3/U1.First, star #5 in Table 3 had very unusual measurements in both velocity and Σ EW , so we examined its spectrum and found it to lack any CaT absorption features while featuring a broad emission-like bump around 7200 Å. Paired with a proper motion of close to zero, we suspect that this source may be a background quasar and consequently exclude it from our analysis. 
Star #8 in Table 3 is featured in both Figures 3 & 4 as a red 'X' where measurements show evidence that this is not a member. While this star does lie close to the systemic proper motion of UMa3/U1, it also has a proper motion near zero. In Figure 3, the grey contours show the empirical distribution of all stars measured by Gaia within 2 deg of UMa3/U1 and there is an overdensity near the origin, giving this star a high likelihood of being part of the proper motion background. Additionally, this star is more than 7 × r_h from the centroid of the stellar overdensity. Figure 4 shows that this star is offset from the mean velocity by 3.8σ, where the dispersion of the distribution is calculated in Section 3.5. The final piece of evidence comes from the Σ_EW relative to other likely member stars. Despite not being able to calculate [Fe/H], we can see on the Gaia CMD in Figure 3 that there are 7 suspected members, as well as the marginal member being discussed here, in the range 20 ≤ G_0 ≤ 21 mag. Therefore, if all these stars are true members, they are all similar types of dwarf stars and thus should have similar values of log(g), T_eff, and [Fe/H]. Additionally, the isochrone fitting implies that this is a metal-poor population ([Fe/H] ∼ −2.2). The 7 suspected members are all clustered around ∼ 1.5 Å while star #8 has Σ_EW = 4.42 ± 0.44 Å. If the star in question really is a main sequence dwarf star in UMa3/U1 (and therefore at the same distance of 10 kpc), it must be far more metal-rich than the other suspected member stars, which would be extremely anomalous. All told, we feel this accumulation of evidence implies that star #8 is very unlikely to be a member of UMa3/U1, so we exclude it from the member list for the following analysis of the velocity distribution. Velocity Distribution We follow Walker et al. (2006) to measure the mean and intrinsic dispersion in heliocentric velocity. We construct the following log-likelihood function, ln L(⟨v_⊙⟩, σ_v) = −(1/2) Σ_{i=1}^{N} [ ln(2π(σ_v^2 + σ_i^2)) + (v_i − ⟨v_⊙⟩)^2 / (σ_v^2 + σ_i^2) ], (5) where the v_i and σ_i are the measured velocity and uncertainty for each individual star. ⟨v_⊙⟩ is the mean heliocentric radial velocity of UMa3/U1, and σ_v is the intrinsic velocity dispersion, which are the model parameters and quantities of physical interest. The log-likelihood function is maximized with respect to ⟨v_⊙⟩ and σ_v using emcee. The final estimates are ⟨v_⊙⟩ = 88.6 ± 1.3 km s^{-1} and σ_v = 3.7^{+1.4}_{-1.0} km s^{-1}, where uncertainties are the 16th and 84th percentiles of the distributions produced by the MCMC. This result is shown in the left-most panels of Figure 5. We investigate the robustness of this result, to understand how the selection of UMa3/U1 member stars might change the measured intrinsic velocity dispersion.
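Before turning to those robustness tests, a minimal sketch of the dispersion fit itself is given below, assuming the Gaussian likelihood of Eq. (5). The member velocities are synthetic placeholders rather than the values in Table 3, and the flat prior bounds and walker setup are our own choices, not those adopted in this work.

```python
import numpy as np
import emcee

def log_likelihood(theta, v, verr):
    """Gaussian likelihood of Eq. (5): mean velocity <v> and intrinsic dispersion sigma_v."""
    v_mean, sigma_v = theta
    var = sigma_v ** 2 + verr ** 2
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (v - v_mean) ** 2 / var)

def log_posterior(theta, v, verr):
    v_mean, sigma_v = theta
    if not (0.0 < sigma_v < 20.0 and 50.0 < v_mean < 130.0):   # flat priors (our choice)
        return -np.inf
    return log_likelihood(theta, v, verr)

# Placeholder data: 11 velocities (km/s) with ~1.5 km/s uncertainties.
rng = np.random.default_rng(1)
v = 88.6 + rng.normal(0.0, 3.0, size=11)
verr = np.full(11, 1.5)

nwalkers, ndim = 32, 2
p0 = np.column_stack([rng.normal(88.6, 1.0, nwalkers), rng.uniform(1.0, 5.0, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(v, verr))
sampler.run_mcmc(p0, 5000, progress=False)
chain = sampler.get_chain(discard=1000, flat=True)
print(np.percentile(chain, [16, 50, 84], axis=0))   # medians and 1-sigma intervals
```

Re-running this fit with individual stars removed from the input arrays reproduces the style of leave-one-out test described next.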
We systematically exclude individual stars from the velocity dispersion estimation, one by one, and find that star #2 (denoted in Table 3), the largest velocity outlier, causes the largest change by reducing the velocity dispersion to σ_v = 1.9^{+1.4}_{-1.1} km s^{-1}. Continuing in this direction, we keep star #2 out of the member list and systematically exclude individual stars to find the next most impactful source. Removing star #4, a high S/N measurement at the high-velocity end of the distribution, produces an unresolved velocity dispersion. This systematic analysis of the velocity distribution is relevant because the presence of binary stars in dwarf galaxies can inflate the measured dispersion by several km s^{-1} relative to the true intrinsic dispersion (McConnachie & Côté 2010; Minor et al. 2010), and binary fractions in the "classical" dwarf spheroidals have been found to vary broadly, ranging from 14% − 78% (Minor 2013; Spencer et al. 2017, 2018; Arroyo-Polonio et al. 2023). Our spectroscopic measurements indicate that the intrinsic velocity dispersion of UMa3/U1 is resolved, but we note that repeat observations are critical for this to be confirmed. Results from the MCMC calculations for the systematic removal of stars #2 and #4 are shown in the central and right-most panels of Figure 5 and identify which sources require particularly careful spectroscopic follow-up observations. (Figure 5 caption: Left: 2D and marginalized posterior probability distributions for the mean heliocentric radial velocity and its intrinsic dispersion measured from 11 likely member stars. The median values of the heliocentric radial velocity and intrinsic dispersion are shown with uncertainties indicating the 16% and 84% percentiles. Center: Same as the left panel, except star #2 (in Table 3) is excluded; the intrinsic velocity dispersion drops to 1.9^{+1.4}_{-1.1} km s^{-1}. Right: Same as the left panel, except stars #2 and #4 (in Table 3) are excluded; in this case, an intrinsic velocity dispersion is no longer resolved.) Upon testing the systematic removal of all combinations of stars, the exclusion of stars #2 and #4 is the most impactful upon the velocity dispersion. We report 68% and 95% percentile upper limits on the velocity dispersion of 2.3 and 4.4 km s^{-1}, respectively. We now aim to derive the total stellar mass by creating a sample of mock stellar populations that emulate the characteristics of UMa3/U1. We follow a similar methodology to Martin et al. (2016a) when creating the mock populations.
We assume the underlying stellar population of UMa3/U1 is described by a canonical two-part Kroupa initial mass function (IMF) (Kroupa 2001) and a stellar population of τ = 12 Gyr, [Fe/H] = −2.2. We create a single mock stellar population by first drawing a distance from a normal distribution with a mean of 10 kpc and a standard deviation of 1 kpc, and shifting the theoretical isochrone to that distance. We then similarly draw a number of stars (N * ) from a normal distribution with a mean of 21 and a standard deviation of 5.5 (the average of the 16% and 84% percentiles reported in Table 2), which will act as the target number of stars above the adopted completeness limit, i = 23.5 mag. Randomly sampling D ⊙ and N * propagates the previously derived uncertainties through to the final stellar mass estimation. We then sample individual stellar masses from the IMF, converting each to the i-band and checking if i star ≥ 23.5 mag. We sample the IMF until N * stars above the completeness limit have been accrued, at which point we sum the stellar mass of all stars (including those below the completeness limit). We repeat this process to create 100,000 mock stellar populations of UMa3/U1 and find the median total stellar mass to be M tot = 16 +6 −5 M ⊙ , where the uncertainty spans the 16% and 84% percentiles of the total stellar mass distribution. This can be recast in terms of the frequentist p-value; we reject that the total stellar mass is greater than 38 M ⊙ at the 99.9% confidence level.

Additionally, we convert this mass to luminosity and absolute V-band magnitude (M V ). We calculate the empirical baryonic mass-to-light ratio (M/L) to be ∼ 1.4 for a 12 Gyr, [Fe/H] = −2.2 stellar population, so a stellar mass of 16 +6 −5 M ⊙ implies a total luminosity of 11.4 ± 3.6 L ⊙ , which is equivalent to a total absolute V -band magnitude of +2.2 +0.4 −0.3 mag. We also compute the effective surface brightness by dividing half the total flux by the area enclosed by one elliptical half-light radius and converting to mag arcsec −2 . This comes to 27 ± 1 mag arcsec −2 . All properties derived here can be found in Table 2.
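A compact sketch of one way to implement the mock-population procedure just described. The two-part Kroupa (2001) IMF slopes used below (−1.3 below 0.5 M ⊙ , −2.3 above) are the standard values, but the completeness cut is represented here by a simple mass threshold (0.35 M ⊙ , a placeholder standing in for the isochrone-based i = 23.5 mag cut at the sampled distance), so the numbers it returns are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def kroupa_pdf(m):
    # Two-part Kroupa (2001) IMF, dN/dm ~ m^-1.3 (m < 0.5 Msun) and ~ m^-2.3
    # (m >= 0.5 Msun), made continuous at 0.5 Msun (unnormalized).
    return np.where(m < 0.5, m**-1.3, 0.5 * m**-2.3)

def sample_kroupa(n, m_min=0.1, m_max=0.8):
    # Simple rejection sampling; adequate for the small numbers needed here.
    out = np.empty(0)
    fmax = kroupa_pdf(np.array([m_min]))[0]
    while out.size < n:
        m = rng.uniform(m_min, m_max, size=4 * n)
        keep = rng.uniform(0.0, fmax, size=m.size) < kroupa_pdf(m)
        out = np.concatenate([out, m[keep]])
    return out[:n]

def mock_population(n_obs_mu=21.0, n_obs_sig=5.5, m_limit=0.35):
    """One mock realisation: draw a target number of 'observable' stars, sample
    the IMF until that many lie above an assumed completeness mass m_limit
    (a placeholder for the isochrone-based i = 23.5 mag cut), and return the
    summed mass of all sampled stars, including those below the limit."""
    n_obs = max(1, int(round(rng.normal(n_obs_mu, n_obs_sig))))
    masses = sample_kroupa(60 * n_obs)            # generous batch
    n_above = np.cumsum(masses >= m_limit)
    stop = np.searchsorted(n_above, n_obs) + 1    # just past the n_obs-th bright star
    return masses[:stop].sum()

totals = np.array([mock_population() for _ in range(5000)])
lo, med, hi = np.percentile(totals, [16, 50, 84])
print(f"M_tot = {med:.0f} +{hi - med:.0f} -{med - lo:.0f} Msun (illustrative)")
```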
Several assumptions are used in this methodology, so we performed several modified analyses to assess the robustness of this result with respect to these choices.Given a 5σ point source depth of 24.0 mag in i, we chose a magnitude cut of i = 23.5 mag to mitigate issues with stellar completeness when deriving stellar parameters, notably the total number of stars in the system.We did not perform a detailed stellar completeness investigation, but we did repeat the same stellar mass analysis using more restrictive magnitude cuts of 23.2 mag (10σ depth) and 22.4 mag (20σ depth).These produced total stellar mass estimates of 21 +7 −6 M ⊙ and 22 +9 −8 M ⊙ respectively, which are consistent with the initial estimate within uncertainties.We also repeated the analysis using a Chabrier IMF (Chabrier 2003), which produced a nearly identical result of 17 +6 −5 M ⊙ .Finally, we investigated the impact of varying age and metallicity as these parameters were constrained by eye alone.We ran the same analysis for isochrones of ages 11 & 13 Gyr (holding [Fe/H] fixed at −2.2), which gave total stellar masses of 17 and 16 M ⊙ and absolute V -band magnitude was +2.0 and +2.2 mag, respectively.Similarly, we used isochrones of metallicities −2.1 & −2.0 (holding age fixed at 12 Gyr), producing total stellar masses of 17 and 18 M ⊙ and absolute V -band magnitudes of +2.2 and +2.1 mag, respectively.For all isochrone changes, we recalculated the empirical baryonic mass-to-light ratio.This analysis appears to be very robust to small variations in age and metallicity. We note that a common approach to calculate absolute magnitude is to add up the luminosity contribution of every star and obtain an absolute magnitude for each individual mock stellar population (e.g.Martin et al. 2016a;Martínez-Delgado et al. 2022;Collins et al. 2022Collins et al. , 2023;;McQuinn et al. 2023a,b).This is in contrast to simply counting up the luminosity contribution of each individually observed star on the CMD, which gives the present day luminosity of the system.Direct counting can be challenging due to difficulties in effectively accounting for background/foreground contamination in the member star sample.UMa3/U1 has a median of 57 total stars (down to to 0.1 M ⊙ , over the 100,000 mock populations) meaning that small number statistics play a huge role.Based on the isochrones used to model UMa3/U1, the most massive star is 0.8 M ⊙ while the MSTO is at ∼ 0.77 M ⊙ .Late-stage stars (RGB, HB) have a massive contribution to the total luminosity of such a tiny system, so mock populations that happen to sample a single star in the mass range 0.77 − 0.8 M ⊙ have a dramatically boosted total absolute magnitude, skewing the distribution.For this reason, we consider total stellar mass to be a more stable tracer of the stellar content of UMa3/U1, thus providing an estimate of absolute magnitude whose variance is less heavily affected by the occasional sampling of a single late-stage star. In Smith et al. (2022), we developed a similar method to estimate absolute magnitude of the Boötes V UFD.This methodology was found to produce an excess in the number of bright, late-stage stars (RGB, HB) which resulted in an over-estimation of the total absolute magnitude.We re-derive the total stellar mass and absolute magnitude of Boötes V following the methodology described in this Section by producing 1,000 realisations using stellar population and structural parameters found in Smith et al. 
(2022).We measure the stellar mass of Boötes V to be 1044 +775 −485 M ⊙ , which, when converted to absolute V -band magnitude gives M V = −2.4+0.7 −0.6 mag.This is consistent with the V -band magnitude found by Cerny et al. (2023b) using deeper, targeted follow-up observations from GMOS on Gemini North, measured to be −3.2 +0.3 −0.3 mag. Orbital Estimation With estimates for all six phase space parameters available, we now use a simple dynamical model to investigate the orbit of UMa3/U1 and its interaction history with the Milky Way.We approximate UMa3/U1 as a point-mass in a Milky Way potential, implemented with the python-wrapped package gala (Price-Whelan 2017).The Milky Way potential used for this analysis is comprised of three components: (1) a Miyamoto & Nagai (1975) disk, (2) a Hernquist (1990) bulge, and (3) a spherical NFW dark matter halo (Navarro et al. 1996).The parameters used for the bulge and disk are taken from the listed citations whereas the NFW dark matter halo parameters (M DM 200,MW , R 200 ) are adopted from estimates by Cautun et al. (2020), where a concentration parameter of 12 is chosen.This potential produces a circular velocity at the radius of the Sun similar to a recent estimate (v circ (R ⊙ ) = 229 km s −1 ; Eilers et al. 2019).We use a right-handed Galactocentric coordinate system such that the Sun is located at (X, Y, Z) = (8.122,0.0, 0.0) kpc, with local-standard-of-rest velocities of [U,V,W] = [10.79, 11.06, 7.66] km s −1 (Robin et al. 2022). To characterize the orbit, we perform a Monte Carlo randomisation, where we generate 1,000 samples of the initial orbital conditions (i.e.input parameters { α J2000 , δ J2000 , D ⊙ , µ α cosδ, µ δ , v r }), which are previously measured in this analysis.Each parameter, aside from sky positions (α J2000 , δ J2000 ) as uncertainties are negligible, is modelled by a Gaussian distribution with standard deviation given by errorbars indicated in Table 2.The orbit of each point-mass is integrated 0.5 Gyr both forwards and backwards in time in steps of 10 −3 Gyr. Figure 7 displays the orbits of all 1,000 realisations, with the blue tracks tracing backwards in time and the red tracks tracing forwards in time.The coordinate system is arranged such that Milky Way rotation proceeds clockwise on the xy-plane depicted in the left panel of Figure 7, meaning that the orbit of UMa3/U1 is prograde.The mean orbit is shown in black and the distribution of all 1,000 realisations shows that the orbit is quite stable to uncertainties on input parameters.Several key orbital parameters, namely the pericenter (r peri , closest approach to Milky Way), apocenter (r apo , furthest point from Milky Way), z max (maximum height above the disk), time between pericenters (orbital time), time since last pericenter, and orbital eccentricity, are calculated for each orbit, and the median, along with the 16% and 84% percentiles on each parameter, are presented in Table 2. The potential that we use does not include the Large Magellanic Cloud (LMC).We note that Pace et al. (2022) computed the change in orbital parameters of all the known UFDs (at the time) given the inclusion and exclusion of the LMC.With r peri , r apo of 12.9 and 26 kpc respectively, UMa3/U1 has a smaller apocentre than all the UFDs considered by Pace et al. (2022), and a smaller pericentre than all but one UFD in the Pace et al. (2022) list when integrated in a Milky Way-only potential.To find the best sample to compare to UMa3/U1, we selected all UFDs in Pace et al. 
(2022) with r peri < 30 kpc and r apo < 50 kpc. This group comprises Tucana III, SEGUE 1, SEGUE 2, and Willman 1. Of these, Tucana III is on a nearly radial orbit and thought to be tidally disrupting. SEGUE 1, SEGUE 2, and Willman 1 are all relatively unaffected by the inclusion of the LMC in the gravitational potential in Pace et al. (2022). Given that UMa3/U1 orbits more closely to the Milky Way than any of these UFDs, we conclude that its orbit is unlikely to be strongly affected by the LMC.

DISCUSSION

We have presented the discovery of UMa3/U1, an old (τ > 11 Gyr), metal-poor ([Fe/H] ∼ −2.2), tiny (3 ± 1 pc) Milky Way satellite with an orbit that remains within ∼ 25 kpc of the galactic center. Most notably, UMa3/U1 is comprised of astonishingly few stars. We have estimated that the total stellar mass is 16 +6 −5 M ⊙ , which, when converted to magnitudes using M/L ∼ 1.4, gives a total absolute V -band magnitude of +2.2 +0.4 −0.3 mag, making UMa3/U1 the least luminous Milky Way satellite, and by some margin. Of the faint, ambiguous Milky Way satellites, the faintest are Kim 3 (M V = +0.7 mag; Kim et al. 2016) and DELVE 5 (M V = +0.4 mag; Cerny et al. 2023b). Recasting these magnitudes into total stellar mass (again assuming M/L ∼ 1.4), Kim 3 has M tot ∼ 63 M ⊙ while DELVE 5 has M tot ∼ 83 M ⊙ . Virgo I (Homma et al. 2018), the least luminous presumed dwarf galaxy, which is classified as such based on its physical half-light radius of 47 pc, has an absolute V-band magnitude of −0.7 mag, representing a total stellar mass of ∼ 230 M ⊙ . All told, UMa3/U1 is a quarter the mass of the previously least luminous Milky Way satellite and some 15× less massive (in terms of stellar mass) than the faintest suspected dwarf galaxy. We now offer some interpretations regarding the nature of UMa3/U1 as well as the origins of this faint satellite system.

4.1. On the Origin of Ursa Major III/UNIONS 1

Broadly speaking, there are two possible origins for UMa3/U1: it either formed in situ or it was accreted into the Milky Way. Based on the orbit derived in Section 3.7, UMa3/U1 does not appear to be on a disk-like or bulge-like orbit. Massari et al. (2019) describe bulge GCs as having r apo < 3.5 kpc and disk GCs as having z max < 5 kpc. UMa3/U1 does not satisfy the criteria for either category, with an apocenter of 25.2 kpc and z max = 16.7 kpc. The findings of Leaman et al. (2013) show that in situ, disk-like globular clusters rarely have a mean metallicity less than −2, and while Di Matteo et al. (2019) argue that a significant portion of the inner Milky Way halo may be comprised of metal-poor ([Fe/H] < −1), heated thick-disk stars, such stars likely extend to a metallicity of −2. The metallicity of UMa3/U1 is not strongly constrained, as the PARSEC isochrone database does not extend lower than a metallicity of [Fe/H] = −2.2, but it appears metal-poor nonetheless. UMa3/U1 is on a prograde orbit with a clear, but not too drastic, inclination with respect to the Milky Way plane and therefore could have formed in situ, though it would be fairly anomalous in its orbit, and somewhat anomalous in its metallicity, with respect to the known properties of disk globular clusters.
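The Monte Carlo orbit integration of Section 3.7, on which the orbital arguments above rely, can be sketched with gala roughly as follows. The built-in MilkyWayPotential is used here as a stand-in for the paper's Miyamoto & Nagai disk + Hernquist bulge + NFW halo (Cautun et al. 2020 parameters), astropy's default Galactocentric frame replaces the exact solar parameters quoted in Section 3.7, and the sky position and the proper-motion uncertainties below are placeholders; only the distance, proper motions, and radial velocity quoted in the text are carried over.

```python
import numpy as np
import astropy.units as u
import astropy.coordinates as coord
import gala.potential as gp
import gala.dynamics as gd

rng = np.random.default_rng(3)
n_mc = 200                      # the paper uses 1,000 realisations

# Stand-in potential: gala's built-in Milky Way model, not the exact
# Miyamoto-Nagai + Hernquist + NFW (Cautun et al. 2020) combination of the paper.
ham = gp.Hamiltonian(gp.MilkyWayPotential())

# Observables: distance, proper motions and radial velocity as quoted in the
# text; the sky position and the proper-motion errors are placeholders.
ra = np.full(n_mc, 174.0) * u.deg            # hypothetical RA of UMa3/U1
dec = np.full(n_mc, 31.0) * u.deg            # hypothetical Dec of UMa3/U1
dist = rng.normal(10.0, 1.0, n_mc) * u.kpc
pmra = rng.normal(-0.75, 0.05, n_mc) * u.mas / u.yr
pmdec = rng.normal(1.15, 0.05, n_mc) * u.mas / u.yr
vrad = rng.normal(88.6, 1.3, n_mc) * u.km / u.s

c = coord.SkyCoord(ra=ra, dec=dec, distance=dist,
                   pm_ra_cosdec=pmra, pm_dec=pmdec, radial_velocity=vrad)
# Default astropy Galactocentric frame; the paper adopts its own solar parameters.
w0 = gd.PhaseSpacePosition(c.transform_to(coord.Galactocentric()).data)

# Integrate 0.5 Gyr forward in 1 Myr steps (repeat with dt < 0 for the past).
orbits = ham.integrate_orbit(w0, dt=1.0 * u.Myr, n_steps=500)

print("median pericenter [kpc]:", np.median(orbits.pericenter().to_value(u.kpc)))
print("median apocenter  [kpc]:", np.median(orbits.apocenter().to_value(u.kpc)))
print("median eccentricity    :", np.median(orbits.eccentricity()))
```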
The alternative is that UMa3/U1 could have been accreted into the Milky Way halo. With an orbital period of 373 +32 −34 Myr, UMa3/U1 has likely had time to complete many pericentric passages of the Milky Way, which may have led to tidal stripping. There is no stellar stream in the galstreams (Mateu 2023) catalog that matches the position or kinematics of UMa3/U1, but it could be fruitful to search for a faint stream along the orbital path, given that this satellite has likely been interacting with the outer disk for many orbits.

UMa3/U1 may have been accreted on its own or as a companion to some larger system, so we computed the orbital properties of all Milky Way satellites with measured velocities and proper motions. At the total orbital energy of UMa3/U1, the orbits of the globular clusters M68 and Ruprecht 106 are found to have the most similar X, Y, and Z angular momentum, with M68 being a particularly close match in apocenter and the maximum height above/below the disk when examining other orbital parameters. M68 is thought to have been previously accreted into the Milky Way from a satellite galaxy (Yoon & Lee 2002), so this orbital similarity could indicate that UMa3/U1 and M68 were accreted as part of the same system. If UMa3/U1 is the tidally stripped remains of a dwarf galaxy then it could have hosted M68 prior to accretion, whereas if UMa3/U1 is a star cluster, perhaps it and M68 formed in the same environment. We present the orbital parameters of all Milky Way satellites in Figure 8, where Milky Way dwarf galaxies are all dwarfs within 420 kpc (∼ the distance to Leo T) from McConnachie (2012), "Classical Globular Cluster" measurements are taken from Harris (1996, 2010 edition), and the most similar systems, the globular clusters M68 and Ruprecht 106, are highlighted.

Although the low metallicity of UMa3/U1 does not exclude it from an in situ formation in the heated thick disk, the orbital parameters are inconsistent with the criteria used to define the disk and bulge globular cluster populations. We favor a scenario where UMa3/U1 was accreted into the Milky Way halo.

On the Nature of Ursa Major III/UNIONS 1

The M V − r h plane helps visualize the traits typical of dwarf galaxies, globular clusters, and the faint satellites whose nature remains ambiguous. We have reconstructed this space in Figure 9, where the "Classical Globular Cluster" and Milky Way dwarf galaxy values are taken from the same references as for Figure 8, and the faint satellite measurements were compiled from the literature. A full list of references can be found in Appendix A. UMa3/U1 is far fainter and smaller than any confirmed Milky Way dwarf galaxy, and lies in a size range occupied by faint, ambiguous satellites and globular clusters.

As suggested by Willman & Strader (2012), taken in the context of the ΛCDM framework, dwarf galaxies reside in their own dark matter halos while globular clusters do not. Dynamical mass estimators (e.g. Wolf et al. 2010; Errani et al. 2018) rely on the intrinsic stellar velocity dispersion within a system, which can then be compared with the total stellar luminosity to yield a dynamical mass-to-light ratio. The faintest dwarf galaxies have been measured to have M/L in excess of 10 3 M ⊙ /L ⊙ (Simon 2019, and references therein) while globular clusters typically have M/L ∼ 2 (Baumgardt et al. 2020), consistent with strictly baryonic mass being present.
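As a preview of the calculation carried out in the following paragraphs, such a dynamical mass-to-light estimate can be sketched with a simple Monte Carlo. The estimator is written here in the commonly quoted form of Wolf et al. (2010), M 1/2 ≈ 930 (σ v /km s −1 )² (r h /pc) M ⊙ , which stands in for Eq. (6) (not reproduced in this excerpt); the Gaussian inputs use the values quoted in the text, taking the larger of any asymmetric error bounds, as described there.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Inputs (mean, adopted 1-sigma) from the values quoted in the text; where the
# errors are asymmetric we take the larger bound, following the paper.
sigma_v = rng.normal(3.7, 1.4, n)     # km/s, intrinsic velocity dispersion
r_half  = rng.normal(3.0, 1.0, n)     # pc, half-light radius
m_star  = rng.normal(16.0, 6.0, n)    # Msun, total stellar mass
ml_baryonic = 1.4                     # Msun/Lsun for an old, metal-poor population

ok = (sigma_v > 0) & (r_half > 0) & (m_star > 0)   # discard unphysical draws

# Wolf et al. (2010) estimator in its commonly quoted form (an assumption here,
# standing in for the paper's Eq. 6): M_1/2 ~ 930 sigma^2 r_h [Msun].
m_half = 930.0 * sigma_v[ok]**2 * r_half[ok]
l_half = 0.5 * m_star[ok] / ml_baryonic            # half the total luminosity
ml_dyn = m_half / l_half

lo, med, hi = np.percentile(ml_dyn, [16, 50, 84])
print(f"M_1/2 / L_1/2 = {med:.0f} (+{hi - med:.0f} / -{med - lo:.0f}) Msun/Lsun")
```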
If UMa3/U1 is a star cluster, and is therefore composed solely of baryonic matter, then we can use the measured properties of total stellar mass and half-light radius to predict the line-of-sight velocity dispersion of its constituent stars using the mass estimator of Wolf et al. (2010): where σ v is given in km s −1 , r h is given in pc, and M 1/2 is the mass enclosed within r h .Solving for σ v , we estimate σ v ∼ 50 m s −1 .However, in Section 3.5, we measured the intrinsic velocity dispersion to be 3.7 km s −1 from 11 member stars.Now, given σ v , we compute the dynamical mass-to-light ratio of UMa3/U1 by dividing Eqn. ( 6) by the luminosity enclosed within r h , L 1/2 , which is found by converting M tot to L tot using a baryonic M/L ∼ 1.4 and taking half the result.We carry out the calculation using a 10 6 -realisation Monte Carlo procedure to propagate measurement uncertainties, where each input quantity is modelled as a Gaussian distribution with a mean and standard deviation given by the values listed in Table 2.For quantities with uneven uncertainties, we adopt the larger of the two bounds.The dynamical mass-tolight ratio is measured to be 6500 +9100 −4300 M ⊙ /L ⊙ , implying the presence of a massive dark matter halo, and that UMa3/U1 is a dwarf galaxy with astonishingly little stellar mass. In Section 3.5, we already discussed how the presence of binary stars can inflate the measured dispersion, and we identified the member stars whose velocities and uncertainties appear to contribute most significantly to the estimated value of 3.7 km s −1 .The removal of star #2 from the membership list led to σ v = 1.9 +1.4 −1.1 km s −1 , which translates to a dynamical mass-to-light ratio of 1900 +4400 −1600 M ⊙ /L ⊙ .The volatility of the velocity disper-sion with respect to the inclusion of certain candidate member stars makes it unclear as to whether it accurately represents the underlying gravitational potential of UMa3/U1.Additionally, the use of the Wolf et al. ( 2010) mass estimator assumes that the system being assessed is in dynamical equilibrium.The orbit of UMa3/U1 has a pericenter of 12.8 +0.7 −0.8 kpc and passes through the disk around 16 kpc from the galactic center where stellar mass density is ∼ 1/50th of that at the solar neighbourhood (Lian et al. 
2022).It may be the case that repeated interactions with the outer Milky Way disk has led to tidal stripping, which could mean that some of the stars identified as members are in the midst of being stripped and have become unbound.If some of the stars that are actively becoming unbound have been observed as part of our spectroscopic follow-ups, their velocities would not be indicative of the gravitational potential underlying UMa3/U1.This could lead to an inflation of the measured velocity dispersion and a subsequent overestimation of the dynamical mass-to-light ratio.We investigated the Keck/DEIMOS velocity data by searching for a velocity gradient along the major axis of UMa3/U1, but no clear gradient is visible.We also might expect that stars in the outskirts would be unbound if there is active stripping, which could give them larger velocities relative to the mean.We reran the velocity dispersion MCMC estimate algorithm where we removed the three member stars that are most distant from the centroid (stars #1, #4, & #7).However, this led to a slight increase in the intrinsic velocity dispersion, giving σ v = 4.4 +1.9 −1.3 km s −1 , implying that outer stars are not wholly responsible for the well-resolved velocity disper- sion.Using these probes of a velocity gradient and outer stars, we do not find clear signs of unbound stars.While the measured velocity dispersion of σ v = 3.7 +1.4 −1.0 km s −1 may be tracing a massive dark matter halo, we emphasize that the presence of binary stars and unbound stars could impact the interpretation of σ v as an direct indicator of dark matter.Multi-epoch spectroscopic data taken over a sufficiently long time baseline will be particularly crucial for identifying binary stars and assessing the dark matter content of UMa3/U1. Focused dynamical modelling of the evolution of UMa3/U1 in the Milky Way halo may provide further clues as to whether this faint, tiny system is a dwarf galaxy or a star cluster."Micro-galaxies" and dwarf galaxies embedded in cuspy dark matter halos are predicted to be rather resilient to tidal disruption (Errani & Peñarrubia 2020;Errani et al. 2022) and the very existence of UMa3/U1 may place constraints on various dark matter models (Errani et al. 2023).We refer the reader to work by Errani et al. (in prep.) for a detailed analysis of UMa3/U1 and the implications of its survival in the Milky Way halo. SUMMARY Ursa Major III/UNIONS 1 is the least luminous known satellite of the Milky Way.We identified this satellite as a resolved overdensity of stars consistent with an old, metal-poor isochrone in the deep, widefield survey, UNIONS.With radial velocities (from Keck/DEIMOS) and proper motions (from Gaia), we have confirmed that UMa3/U1 is a coherent system. We have measured an intrinsic velocity dispersion of 3.7 +1.4 −1.0 km s −1 which could be interpreted as the signature of a massive dark matter halo.However, we have demonstrated that the measured line-of-sight velocity dispersion (upon which the presence of dark matter is predicated) is highly sensitive to the inclusion of two stars within the sample of 11 candidate members.It is for this reason that we have referred to this system as UMa3/U1 throughout, with its nature as either a star cluster or a dwarf galaxy remaining ambiguous at this time. 
With a half-light radius of 3 pc, UMa3/U1 occupies a scale-length regime that has typically been assumed to contain star clusters, satellites devoid of dark matter.There have only been four moderate resolution spectroscopic studies of these faint, ambiguous systems prior to this one: SEGUE 3 (Fadely et al. 2011), Muñoz 1 (Muñoz et al. 2012), Draco 2 (Longeard et al. 2018), andLaevens 3 (Longeard et al. 2019).All of these programs found inconclusive evidence for the presence or lack of a surrounding dark matter halo, as even Fadely et al. (2011) were only able to put an upper bound on the mass-to-light ratio, which favored a baryon-only scenario but could not fully rule out the presence of darkmatter within 1σ uncertainties.The study presented in this work highlights the need for further mediumto-high resolution, multi-epoch spectroscopic follow-ups for the whole population of faint, ambiguous satellites.With such sparse stellar populations it remains a technical challenge to observe a sufficient number of stars with sufficient accuracy over a sufficient length of time to confidently measure velocity dispersions in these faint systems.Dedicated observing time to obtain stellar spectra within these satellites may show that some of these previously assumed star clusters are in fact tiny, faint dwarf galaxies hiding in plain sight. Each newly found satellite of the Milky Way provides an additional target for investigation and implies that contemporary (e.g.DELVE, UNIONS, DESI Legacy Imaging Surveys) and future (e.g.LSST, the Euclid space telescope) wide-field, digital, photometric surveys will continue to uncover substructure in the halo of the Milky Way.Population-wide studies of these old, faint, metal-poor systems may provide a unique opportunity to understand the processes of star formation, chemical enrichment, and dynamical interactions, as well as the structure of dark matter, extending previously known relationships to parsec length-scales and tens of solar masses. ACKNOWLEDGMENTS We would like to respectfully acknowledge the L@ IJ k w @N@n Peoples on whose traditional territory the University of Victoria stands and the Songhees, Esquimalt and W ŚANE Ć peoples whose relationships with the land continue to this day. We would like to thank the anonymous referee whose comments and feedback helped improve the depth and clarity of the manuscript. As stated in individual acknowledgements below, data collection for this work was conducted at several observing sites atop Maunakea.Therefore, the authors wish first to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the Native Hawaiian community.We are most fortunate to have the opportunity to conduct observations from this mountain. This Some of the data presented herein were obtained at Keck Observatory, which is a private 501(c)3 non-profit organization operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration.The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. 
This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.

Here, we list all references that were compiled when investigating the faint Milky Way satellites whose nature remains ambiguous. At present count, this list includes 32 known systems in the halo of the Milky Way, meaning that we exclude the large number of recently discovered globular clusters orbiting in the bulge and disk of the Milky Way, although the majority of these systems are brighter than M V ∼ −3 mag anyway. Our list of faint satellite references is as follows: Koposov 1 (Koposov et al. 2007), Koposov 2 (Koposov et al. 2007), SEGUE 3 (Fadely et al. 2011), Muñoz 1 (Muñoz et al. 2012), Balbinot 1 (Balbinot et al. 2013), Kim 1 (Kim & Jerjen 2015), Kim 2 (Kim et al. 2015), Crater/Laevens 1 (Laevens et al. 2015b; Weisz et al. 2016), Laevens 3 (Laevens et al. 2015a; Longeard et al. 2019), Draco II (Laevens et al. 2015a; Longeard et al. 2018), Eridanus III (Bechtol et al. 2015; Koposov et al. 2015; Conn et al. 2018), Pictor I (Bechtol et al. 2015; Koposov et al. 2015; Jerjen et al. 2018), SMASH 1 (Martin et al. 2016b), Kim 3 (Kim et al. 2016), DES 1 (Luque et al. 2016; Conn et al. 2018), DES J0111-1341 (Luque et al. 2017)

Ursa Major III/UNIONS 1 (UMa3/U1) was discovered during an ongoing search for faint Local Group systems in the deep wide-field Ultraviolet Near Infrared Optical Northern Survey (UNIONS). UNIONS is a consortium of Hawaii-based surveys working in conjunction to image a vast swath of the northern skies in the ugriz photometric bands. Four distinct surveys are contributing independent imaging: the Canada-France Imaging Survey (CFIS) at the Canada-France-Hawaii Telescope (CFHT) is targeting deep u and r photometry, Pan-STARRS is obtaining deep i and moderately deep z observations, the Wide Imaging with Subaru HyperSuprime-Cam of the Euclid Sky (WISHES) program is acquiring deep z at the Subaru telescope, and the Waterloo-Hawaii IfA G-band Survey (WHIGS) is responsible for deep g imaging, also with Subaru. Together, these surveys are covering 5000 deg 2 at declinations of δ > 30 deg and Galactic latitudes of |b| > 30 deg. UNIONS was in part brought
Figure 1.Detection Plot for UMa3/U1.Left: Tangent plane-projected sky positions of all stars within a 12 ′ × 12 ′ region around the overdensity.Isochrone-selected points are colored blue while unselected stars are grey.The dashed black ellipses have semi-major axes of 2 ×, 4 ×, and 6 × the half-light radius (r h ), where r h is determined by the MCMC structural-parameter estimation routine described in Section 3.2.Center : Color-magnitude diagram (CMD) of extinction-corrected r, i photometry for all stars within 4 × r h ellipse.An old (12 Gyr), metal-poor ([Fe/H] = −2.2) isochrone shifted to a distance of 10 kpc is overlayed (black), along with the matched-filter selection region (light blue).Sources are colored as in the left-hand panel.The median photometric uncertainties as a function of i magnitude are shown in black on the left side of the CMD.The median 5σ point source depth of i (24 mag) is shown as a black dashed line while the approximate saturation limit (17.5 mag) is shown as a grey dashed line.Right: Same as central panel, for all stars in an outer annulus with area equal to the area enclosed by the 4r h ellipse.Nominally, these sources should be comprised of Milky Way halo stars and incorrectly classified faint background galaxies, which demonstrates the level of contamination on the UMa3/U1 CMD. 0Figure 2 . Figure 2. Left: CMD of all matched-filtered stars within 4 × r h with a set 12 Gyr, [Fe/H] = −2.2isochrones plotted over top, shifted to distances of 9, 10, & 11 kpc to show the 10% uncertainty assigned to the distance estimate.The median photometric uncertainties as a function of i magnitude are shown in black on the left side of the CMD.Right: CMD same stars as on the left, with a set of [Fe/H] = −2.2isochrones shifted to a distance of 10 kpc where the age of the stellar population is varied between 11, 12, & 13 Gyr,showing that all such isochrones approximate the data reasonably well. Figure 3 . Figure3.Top Left: 12 ′ × 12 ′ region around the UMa3/U1.Blue markers are high likelihood members (Psat ≥ 0.75) as determined with membership analysis code using Gaia proper motions.Orange square markers do not have full astrometric measurements, but are members based on velocity and CaT EW.Red 'X' has marginal membership likelihood from Gaia and is likely not a member based on Keck/DEIMOS measurements.Black sources are all those selected by the matched filter in UNIONS (i.e.same as the blue sources in Figure1).Top Right: Suspected member stars observed with Keck/DEIMOS plotted on a UNIONS r, i extinction-corrected CMD, with a 12 Gyr, [Fe/H] = −2.2isochrone overlaid, shifted to a distance of 10 kpc.Color uncertainties are shown, though they are only just visible for the sources around i0 ∼ 20.5 mag.Bottom Left: Proper motion measurements from Gaia where coloring is the same as on the CMD.The underlying grey density plot is Milky Way foreground proper motion distribution, measured empirically from stars within 2 deg of UMa3/U1.Note that 5 of the likely members are all tightly clustered around the systemic proper motion at (µαcosδ, µ δ ) = (−0.75,1.15) mas year −1 .Bottom Right: Heliocentric velocity plotted against CaT ΣEW where coloring is the same as on CMD, and small black circles are other nonmember stars observed with Keck/DEIMOS.See Figure4for a closer look at the velocity distribution near the mean systemic velocity. Figure 4 . 
Figure4.Velocity distribution of candidate member stars (dark blue) and a star with marginal membership probability (dashed red, Psat = 0.02).The black dashed line is the velocity probability distribution function with a dispersion of 3.7 km s −1 as derived in Section 3.5. 3. 6 . Figure 5.Left: 2D and marginalized posterior probability distributions for the mean heliocentric radial velocity and its intrinsic dispersion measured from 11 likely member stars.The median values of the heliocentric radial velocity and intrinsic dispersion are shown with uncertainties indicating the 16% and 84% percentiles.Center: Same as the left panel, except we have excluded star #2 (in Table3) and note that the intrinsic velocity dispersion drops to 1.9 +1.4 −1.1 km s −1 .Right: Same as the left panel, except we have excluded stars #2 and #4 (in Table3).In this case, we no longer resolve an intrinsic velocity dispersion.Upon testing the systematic removal of all combinations of stars, the exclusion of stars #2 and #4 are the most impactful upon the velocity dispersion.We report 68% and 95% percentile upper limits on the velocity dispersion of 2.3 and 4.4 km s −1 respectively. Figure 6 . Figure 6.Distribution of total stellar mass calculated by creating mock stellar populations of UMa3/U1.The solid black line indicates the median value (∼ 16 M⊙), the dashed grey lines on either side of the solid line show the 16% and 84% percentiles, and the left-most, dotted grey line indicates the 99.9% confidence level for the upper bound on the stellar mass (∼ 38 M⊙). Figure 7 . Figure 7. Mean and distribution of orbits resulting from MC analysis in Section 3.7, plotted in the galactic XY, XZ, & ZY planes (from left to right), where the rotation of the Milky Way proceeds clockwise in the XY plane.UMa3/U1 is indicated as a yellow star while the position of the sun on this coordinate system is shown as a black '×'.The orbit is integrated both backwards (blue tracks) and forwards (red tracks) in time by 0.5 Gyr in steps of 10 −3 Gyr from the starting point of each orbit.The mean orbit goes through the yellow star, but each individual orbit has its own starting point, as the position and heliocentric distance are randomized in the MC procedure. Figure 8 . Figure8.Left: Total energy plotted against the Z-component of the angular momentum for Milky Way globular clusters (red circles), dwarf galaxies (blue circles) and UMa3/U1 (yellow square).The systems with similar orbits to UMa3/U1 are M68 (purple triangle) and Ruprecht 106 (green triangle).Center: Apocenter distance (in kpc) plotted against pericenter distance (in kpc) with the same coloring as in the left panel.Right: Maximum height above/below the Milky Way disk (in kpc) plotted against orbital eccentricity with the same coloring as in the left panel. 
This work is based on data obtained as part of the Canada-France Imaging Survey, a CFHT large program of the National Research Council of Canada and the French Centre National de la Recherche Scientifique. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA Saclay, at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers (INSU) of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This research used the facilities of the Canadian Astronomy Data Centre operated by the National Research Council of Canada with the support of the Canadian Space Agency. This research is based in part on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. Pan-STARRS is a project of the Institute for Astronomy of the University of Hawaii, and is supported by the NASA SSO Near Earth Observation Program under grants 80NSSC18K0971, NNX14AM74G, NNX12AR65G, NNX13AQ47G, NNX08AR22G, YORPD20 2-0014 and by the State of Hawaii.

Table 1. Flat priors for each parameter in the MCMC analysis.

Table 2. Measured and derived properties for Ursa Major 3/UNIONS 1. Notes: (a) All isochrones with τ > 11 Gyr fit the data well, but we adopt τ = 12 Gyr for computations. (b) [Fe/H] = −2.2 is the most metal-poor isochrone available in our set.

Table 3. Candidate Member Stars Targeted by Keck/DEIMOS. Note (a): Upon further inspection of the extracted 1D spectra, this source may be a distant quasar, as there are no CaT absorption features and a broad emission-like bump at ∼ 7200 Å. The best-fit model spectrum is not informative, and therefore the measured velocity is excluded.

Figure 9. M V − r h plane with all known Milky Way satellites included. Dwarf galaxies are plotted in blue, "Classical Globular Clusters" are plotted in red (where classical refers to those in the Harris Catalog), and faint, ambiguous Milky Way satellites are open black diamonds. UMa3/U1 is shown as an orange square with r h measurement uncertainties. M V uncertainties are about the same size as the square marker.
15,472.8
2023-11-16T00:00:00.000
[ "Physics" ]
On characterizing the Quantum Geometry underlying Asymptotic Safety The asymptotic safety program builds on a high-energy completion of gravity based on the Reuter fixed point, a non-trivial fixed point of the gravitational renormalization group flow. At this fixed point the canonical mass-dimension of coupling constants is balanced by anomalous dimensions induced by quantum fluctuations such that the theory enjoys quantum scale invariance in the ultraviolet. The crucial role played by the quantum fluctuations suggests that the geometry associated with the fixed point exhibits non-manifold like properties. In this work, we continue the characterization of this geometry employing the composite operator formalism based on the effective average action. Explicitly, we give a relation between the anomalous dimensions of geometric operators on a background $d$-sphere and the stability matrix encoding the linearized renormalization group flow in the vicinity of the fixed point. The eigenvalue spectrum of the stability matrix is analyzed in detail and we identify a"perturbative regime"where the spectral properties are governed by canonical power counting. Our results recover the feature that quantum gravity fluctuations turn the (classically marginal) $R^2$-operator into a relevant one. Moreover, we find strong indications that higher-order curvature terms present in the two-point function play a crucial role in guaranteeing the predictive power of the Reuter fixed point. INTRODUCTION General relativity taught us to think of gravity in terms of geometric properties of spacetime. The motion of freely falling particles is determined by the spacetime metric g µν which, in turn, is determined dynamically from Einstein's equations. It is then an intriguing question what replaces the concept of a spacetime manifold once gravity is promoted to a quantum theory. Typically, the resulting geometric structure is referred to as "quantum geometry" where the precise meaning of the term varies among different quantum gravity programs. An approach towards a unified picture of the quantum gravity landscape could then build on identifying distinguished properties which characterize the underlying quantum geometry and lend themselves to a comparison between different programs. While this line of research is still in its infancy, a first step in this direction, building on the concept of generalized dimensions, has been very fruitful. In particular, the spectral dimension d s , measuring the return probability of a diffusing particle in the quantum geometry, has been computed in a wide range of programs including Causal Dynamical Triangulations [1], Asymptotic Safety [2,3,4,5], Loop Quantum Gravity [6], string theory [7], causal set theory [8,9,10], the Wheeler-DeWitt equation [11], non-commutative geometry [12,13,14], and Hořava-Lifshitz gravity [15], see [16,17] for reviews. A striking insight originating from this comparison is that, at microscopic distances, d s = 2 rather universally. The interpretation of d s as the dimension of a theories momentum space, forwarded in [18], then suggests that the dimensional reduction of the momentum space may be a universal feature of any viable theory of quantum gravity. Following the suggestion [19], 1 a refined picture of quantum geometry could use the (anomalous) scaling dimension associated with geometric operators, comprising, e.g., spacetime volumes, integrated spacetime curvatures, and geodesic distances. 
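As a purely illustrative aside on the definition used above: the spectral dimension can be read off from the return probability of a diffusing particle, P(s) ∼ s^(−d s /2), so that d s = −2 d ln P / d ln s. The toy sketch below recovers d s ≈ d for an ordinary random walk on a flat two-dimensional lattice; it is not related to any computation performed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_dimension(d, n_walkers=100_000, n_steps=200):
    """Estimate the spectral dimension from the return probability of a simple
    random walk on a flat d-dimensional lattice, P(s) ~ s^(-d_s/2); for flat
    space one expects d_s = d at large step number s."""
    pos = np.zeros((n_walkers, d), dtype=np.int64)
    returns = np.zeros(n_steps)
    for s in range(1, n_steps + 1):
        axis = rng.integers(0, d, n_walkers)
        step = rng.choice([-1, 1], n_walkers)
        pos[np.arange(n_walkers), axis] += step
        returns[s - 1] = np.mean(np.all(pos == 0, axis=1))
    # Fit log P vs log s over even steps (odd-step returns vanish identically).
    s = np.arange(1, n_steps + 1)
    mask = (s % 2 == 0) & (returns > 0) & (s > 20)
    slope, _ = np.polyfit(np.log(s[mask]), np.log(returns[mask]), 1)
    return -2.0 * slope

print("flat d = 2 lattice: d_s ~", round(spectral_dimension(2), 2))
```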
Within asymptotic safety program [22,23], also reviewed in [24,25,26,27,28], these quantities have been studied based on the composite operator formalism [19,29,30,31,32]. This formalism allows to determine the anomalous scaling dimension of geometric operators based on an approximation of the quantum-corrected graviton propagator. 2 For the Reuter fixed point in four dimensions the quantum corrections to the scaling of four-volumes V d=4 ∼ L 4−γ 0 were determined in [19]. The result γ 0 = 3.986 lent itself to the interpretation that "spacetime could be much more empty than expected". Recently, ref. [32] generalized this computation by determining the anomalous scaling dimensions associated with an infinite class of geometric operators where R denotes the Ricci scalar constructed from g µν . While it was possible to extract analytic expressions for all γ n , it also became apparent that the single-operator approximation underlying the computation comes with systematic uncertainties. In parallel, the anomalous scaling properties of subvolumes and geodesic distances resulting from the renormalization group fixed points underlying Stelle gravity and Weyl gravity have recently be computed in [31]. In combination, the results show that the scaling of geometric quantities carries information about the renormalization group fixed point providing the high-energy completion of the theory. The purpose of present work is two-fold: Firstly, we extend the analysis [32] beyond the single-operator approximation and compute the complete matrix of anomalous dimensions associated with the class (1). This information allows to access the spectrum of the scaling matrix. We expect that the data linked to the scaling dimensions of the geometrical operators gives a refined characterization of the quantum spacetime underlying the Reuter fixed point. Our results are closely related but complementary to the ones obtained from solving the Wetterich equation [34,35,36,37] for effective average actions of f (R)type [38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60]. The comparison between the two complementary computations indicates that one indeed needs to go beyond the single-operator approximation in order to reconcile the results. Secondly, our work gives information on the gauge-dependence of the anomalous dimensions associated with the operators (1). In this light, the value γ 0 = 3.986 found in [19] may be rather extreme and quantum corrections to the scaling of volumes could be less drastic. The rest of this work is organized as follows. Section 2 introduces the composite operator formalism and the propagators entering in our computation. The generating functional determining the matrix of anomalous dimensions is computed in Section 3. The link to the stability matrix governing the gravitational renormalization group flow in the vicinity of the Reuter fixed point is made in Section 4.1 and the spectral properties of the matrix are analyzed in Section 4.2. Section 5 contains our concluding remarks and comments on the possibility of developing a geometric picture of Asymptotic Safety from random geometry. 
The technical details underlying our computation have been relegated to three appendices: Appendix A reviews the technical background for evaluating operator traces using the earlytime expansion of the heat-kernel, Appendix B derives the beta functions governing the renormalization group flow of gravity in the Einstein-Hilbert truncation employing geometric gauge [61,62], and Appendix C lists the two-point functions entering into the computation. COMPUTATIONAL FRAMEWORK AND SETUP Functional renormalization group methods provide a powerful tool for investigating the appearance of quantum scale invariance and its phenomenological consequences [63]. In particular, the Wetterich equation [34,35,36,37], plays a key role in studying the renormalization group (RG) flow of gravity and gravity-matter systems based on explicit computations. It realizes the idea of Wilson's modern viewpoint on renormalization in the sense that it captures the RG flow of a theory generated by integrating out quantum fluctuations shellby-shell in momentum space. Concretely, eq. (2) encodes the change of the effective average action Γ k when integrating out quantum fluctuations with momentum p close to the coarse graining scale k. The flow of Γ k is then sourced by the right-hand side where Γ (2) k denotes the second variation of Γ k with respect to the fluctuation fields, the regulator R k provides a k-dependent mass term for quantum fluctuations with momentum p 2 k 2 , and Tr includes a sum over all fluctuation fields and an integral over loop-momenta. Lowering k "unsuppresses" further fluctuations which are then integrated out and change the value of the effective couplings contained in Γ k . For later convenience, we then also introduce the "RG-time" t ≡ ln(k/k 0 ) with k 0 an arbitrary reference scale. In practice, the Wetterich equation allows to extract non-perturbative information about a theories RG flow by restricting Γ k to a subset of all possible interaction monomials and subsequently solving eq. (2) on this subspace. For gravity and gravity-matter systems such computations get technically involved rather quickly. Thus, it is interesting to have an alternative equation for studying the scaling properties of sets of operators O n , n = 1, · · · , N, which are not included in Γ k . Within the effective average action framework such an equation is provided by the composite operator equation [64,65,66,19]. As a starting point, the operators O n are promoted to scale-dependent quantities by multiplying with a k-dependent matrix Z nm (k) The analogy of Z nm to a wave-function renormalization then suggests to introduce the matrix of anomalous dimensions γ whose components are given by Following the derivation [19], the γ nm can be computed from the composite operator equation where O (2) n denotes the second functional derivative of O n with respect to the fluctuation fields. For the geometric operators (1) the evaluation of γ has so far focused on the diagonal matrix elements γ nn , c.f. [19,32]. The goal of the present work is to extend this analysis and, for the first time, study the eigenvalues of γ ij associated with the operators (1). COMPUTING THE MATRIX OF ANOMALOUS DIMENSIONS The computation of γ nm requires two inputs. First, one needs to specify the set of operators O n . In the present work, these will be given by the geometric operators (1). Secondly, one needs to specify the gravitational propagators Γ (2) k . 
These will be derived from Γ k approximated by the Euclidean Einstein-Hilbert (EH) action supplemented by a suitable choice for the gauge-fixing action (50). In practice, we obtain Γ (2) k from the background field method, performing a linear split of the spacetime metric g µν into a background metric ḡ µν and fluctuations h µν . In order to simplify the subsequent computation, we then choose the background metric as the metric on the d-sphere, so that the background curvature satisfies (8). Moreover, we carry out a transverse-traceless (TT) decomposition of the metric fluctuations [67], where the component fields are subject to the differential constraints. The Jacobians associated with the decomposition (9) are taken into account by a subsequent field redefinition, and it is understood that in the sequel all propagators and the matrix elements O (2) i are the ones associated with the rescaled fields. In combination with the background (8), this decomposition ensures that the differential operators appearing within the trace combine into Laplacians ∆ ≡ −ḡ µν D µ D ν constructed from the background metric [61]. We then specify the gauge-fixing to geometric gauge, setting ρ = 0 and subsequently evoking the Landau limit α → 0. Substituting the general form of the matrix elements listed in Table 2 into the right-hand side of (5) and tracing the α-dependence, one finds that the contributions of the transverse vector fluctuations ξ µ and the scalar σ drop out from the composite operator equation. As a consequence, the anomalous dimensions are only sourced by the transverse-traceless and conformal fluctuations. The relevant matrix elements are then readily taken from Table 2. They read as given in eqs. (12)-(14), where R k (∆) = k 2 r(∆/k 2 ) is a scalar regulator function which later on will be specified to the Litim regulator (47). Substituting the expressions (12)-(14) into the composite operator equation (5) then yields (15)-(16). Here the subscripts T and S indicate that the trace is over transverse-traceless (T) and scalar (S) fluctuations, respectively, and the symbol | Om indicates the projection of the right-hand side onto the operator O m . The explicit form of the operator-valued functions W T and W S is given in eq. (17).

Before delving into the explicit evaluation of the traces, the following structural remark is in order. Inspecting (17), one observes that the right-hand side associated with the nth row contains at least n − 2 powers of the background curvature R. This entails that the matrix of anomalous dimensions has the triangular form displayed in eq. (18). The explicit value of the matrix entries (16) is readily computed employing the heat-kernel techniques reviewed in Appendix A. In practice, we will truncate the heat-kernel expansion at order R 2 , setting the coefficients a n , n ≥ 3 to zero. This is in the spirit of the "paramagnetic approximation" suggested in [68], in which the curvature terms relevant for asymptotic safety originate from the curvature terms contained in the propagators. For the matrix entries γ nm this entails that all entries on the diagonal and below (marked in black) are computed exactly, while the terms above the diagonal (marked in blue) will receive additional contributions from higher orders in the heat-kernel. In particular, all entries γ nm with m ≥ n + 3 are generated solely from expanding the curvature terms proportional to C T and C S in the transverse-traceless and scalar propagators.
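The threshold functions themselves are not reproduced in this excerpt. As a small illustration of why the Litim regulator is convenient in evaluating such traces, the sketch below checks numerically that the standard threshold function Φ p n (w) collapses to the closed form 1/[Γ(n+1)(1+w) p ] once r(y) = (1−y)θ(1−y) is inserted; the paper's q p n (w) of eq. (42) may differ from this Φ p n (w) by normalization conventions, so this is a regulator illustration rather than a reproduction of eq. (42).

```python
from math import gamma
from scipy.integrate import quad

def r_litim(y):
    # Litim (optimized) regulator shape: r(y) = (1 - y) * theta(1 - y).
    return max(1.0 - y, 0.0)

def phi_numeric(n, p, w):
    """Phi^p_n(w) = (1/Gamma(n)) * Int_0^inf dy y^(n-1) [r - y r'] / (y + r + w)^p.
    For the Litim regulator, r - y r' = theta(1 - y) (the delta-function pieces
    cancel), so the integrand has support only on 0 <= y <= 1."""
    integrand = lambda y: y**(n - 1) / (y + r_litim(y) + w)**p
    val, _ = quad(integrand, 0.0, 1.0)
    return val / gamma(n)

def phi_closed(n, p, w):
    # Closed form for the Litim regulator: 1 / (Gamma(n+1) * (1 + w)^p).
    return 1.0 / (gamma(n + 1) * (1.0 + w)**p)

for n, p, w in [(1, 1, 0.0), (2, 1, 0.3), (2, 2, -0.2), (3, 2, 0.5)]:
    print(f"n={n}, p={p}, w={w:+.1f}: "
          f"numeric={phi_numeric(n, p, w):.6f}  closed={phi_closed(n, p, w):.6f}")
```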
Evaluating (16) based on these approximations then results in an infinite family of generating functionals Γ n (R), n ≥ 0 ∈ N: Here we introduced the dimensionless couplings and the anomalous dimension of Newton's coupling η N ≡ (G k ) −1 ∂ t G k . The threshold functions q p n (w) are defined in eq. (42) and their arguments in the transverse-traceless and scalar sector are The coefficients c i k depend on d and n. In the tensor sector they are given by Their counterparts in the scalar sector read Finally, the a i n are the heat-kernel coefficients listed in Table 1. The entries in γ are then generated as the coefficients of the Laurent series expansion For instance, the two lines of entries below the diagonal, γ n,n−2 , n ≥ 2 and γ n,n−1 , n ≥ 1, obtained in this way are Eqs. (19) -(24) constitute the main result of this work. They give completely analytic terms for all entries of the anomalous dimension matrix γ. At this stage, a few remarks are in order. 1) The entries of the anomalous dimension matrix carry a specific k-dependence: γ nm ∝ (k 2 ) n−m . This can be understood by noticing that the matrix γ acts on operators O m with different canonical mass dimensions. The k-dependence then guarantees that the eigenvalues of γ are independent of k. 2) The entries γ n,n−2 are solely generated from the scalar contributions, i.e., the transverse-traceless fluctuations do not enter into these matrix elements. Technically, this feature is associated with the Hessians O (2) n , cf. Table 2: the matrix elements in the scalar sector start atR n−2 while the transversetraceless sector starts atR n−1 . 3) Notably, d = 4 is special. In this case the entries above the diagonal, γ nm with m ≥ n+3 are generated from the transverse-traceless sector only. All contributions from the scalar sector are proportional to at least one power of C S and thus vanish if d = 4. 4) The matrix γ is a function of the (dimensionless) couplings entering the Einstein-Hilbert action. Thus γ assigns a set of anomalous dimensions to every point in the g-λ-plane. Since γ is proportional to g, the magnitude of the anomalous dimensions becomes small if g ≪ 1. In particular, γ vanishes at the Gaussian fixed point g * = λ * = 0 where one recovers the classical scaling of the geometric operators. SCALING ANALYSIS FOR THE REUTER FIXED POINT Starting from the general result (24), we now proceed and discuss its implications for the quantum geometry associated with Asymptotic Safety. Relating the scaling of geometric operators and the RG flow By construction, the matrix γ assigns anomalous scaling dimensions to any point in the g-λ plane. In order to characterize the quantum geometry related to Asymptotic Safety, we study the properties of this matrix at the Reuter fixed point found in Appendix B, cf. eq. (60) Reuter fixed point: d = 3 : From the definition of the beta function ∂ t u n = β un (u i ) and the fact that at a fixed point β un (u * i ) = 0, it follows that the properties of the RG flow in the vicinity of the fixed point are encoded in the stability Let us denote the eigenvalues of B by λ n so that spec(B) = {λ n }. Eq. (27) then entails that eigendirections corresponding to eigenvalues with a negative (positive) real part attract (repel) the RG flow when k is increased, i.e., they correspond to UV-relevant (UV-irrelevant) directions. 
The number of UV-relevant directions then gives the number of free parameters which are not fixed by the asymptotic safety condition: along these directions the RG flow automatically approaches the Reuter fixed point as k → ∞. Formally, one can then derive a relation between γ and the stability matrix B [69,32], where d n = d − 2n is the canonical scaling dimension of the operator O n . This relation is remarkable in the following sense: the construction of the (approximate) fixed point solution (26). Before embarking on this discussion, the following cautious remark is in order though. While the composite operator formalism may allow one to obtain information on the stability properties of a fixed point beyond the approximation used for the propagators, it is also conceivable that the formalism becomes unreliable for eigenvalues λ n with n ≥ N max . Heuristically, this is suggested by the following argument: when studying fixed point solutions in the f (R)-approximation, the propagators include powers of R beyond the linear terms captured by the Einstein-Hilbert action. These terms give rise to additional contributions in the generating functional (24) which may become increasingly important in assessing the spectrum of B for eigenvalues with increasing n. This picture is also suggested by our results in Section 4.2. This said, we now investigate the properties of the stability matrix (28). Here we will resort to the following frameworks:

I: The spectrum of B generated by the full generating functional (19), including the contribution of zero-modes in the heat-kernel for d = 4.

II: The conformally reduced approximation [70]. In this case, the contribution of the tensor fluctuations is set to zero by hand, so that γ contains the contribution from the scalar trace in (16) only.

The latter choice is motivated by the observation that this framework gives rise to the spec(B) that is most robust under increasing the size of the matrix B. Clearly, one could easily envision other approximations which could be applied to the general result (19). Examples include the exclusion of the zero-mode terms appearing in d = 4, or the "sparse approximation" where only the two lines above and below the diagonal are non-trivial, i.e., the entries in the upper-triangular sector which are solely created by expanding the curvature terms contained in the gravitational propagators are eliminated. In order to understand the workings (and limitations) of the composite operator formalism, the frameworks I and II are sufficient though. We checked by explicit computations that the exclusion of zero-modes or evaluating the spectrum of B in the sparse approximation leads to the same qualitative picture.

Spectral properties of the stability matrix

We first give the diagonal entries γ nn within framework I. This corresponds to the "single-operator approximation" of the composite operator formalism employed in [19,32]. At the fixed points (26) one finds the relations given in eq. (29). These relations exhibit two remarkable features. Firstly, the structure of O n (cf. Table 2) entails that the entries of γ are second-order polynomials in n. It is then remarkable that the diagonal entries essentially follow a linear scaling law up to n ≈ 30 (d = 3) or even exactly (d = 4). Secondly, eq. (29) entails that the diagonal entries of the stability matrix B are always negative. Thus the single-operator approximation predicts that all eigendirections of the Reuter fixed point in the f (R)-space are UV-attractive.
It was noted in [32] that this is actually in tension with results obtained from solving the Wetterich equation on the same space. On this basis, it is expected that the off-diagonal entries in γ play a crucial role in determining the spectrum of B. We now discuss the properties of the stability matrices B evaluated at the Reuter fixed points (26). The generating functional (19) allows to generate truncations of B of size N = 100 rather easily and determine the resulting spectrum of eigenvalues numerically. The structure of B then entails that there is always one eigenvalue which is independent of the matrix size. For framework I its value is given by In the conformally reduced approximation (framework II) in d = 3 this feature extends to the second eigenvalue as well d = 3 : The properties of spec(B) beyond these universal eigenvalues obtained from the framework I in d = 4 and d = 3 as well as in the conformally reduced approximation in d = 3 (framework II) are shown in Figs. 1, 2 and 3, respectively. The left diagrams show the real part, Re(λ n ), n = 1, · · · , N of the stability matrices of size N = 25 (left line, green dots), N = 50 (middle line, orange dots), and N = 100 (right line, blue dots). The lines clearly illustrate that increasing N adds additional eigenvalues coming with both increasingly positive and increasingly negative real parts. This feature is shared by all frameworks discussed above. The middle diagrams illustrate the location of spec(B) for N = 100 in the complex plane. While the patterns are quite distinct, they share the existence of nodes where complex eigenvalues are created which then move out into the complex plane along distinguished lines. The right diagrams trace the first two negative eigenvalues as a function of the matrix size N. In all cases, the structure of B implies that the first eigenvalue is independent of N while the other parts of the spectrum exhibit an N-dependence. As illustrated in Figs. 1 and 2, the eigenvalues λ n , n ≥ 2 follow intriguing periodicity patterns. The average over the second and third eigenvalues found in the matrices of size up to N = 100 3 Our errors are purely statistical, giving the standard deviation based on the data set of eigenvalues. An estimate of the systematic errors is highly non-trivial and will not be attempted in this work. Carefully analyzing the N-dependence of spec(B) reveals that there is a close relation between the distribution of eigenvalues in the complex plane (middle diagrams) and the oscillations of λ 2 visible in the left diagrams: the oscillations are linked to the appearance of new complex pairs of eigenvalues. Focusing on the four-dimensional case where this feature is most prominent, one finds that singling out the values of λ 2 just before the occurrence of the new pair of complex eigenvalues in spec(B) essentially selects the λ 2 (N) constituting the maxima in the oscillations. The resulting subset of eigenvalues is displayed in the inset shown in Fig. 1 and is significantly more stable than the full set. The statistical analysis shows that in this case d = 4 :λ I,subset so that the fluctuations are reduced by a factor two as compared to the full set (32). At this stage, it is interesting to compare the averages (32) to the eigenvalue spectrum obtained from the smallest non-trivial stability matrix B with size N = 3: Thus we conclude that small values of N already give a good estimate of the (averaged) spectrum of B. We close this section with a general remark on the structure of spec(B). 
The stability matrix is not tied to the Reuter fixed point but well-defined on the entire g-λ-plane: the generating functional (19) assigns an infinite tower of eigenvalues to each point in this plane. At the Gaussian fixed point, (λ * , g * ) = (0, 0), γ = 0 and spec(B) follows from classical power counting. The strength of the quantum corrections to spec(B) is then controlled by the values of g and λ. In particular, there is a region in the vicinity of the Gaussian fixed point where these corrections are small. This motivates defining "perturbative domains" P by the condition that spec(B) is dominated by its classical part. Concretely, we define Quantum Geometry of Asymptotic Safety Loosely speaking, the definitions of these domains corresponds to imposing that the quantum corrections are not strong enough to turn more than one classically UV-marginal (d = 4) or UV-irrelevant (d = 3) eigendirection into a relevant one. CONCLUSIONS AND OUTLOOK In this work, we applied to composite operator formalism to construct a completely analytic expression for the matrix γ encoding the anomalous scaling dimensions of the geometrical operators O n ≡ d d x √ gR n , n ∈ N, on a background sphere. Our work constitutes the first instance where the composite operator formalism for gravity is extended beyond the single-operator approximation. Within the geometric gauge adopted in our work, the anomalous dimensions originate from the transverse-traceless and trace mode of the gravitational fluctuations. The gauge-modes, corresponding to the vector sector of the transversetraceless decomposition, decouple. Our derivation made two assumptions: firstly, we assumed that the propagators of the fluctuation fields can be approximated by the (gauge-fixed) Einstein-Hilbert action. Secondly, we assumed that terms appearing in the early-time expansion of the heat-kernel beyond the R 2 -level can be neglected. On this basis, we derived the generating functional (19) from which the matrix of anomalous dimensions (18) can be generated efficiently. As illustrated in Section 4 the stability matrix B resulting from the composite operator formalism allows to study the stability properties of the Reuter fixed point. This novel type of analysis provided the following structural insights on Asymptotic Safety: The analysis of the spectrum of the stability matrix as a function of the dimensionless Newton coupling g and cosmological constant λ reveals the existence of a domain where the eigenvalues are dominated by classical power counting. The resulting spectrum is then similar to the one encountered when solving the Wetterich equation in the polynomial f (R)-approximation which determined the eigenvalues of the stability matrix for N = 6 [38,39], N = 8 [40], N = 35 [44,47], and lately also N = 71 [58]. In particular, ref. [58] reported that for large values of n the real parts of the eigenvalues λ n follow an almost Gaussian behavior λ f (R) n ≈ a n − b , a = 2.042 ± 0.002 , b = 2.91 ± 0.05 . where a and b are the best-fit values. As indicated in Figure 4, the present computation places the Reuter fixed point outside of this scaling domain, i.e., for sufficiently large matrices one obtains new eigenvalues coming with both positive and negative real parts. This makes it conceivable that the higherorder curvature terms appearing in the propagators of the f (R)-approximation play a crucial role in extending the domain such that it includes the fixed point, thereby guaranteeing its predictive power. 
Conversely, one may use the structure of the stability matrix to analyze the conditions on its entries such that its eigenvalues exhibit "apparent convergence" discussed in [71]. Arguably, the most intriguing result of our work is the spectral analysis of the stability matrix showing the distributions of its eigenvalues in the complex plane, c.f. the middle diagrams of Figs. 1, 2, and 3. The resulting patterns are reminiscent of the Lee-Yang theory for phase transitions [72]. This suggests two immediate applications. First, the status of Asymptotic Safety makes it conceivable that there are actually an infinite number of Reuter-type fixed points arising from gravity and gravity-matter systems. Understanding the characteristic features of their eigenvalue distributions in terms of nodal points creating complex eigenvalues may then constitute a powerful tool for classifying these fixed points and giving a precise definition to the notion of "gravity-dominated" renormalization group fixed points in gravitymatter systems. Secondly, tracing the eigenvalues λ n along their Lee-Yang type orbits in the complex plane could provide a novel tool for testing the convergence of the eigenvalue distribution of B also outside of the perturbative domains (35) where the spectrum is governed by classical power counting. Clearly, it would be interesting to follow up on these points in the future. As a by-product our analysis also computed the diagonal entries of the anomalous dimension matrix in geometric gauge, cf. eq. (29). It is instructive to compare this result to the value of the diagonal entries obtained in harmonic gauge [19,32] This identifies two features which are robust under a change of gauge-fixing: in both cases, the values of γ nn up to n ≃ O(10) follows a linear scaling law: in all cases the coefficients multiplying the quadratic terms are small or even vanishing when adopting geometric gauge in four dimensions. Secondly, the entries in the stability matrix B nn are negative definite for all values n. At the same time, this comparison gives a first idea of the accuracy to which the composite operator formalism in the single-operator approximation is capable to determine the anomalous scaling dimension of the geometric operators: most likely, the results have the status of order-of-magnitude estimates: they should not be interpreted as "precision results" which one should try and reproduce to the given accuracy. Conceptually, it would be interesting to understand (and eliminate) the gauge-dependence of the result. Most likely, this will require imposing on-shell conditions to the master equation (5), following e.g., the ideas outlined in [73,74]. We leave this point to future work though. As one of its most intricate features, the composite operator formalism employed in this work could act as a connector between Asymptotic Safety [22,23] and more geometric approaches to quantum gravity based on causal dynamical triangulations [75,76] or random geometry. In d = 2 dimensions, a natural benchmark would involve a quantitative comparison of scaling properties associated with the geodesic length recently considered in [19,29,30,31,32] and exact computations for random discrete surfaces in the absence of matter fields [21,77] as well as rigorous and numerical bounds arising from Liouville Gravity in the presence of matter [78,79]. On the renormalization group side this will involve taking limits akin to [80]. 
Conversely, it is interesting to generalize the two-dimensional constructions to higher dimensions. The connection between the stability matrix B and the anomalous scaling dimension γ of geometric operators may then be an interesting link allowing to probe Asymptotic Safety based on geometric constructions of a quantum spacetime. Table 1. Heat-kernel coefficients a i n for scalars (S), transverse vectors (T V ), and transverse-traceless symmetric tensors (T ) on a background d-sphere [81]. The terms proportional to δ d,2 and δ d,4 are linked to zero modes of the decomposition (9) on the 2and 4-sphere. The dash −− indicates that the corresponding coefficient is not entering into the present computation. APPENDICES A HEAT-KERNEL, MELLIN TRANSFORMS, AND THRESHOLD FUNCTIONS The calculation of γ requires the evaluation of the operator traces appearing on the right-hand side of the composite operator equation (5). This computation can be done effectively by applying the early-time heat-kernel expansion for minimal second-order differential operators ∆ ≡ −ḡ µνD µDν . Following the ideas advocated in [81,61], we carry out a transverse-traceless decomposition of the fluctuation fields. Paired with a maximally symmetric background geometry, this decomposition ensures that all differential operators in the trace arguments organize themselves into Laplacians ∆. These traces can then be evaluated using the Seeley-deWitt expansion of the heat-kernel on the d-sphere S d : Here i = {S, T V, T } labels the type of field on which the Laplacian acts and the dots represent higherorder curvature terms. The relevant coefficients a i n have been computed in [81] and are listed in Table 1. Their derivation manifestly uses the identities (8) in order to simplify the heat-kernel expansion on a general manifold [82]. The expansion (38) is readily generalized to functions of the Laplacian. Introducing the Q-functionals one has [40] Tr In order to write γ and the beta functions of the Einstein-Hilbert truncation in a compact form, it is convenient to express the Q-functionals in terms of the dimensionless threshold functions [37] Here r(z) is the dimensionless profile function associated with the scalar regulator R k (z) = k 2 r(z) introduced in eq. (46) and the prime denotes a derivative with respect to the argument. For later convenience we also define the combination The arguments of the traces appearing in γ, eq. (16), and the Einstein-Hilbert truncation studied in Appendix B have a canoncial form. Defining P k ≡ z + R k (z), the identity allows to convert the corresponding Q-functionals into the dimensionless threshold functions. For q = 0 this reduces to Notably, the second set of identities suffices to derive the beta functions of the Einstein-Hilbert truncation while the evaluation of γ requires the generalization (43). For maximally symmetric backgrounds the background curvatureR is covariantly constant. As a consequence, it has the status of a parameter and can be included in the argument of the threshold functions. Expansions in powers ofR can then be constructed from the recursion relations Throughout the work, we specify the (scalar) regulator to the Litim regulator [83,84]. In this case the dimensionless profile function r(z) is given by with Θ(x) the unit-step function. 
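Since the Litim profile confines the mode integrals to the unit interval, the closed form quoted next is straightforward to cross-check numerically. The sketch below (Python/SciPy) assumes the standard definition of the threshold functions from [37], Φ^p_n(w) = (1/Γ(n)) ∫ dz z^{n−1} [r(z) − z r′(z)]/(z + r(z) + w)^p, which is not spelled out in the excerpt and should be read as an assumption.

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def phi_litim(n, p, w):
    # With r(z) = (1 - z) * Theta(1 - z) one has r(z) - z r'(z) = Theta(1 - z)
    # and z + r(z) = 1 on the support, so the integrand vanishes for z > 1.
    integrand = lambda z: z**(n - 1) / (1.0 + w)**p
    value, _ = quad(integrand, 0.0, 1.0)
    return value / gamma(n)

def phi_closed_form(n, p, w):
    return 1.0 / (gamma(n + 1) * (1.0 + w)**p)

for n, p, w in [(1, 1, 0.2), (2, 2, -0.1), (3, 2, 0.5)]:
    print(n, p, w, phi_litim(n, p, w), phi_closed_form(n, p, w))

Both columns agree to quadrature accuracy, reproducing the 1/Γ(n+1) prefactor of the analytic result quoted below.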
For this choice the integrals (41) can be carried out analytically, yielding $\Phi^{p,\mathrm{Litim}}_{n}(w) = \frac{1}{\Gamma(n+1)}\,\frac{1}{(1+w)^{p}}$. B THE EINSTEIN-HILBERT TRUNCATION IN GENERAL GAUGE Structurally, the composite operator equation provides a map from the couplings contained in the Hessian Γ^(2)_k to the matrix of anomalous dimensions γ. This map is independent of the RG flow entailed by the Wetterich equation. In order to characterize the geometry associated with the Reuter fixed point, the map has to be evaluated at the location of the fixed point. This appendix therefore studies the flow of Γ_k in the Einstein-Hilbert truncation supplemented by a general gauge-fixing term. The key result is the position of the Reuter fixed point, eq. (26), which underlies the spectral analysis of Sect. 4. Our analysis essentially follows [85,61,62], to which we refer for further details. The Einstein-Hilbert truncation approximates the effective average action Γ_k[h; ḡ] by the Einstein-Hilbert action. This ansatz contains two scale-dependent couplings, Newton's coupling G_k and the cosmological constant Λ_k. In the present analysis, we work with a generic gauge-fixing term where α and ρ are free, dimensionless parameters. The harmonic gauge used in [69,32] corresponds to α = 1, ρ = d/2 − 1, while the present computation simplifies significantly when adopting geometric gauge, setting ρ = 0 before invoking the Landau limit α → 0. The ghost action associated with (50) is given in eq. (51). Following the strategy employed in the gravitational sector, cf. eq. (9), the fields C̄_µ, C_µ are decomposed into their transverse and longitudinal parts, followed by a rescaling. The part of the ghost action quadratic in the fluctuation fields then takes the form whose matrix elements are listed in Table 2. We now proceed by constructing the non-zero entries of the Hessian Γ^(2)_k. These are obtained by expanding Γ_k to second order in the fluctuation fields, substituting the transverse-traceless decomposition (9) and (52), and implementing the field redefinitions (11) and (53). Subsequently taking two functional variations with respect to the fluctuation fields then leads to the matrix elements listed in the middle block of Table 2. The final ingredient entering the right-hand side of the Wetterich equation is the regulator R_k. We generate this matrix from the substitution rule dressing each Laplacian by a scalar regulator R_k(∆). The latter then provides a mass for fluctuation modes with momenta p² ≲ k². In the nomenclature introduced in [40] this corresponds to choosing a type I regulator. The non-zero entries of R_k generated in this way are listed in the bottom block of Table 2. We now have all the ingredients to compute the beta functions resulting from the Wetterich equation projected onto the Einstein-Hilbert action. Adopting the geometric gauge ρ = 0, α → 0 used in the main section, all traces appearing in the equation simplify to the Q-functionals evaluated in eq. (44). Defining the dimensionless couplings g ≡ k^{d−2} G_k and λ ≡ k^{−2} Λ_k, where the anomalous dimension of Newton's coupling is parameterized by [37] $\eta_N(g,\lambda) = \frac{g\,B_1(\lambda)}{1 - g\,B_2(\lambda)}$, the explicit computation yields the beta functions (56). Here the threshold functions Φ^p_n, Φ̃^p_n and q^p_n(w) are defined in eqs. (41) and (42) and their arguments w_T and w_S have been introduced in (21). It is now straightforward to localize the Reuter fixed point by determining the roots of the beta functions (56) numerically.
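Locating the fixed point as a simultaneous zero of the beta functions and extracting the stability eigenvalues from the Jacobian is a routine numerical task; the sketch below (Python/SciPy) only illustrates this workflow. The beta functions used here are schematic placeholders, not the expressions (56) derived above.

import numpy as np
from scipy.optimize import fsolve

def beta(x):
    g, lam = x
    beta_g = 2.0 * g - 3.0 * g**2          # placeholder, not eq. (56)
    beta_lam = -2.0 * lam + 0.5 * g        # placeholder, not eq. (56)
    return [beta_g, beta_lam]

x_star = fsolve(beta, x0=[0.5, 0.1])
print("fixed point (g*, lambda*):", x_star)

# Stability matrix B_ij = d beta_i / d x_j at the fixed point, via central
# finite differences; its eigenvalues govern the linearized flow.
eps = 1e-6
J = np.zeros((2, 2))
for j in range(2):
    dx = np.zeros(2)
    dx[j] = eps
    J[:, j] = (np.array(beta(x_star + dx)) - np.array(beta(x_star - dx))) / (2 * eps)
print("stability eigenvalues:", np.linalg.eigvals(J))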
For the Litim regulator (47) Analyzing the stability properties of the RG flow in its vicinity, it is found that the fixed point constitutes a UV attractor, with the eigenvalues of the stability matrix given by These results agree with the ones found in [61] at the 10% level. The difference can be traced back to the two distinct regularization procedures employed in the computations, so that the findings are in qualitative agreement. This completes our analysis of the Einstein-Hilbert truncation underlying the scaling analysis in the main part of this work. C MATRIX-ELEMENTS OF GEOMETRIC OPERATORS The expansions of O n and Γ k in the fluctuation fields are readily computed using the xPert extension [86] of xAct. For completeness, the relevant expressions are listed in Table 2. The d-dependent coefficients C i multiplying the curvature terms in Γ (2) k are Table 2. Components of the Hessians entering the right-hand side of the composite operator equation (5) and the Wetterich equation evaluated for the Einstein-Hilbert truncation. The fluctuations are expressed by the component fields (9) and (52) followed by the field redefinitions (11) and (53). The matrix elements are labeled by the fluctuation fields, i.e., O n | h T h T results from taking two functional derivatives of O n with respect to h T . The off-diagonal terms are symmetric, e.g., O
8,841
2020-03-16T00:00:00.000
[ "Physics" ]
Statistics review 2: Samples and populations The previous review in this series introduced the notion of data description and outlined some of the more common summary measures used to describe a dataset. However, a dataset is typically only of interest for the information it provides regarding the population from which it was drawn. The present review focuses on estimation of population values from a sample. Strictly speaking, the theoretical Normal distribution is continuous, as shown in Fig. 1. However, data such as those shown in Fig. 2, which presents admission haemoglobin concentrations from intensive care patients, often provide an excellent approximation in practice. There are many other theoretical distributions that may be encountered in medical data, for example the Binomial or Poisson [2], but the Normal distribution is the most common. It is additionally important because it has many useful properties and is central to many statistical techniques. In fact, it is not uncommon for other distributions to tend toward the Normal distribution as the sample size increases, meaning that it is often possible to use a Normal approximation. This is the case with both the Binomial and Poisson distributions. One of the most important features of the Normal distribution is that it is entirely defined by two quantities: its mean and its standard deviation (SD). The mean determines where the peak occurs and the SD determines the shape of the curve. For example, Fig. 3 shows two Normal curves. Both have the same mean and therefore have their peak at the same value. However, one curve has a large SD, indicating a large amount of deviation from the mean, and this is reflected in its short, wide shape. The other has a small SD, indicating that individual values generally lie close to the mean, and this is reflected in the tall, narrow distribution. It is possible to write down the equation for a Normal curve and, from this, to calculate the area underneath that falls between any two values. Because the Normal curve is defined entirely by its mean and SD, the following rules (represented by parts a-c of Fig. 4) will always apply regardless of the specific values of these quantities: (a) 68.3% of the distribution falls within 1 SD of the mean (i.e. between mean - SD and mean + SD); (b) 95.4% of the distribution falls between mean - 2 SD and mean + 2 SD; (c) 99.7% of the distribution falls between mean - 3 SD and mean + 3 SD; and so on. The proportion of the Normal curve that falls between other ranges (not necessarily symmetrical, as here) and, alternatively, the range that contains a particular proportion of the Normal curve can both be calculated from tabulated values [3]. However, one proportion and range of particular interest is as follows (represented by part d of Fig. 4): 95% of the distribution falls between mean - 1.96 SD and mean + 1.96 SD. The standard deviation and reference range The properties of the Normal distribution described above lead to another useful measure of variability in a dataset. Rather than using the SD in isolation, the 95% reference range can be calculated as (mean - 1.96 SD) to (mean + 1.96 SD), provided that the data are (approximately) Normally distributed. This range will contain approximately 95% of the data. It is also possible to define a 90% reference range, a 99% reference range and so on in the same way, but conventionally the 95% reference range is the most commonly used.
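The 1.96 multiplier and the coverage figures quoted above follow directly from the Normal distribution; a minimal sketch (Python/SciPy, with purely illustrative numbers in the final call) checks them and packages the reference-range calculation.

from scipy.stats import norm

# Probability mass within 1, 2 and 3 SDs of the mean: 0.683, 0.954, 0.997.
for k in (1, 2, 3):
    print(f"within {k} SD: {norm.cdf(k) - norm.cdf(-k):.3f}")

# Exact multiplier for a central 95% range (1.96 to two decimal places).
print("95% multiplier:", round(norm.ppf(0.975), 2))

def reference_range(mean, sd, z=1.96):
    # 95% reference range, valid for (approximately) Normally distributed data.
    return mean - z * sd, mean + z * sd

print(reference_range(10.0, 2.0))   # illustrative values only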
For example, consider admission haemoglobin concentrations from a sample of 48 intensive care patients (see Statistics review 1 for details). The mean and SD haemoglobin concentration are 9.9 g/dl and 2.0 g/dl, respectively. The 95% reference range for haemoglobin concentration in these patients is therefore: (9.9 -[1.96 × 2.0]) to (9.9 + [1.96 × 2.0]) = 5.98 to 13.82 g/dl. Thus, approximately 95% of all haemoglobin measurements in this dataset should lie between 5.98 and 13.82 g/dl. Comparing this with the measurements recorded in Table 1 of Statistics review 1, there are three observations outside this range. In other words, 94% (45/48) of all observations are within the reference range, as expected. Admission haemoglobin concentrations from 2849 intensive care patients. Figure 3 Normal curves with small and large standard deviations (SDs). Figure 1 The Normal distribution. Now consider the data shown in Fig. 5. These are blood lactate measurements taken from 99 intensive care patients on admission to the ICU. The mean and SD of these measurements are 2.74 mmol/l and 2.60 mmol/l, respectively, corresponding to a 95% reference range of -2.36 to +7.84 mmol/l. Clearly this lower limit is impossible because lactate concentration must be greater than 0, and this arises because the data are not Normally distributed. Calculating reference ranges and other statistical quantities without first checking the distribution of the data is a common mistake and can lead to extremely misleading results and erroneous conclusions. In this case the error was obvious, but this will not always be the case. It is therefore essential that any assumptions underlying statistical calculations are carefully checked before proceeding. In the current example a simple transformation (e.g. logarithmic) may make the data approximately Normal, in which case a reference range could legitimately be calculated before transforming back to the original scale (see Statistics review 1 for details). Two quantities that are related to the SD and reference range are the standard error (SE) and confidence interval. These quantities have some similarities but they measure very different things and it is important that they should not be confused. From sample to population As mentioned above, a sample is generally collected and calculations performed on it in order to draw inferences regarding the population from which it was drawn. However, this sample is only one of a large number of possible samples that might have been drawn. All of these samples will differ in terms of the individuals and observations that they contain, and so an estimate of a population value from a single sample will not necessarily be representative of the population. It is therefore important to measure the variability that is inherent in the sample estimate. For simplicity, the remainder of the present review concentrates specifically on estimation of a population mean. Consider all possible samples of fixed size (n) drawn from a population. Each of these samples has its own mean and these means will vary between samples. Because of this variation, the sample means will have a distribution of their own. In fact, if the samples are sufficiently large (greater than Available online http://ccforum.com/content/6/2/143 Figure 4 Areas under the Normal curve. Because the Normal distribution is defined entirely by its mean and standard deviation (SD), the following rules apply: (a) 68.3% of the distribution falls within 1 SD of the mean (i.e. 
between mean -SD and mean + SD); (b) 95.4% of the distribution falls between mean -2 SD and mean + 2 SD; (c) 99.7% of the distribution falls between mean -3 SD and mean + 3 SD; and (d) 95% of the distribution falls between mean -1.96 SD and mean + 1.96 SD. approximately 30 in practice) then this distribution of sample means is known to be Normal, regardless of the underlying distribution of the population. This is a very powerful result and is a consequence of what is known as the Central Limit Theorem. Because of this it is possible to calculate the mean and SD of the sample means. The mean of all the sample means is equal to the population mean (because every possible sample will contain every individual the same number of times). Just as the SD in a sample measures the deviation of individual values from the sample mean, the SD of the sample means measures the deviation of individual sample means from the population mean. In other words it measures the variability in the sample means. In order to distinguish it from the sample SD, it is known as the standard error (SE). Like the SD, a large SE indicates that there is much variation in the sample means and that many lie a long way from the population mean. Similarly, a small SE indicates little variation between the sample means. The size of the SE depends on the variation between individuals in the population and on the sample size, and is calculated as follows: where σ is the SD of the population and n is the sample size. In practice, σ is unknown but the sample SD will generally provide a good estimate and so the SE is estimated by the following equation: It can be seen from this that the SE will always be considerably smaller than the SD in a sample. This is because there is less variability between the sample means than between individual values. For example, an individual admission haemoglo-bin level of 8 g/dl is not uncommon, but to obtain a sample of 100 patients with a mean haemoglobin level of 8 g/dl would require the majority to have scores well below average, and this is unlikely to occur in practice if the sample is truly representative of the ICU patient population. It is also clear that larger sample sizes lead to smaller standard errors (because the denominator, √n, is larger). In other words, large sample sizes produce more precise estimates of the population value in question. This is an important point to bear in mind when deciding on the size of sample required for a particular study, and will be covered in greater detail in a subsequent review on sample size calculations. The standard error and confidence interval Because sample means are Normally distributed, it should be possible to use the same theory as for the reference range to calculate a range of values in which 95% of sample means lie. In practice, the population mean (the mean of all sample means) is unknown but there is an extremely useful quantity, known as the 95% confidence interval, which can be obtained in the same way. The 95% confidence interval is invaluable in estimation because it provides a range of values within which the true population mean is likely to lie. The 95% confidence interval is calculated from a single sample using the mean and SE (derived from the SD, as described above). It is defined as follows: (sample mean -1.96 SE) to (sample mean + 1.96 SE). To appreciate the value of the 95% confidence interval, consider Fig. 6. This shows the (hypothetical) distribution of sample means centred around the population mean. 
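That distribution of sample means can be made concrete with a small simulation (Python/NumPy, illustrative only): repeated samples are drawn from a deliberately skewed population, and the spread of the resulting sample means is compared with σ/√n.

import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=1_000_000)   # skewed population
sigma = population.std()

for n in (10, 30, 100):
    means = np.array([rng.choice(population, size=n).mean() for _ in range(5000)])
    print(f"n={n:3d}  SD of sample means = {means.std():.3f}  "
          f"sigma/sqrt(n) = {sigma / np.sqrt(n):.3f}")

The agreement improves with n, and a histogram of the sample means is already close to a Normal curve for n around 30, in line with the Central Limit Theorem.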
Because the SE is the SD of the distribution of all sample means, approximately 95% of all sample means will lie within 1.96 SEs of the (unknown) population mean, as indicated by the shaded area. A 95% confidence interval calculated from a sample with a mean that lies within this shaded area (e.g. confidence interval A in Fig. 6) will contain the true population mean. Conversely, a 95% confidence interval based on a sample with a mean outside this area (e.g. confidence interval B in Fig. 6) will not include the population mean. In practice it is impossible to know whether a sample falls into the first or second category; however, because 95% of all sample means fall into the shaded area, a confidence interval that is based on a single sample is likely to contain the true population mean 95% of the time. In other words, given a 95% confidence interval based on a single sample, the investigator can be 95% confident that the true population mean (i.e. the real measurement of interest) lies somewhere within that range. Equally important is that 5% of such intervals will not contain the true population value. The choice of 95% is purely arbitrary, and using a 99% confidence interval (calculated as mean ± 2.58 SE) instead will make it more likely that the true value is contained within the range. However, the cost of this change is that the range will be wider and therefore less precise. Figure 5 Lactate concentrations in 99 intensive care patients. As an example, consider the sample of 48 intensive care patients whose admission haemoglobin concentrations are described above. The mean and SD of that dataset are 9.9 g/dl and 2.0 g/dl, respectively, which corresponds to a 95% reference range of 5.98 to 13.82 g/dl. Calculation of the 95% confidence interval relies on the SE, which in this case is 2.0/√48 = 0.29. The 95% confidence interval is then: (9.9 - [1.96 × 0.29]) to (9.9 + [1.96 × 0.29]) = 9.33 to 10.47 g/dl. So, given this sample, it is likely that the population mean haemoglobin concentration is between 9.33 and 10.47 g/dl. Note that this range is substantially narrower than the corresponding 95% reference range (i.e. 5.98 to 13.82 g/dl; see above). If the sample were based on 480 patients rather than just 48, then the SE would be considerably smaller (SE = 2.0/√480 = 0.09) and the 95% confidence interval (9.72 to 10.08 g/dl) would be correspondingly narrower. Of course a confidence interval can only be interpreted in the context of the population from which the sample was drawn. For example, a confidence interval for the admission haemoglobin concentrations of a representative sample of postoperative cardiac surgical intensive care patients provides a range of values in which the population mean admission haemoglobin concentration is likely to lie, in postoperative cardiac surgical intensive care patients. It does not provide information on the likely range of admission haemoglobin concentrations in medical intensive care patients. Confidence intervals for smaller samples The calculation of a 95% confidence interval, as described above, relies on two assumptions: that the distribution of sample means is approximately Normal and that the population SD can be approximated by the sample SD. These assumptions, particularly the first, will generally be valid if the sample is sufficiently large. There may be occasions when these assumptions break down, however, and there are alternative methods that can be used in these circumstances.
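The two intervals quoted above can be reproduced from the summary statistics alone; a minimal sketch (Python):

import math

def ci_95(mean, sd, n, z=1.96):
    se = sd / math.sqrt(n)
    return mean - z * se, mean + z * se

print(ci_95(9.9, 2.0, 48))    # SE ~ 0.29 -> roughly (9.33, 10.47) g/dl
print(ci_95(9.9, 2.0, 480))   # SE ~ 0.09 -> roughly (9.72, 10.08) g/dl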
If the population distribution is extremely non-Normal and the sample size is very small then it may be necessary to use nonparametric methods. (These will be discussed in a subsequent review.) However, in most situations the problem can be dealt with using the t-distribution in place of the Normal distribution. The t-distribution is similar in shape to the Normal distribution, being symmetrical and unimodal, but is generally more spread out with longer tails. The exact shape depends on a quantity known as the 'degrees of freedom', which in this context is equal to the sample size minus 1. The t distribution for a sample size of 5 (degrees of freedom = 4) is shown in comparison to the Normal distribution in Fig. 7, in which the longer tails of the t-distribution are clearly shown. However, the t-distribution tends toward the Normal distribution (i.e. it becomes less spread out) as the degrees of freedom/sample size increase. Fig. 8 shows the t-distribution corresponding to a sample size of 20 (degrees of freedom = 19), and it can be seen that it is already very similar to the corresponding Normal curve. Calculating a confidence interval using the t-distribution is very similar to calculating it using the Normal distribution, as described above. In the case of the Normal distribution, the calculation is based on the fact that 95% of sample means fall within 1.96 SEs of the population mean. The longer tails of the t-distribution mean that it is necessary to go slightly further away from the mean to pick up 95% of all sample means. However, the calculation is similar, with only the figure of 1.96 changing. The alternative multiplication factor depends on the degrees of freedom of the t-distribution in question, and some typical values are presented in Table 1. As an example, consider the admission haemoglobin concentrations described above. The mean and SD are 9.9 g/dl and 2.0 g/dl, respectively. If the sample were based on 10 patients rather than 48, it would be more appropriate to use the t-distribution to calculate a 95% confidence interval. In this case the 95% confidence interval is given by the following: mean ± 2.26 SE. The SE based on a sample size of 10 is 0.63, and so the 95% confidence interval is 8.47 to 11.33 g/dl. Figure 6 The distribution of sample means. The shaded area represents the range of values in which 95% of sample means lie. Confidence interval A is calculated from a sample with a mean that lies within this shaded area, and contains the true population mean. Confidence interval B, however, is calculated from a sample with a mean that falls outside the shaded area, and does not contain the population mean. SE=standard error. Note that as the sample sizes increase the multiplication factors shown in Table 1 decrease toward 1.96 (the multiplication factor for an infinite sample size is 1.96). The larger multiplication factors for smaller samples result in a wider confidence interval, and this reflects the uncertainty in the estimate of the population SD by the sample SD. The use of the t-distribution is known to be extremely robust and will therefore provide a valid confidence interval unless the population distribution is severely non-Normal. Standard deviation or standard error? There is often a great deal of confusion between SDs and SEs (and, equivalently, between reference ranges and confidence intervals). The SD (and reference range) describes the amount of variability between individuals within a single sample. 
The SE (and confidence interval) measures the precision with which a population value (i.e. mean) is estimated by a single sample. The question of which measure to use is well summed up by Campbell and Machin [4] in the following mnemonic: "If the purpose is Descriptive use standard Deviation; if the purpose is Estimation use standard Error." Confidence intervals are an extremely useful part of any statistical analysis, and are referred to extensively in the remaining reviews in this series. The present review concentrates on calculation of a confidence interval for a single mean. However, the results presented here apply equally to population proportions, rates, differences, ratios and so on. For details on how to calculate appropriate SEs and confidence intervals, refer to Kirkwood [2] and Altman [3]. Key messages The SD and 95% reference range describe variability within a sample. These quantities are best used when the objective is description. The SE and 95% confidence interval describe variability between samples, and therefore provide a measure of the precision of a population value estimated from a single sample. In other words, a 95% confidence interval provides a range of values within which the true population value of interest is likely to lie. These quantities are best used when the objective is estimation. Figure 7 The Normal and t (with 4 degrees of freedom) distributions. Figure 8 The Normal and t (with 19 degrees of freedom) distributions.
4,400.2
2002-02-07T00:00:00.000
[ "Computer Science", "Psychology" ]
Experiment Centric Teaching for Reconfigurable Processors This paper presents a setup for teaching configware to master students. Our approach focuses on experiment and learning-by-doing while being supported by research activity. The central project we submit to students addresses building up a simple RISC processor that supports an extensible instruction set thanks to its reconfigurable functional unit. The originality comes from the fact that the students make use of the Biniou framework. Biniou is a research tool whose approach covers tasks ranging from describing the RFU, through synthesizing it as VHDL code, to implementing applications over it. Once done, students exhibit a deep understanding of the domain, ensuring the ability to adapt quickly to state-of-the-art techniques. Introduction Innovative lectures and lab courses are required to offer high-quality training in the field of configware. Being either an electrical engineer (EE) or a computer scientist (CS) expert will not be enough to meet the needs we foresee in terms of interdisciplinarity for the future. As teachers, our goal is not to output Computer-Aided Design (CAD) end-users but highly educated experts, who will easily self-adapt to new technologies. Our contribution to this in-depth rethinking of curricula goes through providing cross-expertise training centered around CAD environment design. CAD tools embed the full expertise, both from an architectural and from an algorithmic point of view. Affording the design of CAD environments ensures a full understanding of the domain. As teachers, we make use of some research tools we have developed, which offer a full design suite for reconfigurable accelerators. The key principle behind this is to let students design and implement simple schemes (processors, processor-to-accelerator coupling, etc.) while taking advantage of research tools that promote high productivity. After students have manipulated these toy examples, they show a promising learning curve when addressing state-of-the-art technology (processor soft cores, the Xilinx design suite, Fast Simplex Links (FSL), etc.). This second stage is when the performance issue arises. At this point, some discussions happen: fine- versus coarse-grained accelerators, compiler-friendly architecture, reconfigurable functional unit versus coprocessor, and so forth.
Splitting the learning activities in such a way emphasizes simplicity.A first consideration is that a simple design always takes less time to finish than a complex one, exhibits more readability, and offers a better support for further refactoring.Another thing about simple designs is that they require knowledge to recognize.Knowledge is different from information.Information is what you get as a student, when gaining access to a lecture.However, you can have plenty of information and no knowledge.Knowledge is insight into your problem domain that develops over time.Our teaching approach aims at accompanying students from information to knowledge.This paper reports this experience.The rest of the paper is organized as follows: Section 2 introduces the lecture's context along with the experiment centric approach we followed.Section 3 focuses on the project we submit to students.Section 4 shifts from the toy example to a more realistic scope.Section 5 summarizes the benefits of our approach.university of Brest.This curriculum addresses emerging trends in embedded systems and highly focuses on reconfigurable embedded systems, with a set of courses for teaching hardware/configware/software codesign.The master gathers students from both CS and EE former curricula.The current master size is 12, coming from half a dozen countries.Half of the students are former local students hence own a sound background in terms of CS but suffer from lacks in electronic system design.The reconfigurable computing courses are organized around two main topics covering the hardware (architectures) and software (CAD tools and compiler basics) aspects.These courses enable students to build from their previous knowledge a cross-expertise giving a complete vision of the domain. Experiment Centric Teaching A strength of this teaching approach is to partially rely on a research environment rather than purely on Xilinx hands-on tutorials.This offers the opportunity to exercise internal changes on algorithms and architectures, and to address both state-of-the art concepts and both some more prospective topics such as innovative-and still confidential in the industry-architectural trends. First an overview of the reconfigurable computing (RC) landscape is introduced.Both industrial and academic architectural solutions are considered.This course is structured in three parts: (i) overview of RC for embedded systems (2 sessions), (ii) virtualization techniques for RC (2 sessions), (iii) Modeling and generation of reconfigurable architectures (1 session).The second item addresses both state-of-the-art tools and algorithms in one hand as well as locally designed tools in another hand. The key idea is that students tend towards learning classical (or vendors's) tools so that they can bring a direct added-value to any employer of the field, hence get in an interesting and well-paid job. However, tools obviously encapsulate the whole domainspecific expertise, and letting students "open the box" closes the gap between "lambda users" and experts.This takes up the challenge of providing a valuable and innovative curriculum.Obviously a single class is not wide enough to address all the-above mentioned items, but this course is closely integrated with some others such as "Numeric and Symbolic synthesis" or "Test & Simulation". Legacy CAD Development. 
The research group behind this initiative is the Architectures & Systems team from the Lab-STICC (UMR 3192).This group owns a legacy expertise in designing parallel reconfigurable processor (the Armen [1] project was initiated in 1991) but has been focusing on CAD environment developments (Madeo framework [2]) for the past 15 years.The Madeo framework is an open and extensible modeling environment that allows to represent reconfigurable architectures then acts as a one-stop shopping point providing basic functionality to the programmer (place&route, floorplanning, simulation, etc.).The Madeo project ended in 2006 while being integrated as facilities in a new framework.This new framework, named Biniou, embeds additional capabilities such as, from the hardware side, VHDL export of the modeled architecture and, from the software side, wider interchange format and extended synthesis support.Biniou targets reconfigurable System-On-Chip (SOCs) design and offers middleware facilities to favor a modular design of reconfigurable IPs within the SOC. Figure 1 provides an overview of Biniou.In the application side (right) an application is specified as C-code, memory access patterns and some optimizing contexts we use to tailor the application.This side outputs some postsynthesis files conforming to mainstream formats (Verilog, EDIF, BLIF, PLA).Results can be further processed by the Biniou Place and Route (P&R) layer to produce a bitstream.Of course the bitstream matches the specification of the underlying reconfigurable target, being the target modeled using a specific Architecture Description Language (ADL).A model is issued on which the P&R layer can operate as previously mentioned, and a behavioral VHDL description of the target is generated for simulation purposes and FPGA implementation. Once a bitstream is generated out of the application specification, the designer can download it to configure its platform. Also some debugging facilities can be added either in the architecture itself or as parts of the application [3]. From Research to Teaching.Biniou has been exercised as a teaching platform for Master 2 students.This happened in a reconfigurable computing course.In addition to lectures, students practice reconfigurable computing through practical sessions and exercise their new skills through a project.This project covers VHDL hand-writing, reconfigurable architecture modeling and programming, code generation, and modules assembly in order to exhibit a simple processor with a reconfigurable functional unit.This extra-unit allows to extend its instructions set. Practical Sessions. Practical sessions are organized as three activities.The first activity is to gather documentation and publications related to a particular aspect of the course; the students have to present their short bibliographic study individually in front of the whole class. The second activity is centered around algorithms used to implement applications over a reconfigurable architecture: point-to-point and global routers, floorplanners, placers.Some data structures such as Transitive Closure Graphs (TCG) are introduced later on in order to point out the need for refactoring and design patterns use [4].This bridges the software expertise to the covered domain (CAD tools for reconfigurable architecture). 
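The routing algorithms mentioned above are small enough to be prototyped before touching the full tool chain. The sketch below (Python, illustrative only and unrelated to Biniou's actual implementation) is a breadth-first, Lee-style point-to-point router on an occupancy grid.

from collections import deque

def route(grid, src, dst):
    # grid[r][c] == 1 marks an occupied routing cell; 0 is free.
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}
    queue = deque([src])
    while queue:
        cell = queue.popleft()
        if cell == dst:
            path = []
            while cell is not None:      # walk the predecessor chain back to src
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None                          # no free path with the current occupancy

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(route(grid, (0, 0), (3, 3)))

Global routers, floorplanners and placers can be exercised on the same grid abstraction.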
The third activity is related to tools and formats.Three slots are dedicated to VHDL that most of the students do not know.Manual description of fine grained reconfigurable architecture is introduced within this amount of time. Some sessions are dedicated to practicing required tools; students manipulate logic synthesis tools (SIS [5], ABC), file formats conversion (Verilog, EDIF, BLIF, PLA), and behavioral synthesis according to some data access pattern (Biniou).We also offer a web-based tool [6] to output RTL netlist that students use to exercise several options for netlist generation. Students create their own FPGA using Biniou, that is further reused in the project under a tuned up version. Project Description. The project consists in designing a simple RISC processor, that can perform spatial execution through a Reconfigurable Functional Unit (RFU). International Journal of Reconfigurable Computing Coupling an RFU along with a processor to get a reconfigurable processor is one out of other alternatives for accelerating intensive tasks.The concept of instruction set metamorphosis [7] is defined and a set of architectures are described.For example, P-RISC [8], Garp [9], XiRISC [10], and Molen [11].A specific focus is set on the Molen programming model and its architectural organization.The Molen approach is presented as a meeting point between the software domain (sequential programming and compiler) and the hardware domain (specific instruction designed in hardware). Figure 2 illustrates the schematic view of the whole processor, including the RFU. The processor supports a restricted instructions set, that conforms to a SET-EXECUTE-STORE Molen paradigm [11].In order to keep the project reasonably simple, we restrict the use of the RFU to implementing Data Flow Graphs (DFGs) on one hand, and we provide students with the Biniou framework on the other hand.Restricting the use of the reconfigurable part as a functional units also mitigates the complexity of the whole design.However, this covers the need for being reachable by average students while preserving the ability to arouse's top students curiosity, by offering a set of interesting perspectives for further developments. This project let students build and stress new ideas in many disciplines related to reconfigurable computing such as spatial versus temporal execution, architectures, programming environments, and algorithms. 2.3.1. Context.This project takes place during the fall semester, from mid October to early January.A noticeable point is that almost no free slots within the timetable are dedicated to this project, that overlaps with courses as well as with "concurrent" projects.This intends to stress students and make them aware of handling competing priorities. Expected Deliverables. We define three milestones and three deliverables.The milestones are practical sessions in front of the teacher. Three main milestones are as follows. M1: RISC processor, running its provided test programs. M2: RFU, with Galois Field-based operations implemented as bitstream. Schedule. 
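Spatial execution on the RFU amounts to evaluating a Data Flow Graph whose depth, rather than its node count, sets the latency. A small sketch (Python, illustrative) computing ASAP levels for a toy DFG makes the contrast with sequential, one-instruction-per-node execution explicit.

def asap_levels(dfg, inputs):
    # dfg maps each operation node to its operand names; inputs are level 0.
    level = {x: 0 for x in inputs}
    pending = dict(dfg)
    while pending:
        for node, ops in list(pending.items()):
            if all(o in level for o in ops):
                level[node] = 1 + max(level[o] for o in ops)
                del pending[node]
    return level

# y = (a*b + c*d) * (c*d + e)   -- illustrative expression
dfg = {"t1": ("a", "b"), "t2": ("c", "d"),
       "t3": ("t1", "t2"), "t4": ("t2", "e"), "y": ("t3", "t4")}
levels = asap_levels(dfg, inputs={"a", "b", "c", "d", "e"})
print("levels:", levels)
print("spatial depth:", max(levels.values()),    # 3 logic levels on the RFU
      "vs", len(dfg), "sequential operations")   # 5 instructions on the processor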
The schedule is provided during the project "kick-off ".To prevent students from postponing managing this project we use the collaborative platform to monitor activities, to specify time-windows for uploading deliverables, and to broadcast updates/comments/additional information.Reminders can be sent by mail when the deadline is approaching.Once the deadline expires, over-due deliverables are applied a penalty per extra half-day.[12].However, keeping in mind that half the students have never exercised writing VHDL description, and given practice makes success, we decided to let students design their own processor.Although, a preliminary version with missing control structures was provided in order to ensure a minimal compatibility through the designs. Obviously, the matter here was to ease evaluation from a scholar point of view as well as to force students to handle kind of legacy system and refactoring rather than full redesign. We also provided the instruction set and opcode.In an ideal world, and with a more generous amount of time to spend on the project, as the design is highly modular, building a working design by picking best-fit modules out of several designs would have also been an interesting issue.1.This information is provided to ensure compatibility as well as programmability (as no compiler support is considered). Test Bench Program. Students are familiar with agile programming, test-driven development and characterization tests.When designing a processor, the same approach applies but at a wider granularity (program execution instead of unit test).Hence, we distributed some test bench programs.Analyzing at specific timestamps (including after the application stops) the internal states (some signals plus registers contents) leads to design scoring. Reconfigurable RFU Design 3.2.1.Background.In order to give to students the main architectural concepts behind FPGAs, we first focus on a simple mesh of basic processing elements composed of one 4 entries Look-Up Table (LUT) each.Combination of the basic blocks (LUT, switch, buses, and topology) is presented as a template to be extended (in terms of routing structure and processing elements) for building real FPGA.A more realistic example from the industry (a Xilinx Virtex-5) is considered with a highlight on template basic blocks in Xilinx schematics.As a result, students are able to locate the essential elements for a better understanding of state-of-theart architectures.Drawbacks of fine-grained architectures such as low computation density and routing congestion are highlighted to introduce coarse-grained architectures.This type of reconfigurable architecture is firstly presented as a specialization of FPGA suited for DSP application domain. Modeling. Before entering the generation phase, students learn to hand-design an FPGA.Every elements of a basic FPGA are detailed and a corresponding VHDL behavioral description is provided.The bottom-up description starts from atomic elements, such as pass gates, multiplexers, that are combined to form input/output blocks and configurable logic blocks.A daisy chain architecture is detailed as well as a configuration controller. 
Then, the second part describes the Biniou generation of the architecture from an ADL description.An FPGA is described using an ADL increasing the level of abstraction compared to a VHDL description.The configuration plan is described as a set of domains to support partial reconfiguration.The approach relies on model transformation, with an automatic VHDL code generation from a high-level description. RFU Structure. As a preliminary approach, students have to design an island style mesh architecture, what means sizing the matrix, defining a basic cell, and isolating border cells that deserve special attention.The basic cell is either used as is for the internal cells and tuned to generate the border cells because their structure is slightly different from the common template.Defining the domains appears as shown by Figure 3. The basic cell schematic view is provided by Figure 4. Ultimately, the full matrix appears as an array of N 2 cells as illustrated by the snapshot of the Biniou P&R layer (Figure 5). Reconfigurable Functional Unit Integration.The reconfigurable functional unit (RFU) is composed of three main components: the reconfigurable matrix (RM) generated by Biniou, a configuration cache, and the RFU controller both hand-written (see bottom right in Figure 2). Configuration is triggered by the processor controller which reacts to a SET instruction by sending a signal to the RFU controller.The RFU controller drives the configuration cache controller, which provides back a bitstream on demand. The processor controller gets an acknowledgment after the configuration completed. One critical issue about the processor-RFU coupling lies in data transfers to/from the RFU.Students have to design a simple adapter which connects a set of RFU's iopads to the processor registers holding input and output data (Op1, Op2, and Res in Figure 2). Figure 6 gives a detailed view of the adapter. Application Synthesis over the RFU. To let students figuring out the benefit of adding the RFU to the processor design, it is desirable that students can assess and compare the impact of several options.One classical approach lies in isolating a portion of the application to be further converted into an accelerated function.In this case, we implement a DFG to exhibit spatial execution.Another option consists in defining novel primitive operators.As an example, defining a multiplier instead of performing several processors instructions (addition, shifts, etc.) can make sense due to a high reuse rate. In both cases, the RFU extends the instructions set.Additionally, the underlying arithmetic can vary keeping the instructions set stable despite adding new variants for implementing these instructions.This goes through either a library-based design or dedicated synthesizers.Libraries are typically targeted to a reduced set of predefined macroblocks, and they are not easily customizable to new kinds of functions or use-cases. We chose to focus on the second topics as this seems to carry extra added-value compared to classical flows, while reducing the need for a coding extra effort thanks to provided synthesis facility. Figure 7 illustrates the Biniou behavioral application synthesizer.The optimizing context here is made up of typing as Galois Field GF16 values the two parameters.A so-called high-level truth table is computed per graph node for which values are encoded and binarized.The logic minimization [19] produces a context-dependent BLIF file. 
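The GF16 typing step can be made concrete with a few lines of code: enumerating an operator over all field elements yields exactly the kind of high-level truth table that the minimization step consumes. The sketch below (Python, illustrative) uses the reduction polynomial x^4 + x + 1, which is one common choice and is an assumption here, not something fixed by the text.

def gf16_mul(a, b, poly=0b10011):
    # Shift-and-add multiplication in GF(2^4) with reduction by the field polynomial.
    result = 0
    for _ in range(4):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:          # reduce modulo x^4 + x + 1
            a ^= poly
    return result

# High-level truth table: every (a, b) pair together with its 4-bit product.
table = [(a, b, gf16_mul(a, b)) for a in range(16) for b in range(16)]
print(len(table), table[:3])     # 256 rows, ready for encoding and binarization

The resulting table is the node-level description that is encoded, binarized and minimized into the context-dependent BLIF file mentioned above.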
This BLIF file is further processed by the Biniou P&R layer.As application is simple enough to keep the design flatten, no need exists for using a floorplanner.However, for modular designs, a TCG-based floorplanner [20] is integrated within Biniou.Some constraints are considered, such as making some location immutable to conform to the pinout of the adapter (Figure 6) with regards to the ones assigned to the I/O of a placed and routed application (see Figure 8). Once the P&R process ends, a bitstream is generated.Each element of the matrix both knows its state (used, free, which one out of N, etc.) and its layout structure.The full layout is gained by composing recursively (bottom up) these sub-bitstreams.An interesting point is that the bitstream structure can vary independently from the architecture by applying several generation schemes.As a result, in a partial reconfiguration scope, the students benefit from enriched architectural prospection capabilities.In the frame of the project an example of bitstream structure is provided by Figure 9. Reports and Oral Defense.Students had to provide three reports, one per milestone.The reports conformed to a common template and ranged from 10 to 25 pages each.The last report embedded the previous ones so that the final document was made available straight after the project and students were given second opportunity to correct their mistakes. Some recommendations were mandatory such as embedding all images as source format within the package, so that we could reuse some of them.As an illustration, more or less half of the figures in this papers come from students reports.The students had no constraints over the language but some of them chose to give back English-written reports.We selected some reports to be published on line as examples for next year students. The last deliverable was made up of a report, working VHDL code and an oral defense.Students had to expose within 10 minutes, in front of the group, course teachers, and a colleague responsible for the "communication and job market" course.Some students chose to center their defense around the project and the course versus project adequation, some others around the "product", that was their version of the processor. Results Coming out of the Project.The simulation environment is ModelSim [21] as illustrated by Figure 10.The loader module-that loads up the program-was not provided but students could easily get one by simply reusing and adapting the generated test bench.Only one group out of five got it right. This allowed to set a properly initialized state prior to execution's start.Of course, this was a critical issue, and students would have done well to fix it in an early stage as tracing values remained the one validation scheme.This was all the more important as the full simulation took a long time to complete and rerun had a real cost for students. The simulation of the processor itself is time-affordable but the full simulation takes around 4 hours, including bitstream loading, and whole test bench program execution. Optimizations. 
Students came to us with several policies to speed up the simulation.A first proposal is to let simulation happen at several abstraction levels, with a high rate of early error detection.Second, some modules have been substituted by a simpler version.As an example, by providing a RFU that only supports 8 bits ADD/SUB operations, the bitstream size is downscaled to 1 bit with no compromise on the architecture decomposition itself.This approach is very interesting as it confines changes to the inside of the RFU while still preserving the application programming interface.In addition, it joins back the concern of grain increase in a general scope (i.e., balancing the computation/flexibility and reducing the bitstream size).Also this approach must be linked to the notion of "mock object" [22], software engineers are familiar with, when accelerating code testing. Third, as the application is outputed as RTL code, the code can be used as a hard FU instead of using reconfigurable one.In this way, the students validated the GF-based synthesis.Grabbing these last two points, the global design can be validated very fast, being the scalability issue.This issue has been ignored during the project, but is addressed as the global design is given a physical implementation. Analysis. The students sampling cannot be considered representative from a statistical point of view.However, some preliminary remarks seem to make sense. Figure 11 shows that the deliverable 2 is harder to complete than the first one, but that more than half of the students got a success rate between 70% and 90%. We chose to make students pair-achieve the project.In this way, beyond simply averaging the prerequisites matching so that the pairs are equally offered a chance to succeed, we intended to favor incidental learning as pointed out by chanck [23]. The increase of the standard deviation (Figure 12) highlights that one group failed in properly using the toolset (left border, Figure 11); another way to analyze this is that the toolset allowed to overcome the complexity of deliverable 2. Another interesting point is that the global understanding raises up during the full project, being the group who gave up after the first milestone (right border, Figure 11).The difference between regular and restricted lines is that restricted lines ignore this group.Finally, the standard deviation line points out that most homogeneous results came from integration, manual design of the processor, and last using the tool set. Real Case Study 4.1.Experimentation Platform.The physical implementation was out of the scope of this project mainly due to some timetable hard constraints.Not all of the students proceeded in implementing their circuits.But the lessons we have learned are really inlined with the feedback we got from those of our students who applied for an internship in another lab. The development platform we use for this demonstrator is a Virtex-5 FXT ML510 Embedded Development Platform from Xilinx. Processor. A first noticeable difference between their former experience and the real case implementation lies in abandoning their hand written processor.Instead, the students had to instantiate a soft core. Soft-Core. The soft-core processor is a Micro-Blaze and comes along with a full software environment. Programmability. 
Analysis. The student sample cannot be considered representative from a statistical point of view. However, some preliminary remarks seem to make sense. Figure 11 shows that deliverable 2 is harder to complete than the first one, but that more than half of the students got a success rate between 70% and 90%. We chose to have the students carry out the project in pairs. In this way, beyond simply averaging the matching of prerequisites so that the pairs are equally offered a chance to succeed, we intended to favor incidental learning, as pointed out in [23]. The increase of the standard deviation (Figure 12) highlights that one group failed to use the toolset properly (left border, Figure 11); another way to analyze this is that the toolset made it possible to overcome the complexity of deliverable 2. Another interesting point is that the global understanding rises over the full project, except for the group who gave up after the first milestone (right border, Figure 11). The difference between the regular and restricted lines is that the restricted lines ignore this group. Finally, the standard deviation line points out that the most homogeneous results came from integration, then from the manual design of the processor, and last from using the toolset.

Real Case Study. 4.1. Experimentation Platform. The physical implementation was out of the scope of this project, mainly due to hard timetable constraints. Not all of the students proceeded to implement their circuits. But the lessons we have learned are really in line with the feedback we got from those of our students who applied for an internship in another lab. The development platform we use for this demonstrator is a Virtex-5 FXT ML510 Embedded Development Platform from Xilinx.

Processor. A first noticeable difference between their former experience and the real-case implementation lies in abandoning their hand-written processor. Instead, the students had to instantiate a soft core.

Soft-Core. The soft-core processor is a MicroBlaze and comes along with a full software environment.

Programmability. Not only does using this soft core ensure knowledge of state-of-the-art techniques, it also eases porting the application. On the other hand, mixing soft and hard components within a single application is quite clear to students who extended the ISA of the toy processor by hand.

Simulation. Another interesting feature is the observability the simulation environment provides. By contrast, gaining visibility during the ModelSim simulation of the first processor required grouping/coloring/renaming signals. This is also important for performance extraction, as scanning a done signal was used for time measurement.
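The done-signal measurement pattern just mentioned can be sketched with a hypothetical cycle-level model; the module and signal names below are invented for illustration and do not correspond to the students' VHDL.

```python
def measure_cycles(dut, max_cycles: int = 1_000_000) -> int:
    """Clock the design until its done signal rises and return the cycle count."""
    dut.start()
    for cycle in range(1, max_cycles + 1):
        dut.tick()                    # advance the simulated clock by one period
        if dut.done:                  # scan the done signal every cycle
            return cycle
    raise RuntimeError("done signal never asserted")


class DummyDut:
    """Trivial model that finishes after a fixed latency (for this sketch only)."""
    def __init__(self, latency: int):
        self.latency, self.elapsed, self.done = latency, 0, False

    def start(self):
        self.elapsed, self.done = 0, False

    def tick(self):
        self.elapsed += 1
        self.done = self.elapsed >= self.latency


cycles = measure_cycles(DummyDut(latency=137))
print(cycles)                          # 137; multiply by the clock period to get time
```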
Accelerator. The first version of the accelerator was a fine-grained mesh. However, such architectures suffer from a long synthesis process, hence some coarser-grained architectures have been proposed in the literature to overcome this limitation. The second version reflects this architectural shift by exhibiting coarse-grained elements.

Grain Considerations. In addition to these general considerations, students faced a performance issue when implementing a fine-grained mesh over an FPGA. First, the synthesizer reported a very low frequency. Second, the placement efficiency was, unsurprisingly, very poor. At this point, another option emerged. A coarser-grained architecture, inspired by PicoGA [24] but not as complex, was considered. The new architecture is organized as pipelined stripes, and the logic elements are ALUs. Placement relies on the [25] algorithm; the students got wrong configurations until we provided them with a refactored version of the placer that conforms to the stripe-based organization.

4.4. Processor-Accelerator Pairing. The third move between the project and the real case lies in changing the way the processor and the accelerator are connected to each other. The processor must support non-blocking accelerated function calls, which rules out the former coupling scheme.

Coupling. Instead, we asked the students to isolate the accelerator as an autonomous entity (coprocessor). The implementation was realized using FSLs, which is a classical option. Combining network concerns (FIFO, handshake, negotiation, etc.) with the simple adapter (Figure 6) made FSL a very natural concept to computer engineers.

Timing Constraints. Gaining high performance requires forcing constraints when calling the ISE synthesizer.

4.4.3. Layout. Figure 14 illustrates a layout of a coarse-grained reconfigurable architecture (see Figure 13) acting as an accelerator for a MicroBlaze.

Manual Domain Space Exploration. Once a sound knowledge of the domain (architecture, platform, tools) had been acquired, students started to address Domain Space Exploration (DSE). First, this stage was kept manual, still following the precept of "simplicity" and the "just-fit approach".

4.5.1. Considered Cases. The first dimension of variability is the matrix sizing. Several instances have been designed (5 × 2, 5 × 4, 5 × 10, 40 × 40). The second axis is the reconfiguration grain: for a similar matrix, several instances are issued with a different partial reconfiguration page size each. The last measured impact is related to the number of configuration contexts.

Metrics. It is important to measure the quality of solutions, especially the specific amount of a certain resource an architectural solution needs. Examples of such resources would be area, time, or memory storage.

Speed-up Measurements. Computing a speed-up requires two things: first, measuring an execution time, and then comparing it against a reference execution time. A non-obvious point for students is how to make a fair measurement. As an example, the coarse-grained architecture may affect the processor's frequency. Hence, two speed-ups must be analyzed. The first one makes use of a pure software execution time, whereas the second one considers the execution time of a full software variant running on the processor/coprocessor architecture. Of course, this speed-up remains highly application dependent. A FIR execution was considered, as this was enough for teaching purposes; as an example, the speed-up factor for an FIR with 8 coefficients and 6500 data samples reaches 31.6.
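The two speed-ups discussed above differ only in the reference time they use. The helper below makes the distinction explicit; the execution times are placeholders, and only the final FIR figure (31.6 for 8 coefficients and 6500 samples) is taken from the measurements reported above.

```python
def speedup(t_reference_s: float, t_accelerated_s: float) -> float:
    """Speed-up = reference execution time / accelerated execution time."""
    return t_reference_s / t_accelerated_s


# Placeholder timings (seconds); only their ratios matter for this sketch.
t_sw_standalone = 3.16e-3     # pure software on the original processor
t_sw_on_coproc_sys = 2.5e-3   # software variant run on the processor/coprocessor system
t_accelerated = 1.0e-4        # FIR offloaded to the accelerator

print(round(speedup(t_sw_standalone, t_accelerated), 1))     # 31.6 (placeholder times chosen to match the reported FIR figure)
print(round(speedup(t_sw_on_coproc_sys, t_accelerated), 1))  # 25.0 (second definition of the speed-up)
```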
Towards an Automatic DSE. Creating spike solutions helps to figure out answers to tough technical or design problems. A spike solution is a very simple program that explores potential solutions. Students are encouraged to design spike solutions to stress a hypothesis before any announcement. The spike must be built only to address the problem under examination and to ignore all other concerns. The goal is to reduce the risk of a technical problem or to increase the reliability of their intuition and estimates. Spike solutions are applied to grabbing synthesis information and to scripting the design tool suite.

Synthesis Report Analysis. The synthesis reports provide a set of information for quality measurement. The first metric is the amount of used resources. This appears as used LUT/flip-flop pairs, plus internal fragmentation. The students have no control over the algorithms, and some results are difficult to analyze. As an example, in Figure 12, the depopulated center of the coprocessor may reflect the torus nature of the coarse-grained architecture. Nevertheless, stressing the constraints changes the topology at the expense of scaling the frequency down. Frequency is the second metric that the students concentrated on, all the more so as violations can occur which invalidate the full design. The students knew where to find the relevant information. Going further, though, would have required writing a parser and then extracting scores from the generated reports. This would be an interesting step forward towards commanding and scoring the tool suite.

4.6.2. Xilinx SDK Scripting. In order to detect the system files that are involved in a potential scripting, a first design is done through the user interface. Then, all modified files are reported, and a command is issued to let the students precisely locate the internal changes. Then code generation happens, and recompiling the project results in refactoring the design. Tables 2 and 3 summarize, for illustration purposes, some of the DSE results the students collected.

Conclusion. This paper presents an experience report on a course setup for master students discovering configware. This course tries to go beyond the information pick-up limit and offer real knowledge to students. This goes through the manual design of toy examples, which forces students to emphasize simple designs. Once such an insight has been acquired, a commercial design suite is introduced for up-to-date training. Besides, research tools support complex tasks such as reconfigurable platform design and DSE in a general way.

Forces. One interesting point regarding this project lies in the change in the students' feelings. When we first presented the project, they thought they would never complete the goals. After the first milestone, one group gave up to avoid paying the overdue penalty and bounded their work to the first deliverable; they finally obtained 7 points out of 20. The other groups faced the challenge and discovered that the key issue lies in getting proper tools so as to free oneself from manually developing both the architectures and the application mapping. The final results were quite acceptable, and we collected several working packages. With this experience in mind, students are now ready to enter a very competitive job market. They share a deep understanding of hardware design over reconfigurable architectures, microprocessors, reconfigurable cross-integration, and tool and algorithm development. This effect was clearly pointed out when migrating from a toy example to a real design environment. This move offered several dimensions for DSE: reconfigurable unit grain, processor, coupling, and so forth.

A Very Positive Feedback. The actual success of this teaching experience lies in the highly efficient learning curve we noticed when students started to experience the Xilinx design kit. Obviously, neither the test-bench examples we first provided nor the size of the student population is sufficient to practice real metrics-based measurements. Exploring the benefits of this approach (e.g., measuring speed-up) requires an easy path from a structured programming language such as C to the processor execution. In that case, a change of application would carry no need for hand-written adjustments. From our point of view, such an add-on to the project would be a fruitful upgrade to the course and would spawn new opportunities for cross H/S expertise, keeping in mind that the reconfigurable computing course intends to turn out highly trained students sharing skills in both areas. Developing a small compiler was out of the scope of this project due to timing constraints, but it remains one hot spot to be further addressed. This could benefit from some Biniou facilities such as the C-entry synthesizer. An open option is then to benefit from another course and from invited keynote speakers to fulfill the prerequisites, so that adapting or developing a simple C parser becomes feasible in the scope of our project, at the cost of around an extra week.

Going Further. The second very positive piece of feedback we got is that students are ready for new experiences, even with research tools that do not offer the same QoS as commercial design suites. This offered a path to reconfigurable unit design with full high-level synthesis support. An interesting option now is to introduce more efficient RFUs by generating coarse-grained architectures that support virtualization. Applying virtualization techniques helps to overcome some well-known limitations of reconfigurable architectures: the limited amount of resources, the lack of a high-level programming model, and the non-portability of bitstreams. Biniou offers a smart framework for design-space exploration of reconfigurable IPs. Fine-grained architectures offer a nice teaching testbed, but shifting from fine- to coarse-grained architectures rather makes sense for current technologies. This brings no extra cost, as Biniou fully supports this architectural scheme. Instead, it carries extra value as it underlines the resulting shift from "hardware" netlist design to "software" operation-graph editing.
Ensuring that students gain the appropriate strength to self-adapt to such a changing environment remains our educational goal. Once that is done, hard-soft co-design and platform development driven by the adequation to applicative needs are on their way.

Figure 3: On the right, view of the different cell types composing the matrix (border cells, middle cells, IO cells). On the left, configuration domains are defined as a set of rectangular boxes; they can be reconfigured independently from each other.
Figure 4: Structure of a basic cell (middle cell) within the RFU matrix.
Figure 8: An application placed and routed over the RFU.
Figure 13: The view Biniou provides over a coarse-grained reconfigurable architecture under use.
Table 2: Sizing matrix impact over frequency, resources, and synthesis time.
Table 3: Multiple context impact over frequency, resources, and synthesis time.
On regular G-grading Let A be an associative algebra over an algebraically closed field F of characteristic zero and let G be a finite abelian group. Regev and Seeman introduced the notion of a regular G-grading on A, namely a grading A= {\Sigma}_{g in G} A_g that satisfies the following two conditions: (1) for every integer n>=1 and every n-tuple (g_1,g_2,...,g_n) in G^n, there are elements, a_i in A_{g_i}, i=1,...,n, such that a_1*a_2*...*a_n != 0. (2) for every g,h in G and for every a_g in A_g,b_h in A_h, we have a_{g}b_{h}=theta(g,h)b_{h}a_{g}. Then later, Bahturin and Regev conjectured that if the grading on A is regular and minimal, then the order of the group G is an invariant of the algebra. In this article we prove the conjecture by showing that ord(G) coincides with an invariant of A which appears in PI theory, namely exp(A) (the exponent of A). Moreover, we extend the whole theory to (finite) nonabelian groups and show that the above result holds also in that case. Introduction and statement of the main results Group gradings on associative algebras (as well as on Lie and Jordan algebras) have been an active area of research in the last 15 years or so. In this article we will consider group gradings on associative algebras over an algebraically closed field F of characteristic zero. The fact that a given algebra admits additional structures, namely graded by a group G, provides additional information which may be used in the study of the algebra itself , e.g. in the study of group rings, twisted group rings and crossed products algebras in Brauer theory (indeed "gradings" on central simple algebras is an indispensable tool in Brauer theory, as it provides the isomorphism of Br(k) with the second cohomology group H 2 (G k ,k × ). Here k is any field, G k is the absolute Galois group of k andk × denotes the group of units of the separable closure of k). In addition, and more relevant to the purpose of this article, G-gradings play an important role in the theory of polynomial identities. Indeed, if A is a PI-algebra which is G-graded, then one may consider the T -ideal of G-graded identities (see Subsection 2.1), denoted by Id G (A), and it turns out in general, that it is easier to describe G-graded identities than the ordinary ones for the simple reason that the former ones are required to vanish on G-graded evaluations, rather than on arbitrary evaluations. Nevertheless, two algebras A and B which are G-graded PI-equivalent are PI-equivalent as well, that is, Id G (A) = Id G (B) ⇒ Id(A) = Id(B). Recall that a G-grading on an algebra A is a vector space decomposition A particular type of G-gradings which is of interest was introduced by Regev and Seeman in [13], namely regular G-gradings where G is a finite abelian group. Let us recall the definition. Definition 1 (Regular Grading). Let A be an associative algebra over a field F and let G be a finite abelian group. Suppose A is G-graded. We say that the G-grading on A is regular if there is a commutation function θ : G × G → F × such that 1. For every integer n ≥ 1 and every n-tuple (g 1 , g 2 , . . . , g n ) ∈ G n , there are elements a i ∈ A gi , i = 1, . . . , n, such that n 1 a i = 0. 2. For every g, h ∈ G and for every a g ∈ A g , b h ∈ A h , we have a g b h = θ g,h b h a g . Remark 2. One of our main tasks in this article is to extend the definition above to groups which are not necessarily abelian and prove the main results in that general context. 
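A worked instance of Definition 1, using the even/odd decomposition of the Grassmann algebra E that reappears throughout the paper: take G = Z/2Z, written multiplicatively as {1, -1}, and A = E. The grading is regular with commutation function τ, and the associated commutation matrix already illustrates the determinant value appearing in the main theorem (Theorem 7) below:
\[
E=E_{1}\oplus E_{-1},\qquad
\tau(g,h)=\begin{cases}-1,& g=h=-1,\\ \phantom{-}1,&\text{otherwise,}\end{cases}
\qquad ab=\tau(g,h)\,ba\ \ (a\in E_{g},\ b\in E_{h}),
\]
\[
M^{E}=\begin{pmatrix}1&\phantom{-}1\\ 1&-1\end{pmatrix},\qquad
(M^{E})^{2}=2\cdot\mathrm{Id},\qquad \det M^{E}=-2=-|G|^{|G|/2}.
\]
Condition (1) of the definition holds because a product of distinct generators $e_{i_1}\cdots e_{i_n}$ is nonzero, and together with the well-known fact that $\exp(E)=2=|G|$ this example is consistent with the results stated below.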
For clarity we will continue with the exposition of the abelian case and towards the end of the introduction we will discuss extensions to the nonabelian setting. It seems to us that the extension to the nonabelian case is rather natural in view of the abelian case. In those cases below where the statement in the general case is identical to the abelian case, we will make a note indicating it. As for the proofs (sections 2 and 3), in case the result holds for arbitrary groups, we present the general setting only, possibly with some remarks concerning the abelian case. A key property of the "envelope operation" is the following equality of Z /2Z-graded T -ideals of identities (and hence, also of the corresponding ungraded T -ideals of identities). IdZ /2Z (E ⊗(E ⊗B))) = IdZ /2Z (B). (1) We refer to the "envelope operation" as being involutive. It is well known that applying this operation, one can extend the solution of the Specht problem and proof of "representability" from affine to nonaffine PI-algebras (see [10], [3]). Interestingly, the property satisfied by the Grassmann algebra we just mentioned follows from the fact that the Z /2Z-grading on E is regular, and indeed in Theorem 6 we show that a similar property holds for arbitrary regular graded algebras. In order to state the result precisely we introduce the notion of G-envelope of two algebras A and B where G is a finite abelian group. Definition 5 (G-envelope). Let A, B be two G-graded algebras. We denote by A ⊗B the G-graded algebra defined by A ⊗B g = A g ⊗ B g . The following result generalizes equation (1). The proof is presented in section §2. Theorem 6. Let A be a regularly G-graded algebra with commutation function θ, and let B, C be two G-graded algebras. Letà = ⊗ |G|−1 A be the envelope of |G| − 1 copies of A, thenà is regularly G-graded and Our main goal in this article is to investigate the general structure of (minimal) regular gradings on associative algebras over an algebraically closed field of characteristic zero and in particular to give a positive answer to conjecture 2.5 posed by Bahturin and Regev in [5]. It is easy to see that a given algebra A may admit regular gradings with nonisomorphic groups and even with groups of distinct orders. Therefore, in order to put some restrictions on the possible regular gradings on an algebra A, Bahturin and Regev introduced the notion of regular gradings which are minimal. A regular G-grading on an algebra A with commutation function θ is said to be minimal if for any e = g ∈ G there is g ′ ∈ G such that θ(g, g ′ ) = 1. Given a regular G-grading on an algebra A with commutation function θ, one may construct a minimal regular grading with a homomorphic imageḠ of G. To see this, let H = {h ∈ G | θ(h, g) = 1 : for all g ∈ G}. One checks easily that θ is a skew symmetric bicharacter and hence H is a subgroup of G. Consequently, the commutation function θ on G induces a commutation functionθ onḠ = G/H. Moreover, the induced regularḠ-grading on A is minimal. In this article we consider the problem of uniqueness of a minimal regular G-grading on an algebra A (assuming it exists). It is not difficult to show that an algebra A may admit nonisomorphic minimal regular G-gradings. Furthermore, an algebra A may admit minimal regular gradings with nonisomorphic abelian groups. However, it follows from our results (as conjectured by Bahturin and Regev) that the order of the group is uniquely determined. 
In fact, the order of any group which provides a minimal regular grading on an algebra A coincides with a numerical invariant of A which arises in PI-theory, namely the PI-exponent of the algebra A (denoted by exp(A)). In order to state the result precisely we need some terminology which we recall now. Given a regular G-grading on an F-algebra A we consider the corresponding commutation matrix M A defined by M A g,h = θ(g, h), g, h ∈ G (see [5]). The commutation matrix encodes properties of θ. For instance, a regular grading is minimal if and only if there is only one row of ones in M A (resp. with columns). Next we recall the definition of exp(A). For any positive integer n we consider the n!-dimensional F-space P n , spanned by all monomials of degree n on n different variables {x 1 , . . . , x n } and let c n (A) = dim F (P n /(P n ∩ Id(A)). This is the n-th coefficient of the codimension sequence of the algebra A. It was shown by Giambruno and Zaicev (see [6], [7]) that the limit lim n→∞ c n (A) 1/n exists and is a nonnegative integer. The limit is denoted by exp(A). We can now state the main result of the paper in case the gradings on A are given by abelian groups. 3. In fact, det(M A G ) = ± |G| |G|/2 . Remark 8. Some of the results stated in Theorem 7 were conjectured in [5]. Specifically, as mentioned above, Bahturin and Regev conjectured that the order of a group which provides a minimal regular gradings on an algebra A is uniquely determined. Moreover, they conjectured that if M A G and M A H are the commutation matrices of two minimal regular gradings on A with groups G and H respectively, then det(M A G ) = det(M A H ) = 0. Not necessarily abelian groups Suppose now that G is an arbitrary finite group and let A be a G-graded algebra. As above, A is an associative algebra over an algebraically closed field F of characteristic zero. We denote by A g the corresponding g ∈ G-homogeneous component. Definition 9. We say that the G-grading on A is regular if the following two conditions hold. Remark 10. In the special case where the elements g, g ′ ∈ G commute we write θ g,g ′ instead of θ ((g,g ′ ), (12)) . In particular we will often use the notation θ g,g . Note that if G is abelian, then θ ((g1,...,gn),σ) is determined by θ gi,gj , Typical examples of regularly graded algebras (G arbitrary) are the well known group algebras FG, and more generally, any twisted group algebra F α G where α is a 2-cocycle on G with values in F × . Indeed, this follows easily from the fact that each homogeneous component is 1-dimensional and every nonzero homogeneous element is invertible. Additional examples can be obtained as follows. 1. If A is a regularly G-graded algebra then E ⊗ A has a natural regular Z /2Z × G-grading where E is the infinite dimensional Grassmann algebra. Let A be a regularly G-graded algebra and suppose the group G contains a subgroup H of index 2. Then we may view A as a Z /2Z ∼ = G/H-graded algebra and we let be the Grassmann envelope of A and consider the following G-grading on it. For any g ∈ H, we put We claim the grading is regular. Indeed, let (g 1 , . . . , g n ) ∈ G n and let σ ∈ Sym(n) be a permutation such that g 1 · · · g n = g σ(1) · · · g σ(n) . Then for elements z g1 , . . . , z gn where z gi ∈ E(A) gi , we have where τ is the commutation function of the infinite Grassmann algebra. For future reference we denote the commutation function which corresponds to the regular G-grading on E(A) by τ θ. 
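The G-grading on the Grassmann envelope E(A) referred to in the construction above is presumably the following (the same formula reappears for the algebra denoted B with its envelope in Lemma 39); writing $A_{H}=\bigoplus_{h\in H}A_{h}$ and $A_{G\setminus H}=\bigoplus_{g\notin H}A_{g}$:
\[
E(A)=\bigl(E_{1}\otimes A_{H}\bigr)\oplus\bigl(E_{-1}\otimes A_{G\setminus H}\bigr),
\qquad
E(A)_{g}=
\begin{cases}
E_{1}\otimes A_{g}, & g\in H,\\
E_{-1}\otimes A_{g}, & g\notin H,
\end{cases}
\]
with commutation function $\tau\theta$, where $\tau$ is the commutation function of the infinite-dimensional Grassmann algebra, as stated at the end of the paragraph.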
Following the discussion in the abelian case we define now nondegenerate gradings for arbitrary finite groups as the counterpart of minimal gradings. Let A be an associative algebra and suppose it has a regular G-grading with commutation function θ. We say that the grading is nondegenerate if for every g = e in G, there is an element g ′ ∈ C G (g) (the centralizer of g in G) such that θ (g,g ′ ) = 1. Remark 11. It turns out (see Lemma 37) that g → θ g,g is a homomorphism from G to {±1} and therefore its kernel H = {g ∈ G : θ g,g = 1} is a subgroup of G (of index ≤ 2). In case H = G, there is a cohomology class [α] ∈ H 2 (G, F × ) such that F α G has commutation function θ. Then, the nondegeneracy of the G-grading on A corresponds to α being a nondegenerate 2-cocycle. Groups G which admit nondegenerate 2-cocycle are called "central type". It is a rather difficult problem to classify central type groups. It is known, using the classification of finite simple groups(!), that any central type group must be solvable. It seems to be an interesting problem to classify finite groups which admit nondegenerate commutation functions (modulo the classification of central type groups). Our main results in the general case are extensions of the results appearing in Theorem 7. Remark 13. In case G is abelian, the commutation function θ is a skew symmetric bicharacter (see Definition 24). In this case, it is a well known theorem of Scheunert [14] that θ arises from a 2-cocycle (as mentioned in the previous remark). The notion of a bicharacter was considerably generalized to cocommutative Hopf algebras (and hence in particular to group algebras) (see [4]). Furthermore, whenever the bicharacter is skew symmetric, the theorem of Scheunert can be extended to that case. However, it should be noted that already for group algebras FG where G is a nonabelian group, the linear extension of β(g, h) = f (g, h)/f (h, g) for a 2-cocycle f : G × G → F × is not in general a skew symmetric bicharacter on FG. Our results should be viewed or interpreted as to overcome this problem by considering commutation functions which satisfy certain natural necessary conditions in case they arise from 2-cocycles on G, and then Lemma 32 provides a generalization of Scheunert's theorem to that context, namely every such commutation function on G ideed arises from a 2-cocycle on G. Commutation matrix As for the commutation matrix and its characteristic values, we need to fix some notation. Suppose A has a regular G-grading and let θ A be the corresponding commutation function. Suppose first that θ A g,g = 1 for every g ∈ G. In that case we know that the commutation function corresponds to an element [α] ∈ H 2 (G, F × ). With this data we consider the corresponding twisted group algebra B = F α G which is regularly G-graded with commutation function θ A . It is well known that the algebra F α G is spanned over F by a set of invertible homogeneous elements {U g } g∈G that satisfy U g U h = α(g, h)U gh for every g, h ∈ G. Let us construct the corresponding commutation matrix. For every pair (g, h) ∈ G 2 we consider the element h for every g, h ∈ G and we note that this element does not depend on the choice of the basis {U g : g ∈ G}. Next we consider the general case. Let ψ : G → F × be the map determined by ψ(g) = θ g,g . The function ψ will be shown to be a homomorphism with its image contained in {±1} and we set H = ker(ψ) = {g ∈ G : θ g,g = 1}. 
Applying the construction above we may define a G-grading on E(A) where the Z /2Z-grading on A is defined by A = A H ⊕ A G\H . The commutation function τ · θ A of E(A) satisfies (τ θ A ) g,g = 1 for every g ∈ G. As in the previous case the function τ θ A corresponds to a cohomology class [α] ∈ H 2 (G, F × ) were α is a representing 2-cocycle. We let B = F α G be the corresponding twisted group algebra with commutation function θ B = τ θ A and for every g, h ∈ G we consider the element The commutation matrix is defined by h . We will usually write τ g,h instead of τ ψ(g),ψ(h) . Note that if ψ ≡ 1, then H = G and τ | G ≡ 1, so we have that θ B = θ A as in the first case. Theorem 14. Let A be an associative algebra over an algebraically closed field of characteristic zero. Suppose A admits a nondegenerate regular G-grading and let θ be the corresponding commutation function. Let M G be the commutation matrix constructed above. Then M 2 G = |G| · Id. As a consequence we extend Theorem 7(2,3) for arbitrary nondegenerate regular gradings (see Subsection 3.1, Corollary 47). In case θ g,g = 1 for every g ∈ G, we have that the elements and so the commutation matrix may be viewed as a matrix in M r 3 (F ). In that case we obtain the following corollary. Preliminaries, examples and some basic results In the first part of this section we recall some general facts and terminology on G-graded PI-theory which will be used in the proofs of the main results (we refer the reader [3] for a detailed account on this topic). In the second part of this section we present some additional examples of regular gradings on finite and infinite dimensional algebras. Finally, we present properties of regular gradings and prove Theorem 6. Graded polynomial identities Let W be a G-graded PI-algebra over F and I = Id G (W ) be the ideal of G-graded identities of W . These are polynomials in F X G , the free G-graded algebra over F generated by X G , that vanish upon any admissible evaluation on W . Here X G = g∈G X g and X g is a set of countably many variables of degree g. An evaluation is admissible if the variables from X g are replaced only by elements of W g . It is known that I is a G-graded T -ideal, i.e. closed under G-graded endomorphisms of F X G . We recall from [3] that the T -ideal I = Id G (W ) is generated by multilinear polynomials and so it does not change when passing toF, the algebraic closure of F, in the sense that the ideal of identities of WF overF is the span (overF) of the T -ideal of identities of W over F. Additional examples of regular gradings We present here some more examples (in addition to the ones presented in the introduction). The following example corresponds to the grading determined by the symbol algebra (1, 1) n . Example 16. Let M n (F) be the matrix algebra over the field F, and let G = Z /nZ × Z /nZ. For ζ a primitive n-th root of 1 we define Let us check the G-grading is regular. For any two basis elements we have that and hence the second condition in the definition of a regular grading is satisfied. The first condition in the definition follows at once from the fact that the elements X and Y are invertible. Finally we note that since ζ is a primitive n-th root of unity, the regular grading is in fact minimal. Example 17. For any n ∈ N and c ∈ F × , we can define a regular Z /nZ-grading on A = F[x] / x n −c by setting A k = F · x k . Clearly, the commutation function here is given by θ h,g = 1 for all g, h ∈ Z /nZ. Example 18. 
For any algebra A we have the trivial G = {e}-grading by setting A e = A. In this case the grading is regular if and only if A is abelian and nonnilpotent. Example 19. We present an algebra with a nondegenerate regular G-grading where G is isomorphic to the dihedral group of order 8. Consider the presentation x, y : x 4 = y 2 = e, yxy −1 = x 3 of the group G. It is well known that there is a (unique) nonsplit extension The map π is determined by π(u) = x and π(v) = y. Note that the extension is nonsplit on any nontrivial subgroup of G, that is, if {e} = H ≤ G, then the restricted extension y, x 2 y} and so we can consider the corresponding Grassmann envelope E(F αG G). We show that E(F αG G) is regularly G-graded and moreover the grading is nondegenerate. Clearly the natural G-grading on the twisted group algebra F αG G is regular and hence the corresponding G-grading on E(F αG G) is also regular. Let θ be the corresponding commutation function. To see that the G-grading on E(F αG G) is nondegenerate, note that since the cocycle α H is nontrivial on every subgroup H = {e} of G, the group π −1 (K) is isomorphic to the quaternion group of order 8 and hence the twisted group subalgebra F αK K of F αG G is isomorphic to M 2 (F). This shows that the nondegeneracy condition (see Definition 21) is satisfied by any nontrivial element of K. For elements g in G \ K we have that θ g,g = −1 and we are done. We will return to this example at the end of the paper. The commutation function θ and the commutation matrix We now turn to study some properties of the commutation function θ. We start with some notation. Let G be a group andḡ = (g 1 , ..., g n ) ∈ G n . The conditions in the following lemma correspond to the properties of T-ideal, namely (1)-closed to multiplication (2)-closed to substitution and (3)-closed to addition. Lemma 20. Let G be a group and A a regularly G-graded algebra with commutation function θ. Then θ satisfies the following conditions. Proof. 1. This is an immediate consequence of the associativity of the product in A. Next, we define G-commutation functions. We remind the reader that if g, h ∈ G commute we may denote by θ g,h the scalar θ ((g,h),(1,2)) . Definition 21. Let θ be a function from the pairsḡ = (g 1 , ..., g n ) ∈ G n , σ ∈ Sym(ḡ) with values in F × . We say that θ is a G-commutation function if it satisfies conditions (1, 2, 3) from the last lemma. The function θ is said to be nondegenerate if for any e = g ∈ G there is some h ∈ C G (g) such that θ g,h = 1. In Lemma 37 below we show that each G-commutation function is in fact the commutation function of some regularly G-graded algebra. By the definition, we get that a regular grading is nondegenerate if and only if the commutation function is nondegenerate. Lemma 22. Let θ be a G-commutation function. Then the following hold. For every commuting pair For any fixed g ∈ G, the functions h → θ g,h and h → θ h,g are characters on C G (g). If 3. By the conditions in Lemma 20 we get that if g ∈ G and h 1 , h 2 ∈ C G (g), then Similarly we have that θ h1h2,g = θ h1,g θ h2,g . If G is abelian, then the commutation function θ (ḡ,σ) is defined by its values on pairs θ g,h . In that case we get that C G (g) = G for all g ∈ G and the conditions in Lemma 20 follow from those in the last lemma. We recall the definition of such functions. Definition 24 (Bicharacter). Let η : G × G → F × be a map where G is a group and F × is the group of units of the field F. 
We say that the map η is a bicharacter of G if for any A bicharacter is said to be nondegenerate if for any e = g ∈ G there is an element h ∈ G such that θ(g, h) = 1. Remark 25. In general, if θ is a commutation function on a finite group G, then for any commuting elements g, h ∈ G we have ord( θ(g, h) ) | gcd( ord(g), ord(h) ), so θ(g, h) is contained in the group of roots of unity of order |G| in F × . In fact, as it will be shown below, this holds for any θ (ḡ,σ) . Also, we have that θ(g, g) = θ(g, g) −1 so θ(g, g) ∈ {±1} for every g ∈ G. We present now two lemmas which summarize properties of the commutation function and the "G-envelope operation". The proof of the first lemma follows directly from the definitions and is left to the reader. Lemma 26. Suppose that A, B are G, H-regulary graded algebra with commutation functions θ and η respectively. Then the following hold. Furthermore, the corresponding commutation function is θ. In particular n 1 A is regularly G-graded for any n ∈ N. 3. If N ≤ G is a subgroup, then A N = g∈N A g is a regularly N -graded algebra with commutation function θ | N -the restriction to tuples in N . If the groups G, H are abelian, then the commutation matrix which corresponds to the cases considered in the lemma are calculated as follows: (1) In the nonabelian case we have a similar connection between the commutation matrices, though the ring over which the matrices are defined may differ. More details are presented in the end of Subsection 3.1. Suppose A is regularly G-graded and let θ be the corresponding commutation function. Given a multilinear polynomial f (x g1,1 , ..., Lemma 27. Let A be a regularly G-graded algebra with commutation function θ and let B be any Proof. By multilinearity of f we only check that f vanishes on a spanning set. For any a i ∈ A gi and b i ∈ B gi we get that then the first term is always zero. Since the grading on A is regular, we can find a i such that a i = 0, Before we proceed with the proof of Theorem 6 we recall that for any G-graded algebra over a field of characteristic zero F, the T -ideal of G-graded identities is generated by multilinear polynomials which are strongly homogeneous, namely polynomials of the form Proof. of Theorem 6 1. This is immediate since f θ = f when θ ≡ 1. 2.à ⊗A is the product of |G| copies of A. This is a regularly G-graded algebra with commutation function θ |G| ≡ 1. We now use the associativity of the envelope operation and part (1) to conclude that Id G (à ⊗(A ⊗B)) = Id G (B). 3. This follows immediately from the previous lemma. Main Theorem Our main objective in this section is to prove Theorem 7. The first step is to translate the definition of "regular grading" into the language of graded polynomial identities. Lemma 28. Let A be an algebra over F, G a finite group and A = g∈G A g a G-grading on A. Then the grading is regular if and only if the following conditions hold. There is a function Proof. The proof is clear. Indeed, condition (1) (resp. (2)) of the lemma is equivalent to the first (resp. second) condition in the definition of a regular grading. As mentioned above, the conditions in Lemma 20 correspond to the properties of the T-ideal Id G (A), where (1), (2), (3) correspond to closure under multiplication, closure under endomorphisms and closure under addition respectively. Here is the precise statement. Then the following hold. 1. The vector space I is a T -ideal. 2. The G-grading on F XG /I is regular with commutation function θ. 
In particular, any G-commutation function is a commutation function of some regular algebra. Proof. The proof is based on translating the conditions of 20 into the language of T -ideals. We give here only an outline of the proof and leave the details to the reader. 1. By definition I is closed under addition. To see I is closed under the multiplication of arbitrary polynomials, it is sufficient to show it is closed under multiplication by x g,j for any g ∈ G and j ∈ N which is exactly condition (1) in Lemma 20. Next we show the ideal I is closed under endomorphisms. Notice that if s ∈ S is multilinear and ϕ ∈ End(F X G ), one can decompose ϕ = ϕ 1 • ϕ 2 such that ϕ 2 sends each x gj ,ij to a sum of multilinear monomials, all disjoint from each other, and ϕ 1 sends each x g,j to some x g ′ ,j ′ . It now follows from condition (2) in Lemma 20 that ϕ 2 (s) = s l for some s l ∈ S multilinear, and that ϕ 1 (S) ⊆ S. This completes the proof. Definition 30. Let θ be a G-commutation function. The algebra F XG /I defined in the previous proposition is called the θ-relatively free algebra. Let A be a G-graded algebra. Let π : G →Ḡ be a surjective homomorphism and let A = ḡ∈Ḡ Aḡ be the induced grading on A byḠ (that is Aḡ = π(g)=ḡ A g ). Clearly, for multilinear polynomial f we have f (xḡ 1 ,1 , ..., xḡ n ,n ) ∈ IdḠ(A) if and only if f (x g1,1 , ..., x gn,n ) ∈ Id G (A) for every g i ∈ G with π(g i ) =ḡ i and so, in the particular case where π : G → {e}, we obtain the aforementioned fact that algebras which are G-graded PI-equivalent, are also PI-equivalent. This simple but important fact will enable us to replace the algebra A by a more tractable G-graded algebra B (satisfying the same G-graded identities as A) from which it will be easier to deduce the invariance of the order of the group which provides a nondegenerate regular grading on A. For the rest of this section, unless stated otherwise, we assume that F is algebraically closed and char(F) = 0. Proof. Clearly, it is sufficient to consider multilinear polynomials. Letḡ = (g 1 , ..., g n ) ∈ G n . Applying binomial G-graded identities of A (see Lemma 28), a poly- Thus, the statement f (x g1,1 , ..., x gn,n ) ∈ Id G (A) is equivalent to a condition on the commutation function θ A and the result follows. As mentioned above, we wish to replace any regularly G-graded algebra A with commutation function θ A by a better understood regularly G-graded algebra B with commutation function θ B = θ A . We first deal with the case where θ g,g = 1 for all g ∈ G (we remind the reader that in general θ g,g = ±1 for all g ∈ G). Here the algebra B will be isomorphic to a suitable twisted group algebra B = F α G, where α is a 2-cocycle on G with values in F × . Recall that B = F α G is isomorphic to the group algebra FG as an F-vector space and if {U g : g ∈ G} is an F-basis of F α G, then the multiplication is defined by the rule U g U h = α(g, h)U gh for every g, h ∈ G. It is well known that up to a G-graded isomorphism, the twisted group algebra F α G depends only on the cohomology class of α ∈ H 2 (G, F × ) and not on the representative α. In order to construct the 2-cocycle α = α θ , we show that the commutation function θ = θ A (with θ g,g = 1, for all g ∈ G) determines uniquely an element in Hom(M (G), F × ), where M (G) denotes the Schur multiplier of the group G. Then applying the Universal Coefficient Theorem, we obtain an element in H 2 (G, F × ) which by abuse of notation we denote again by θ. Here is the precise statement and its proof. Lemma 32. 
Let θ be a G-commutation function such that θ g,g = 1 for all g ∈ G. Then there is a 2-cocycle α ∈ Z 2 (G, F × ) such that the commutation function of B = F α G is θ. Recall that from the Universal Coefficient Theorem we get that for any group G we have an exact sequence where M (G) is the Schur multiplier of G. Note that since F is assumed to be algebraically closed, we have that Ext 1 (G ab , F × ) = 0 and hence in that case, the map π is an isomorphism. Thus, our task is to find a suitable element in Hom(M (G), F × ) and then show that its inverse image in H 2 (G, F × ) satisfies the required property. To start with, we fix a presentation of M (G) via the Hopf formula: Let F be the free group F = y g | g ∈ G and define ϕ : F → G by ϕ(y g ) = g. Setting R = ker(ϕ) we have the exact sequence Let A = F XG /I be the θ-relatively free algebra defined in Proposition 29. Then A is regularly G-graded with commutation function θ. If we can find elements a g ∈ A g , g ∈ G, which are invertible, then we can define a group homomorphismψ : F → A × induced by the map y g → a g . Notice that the image of any commutator in [R, F ] is mapped to 1 (because R is mapped to A e which is in the center) while y g1 · · · y gn y −1 g σ(n) · · · y −1 g σ(1) is mapped to θ(ḡ, σ)1. The induced map ϕ : M (G) → F × is the required map. In general A might not have such invertible elements, so we need to construct new elements. Let S = x g,1 x g −1 ,2 g∈G ⊆ F X G . Note that it is sufficient to show that the elements in S represent nonzero divisors in A since in that case, the localized algebra A ′ = AS −1 will still be regularly G-graded with commutation function θ and in addition each x g,1 will be invertible (notice that x g −1 ,2 x g,1 = θ(g −1 , g)x g,1 x g −1 ,2 and θ(g −1 , g) = θ(g, g) −1 = 1, so x g,1 is right and left invertible). Suppose that there is some 0 = f ∈ A such that x g,1 x g −1 2 · f ≡ 0. We can assume that f is homogeneous (i.e. its monomials have the same G-homogeneous degree), and by standard methods (since the field is infinite) we can assume that every variable x h,i appears with the same total degree in each monomial of x g,1 x g −1 2 · f and therefore this is true also in f . Finally, using the binomial identities we can assume that f is a monomial. Now, by assumption x g,1 x g −1 2 · f cannot be a monomial with different variables, so we need to show there is no general monomial identities (i.e. with possibly repeated variables). Let a 1 x g,i a 2 x g,i a 3 · · · a n x g,i a n+1 ∈ Id G (A) be a monomial identity where x g,i does not appear in the monomials a i . Then using linearization we get that the polynomial f = σ∈Sn a 1 z g,σ(1) a 2 z g,σ(2) a 3 · · · a n z g,σ(n) a n+1 . is an identity as well. We claim that the monomials in f are equal modulo identities of A. In order to see this suppose that a, b, c are monomials and denote by h the degree of a. Let g be some element in G. Then where the middle equation is true since θ g,g = 1 and the first and third equalities are true because monomials of degree e are in the center. We therefore have ax g,1 bx g,2 c ≡ ax g,2 bx g,1 c. Applying this equivalence we have that σ∈Sn a 1 y g,σ(1) a 2 y g,σ(2) a 3 · · · a n y g,σ(n) a n+1 ≡ n!a 1 y g,1 a 2 y g,2 a 3 · · · a n y g,n a n+1 . Finally, we see that if we repeat this process for every pair g ∈ G, i ∈ N such that x g,i has total degree greater than 1 in our monomial identity, we obtain a monomial identity with distinct variables -contradiction. 
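For reference, the exact sequence from the Universal Coefficient Theorem invoked at the beginning of the proof of Lemma 32 presumably reads
\[
0\longrightarrow \operatorname{Ext}^{1}\!\bigl(G_{\mathrm{ab}},F^{\times}\bigr)
\longrightarrow H^{2}\bigl(G,F^{\times}\bigr)
\xrightarrow{\ \pi\ } \operatorname{Hom}\bigl(M(G),F^{\times}\bigr)
\longrightarrow 0,
\]
and, since F is algebraically closed, $\operatorname{Ext}^{1}(G_{\mathrm{ab}},F^{\times})=0$, so that $\pi$ is an isomorphism, as used in the proof.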
Suppose that A has a nondegenerate regular grading with commutation function θ such that θ(g, g) = 1 for all g ∈ G. Let B = F α G as constructed in the last lemma. Clearly, the twisted group algebra B is regularly G-graded and the commutation function is θ. Invoking Lemma 31 we have the following Corollary. Our goal is to extract the cardinality of G from Id(B). By Maschke's theorem, we know that any twisted group algebra B = F α G is a direct sum of matrix algebras. We wish to show that the commutation function θ is nondegenerate if and only if B is simple, or equivalently dim(Z(B)) = 1. It is easily seen that the center Z(B) is spanned by elements of the form σ∈G λ σ U σgσ −1 where g ∈ G and λ σ ∈ F. We call a conjugacy class that contribute a nonzero central element a ray class. The determination of the ray classes and their corresponding central elements is well known (for example see [11], section 2). The next lemma gives the condition for a conjugacy classes to be a ray class and Lemma 38 will generalize this idea to the Z /2Z-simple case. Lemma 34. Let g ∈ G and choose some set of left coset representatives {t i } k 1 of C G (g) in G. For any 2-cocycle α ∈ Z 2 (G, F × ) the following conditions are equivalent: In addition, if there are λ i ∈ F, i = 1, ..., k, not all zero, In particular we get that a = 1 λ1 b is central in F α G. Proof. Suppose first that (1) holds. Let w ∈ G. Then for every i ∈ {1, ..., k}, there are τ (i) ∈ {1, ..., k}, h i ∈ C G (g) and c i ∈ F × such that U w U ti = c i U t τ (i) U hi . Note that τ = τ w is a permutation of {1, ..., k}. Then we have that and so U w a = aU w . Since the set {U w : w ∈ G}, spans F α G, we get that a is central. On the other hand if a is central and h ∈ C G (g), then there is some so we must have that c = 1 and we get that (2) ⇒ (1). By the last lemma, each ray class contributes only one central element element up to a scalar multiplication, which we call a ray element. In addition, ray elements from different ray classes are linearly independent. Thus we get that dim(Z(B)) is the number of ray classes. We can complete now the proof of Theorem 7 in case the commutation function satisfies θ g,g = 1 for every g ∈ G. Indeed, in [12] Regev showed that exp(M k (F)) = dim (M k (F)) = k 2 and since the exponent of an algebra depends only on its ideal of identities, we have from Lemma 31 that if A has a regular G-grading such that θ g,g = 1 for all g ∈ G, then |G| is an invariant of A (as an algebra and independent of the grading). Corollary 36. Let A be an algebra over an algebraically closed field F of characteristic zero. If G is a finite group such that A has a nondegenerate regular G-grading with θ g,g = 1 for all g ∈ G then |G| = exp(A). We move on to the general case where θ g,g can be −1. Let H = {g ∈ G | θ g,g = 1}. We are to show that H is a subgroup of G of index 1 or 2. Then, if the index is 1, we are in the previous case where θ g,g = 1 for all g ∈ G whereas in the second case, we will find a twisted group algebra for the group G such that its Grassmann envelope will be PI-equivalent to A. Let E = E 1 ⊕ E −1 be the infinite dimensional Grassmann algebra over the field F, where E 1 and E −1 are the even and odd components of E. As noted above, this grading on E is a regular C 2 -grading with commutation function τ : Lemma 37. Let θ be a G-commutation function. Then there is a 2-cocycle α ∈ Z 2 (G, F × ) and a subgroup H ≤ G such that for is a regularly G-graded algebra with commutation function θ. Proof. 
Let ψ : G → {±1} be the map ψ(g) = θ g,g . We claim that ψ is a homomorphism. To see this, let A be the θ-relatively free algebra (see Proposition 29). For h, g ∈ G, let θ gh,gh be the (unique) scalar such that x g,1 y h,1 x g,2 y h,2 = θ gh,gh x g,2 y h,2 x g,1 y h,1 in A (we use y h, * instead of x h, * for clarity). From Remark 23 we see that monomials of total degree e are central and hence we have It follows that θ gh,gh = θ g,g θ h,h , and hence letting H = ker(ψ) we have either H = G or [G : H] = 2. The case where H = G is the case considered above so we can assume that H = G. In this case, roughly speaking, we apply first the Grassmann envelope operation to "turn the −1's (in the image of θ g,g ) into +1's", then use the previous case to find some B = F α G, and finally apply the Grassmann envelope operation once again in order to return to the original identities. Let us pause for a moment and summarize what we have so far. By the previous lemma we have constructed an algebraB which has a regular G-grading whose commutation function coincides with a given commutation function θ and hence, if θ is the commutation function of a regularly G-graded algebra A, we have in fact constructed a regularly G-graded algebraB with the same commutation function. It follows from Lemma 31 that Id G (A) = Id G (B), Id(A) = Id(B) and hence exp(A) = exp(B). The main point for constructing the algebraB is that in case the grading is nondegenerate, it enables us to show that ord(G) = exp(B). For this we need to further analyze the algebraB (constructed in Lemma 37). Note that the algebras B andB in the lemma above satisfy θ g1,g2 = τ ψ(g1),ψ(g2)θg1,g2 . In particular if h ∈ H and g ∈ C G (h) then θ g,h =θ g,h . Since the grading onB is nondegenerate then for every h ∈ H there is some g ∈ C G (h) with θ h,g = 1, and from what we just said, this is also true for B. Lemma 38. Let G be a finite group and H a subgroup of index 2. Let B = F α G be a twisted group algebra such that for every e = h ∈ H there is some Proof. Suppose first that the twisted group algebra F β H, where β = α | H , is simple. Let 0 = I be a Z 2 -graded ideal of B and denote I 0 = I ∩ B H and I 1 = I ∩ B G\H , so I = I 0 ⊕ I 1 . Observe that since I 0 is an ideal in F α H, it is either 0 or F α H. On the other hand, taking any U g where g / ∈ H, we have U g · I 0 ⊆ I 1 and since U g is invertible in F α G we have equality. It follows that I 0 = F α H for otherwise I = 0. We now have so we see that I = B. This proves that B is Z 2 -simple in that case. If B is simple, then it must also be Z 2 -simple, so assume that neither B nor F β H are simple, or equivalently both α and β are degenerate 2-cocycles. This means that there is h 0 ∈ H such that U h0 U h = U h U h0 for all h ∈ C H (h 0 ), and similarly there is g 0 ∈ G such that U g U g0 = U g0 U g for all g ∈ C G (g 0 ). Note that by the assumption on B we must have g 0 / ∈ H. Let {t i } , {s i } be left coset representatives of C H (h 0 ) and C G (g 0 ) respectively. By Lemma 34 we have a = U ti U h0 U −1 ti ∈ Z(F β H) and b = U si U g0 U −1 si ∈ Z(F α G). If s i / ∈ H then s i g 0 ∈ H is a representative of the same left coset of C G (g 0 ) as s i , so we may assume that s i ∈ H for all i. By the assumption on B, there is some g 1 ∈ C G (h 0 ) such that U g1 U h0 U −1 g1 = cU h0 with c = 1, and in particular g 1 / ∈ H by the choice of h 0 . It is easily seen that is again a set of left coset representatives of C H (h 0 ) in H (H is normal in G and g 1 ∈ C G (h 0 )). 
We now have that Let h ∈ H be such that hg 1 = g 0 . Then and we get a contradiction. Thus, we must have that either F α G or F β H are simple. In both cases the algebra F α G is Z 2 ∼ = G /H-simple and the lemma is proved. Lemma 39. Let G be a finite group and H a subgroup of index 2. Let B = F α G and letB = (E 1 ⊗ B H )⊕ E −1 ⊗ B G\H be the Grassmann envelope of B. We denote by θ andθ the commutation functions of B andB respectively. If the regular G-grading onB is nondegenerate, then B is a Z 2simple algebra. Proof. By nondegeneracy of the grading, we have for any e = h ∈ H an element g ∈ C G (h) such that θ g,h = 1 inB. But the Grassmann envelope operation does not change this property so it holds for the G-graded algebra B. Now use the previous lemma. The fact that the algebra B = F α G is finite dimensional over F (F is algebraically closed of characteristic zero) and Z 2 -simple almost determines the structure of B. Proof. This is well known. See for instance Lemma 6 in [9]. In our case, the algebra B satisfies an additional condition, namely dim(B 1 ) = dim(B 2 ) = |H| so if B is of the second type above we must have n = 2m. We can now complete the proof of part 1 of Theorem 7. Corollary 41. Let A be an algebra over an algebraically closed field F of characteristic 0. For every finite group G, if A has a nondegenerate regularly G-graded structure then |G| = exp(A). Proof. We know that there is a simple Z 2 -graded algebra B = F α G (where B is one of the three types mentioned in the corollary above) such that the algebraB = E(B) satisfies Id G (B) = Id G (A). In We close this section with some additional corollaries of Lemma 37 and Lemma 39. Let us denote the Grassmann envelope of the algebra B in Corollary 40 (types (2) and (3) respectively) as follows: Corollary 42. Suppose that A has a nondegenerate regular G-grading for some finite group G. Then one of the following holds. It is well known that the families considered in the corollary above are mutually exclusive. Furthermore, different integers n or m yield algebras which are PI-nonequivalent. Indeed, algebras within the same type are PI-nonequivalent as their exponent is different. Next, any algebra of type one satisfies a Capelli polynomial whereas any algebra of type 2 or 3 does not. Finally, the exponent of any algebra of type 2 is an exact square whereas this is not the case for any algebra of type 3. Corollary 43. Suppose that A has a nondegenerate regular G-grading for some finite group G. Then there is a unique algebra C ∈ U such that A and C are PI-equivalent. From the results above we can now derive easily a consequence on the commutation matrix M A for a regularly G-graded algebra A with commutation function θ. The complete proof of Theorem 7 (parts 2 and 3) is presented in the next section. Recall that for nondegenerate regularly G-graded algebras of type 1 we have θ g,g = 1 for all g ∈ G, whereas for type 2 and 3 half of the entries on the diagonal of M A are 1's and half are −1's. From the definition of the commutation matrix (see Subsection 1.1.1) we see that M A g,g = θ g,g U e . This clearly implies the following corollary. The commutation matrix It is easy to exhibit algebras with nonisomorphic nondegenerate regular G-gradings for some group G as well as examples of algebras with minimal regular gradings with nonisomorphic groups. For instance the algebra M 4 (F) admits (precisely) two nonisomorphic minimal gradings with the group Z /4Z× Z /4Z = g, h . 
These gradings are determined by bicharacters θ 1 and θ 2 , where θ 1 (g, h) = ζ 4 and θ 2 (g, h) = ζ 3 4 . On the other hand the algebra M 2 (F) admits a (unique) nondegenerate grading with the Klein 4-group and hence the algebra M 4 (F) ∼ = M 2 (F) ⊗ M 2 (F) admits a nondegenerate regular grading with the group ( Z /2Z) 4 . We therefore see that in general the entries of commutation matrices which correspond to different nondegenerate regular gradings on an algebra A may be distinct. However, the last corollary shows that the trace of the commutation matrices remains invariant. Our goal in this section is to extend Corollary 44 and show that any two such matrices corresponding to nondegenerate gradings are conjugate (Theorem 7). We will follow the notation from Subsection 1.1.1. In particular we have B = F α G, H = ker (g → θ g,g = ψ(g)) (a subgroup of G of index ≤ 2) and A is PI-equivalent to the Grassmann envelope of B with respect to the Z /2Z-grading B = B H ⊕ B G\H . Before we consider nondegenerate gradings, let us analyze briefly the degenerate case. If G is abelian, the commutation matrix is given by M A g,h = θ(g, h)U e . Hence, since the grading is not minimal, there exists g = e such that θ(g, h) = θ(h, g) = 1 for all h ∈ G and so M A is not invertible. The next proposition shows that this is true in the nonabelian case as well. Proposition 45. Let A be a regularly G-graded algebra with a degenerate grading. Then M A is not invertible. Proof. Let B = F α G be the twisted group algebra which corresponds to the G-graded algebra A and denote by θ B and θ A the corresponding commutation functions. We note that for commuting elements g 1 , g 2 ∈ G we have θ B g1,g2 = −θ A g1,g2 if θ A g1,g1 = θ A g2,g2 = −1 and θ B g1,g2 = θ A g1,g2 otherwise. Since the field F is algebraically closed of characteristic zero, B is a direct sum of matrix algebras over F. Fix a representation ρ : B → M n (F). The grading on A is degenerate so there is some e = h ∈ G such that θ A h,g = 1 for all g ∈ C G (h) and in particular θ A h,h = 1. We thus have θ B h,g = 1 for all g ∈ C G (h). As a consequence, applying Lemma 34, the element Let v ∈ g∈G B (a vector of size G with entries in B) where v g = λ g U g for some λ g ∈ F, and consider Clearly, we may choose the λ h 's such that h∈G τ g,h λ h U h = λ 1 U e + λ 2 z for all g ∈ G. This element is central so we have M A v g = λ 1 U e + λ 2 z. But the center of M n (F) is F · I, so there is some c ∈ F such that ρ(z) = c · I and hence ρ( M A v g ) = (λ 1 + cλ 2 ) · I. We see that if we choose λ 1 , λ 2 not both zero such that λ 1 + cλ 2 = 0 we have that ρ(M A v) = 0 for some v = 0. Moreover, we note that the nonzero entries of v are invertible in B. Let ρ i be the distinct representations of B and let e i ∈ B be such that ρ i (e j ) = δ i,j · I. For each i let v i be a vector corresponding to ρ i as constructed above and let v = v i e i ∈ g∈G B. Then . Furthermore, taking g ∈ G such that v i g = 0, we know that v i g is invertible and so ρ i (v i g ) = 0. This implies that v = 0. On the other hand we get for each i and so M A v = 0. We conclude that M A is not invertible from the left, and similar computations show that it is not invertible from the right. Now we consider the case where the grading is nondegenerate. Proposition 46. Let A be a nondegenerate regularly G-graded algebra, then Proof. 
Recall that for any fixed g ∈ G, the function θ A (·,g) : C G (g) → F × is a character, and since the grading is nondegenerate, this character is nontrivial for g = e. For fixed a, c ∈ G, set N = C G (a −1 c) and choose a set of left coset representatives Notice that since ψ : is the commutation function of the Grassmann algebra with the Z 2 -grading. In addition, the character θ A (·, a −1 c) : N → F × is nontrivial if and only if a = c and so we get that This completes the proof of the proposition. In the next discussion we use the notation of abelian groups, namely M A ∈ M |G| (F) with M A g,h = θ(g, h). This can be generalized to the nonabelian (i.e. not necessarily abelian) in the following way. We may view B = F α G as a direct sum of matrices, and then also M |G| (F α G) is isomorphic to a direct sum of matrices. Alternatively, we may factor through a representation ρ : B → M t (F) of B and then extend it to ρ : M |G| (F α G) → M |G|t (F). In any case, the matrix M A can be viewed as a matrix in M k (F) for some k large enough. It follows from the last proposition that the commutation matrix M A satisfies the polynomial p(x) = x 2 − n = (x − √ n)(x + √ n) where n = |G| = 0. Hence, the corresponding minimal polynomial is either (x − √ n),(x + √ n) or p(x). In each case the minimal polynomial has only simple roots and hence the matrix M A is diagonalizable. Let α + and α − denote the multiplicities of the eigenvalues √ n and − √ n respectively. Then we have α + + α − = n and α + − α − = tr(M A ) √ n . In our case, M A has only 1's on the diagonal (the first type of regular algebras), or half 1's and half −1's (the second and third type of regular algebras). Moreover, by Corollary 44 we know that this depends only on the algebra A and not on the grading. Thus, for algebras of the first type (in Corollary 42) we have that n = exp(A) = |G| is a square and tr(M A ) = n. In that case, the equalities above take the form and hence For algebras of the second or third type (in Corollary 42) we have n = exp(A), which is either 2m 2 or (2m) 2 for some m, and tr(M A ) = 0. Then, here, the corresponding equalities are In case n = 2m 2 we have α + = α − = n 2 = m 2 and so the characteristic polynomial is and the minimal polynomial is (x − √ n)(x + √ n) = x 2 − n. Finally, an easy computation of the free coefficient of the characteristic polynomial in each one of the cases considered above yields that det(M A G ) = ± |G| |G|/2 . This proves part 3 of Theorem 7 and hence the entire theorem is now proved. As promised, we now compute the commutation matrix for the G-regular algebras constructed in Lemma 26, where G is an arbitrary finite group. In case (2), the algebras A, B and A⊕B have the same commutation function θ. Thus, the cocycles corresponding to A, B, A ⊕ B are isomorphic (up to a coboundary) and therefore the corresponding twisted group algebras of A, B and A ⊕ B are isomorphic. With this identification of twisted group algebras we get that the commutation matrices of A, B and A ⊕ B are the same. In case (3) we consider A N = g∈N A g for some subgroup N of G. Let α N be the restriction of α to N × N . Then α N is the cocycle corresponding to the algebra A N and there is a natural graded embedding F αN N ֒→ F α G. Let M ′ be the restriction of M A to the coordinates in N × N , then the entries of M ′ are in F αN N and this submatrix is actually M AN . 
In cases (1) and (4) we have algebras A, B with commutation functions θ A , θ B and cocycles α, β which are defined on groups G and H respectively. In case the groups G and H are abelian, the matrix M A⊗B is just M A ⊗ M B . For the general case, let α ⊗ β ∈ Z 2 (G × H, F × ) be the cocycle defined by (α⊗β)((g 1 , h 1 ), (g 2 , h 2 )) = α(g 1 , g 2 )β(h 1 , h 2 ). Clearly, α⊗β represents the regular algebra A⊗ B (with commutation function θ A ⊗ θ B ). Furthermore, since F α G⊗ F β H ∼ = F α⊗β (G× H), we can extend this product to a "matrix tensor product". In other words, if ϕ : F α G ⊗ F β H → F α⊗β (G × H) is an isomorphism, then M A⊗B is determined by M A⊗B (g1,h1),(g2,h2) = ϕ(M A g1,g2 ⊗ M B h1,h2 ). In case (4), a similar computation shows that there is a isomorphism ψ : F α G ⊗F β G → F α·β G and then M A ⊗B (which is defined over F αβ G) is determined by M A ⊗B g,h = ψ(M A g,h ⊗ M B g,h ). Finally, we note that there is a natural embedding G ∼ =G = {(g, g) | g ∈ G} ≤ G × G. Hence may view A ⊗B as (A ⊗ B)G and with this identification M A ⊗B is the restriction of M A⊗B toG. Nondegenerate skew-symmetric Bicharacters If the group G is abelian then any G-commutation function θ is defined by the skew-symmetric bicharacter θ(g, h) = θ g,h for every g, h ∈ G (the commutation of two elements). Our goal in this section is to present a classification of the the pairs (G, φ) where G is a (finite) abelian group and φ is a nondegenerate skew-symmetric bicharacter defined on G. In fact, this classification is known and can be found in [1]. Nevertheless for the reader convenience and completeness of the article, we recall the main results here. In what follows, we present three types of regularly graded algebras. It turns out that the bicharacters which correspond to some special cases of these gradings are irreducible and generate all possible skew-symmetric bicharacters. Then Define a Z 2n × Z 2n -grading on M 2n,n (E) by M 2n,n (E) (k,l) = U k V l ⊗ E (−1) l , where E = E 1 ⊕ E −1 is the usual grading on the Grassmann algebra. This induces a minimal regular grading on M 2n,n (E) with commutation function θ determined by We consider the bicharacters which correspond to (some special cases of) the gradings just described.
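To make the matrix statements above concrete, here is a small numerical sketch (not taken from the paper) for the nondegenerate skew-symmetric bicharacter θ((a,b),(c,d)) = ζ_n^{ad−bc} on Z_n × Z_n. It checks the properties discussed earlier: M² = |G|·I, the eigenvalue multiplicities α± determined by the trace, |det M| = |G|^{|G|/2}, and the Kronecker-product behaviour of commutation matrices under tensor products of abelian regular gradings. The group and bicharacter are illustrative choices only.

```python
import numpy as np
from itertools import product

def commutation_matrix(n):
    """Commutation matrix of theta((a,b),(c,d)) = zeta_n^(a*d - b*c) on Z_n x Z_n."""
    zeta = np.exp(2j * np.pi / n)
    elems = list(product(range(n), repeat=2))
    return np.array([[zeta ** (a * d - b * c) for (c, d) in elems]
                     for (a, b) in elems])

M = commutation_matrix(2)              # G = Z_2 x Z_2, |G| = 4
order = M.shape[0]

# Nondegeneracy: M^2 = |G| * I, so the eigenvalues are +/- sqrt(|G|)
assert np.allclose(M @ M, order * np.eye(order))

# Multiplicities from the trace: alpha_+ + alpha_- = |G|, alpha_+ - alpha_- = tr(M)/sqrt(|G|)
tr = M.trace().real
a_plus = (order + tr / np.sqrt(order)) / 2
a_minus = (order - tr / np.sqrt(order)) / 2
print("alpha_+ =", a_plus, "alpha_- =", a_minus)

# |det(M)| = |G|^(|G|/2), with the sign fixed by alpha_-
det = np.linalg.det(M)
assert np.isclose(abs(det), order ** (order / 2))

# Tensor products of abelian regular algebras: the commutation matrix of the
# product grading is the Kronecker product of the individual matrices.
M2 = commutation_matrix(3)
big = np.kron(M, M2)
assert np.allclose(big @ big, (order * M2.shape[0]) * np.eye(big.shape[0]))
```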
Simulation-Based Identification of Operating Point Range for a Novel Laser-Sintering Machine for Additive Manufacturing of Continuous Carbon-Fibre-Reinforced Polymer Parts Additive manufacturing using continuous carbon-fibre-reinforced polymer (CCFRP) presents an opportunity to create high-strength parts suitable for aerospace, engineering, and other industries. Continuous fibres reinforce the load-bearing path, enhancing the mechanical properties of these parts. However, the existing additive manufacturing processes for CCFRP parts have numerous disadvantages. Resin- and extrusion-based processes require time-consuming and costly post-processing to remove the support structures, severely restricting the design flexibility. Additionally, the production of small batches demands considerable effort. In contrast, laser sintering has emerged as a promising alternative in industry. It enables the creation of robust parts without needing support structures, offering efficiency and cost-effectiveness in producing single units or small batches. Utilising an innovative laser-sintering machine equipped with automated continuous fibre integration, this study aims to merge the benefits of laser-sintering technology with the advantages of continuous fibres. The paper provides an outline, using a finite element model in COMSOL Multiphysics, for simulating and identifying an optimised operating point range for the automated integration of continuous fibres. The results demonstrate a remarkable reduction in processing time of 233% for the fibre integration and a reduction of 56% for the width and 44% for the depth of the heat-affected zone compared to the initial setup. Introduction The utilisation of continuous carbon-fibre-reinforced polymer (CCFRP) parts in industrial applications presents a substantial opportunity for achieving significant reductions in future product consumption and CO2 emissions while maintaining economic viability [1]. CCFRP parts are notable for their favourable weight-to-strength ratio and impressive mechanical tensile properties along the fibre orientation. Continuous fibres play a crucial role in enhancing the mechanical characteristics of fibre-reinforced parts along the load-bearing pathway [2]. Additive manufacturing processes offer a promising avenue for the tool-less and time-efficient production of CCFRP parts, allowing for high levels of customisation and shape complexity. Material extrusion (MEX), which includes techniques like fused layer modelling (FLM) and ARBURG plastic freeforming (APF), has gained prominence in the literature as a viable method for the additive manufacturing of CCFRP parts [3][4][5][6][7][8][9]. Another category of processes employed for CCFRP parts is vat photopolymerisation (VPP) [10][11][12]. However, it is essential to note that CCFRP parts produced using these processes (MEX and VPP) have decisive disadvantages. The inherent nature of these processes necessitates untapped. Only by identifying an optimised operational point for roving integration is it possible to reduce the dimensions of the HAZ and, crucially, the processing time in a targeted manner. A systematic approach to determining an operating point that ensures a secure process and a suitably compact HAZ while enabling swift roving integration is imperative for the cost-effective production of LS components with elevated FVC and enhanced mechanical properties. 
Therefore, the objective of this paper is the simulation-based identification of an optimal operating point range within which the smallest possible HAZs can be generated, and the rovings can be integrated into the part as quickly as possible but simultaneously in a process-safe manner. Using COMSOL Multiphysics (Version 6.1), the process zone is first modelled with the help of a macroscopic modelling approach. Using this FE model, the creation process of the HAZ, due to the heat input of the heated fibre nozzle, is to be simulated, and an optimised operating point range is to be identified in a subsequent simulation study. Section 2.1 presents the principle of roving integration. The influencing and target variables on the roving integration caused by the fibre integration unit are discussed in the same section. The procedure for deriving the FE model in COMSOL (Version 6.1) is described in Section 2.2. The same section presents the evaluation procedure for evaluating the FE model's accuracy using a convergence analysis and a plausibility check. Section 2.3 describes the approach for the simulation-based identification of an optimised operating point range with the help of a central composite design (CCD) with initially assumed factor levels within which an optimised operating point range is sought. Finally, the identified operating point range derived by the FE model is experimentally validated, and the roving integration is analysed in the same chapter using an adjusted, more detailed CCD. Section 3 presents and discusses the results concerning the initial state. This study successfully showcased a substantial 233% reduction in the processing time required for roving integration. Consequently, this achievement paves the way for the more cost-effective production of CCFRP parts using the developed LS machine. Furthermore, 56% and 44% reductions in the width and depth of the heat-affected zone (HAZ) were attained. This advancement enables the integration of rovings closer to the edges of the parts, thereby permitting a higher fibre volume content (FVC). As a result, this study offers optimised operation points that can guide future research efforts in systematically enhancing the FVC and the accompanying mechanical properties of CCFRP parts. Principle of Roving Integration This section outlines the process flow to provide a fundamental understanding of roving integration within the developed LS machine-see Figure 1a. It is important to note that this paper does not offer an exhaustive description of the machine itself or the achievable properties of the produced parts, as those details can be found in [18,20]. The process begins within a heated process chamber of the LS machine, maintained at approximately 110 • C. Subsequently, a fresh layer of powder is evenly applied by the recoater, and the powder bed's surface temperature is homogenised using infrared (IR) emitters. Black PA12 powder (Sintratec AG, Brugg, Switzerland) with a melting temperature of approx. 185 • C is used as the matrix material. The laser beam then liquefies the applied powder layer. Following ISO 6983 G-code instructions, the layerspecific (2D) integration of one or more rovings is carried out sequentially. In order to be able to process parts with the developed LS machine, a MATLAB app was developed in xy to slice the parts (.stl) and generate the G-code for roving integration [21]. 
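The slicing and G-code generation step mentioned above is not detailed in the text. As a rough illustration only (this is not the authors' MATLAB app), the following sketch emits an ISO 6983-style move sequence for a single straight roving path at a given fibre-nozzle feed rate; the M-code for the cutting blade and all other identifiers are placeholders.

```python
def roving_gcode(start_xy, end_xy, feed_mm_min):
    """Minimal ISO 6983-style move sequence for one straight 2D roving path.

    start_xy, end_xy : (x, y) coordinates of the path in mm
    feed_mm_min      : fibre-nozzle feed rate v_D during integration
    """
    (x0, y0), (x1, y1) = start_xy, end_xy
    return [
        f"G0 X{x0:.3f} Y{y0:.3f}",                 # rapid traverse to the path start
        f"G1 X{x1:.3f} Y{y1:.3f} F{feed_mm_min}",  # integrate the roving at v_D
        "M101",                                    # hypothetical M-code: actuate the cutting blade
        "G0 X0.000 Y0.000",                        # return towards the home position
    ]

# Example: one 60 mm roving along x at v_D = 100 mm/min
print("\n".join(roving_gcode((0.0, 10.0), (60.0, 10.0), feed_mm_min=100)))
```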
The developed LS machine uses 1K rovings (HTA40) with 67 tex (Teijin Limited, Tokyo, Japan), a width of about 365 µm and a thickness of about 110 µm. The 1K rovings have an elliptical shape in their delivery condition. To facilitate this process, the entire fibre integration unit, with an additional heat source and a heated fibre nozzle, is moved rapidly in the x and y directions to reach the initial point of the first roving path located in the stable sintering region. Detailed descriptions of all symbols used in this process can be found in Tables 1 and 2. Detailed illustrations of the fibre integration unit and its constituent components are provided in Figure 1b,c. Throughout the entire roving integration process, the structure of the fibre integration unit moves into the path of radiation from the infrared (IR) emitters installed within the LS machine. This movement results in the shadowing of IR radiation. To maintain the part and the powder bed surface within the sintering window, a metal plate fitted with an adhered silicone heating mat (functioning as an additional heat source) is positioned parallel to the powder surface. It is situated at a distance of h HM3 beneath the bottom side of the fibre integration unit. The term "sintering window" refers to the temperature range between the onset of crystallisation and the melting of the semi-crystalline thermoplastic material used. The PA12 material employed in this study has an approximate crystallisation temperature of 154 °C and a melting temperature of around 184 °C, resulting in a sintering window of 30 °C [19,22]. The interaction between the heated fibre nozzle, operating at temperature T D , and the part's surface induces the formation of a localised melt zone or heat-affected zone (HAZ). The dimensions of this HAZ, namely its width b HAZ and depth t HAZ , describe the extent to which the polymer's viscosity is locally reduced. The successful embedding of rovings within the part relies on creating a sufficiently deep HAZ through the action of the heated fibre nozzle.
The resultant molten material adheres to the roving, firmly anchoring it to the Polymers 2023, 15, 3975 5 of 29 underlying layers. Positioned above the fibre nozzle, a cutting blade trims the continuous roving to a length specified by the programmed instructions in the G-code. It is important to note that the built-in diode laser, operating at 450 nm with a power output of 1.6 W, remains inactive during the roving integration process. Following the successful integration of the rovings, the fibre integration unit returns to its designated home position, and the recoater applies a fresh layer of powder. Once the infrared (IR) emitters have sufficiently heated and homogenised the powder bed surface to reach the sintering temperature of approximately 175 • C, the laser is employed to melt the new powder layer, thereby fully incorporating the roving within the polymer matrix. This sequence is repeated until all rovings have been integrated per the instructions specified in the G-code. Subsequently, after the printing process concludes, a controlled cooling process is initiated for the powder bed housing the CCFRP parts. Heat losses due to radiation W An analytical examination of the heat transfer process is employed to determine the variables that have an affect and those are aimed for in the context of roving integration. One-dimensional heat flows were utilised to simplify this analysis to characterise the influencing and target variables outlined in this study. The heat transfer during the roving integration process can be mathematically described in Equation (1). The first term in Equation (1) accounts for the heat delivered to the surface of the powder bed via the additional heat source. The part and the powder bed surface are kept warm within the sintering window, with the assistance of the additional heat source denoted as Q HM3 , located on the bottom side of the fibre integration unit to ensure the integration of rovings in a secure and replicable manner. For an extensive analysis of the additional heat source and Q HM3 , please refer to [19]. The second term in Equation (1) represents the heat the heated fibre nozzle conveys. This heat flow primarily plays a pivotal role in directing heat energy transfer, thus facilitating the formation of the HAZ. The heat transferred through the fibre nozzle is quantified according to Equation (2). The first term within the parentheses in Equation (2) pertains to Fourier's law, describing the heat exchange between the ring-shaped fibre nozzle characterised by inner diameter d D,i (responsible for guiding and transmitting the roving inside the fibre nozzle), outer diameter d D,o , nozzle curvature κ, and the surface of the powder bed. It is assumed that heat is exclusively transferred through heat conduction across the air gap h HM3 , which constitutes a scenario of free convection involving internal flow and heat radiation [18,20]. Under these conditions, the Rayleigh number (Ra) is less than 1, indicating a stable stratified fluid layer with no induced flow (Nusselt number, Nu, equals 1) [23]. The second term within the parentheses in Equation (2) corresponds to the Stefan-Boltzmann law, incorporating an additional radiation exchange between the front surface of the fibre nozzle and the surface of the powder bed. From the standpoint of a stationary point on the part's surface, ∆t FN signifies the temporal duration during which the heated fibre nozzle with a surface area A D (d D,i , d D,o ) imparts heat to the part at a feed rate of v D . 
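As a rough numerical illustration of Equation (2), the heat delivered by the nozzle can be estimated from conduction across the air gap plus grey-body radiation between the annular nozzle face and the powder bed. This is a sketch under stated assumptions, not the authors' implementation: the dwell time is approximated here as Δt_FN ≈ d_D,o / v_D, the gap height is taken as the nozzle distance h_D, and the material values are indicative only, so the absolute joule values it prints will not reproduce the heat quantities reported later in the paper.

```python
import numpy as np

SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

def q_fn(T_D, v_D_mm_min, T_P=448.0, d_i=0.6e-3, d_o=2.0e-3,
         h_gap=0.6e-3, lam_air=0.040, eps_noz=0.76, eps_part=0.90):
    """Rough estimate of the heat Q_FN [J] transferred by the fibre nozzle.

    T_D          : nozzle temperature in K
    T_P          : part surface temperature in K (preheat of ~175 degC)
    d_i, d_o     : inner / outer nozzle diameter in m
    h_gap        : nozzle-to-surface air gap, taken here as h_D (assumption)
    lam_air      : air conductivity near the gap mean temperature, W/(m K)
    """
    A_D = np.pi / 4 * (d_o**2 - d_i**2)                      # annular nozzle face area
    q_cond = lam_air * (T_D - T_P) / h_gap                   # Fourier conduction, W/m^2
    eps_eff = 1.0 / (1.0 / eps_noz + 1.0 / eps_part - 1.0)   # parallel grey surfaces
    q_rad = eps_eff * SIGMA * (T_D**4 - T_P**4)              # radiation exchange, W/m^2
    v_D = v_D_mm_min / 60.0 / 1000.0                         # feed rate in m/s
    dt_fn = d_o / v_D                                        # dwell time of a surface point (assumption)
    return (q_cond + q_rad) * A_D * dt_fn

# Example: T_D = 300 degC, v_D = 100 mm/min
print(f"Q_FN ~ {q_fn(T_D=573.15, v_D_mm_min=100):.3f} J")
```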
The duration ∆t FN for heat transfer can be influenced by the feed rate v D of the fibre nozzle. It is important to note that potential convection currents between the fibre nozzle and the supplementary heat source, as well as heat dissipation at the peripheries of the additional heat source into the surroundings, were assumed but not considered in this analysis. The parameters outlined in Equation (2) summarise the factors influencing the roving integration process within the developed laser-sintering machine, and their definitions are provided in Table 1. It should be mentioned that this analysis did not incorporate influences arising from the laser-sintering process itself, such as interactions between the laser and the part, material composition, or ageing effects of the powder. These factors were kept as constant as possible throughout the studies, as detailed in Section 2.3. The target variables for successfully integrating rovings in the developed LS machine are depicted in Figure 2. It is crucial to create a HAZ that positions the roving below the recoater's movement level, denoted as h S (the predetermined layer thickness during the printing process), to ensure the successful integration of rovings. In practical terms, t HAZ needs to be adjusted to sufficiently immerse the roving in the molten material, meaning h R should be less than h S . This configuration avoids the formation of any disruptive contours for the recoater, ensuring process reliability. Consequently, the finite element (FE) model must yield a value for t HAZ corresponding to a roving thickness of approximately 365 µm. If the roving protrudes too far above the part's surface (h R ≥ h S ), it could lead to a collision between the recoater and the roving during the subsequent recoating process. Such an occurrence may displace the part and force the printing process to be stopped. Additionally, minimising b HAZ is essential for enabling rovings to be placed as close to the edges of the part as possible without melting the unsintered, loose powder beyond its edges. According to the results of the SPD from [21], b HAZ and t HAZ of the HAZ change together with a constant aspect ratio b HAZ / t HAZ when the process parameters (e.g., fibre nozzle feed rate and nozzle temperature) are varied. In other words, if t HAZ is reduced by increasing the nozzle feed rate, b HAZ is reduced simultaneously. For this reason, only t HAZ is considered in this paper, as it is primarily responsible for the process reliability of the roving integration. Regarding cost-effective production, the duration required for roving integration plays a crucial role. This processing time is determined by the feed rate v D of the fibre nozzle. The specific objectives for successful fibre integration are outlined in Table 2. Initial studies were conducted in [18,19] for the influencing and target variables listed in Tables 1 and 2.
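The reliability criteria described above (t HAZ at least the roving thickness of ≈365 µm, and h R < h S) lend themselves to a simple screening routine. The sketch below is illustrative only and uses made-up candidate data: it keeps only feasible operating points and picks the fastest one.

```python
ROVING_THICKNESS_UM = 365.0   # approximate 1K roving thickness
H_S_MM = 0.1                  # recoater movement level / set layer thickness

def fastest_feasible(points):
    """Filter candidate operating points and return the fastest feasible one.

    Each point is a dict with keys: T_D (degC), v_D (mm/min), t_HAZ (um), h_R (mm).
    """
    ok = [p for p in points
          if p["t_HAZ"] >= ROVING_THICKNESS_UM and p["h_R"] < H_S_MM]
    return max(ok, key=lambda p: p["v_D"]) if ok else None

# Hypothetical candidates (not measured values from the paper)
candidates = [
    {"T_D": 280, "v_D": 60,  "t_HAZ": 520, "h_R": 0.05},
    {"T_D": 300, "v_D": 120, "t_HAZ": 410, "h_R": 0.07},
    {"T_D": 310, "v_D": 200, "t_HAZ": 340, "h_R": 0.12},   # too shallow and protruding
]
print(fastest_feasible(candidates))
```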
The experimental analysis and quantification of the effects and interactions of v D , h D , d D,o , h HM3 , κ, and T D on the width and depth of the HAZ (without the influence of roving integration) were analysed using an SPD in [18,24]. According to [18], v D , h D , d D,o , and T D have the most significant influence on the shape of the HAZ. h D cannot be set smaller than 0.6 mm from the powder bed surface; otherwise, the powder can adhere to the fibre nozzle and, thus, reduce the effective fibre nozzle distance. This leads to entrainment effects in the powder bed or the part. Furthermore, an outer diameter of the fibre nozzle of 2 mm has been established. d D,o directly influences the b WEZ . Due to d D,i = 0.6 mm, the outer diameter cannot be reduced further, as this would weaken the fibre nozzle too much. According to [19], an operating point was found for T HM3 and h HM3 at T HM3 = 190 • C and h HM3 = 0.8 mm, where rovings can be reliably integrated into the parts. This leaves T D and v D for an optimisation of t WEZ . In other words, the two factors, T D and v D , can be varied to optimise t WEZ . According to the SPD in [19] and Equation (2), a further reduction of t WEZ is achieved by increasing T D and v D as optimisation direction. This paper aims to determine an operating point range in which the condition t HAZ ≥ 365 µm is achieved with the highest possible fibre nozzle feed rate v D using an FE model in COMSOL Multiphysics (Version 6.1). The findings from the SPD in [19] are used to evaluate the FE model's plausibility and accuracy with the help of a CCD with initially assumed factor levels within which an optimised operating point range (t HAZ ≥ 365 µm) is sought. Finally, the adjusted operating point range derived by the FE model is validated with the help of an adapted and experimentally performed CCD with more detailed factor levels. In addition, the influence of roving integration on the depth of the HAZ is analysed, and an optimised operating point for roving integration is experimentally derived. The results are described and discussed in Section 3. Numerical Modelling In this section, a systematic derivation of an FE model for modelling the formation process of the HAZ is carried out based on physical assumptions and simplifications. The results generated from the FE model are first compared with the results of the SPD from [18] to evaluate the plausibility and accuracy of the FE model. The FE model was then used to identify an optimal operating point range for T D and v D . Modelling Approach A macroscopic modelling approach is employed in which individual particles within the part are not individually considered. Instead, the molten part is treated as a continuous medium with homogenised properties. The fluctuations in temperature within the molten part due to the heat source Q FI are described using a nonlinear heat transfer equation as defined in Equation (3) [25]. The quantities ρ P kg/m 3 and C P [J/(kg·K)] represent the part's density and heat capacity. Equation (3) delineates the alterations in the part's temperature resulting from the heat fluxes q W/m 2 and external heat sources Q W/m 3 , predominantly involving conduction, convection, and radiation. Model Assumptions and Simplifications For the derivation of the FE model, some physical assumptions and simplifications are made based on the findings from [18,19] to keep the computing time and the required storage space low while still gaining maximum knowledge. 
The most important assumptions and simplifications are listed in the following points. • The supplied heat flow Q HM3 compensates for heat losses due to radiation and convection according to Equation (1), consisting of heat conduction through the air gap and the radiation exchange between the installed additional heat source and the part surface. Consequently, the heat losses are not considered in the FE model but only the supplied heat Q HM3 of the additional heat source. • According to Equation (2), the heat input is based on heat conduction through the air gap and the radiation exchange between the ring-shaped fibre nozzle and the part's surface. The FE model does not consider possible convection flows between the fibre nozzle and the additional heat source. • The surrounding and loose powder bed is not considered in the FE model, but only the already manufactured part. The part edges are isolated in the FE model. • Modelling approaches from the current state of research and technology for analysing the laser-sintering process consider the phase transformation from powdery to molten states [26,27]. However, in the developed LS machine, the HAZ is generated in the molten state of the part, which the laser has transformed. The phase transition has, therefore, already taken place when the HAZ was created. Therefore, only the material properties of the molten state of the part are used. Isotropic part properties are assumed. This means that constant values are used for the thermal conductivity of the part λ P for the part density ρ P , and, thus, for the porosity of the part Φ P and the specific heat capacity c P (constant pressure). The thermal conductivity of the air in the air gap is the only variable with a temperature dependence λ A (T). • To determine the target value t HAZ in the FE model, only the part area with a temperature value equal to or higher than the melting temperature of the PA12 part T M is evaluated. • According to Figure 1b, there is symmetry in the centre of the fibre nozzle and along the y-axis (axis of movement of the fibre nozzle). Due to this symmetry property, only half of the process zone is modelled in the FE model. • To simplify the FE model, not the entire additional heat source is modelled, but only the immediate vicinity of the process zone, consisting of the PA12 part, fibre nozzle, air gap, feeler gauge tape, metal plate of the additional heat source, and guidance. • Due to the roving's current scattering position/orientation in part [18], only the formation process of the HAZ is modelled. The influence of the roving on the HAZ is, therefore, not an object of investigation of the FE model. For a simulation-based identification of an operating point range with the highest possible fibre nozzle feed rate, a value of t HAZ ≈ 365 µm is assumed. According to Table 1, this corresponds to the approximate thickness of a 1K roving. To build an FE model in COMSOL Multiphysics (Version 6.1), a geometric representation of the process zone to be investigated must first be created. According to Section 2.1.1, the process zone consists of the PA12 part in which the HAZ is generated, along with the air gap, the feeler gauge tape with guidance, and the fibre nozzle, which is mandatory for creating the HAZ. These components are modelled within COMSOL so that the settings to be investigated can be parameterised. 
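A deliberately simplified finite-difference rendering of Equation (3), using several of the simplifications listed above (molten-state properties only, no surrounding powder, insulated part edges, radiation omitted), is sketched below. It heats a 2D cross-section (feed direction y, depth z) with a conductive flux from a nozzle moving at v_D and reads off the molten depth (T ≥ T_M) at the end of the pass. All material values are indicative assumptions; this illustrates what the FE model solves, not the COMSOL model itself.

```python
import numpy as np

# Indicative material data for molten PA12 (the paper takes its values from Table 3)
rho, cp, lam = 850.0, 2400.0, 0.23         # kg/m^3, J/(kg K), W/(m K)
alpha = lam / (rho * cp)                   # thermal diffusivity, m^2/s

# 2D cross-section: y = feed direction, z = depth into the part
Ly, Lz = 20e-3, 3e-3                       # m
ny, nz = 200, 60
dy, dz = Ly / ny, Lz / nz
T0, T_D, T_M = 175.0, 300.0, 184.0         # preheat, nozzle, melting temperature (degC)

v_D = 100.0 / 60.0 / 1000.0                # 100 mm/min expressed in m/s
d_noz = 2.0e-3                             # outer nozzle diameter, m
h_gap = 0.6e-3                             # nozzle-to-surface air gap (h_D), m
lam_air = 0.040                            # air conductivity, W/(m K), rough value

dt = 0.2 * min(dy, dz) ** 2 / alpha        # well inside the explicit stability limit
t_end = 0.8 * Ly / v_D

T = np.full((nz, ny), T0)
y = (np.arange(ny) + 0.5) * dy
t = 0.0
while t < t_end:
    Tp = np.pad(T, 1, mode="edge")         # zero-flux (insulated) boundaries
    lapl = ((Tp[2:, 1:-1] - 2 * T + Tp[:-2, 1:-1]) / dz**2
            + (Tp[1:-1, 2:] - 2 * T + Tp[1:-1, :-2]) / dy**2)
    T = T + dt * alpha * lapl
    # conductive flux from the hot nozzle into the top row of cells (radiation omitted)
    under = np.abs(y - v_D * t) < d_noz / 2
    q = lam_air * (T_D - T[0, under]) / h_gap              # W/m^2
    T[0, under] += dt * q / (rho * cp * dz)
    t += dt

haz_depth = np.count_nonzero((T >= T_M).any(axis=1)) * dz
print(f"molten depth at end of pass ~ {haz_depth * 1e3:.2f} mm")
```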
A parameterisation of the geometry parameters (h D , h HM3 , d D,o , and κ) enables a simple and automated variation of the influencing variables using a MATLAB script linked to the FE model when carrying out parameter studies. Figure 3a shows the CAD model, including the installed components of the process zone. For comparison, Figure 3b shows the simplified geometry of the process zone realised in COMSOL. mation process of the HAZ is modelled. The influence of the roving on the HAZ is, therefore, not an object of investigation of the FE model. For a simulation-based identification of an operating point range with the highest possible fibre nozzle feed rate, a value of t ≈ 365 µm is assumed. According to Table 1, this corresponds to the approximate thickness of a 1K roving. Geometric Model Structure To build an FE model in COMSOL Multiphysics (Version 6.1), a geometric representation of the process zone to be investigated must first be created. According to Section 2.1.1, the process zone consists of the PA12 part in which the HAZ is generated, along with the air gap, the feeler gauge tape with guidance, and the fibre nozzle, which is mandatory for creating the HAZ. These components are modelled within COMSOL so that the settings to be investigated can be parameterised. A parameterisation of the geometry parameters (h , h , d , , and κ) enables a simple and automated variation of the influencing variables using a MATLAB script linked to the FE model when carrying out parameter studies. Figure 3a shows the CAD model, including the installed components of the process zone. For comparison, Figure 3b shows the simplified geometry of the process zone realised in COMSOL. (a) (b) Figure 3. Three-dimensional view of the additional heat source with fibre nozzle, feeler gauge tape, and guide (a), and geometry implemented in COMSOL for process zone (b). Table 3 shows the material parameters relevant to the FE model with numerical values and sources. In addition, Table 3 shows the initial values for all materials that form the starting point for the simulation. [26] ε P The emissivity of the PA12 part 0.90 - [19] ε HM3 The emissivity of the black lacquered metal plate of the additional heat source 0.97 - [32] ε FN The emissivity of the copper fibre nozzle (oxidised) 0.76 - Material Properties with Initial Settings The emissivity of the feeler gauge tape 0.85 - [32] As described in Section 2.2.2, isotropic material properties are assumed for the PA12 part. For modelling heat conduction through the air gap, the heat conduction coefficient through the air is relevant. The temperature-dependent thermal conductivity can be approximated according to [23] using Equation (4). Meshing Zones and Moving Mesh To reduce the calculation time, the individual meshing zones of the assembly are meshed with different degrees of resolution according to their expected influence on the simulation results. For this reason, the geometry from Figure 3 is partitioned into sections. The following Figure 4 shows the meshing zones used. The emissivity of the feeler gauge tape 0.85 - [32] As described in Section 2.2.2, isotropic material properties are assumed for the PA12 part. For modelling heat conduction through the air gap, the heat conduction coefficient through the air is relevant. The temperature-dependent thermal conductivity can be approximated according to [23] using Equation (4). 
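The coefficients of Equation (4) for the temperature-dependent air conductivity are not reproduced above. As a stand-in, λ A (T) can be interpolated from standard tabulated values; the numbers below are typical dry-air handbook values, not the approximation actually used in the paper.

```python
import numpy as np

# Typical thermal conductivity of dry air at atmospheric pressure
# (temperature in K, conductivity in W/(m K)); handbook-style values.
_T_K   = np.array([250.0, 300.0, 350.0, 400.0, 450.0, 500.0, 550.0, 600.0])
_LAM_A = np.array([0.0223, 0.0263, 0.0300, 0.0338, 0.0373, 0.0407, 0.0439, 0.0469])

def lambda_air(T_celsius):
    """Interpolated air conductivity lambda_A(T) in W/(m K)."""
    return np.interp(T_celsius + 273.15, _T_K, _LAM_A)

# Example: mean temperature in the gap between a 300 degC nozzle and a 175 degC part
print(f"lambda_A(237.5 degC) ~ {lambda_air(237.5):.4f} W/(m K)")
```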
Meshing Zones and Moving Mesh To reduce the calculation time, the individual meshing zones of the assembly are meshed with different degrees of resolution according to their expected influence on the simulation results. For this reason, the geometry from Figure 3 is partitioned into sections. The following Figure 4 shows the meshing zones used. Table 4 lists the individual meshing zones with the initially assumed mesh resolution. The meshing parameters for the individual mesh resolutions are in the COMSOL documentation [33]. Table 4. Individual meshing zones with initially assumed mesh resolution. Table 4 lists the individual meshing zones with the initially assumed mesh resolution. The meshing parameters for the individual mesh resolutions are in the COMSOL documentation [33]. For a detailed description of the mesh specifications, please refer to [33]. The PA12, the front surface of the nozzle, and the air near the nozzle are significantly involved in the heat transfer and the formation of the HAZ. Consequently, a fine mesh is initially assumed there. The nozzle and the air section mesh in a standard mesh using a parameterisable selection cylinder along the rotation axis of the fibre nozzle in COMSOL. The PA12 part is also partitioned. In this way, a correspondingly fine mesh can be selected in the area where the HAZ is formed, and the low-influence areas of the PA12 part can be more coarsely meshed. The guidance, the feeler gauge tape, and the black lacquered metal plate are provided with a coarser mesh. By default, geometries in COMSOL are meshed tetrahedrally. To replicate the one-dimensional movement of fibre nozzle feed along the y-direction, the moving mesh knot is used in COMSOL. The moving mesh knot allows the deformation of the extruded meshes (zones 5 and 6), which realises a relative movement between zones 1 and 2. The extruded mesh in zones 5 and 6, represented by a rectangular mesh, includes the edge zones of the feeler gauge tape, the black lacquered plate of the additional heating mat, and the air. It is assumed that these zones along the extrusion axis have a minor influence on the target variables. The following Figure 5 shows the displacement of the mesh along the y-axis. Meshing Zone Description Mesh Resolution the HAZ is formed, and the low-influence areas of the PA12 part can be more coarsely meshed. The guidance, the feeler gauge tape, and the black lacquered metal plate are provided with a coarser mesh. By default, geometries in COMSOL are meshed tetrahedrally. To replicate the one-dimensional movement of fibre nozzle feed along the y-direction, the moving mesh knot is used in COMSOL. The moving mesh knot allows the deformation of the extruded meshes (zones 5 and 6), which realises a relative movement between zones 1 and 2. The extruded mesh in zones 5 and 6, represented by a rectangular mesh, includes the edge zones of the feeler gauge tape, the black lacquered plate of the additional heating mat, and the air. It is assumed that these zones along the extrusion axis have a minor influence on the target variables. The following Figure 5 shows the displacement of the mesh along the y-axis. Figure 5a shows the initial state of the mesh at time t = 0 s. Figure 5b shows the extruded mesh at a relative fibre nozzle offset of 40 mm. Zone 1 moves continuously, whereas the extruded meshes in zones 5 and 6 deform. Equation (5) Here, Δs represents the travelled path of the fibre nozzle within a discrete time step Δt. 
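Equation (5) relates the nozzle displacement per solver step to the feed rate. Assuming it has the simple form Δs = v_D·Δt (an assumption, since the equation itself is not reproduced above), the mesh displacement per step and the number of moving-mesh steps for a given travel distance follow directly, which also shows why smaller time steps raise the computational effort:

```python
# Per-step nozzle displacement and solver step count for candidate time steps,
# assuming Eq. (5) has the form ds = v_D * dt.
v_D = 100.0 / 60.0          # feed rate in mm/s (100 mm/min)
travel = 40.0               # relative fibre-nozzle offset shown in Figure 5b, mm

for dt in (3.0, 2.0, 1.0, 0.75, 0.5):                 # candidate time steps, s
    ds = v_D * dt                                      # mesh displacement per step, mm
    steps = round(travel / ds)
    print(f"dt = {dt:4.2f} s -> ds = {ds:5.3f} mm/step, {steps} steps for {travel} mm")
```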
To ensure relative movement between zones 1, 5, and 6, sliding conditions are defined on the surfaces of the deformable zones so that no fixed nodes to neighbouring areas are generated during meshing. The mesh resolutions initially assumed in Table 4 are optimised in Section 2.3.1 with the help of a convergence analysis. The model's accuracy can be increased with more account points. However, this happens at the expense of computing time and memory requirements. Therefore, a compromise must always be found between accuracy and computing time or storage capacity. Figure 5a shows the initial state of the mesh at time t = 0 s. Figure 5b shows the extruded mesh at a relative fibre nozzle offset of 40 mm. Zone 1 moves continuously, whereas the extruded meshes in zones 5 and 6 deform. Equation (5) Here, ∆s represents the travelled path of the fibre nozzle within a discrete time step ∆t. To ensure relative movement between zones 1, 5, and 6, sliding conditions are defined on the surfaces of the deformable zones so that no fixed nodes to neighbouring areas are generated during meshing. The mesh resolutions initially assumed in Table 4 are optimised in Section 2.3.1 with the help of a convergence analysis. The model's accuracy can be increased with more account points. However, this happens at the expense of computing time and memory requirements. Therefore, a compromise must always be found between accuracy and computing time or storage capacity. Physics with Initial and Boundary Conditions The meshing zones for the PA12 part and the air are selected in the physics module heat conduction-see Figure 6a. The heat conduction within the heating mat is insignificant for the FE model. Therefore, only the additional heat source's black lacquered metal plate surfaces are given a temperature boundary condition (T ), which can be adjusted via the model's parameter list. This conducts heat through the air gap between the black lacquered metal plate and the PA12 part. The heat conduction from the fibre nozzle is also ensured by a temperature boundary condition (T ). The air and the PA12 part are given an initial value (T ) at the beginning of the simulation, corresponding to the LS machine's preheating. Furthermore, The heat conduction within the heating mat is insignificant for the FE model. Therefore, only the additional heat source's black lacquered metal plate surfaces are given a temperature boundary condition (T HM3 ), which can be adjusted via the model's parameter list. This conducts heat through the air gap between the black lacquered metal plate and the PA12 part. The heat conduction from the fibre nozzle is also ensured by a temperature boundary condition (T D ). The air and the PA12 part are given an initial value (T O ) at the beginning of the simulation, corresponding to the LS machine's preheating. Furthermore, the circumferential surfaces of the FE model receive a thermal insulation condition. On the one hand, the surfaces in the symmetry plane of the FE model must be insulated, as there are no heat flows here. On the other hand, the outer surfaces of the air region and the PA12 part are isolated. Since only a section of the black lacquered metal plate is shown in the FE model, no heat transfer from the heating mat to the surrounding process chamber occurs there. The thermal radiation physics module in COMSOL is applied to all surfaces involved in radiation exchange-see Figure 6b. 
These include the surface of the black lacquered metal plate, the bottom of the feeler gauge tape, and the surfaces of the PA12 part and the fibre nozzle. The direction of the emitted radiation is determined by the opacity, which is why the radiation exchange takes place through the air areas. The total heat Q FN to be transferred according to Equation (2), the multiphysics function heat transport with surface-to-surface radiation, is applied in COMSOL. Determination of the Depth of the HAZ To determine the width and depth of the HAZ, the General Projection function in COMSOL is used [34]. For each discrete time step ∆t of the moving mesh, the heat propagation in the PA12 part is calculated. Using the General Projection operator, starting from the centre of the HAZ, the distance along the Cartesian axes is integrated until the condition T M ≥ 184 • C is fulfilled. A detailed description of how the General Projection Operator functions is given in [34]. The result is a value for the width and depth of the HAZ. The result of the simulation, the HAZ, is shown in Figure 7. By default, COMSOL Multiphysics defines the time steps Δt = 3 s used to solve a timedependent problem. The predefined settings, however, lead to irregular HAZs that do not occur in reality. The greater the feed rate of the nozzle, the worse the HAZ reproduced. To counteract this, time steps are given to the temporal solver of the FE model. A further convergence analysis examines the effect of the selected time step on the HAZ-see Section 2.3.1. Evaluation of Model Quality As described in Section 2.2, a compromise must be found between model accuracy and computing time or storage capacity. The meshing zones introduced in Section 2.2.5 are increasingly refined during a convergence analysis, starting from the initially assumed mesh resolution. The resulting changes in the target value t as a function of the number of elements/mesh resolution are presented with the help of tables and a diagram indicating the number of elements and calculation time. In addition to the standardised mesh resolutions in COMSOL (fine, finer, extra fine, and extreme fine) [33], two additional customised mesh resolutions, according to Table 5, are also used. By default, COMSOL Multiphysics defines the time steps ∆t = 3 s used to solve a time-dependent problem. The predefined settings, however, lead to irregular HAZs that do not occur in reality. The greater the feed rate of the nozzle, the worse the HAZ reproduced. To counteract this, time steps are given to the temporal solver of the FE model. A further convergence analysis examines the effect of the selected time step on the HAZ-see Section 2.3.1. Evaluation of Model Quality As described in Section 2.2, a compromise must be found between model accuracy and computing time or storage capacity. The meshing zones introduced in Section 2.2.5 are increasingly refined during a convergence analysis, starting from the initially assumed mesh resolution. The resulting changes in the target value t HAZ as a function of the number of elements/mesh resolution are presented with the help of tables and a diagram indicating the number of elements and calculation time. In addition to the standardised mesh resolutions in COMSOL (fine, finer, extra fine, and extreme fine) [33], two additional customised mesh resolutions, according to Table 5, are also used. To determine a suitable time step ∆t of the moving mesh function, the procedure is analogous to the convergence analysis described above. 
The only difference is using a time step ∆t instead of the mesh's number of elements/mesh resolution. In addition to the time step of three seconds automatically defined by COMSOL, the time steps 2 s, 1 s, 0.75 s, and 0.5 s are used as time steps. After completion of the convergence analysis, it is qualitatively checked whether the FE model correctly determines the calculated results for the depth of the HAZ. In other words, it is checked whether the physics and material parameters in the FE model correspond to the experimental results up to that point. Using a plausibility check, the results of the FE model are compared with the results of the SPD from [18]. The results of this SPD are the main effect and interaction diagrams for the investigated parameters (T D , h HM3 , T HM3 , h D , κ, and d D,o ). To check the plausibility of the developed FE model, the exact repetition of the SPD from [18], including the factor-level combinations, is carried out with the help of the FE model and Minitab (2022 Cloud App). A detailed description of the SPD is not given here. For this, please refer to [18]. The plausibility is checked using arrows by comparing the main effect and interaction diagrams concerning the slope. If, for example, the effect has a positive slope when changing from the first to the second level value of a factor, an ascending arrow symbol ↑ is assigned to this diagram. A descending arrow symbol ↓ is assigned in the case of a negative slope. The procedure for the other factors is analogous. Finally, the agreement of the arrow symbols, i.e., the agreement between simulation and experiment, is checked ( ). For the interactions, two arrows are used instead of one arrow. To determine the model accuracy, the quantitative results (mean values) of the SPD are compared with the simulated results of the FE model for the same factor-level combinations. The degree of agreement, i.e., the model accuracy, e FE , between simulation and experiment is expressed in %. Simulation-Based Identification of an Optimal Operating Point Range The starting point for the simulation-based optimisation of v D and t HAZ is the knowledge of the SPD from [18]. Section 2.1.1 describes process parameters T D and v D, which must be increased to reduce the target value t HAZ (optimisation direction). The correlation between the dependent target variable t HAZ and the independent influencing variables T D and v D is required to determine an optimal operating point range. Determination of an Optimised Operating Point Range To identify an optimised operating point range for T D and v D , in which t HAZ ≈ 365 µm can be achieved, the relationship between t HAZ (T D , v D ) is first derived with the help of a central composite design (CCD) with initially widely spaced factor levels. A CCD consists of a full factorial or partial factorial ground plan and a central star. CCD designs have an orthogonal design with a two-stage structure. The two-stage basic design can be evaluated in advance. The star can be used to create both square and cubic models. Such experimental designs are preferably used for the optimisation of target variables. Due to two independent influencing variables and a dependent target variable, the cause-effect relationship of these variables can be described in three-dimensional space. Following the Stefan-Boltzmann law as outlined in Equation (2), it is observed that alterations in temperature exhibit a nonlinear impact on the transferred heat Q FN . 
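The arrow-based slope comparison and the accuracy measure e FE can both be expressed compactly. The sketch below is an illustrative implementation with made-up effect values (the actual slopes and measurements are in [18] and Tables 8 and 9); it assumes e FE is reported as 100 % minus the worst-case relative deviation between simulated and measured t HAZ, which is one possible reading of how the accuracy is stated.

```python
import numpy as np

def slope_arrow(level1_value, level2_value):
    """Arrow direction for an effect when moving from factor level 1 to level 2."""
    return "up" if level2_value > level1_value else "down"

def concordant(exp_levels, sim_levels):
    """True if experiment and FE model agree on the slope direction."""
    return slope_arrow(*exp_levels) == slope_arrow(*sim_levels)

def model_accuracy(t_sim_um, t_exp_um):
    """e_FE as 100 % minus the worst-case relative deviation (assumed definition)."""
    rel = np.abs(np.asarray(t_sim_um, float) - np.asarray(t_exp_um, float)) \
          / np.asarray(t_exp_um, float)
    return 100.0 * (1.0 - rel.max())

# Made-up effect values (t_HAZ at factor level 1 / level 2), only to show the bookkeeping
effects = {
    "T_D": {"exp": (300.0, 520.0), "sim": (320.0, 540.0)},
    "v_D": {"exp": (520.0, 300.0), "sim": (500.0, 310.0)},
}
for factor, e in effects.items():
    print(factor, "concordant" if concordant(e["exp"], e["sim"]) else "discordant")

print(f"e_FE ~ {model_accuracy([400.0, 350.0], [430.0, 380.0]):.0f} %")
```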
Therefore, Equation (6) serves as a fundamental framework for comprehensively depicting the linear and nonlinear influences of T D and v D on t HAZ . f(x, y) = a 0 + a 1 x + a 2 y + a 3 xy + a 4 x 2 + a 5 y 2 + a 6 x 2 y 2 (6) To determine the coefficients from Equation (6), assumed factor-level combinations of a central composite design (CCD) are first set and varied using the developed FE model and a linked MATLAB script, and the target value t HAZ is determined. The initial factor levels are selected so that the results of the CCD include t HAZ ≈ 365 µm. Information on the construction and derivation of a CCD can be found in [35]. Since the results of the FE model are not subject to scatter, the factor-level combinations are not repeated, nor are randomisation and block formation. The following considerations determine the initial factor levels for T D and v D of the CCD within which the value t HAZ ≈ 365 is expected. • The basis for optimisation is the best operating point of the production process so far [35]. According to the results of the SPD, the best operating point range so far is based on the process understanding gained in [18], Table 6. The first four columns contain the factor-level combinations of the CCD. The last column contains the amount of heat transferred to the part. Three values are below 28 joules. Since this initial CCD intends to find a parameter constellation for T D and v D at which a value of t HAZ ≈ 365 µm can be achieved, the CCD assumed in Table 6 is used as a first approximation. According to [18], and for reasons of process safety (Section 2.1.1), h D has a constant value of h D = 0.6 mm for all factor-level combinations. Using the FE model and based on the factor-level combinations given in Table 6, an optimised operating point can be found for which a t HAZ (T D , v D ) ≈ 365 µm can be achieved. Based on this identified operating point, a more detailed CCD is derived, based on which the operating point determined by the FE model is to be validated experimentally. Experimental Validation of the Adjusted Operating Point Range The starting point for the experimental validation is the more detailed CCD determined by the FE model and adjusted concerning the factor levels. The samples with integrated HAZ are produced with the developed LS machine's help to validate the operating point range according to the adapted CCD's factor-level combinations. The generated HAZ is then measured in its depth. The specimens used in this study were rectangular parts with dimensions of 60 × 15 × 3 mm 3 (l × w × h) and were constructed using PA12 material (specifically, Sintratec black). The design of these specimens was adapted from a previously established geometry described in [19]. Notably, the specimens' thickness and width were significantly larger than the HAZ. Consequently, the heat input occurred exclusively within the generated part structure, preventing any melting of the unsintered powder. The positioning of these specimens within the powder bed of the developed LS machine is depicted in Figure 8 below. The specimens were placed at a separation of 2 mm from each other along the x-axis. Furthermore, adjacent specimens were positioned with a 0.5 mm offset along the z-axis, corresponding to the build direction. The HAZ insertion occurred after 2.5 mm, corresponding to the 25th layer of each part. The fibre integration unit executed a rapid traverse to reach the initial position for the path of motion, indicated by the red line in Figure 8b. 
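Fitting the response surface of Equation (6) to the simulated CCD results is an ordinary least-squares problem. The sketch below shows the bookkeeping with synthetic data standing in for the FE results (the real factor levels and responses are those of Table 6 and the paper's CCD); in practice the factors are usually coded to [-1, 1] before fitting to keep the problem well conditioned.

```python
import numpy as np

def design_matrix(T_D, v_D):
    """Columns of Eq. (6): 1, x, y, xy, x^2, y^2, x^2*y^2 with x = T_D, y = v_D."""
    x, y = np.asarray(T_D, float), np.asarray(v_D, float)
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2, x**2 * y**2])

def fit_t_haz(T_D, v_D, t_haz):
    """Least-squares coefficients a0..a6 of Eq. (6)."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(T_D, v_D),
                                 np.asarray(t_haz, float), rcond=None)
    return coeffs

def predict(coeffs, T_D, v_D):
    return design_matrix(T_D, v_D) @ coeffs

# Synthetic CCD-like points (T_D in degC, v_D in mm/min, t_HAZ in um) -- illustrative only
T_D   = [280, 280, 320, 320, 300, 300, 272, 328, 300]
v_D   = [ 60, 180,  60, 180,  60, 180, 120, 120, 120]
t_haz = [520, 210, 650, 330, 600, 280, 300, 480, 400]

a = fit_t_haz(T_D, v_D, t_haz)
print("fitted coefficients a0..a6:", np.round(a, 6))
print("prediction at (300 degC, 120 mm/min):", float(predict(a, [300], [120])[0]))
```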
Subsequently; it moved at the feed rate specified in the experimental design, following which it returned to its home position via rapid traverse. Five central points were selected for each block to generate orthogonal blocks [35]. Each combination of factor levels, including two replications and the corresponding central point, was produced within a single print run, forming one block. The sequencing of experiments within two blocks was randomised using Minitab software (2022 Cloud App). The settings employed for producing the samples with the developed LS machine aligned with those described in [18]. Notably, the travel distance of the fibre nozzle, and, thus, the area subjected to melting, exceeded the dimensions of the part itself, resulting in a protrusion at the end faces of the specimens. The specimens were prepared in a manner that allowed the HAZ to be observable for evaluation to ascertain the depth of the HAZ. This was achieved by trimming off The specimens were placed at a separation of 2 mm from each other along the xaxis. Furthermore, adjacent specimens were positioned with a 0.5 mm offset along the z-axis, corresponding to the build direction. The HAZ insertion occurred after 2.5 mm, corresponding to the 25th layer of each part. The fibre integration unit executed a rapid traverse to reach the initial position for the path of motion, indicated by the red line in Figure 8b. Subsequently; it moved at the feed rate specified in the experimental design, following which it returned to its home position via rapid traverse. Five central points were selected for each block to generate orthogonal blocks [35]. Each combination of factor levels, including two replications and the corresponding central point, was produced within a single print run, forming one block. The sequencing of experiments within two blocks was randomised using Minitab software (2022 Cloud App). The settings employed for producing the samples with the developed LS machine aligned with those described in [18]. Notably, the travel distance of the fibre nozzle, and, thus, the area subjected to melting, exceeded the dimensions of the part itself, resulting in a protrusion at the end faces of the specimens. The specimens were prepared in a manner that allowed the HAZ to be observable for evaluation to ascertain the depth of the HAZ. This was achieved by trimming off the projecting portion (a "slug") of the melted area at the front of the specimens using a scalpel. The depth of the HAZ was then measured using a microscope (specifically, the Keyence VHM 7000), following the procedures outlined in [18]. Influence of Roving Integration on the Depth of the HAZ In addition to the previously outlined experimental design, an analysis was conducted to assess the impact of roving integration on the HAZ. The modified CCD was replicated while considering the influence of roving integration to accomplish this. The results were compared with those from the CCD experiments conducted without roving integration. However, given that it is challenging to visually distinguish the roving from the surrounding matrix when examining the specimen from the front (as illustrated in Figure 9a), a scalpel was employed to incise the specimen, as demonstrated in Figure 9b. This incision allowed for the measurement of the overlap of the roving [26]. In addition to the depth of the HAZ, the roving overlap is an important target value for roving integration. If the roving overlap is too high, a dragging effect of the part and the roving can occur. 
To determine the roving overlap, the total part thickness is first determined. Based on the known number of powder layers per part n tot = 30, the thickness per layer h * S (≈0.095 mm) can, thus, be determined-see Figure 10. This layer thickness h * S is necessary since the part will shrink during cooling. In addition, the roving overlap h R can be determined from the known integration layer of the roving at n FI = 25. The roving overlap is compared to the set powder layer thickness during printing, i.e., the movement level of the recoater at h S (= 0.1 mm), and should ideally be h R < h S . The machine settings employed in this study mirrored those utilised in [18]. The roving selected was a coated 1K roving (67 tex, HTA40) sourced from Teijin Limited.
The chosen roving featured a coating material comprising a thermoplastic-compatible polymer dispersion known as PERICOAT AC250 [36] to facilitate adequate bonding between the fibre and matrix. The coating content was maintained at 5% [35]. Finally, the process understanding gained in this paper is validated using a demonstrator part. The demonstrator part is a battery tab gripper for handling battery electrodes within an agile battery cell production line [37]. Model Quality During the convergence analysis, a change in the mesh resolution for the initially assumed coarse meshing zones 3 to 6 does not lead to a significant difference in the target value t HAZ . For meshing zones 1 and 2 and time step ∆t of the moving mesh, changes in t HAZ could be recorded based on the convergence analysis. Figure 11 below shows the result of the convergence analysis for meshing zone 1. It can be seen in Figure 11a that the target t converges relatively early. According to Figure 11b, the required calculation time is 54 min for the smallest number of elements and 2 h 11 min for the highest number of elements. A number of elements of 110,022 are selected for this mesh (Custom 1). Figure 12 shows the result of the convergence analysis for meshing zone 2 (PA12). It can be seen in Figure 12a that the target value t appears to change extremely late, i.e., at a high number of elements. According to Figure 12b, the required calculation time is 21 min for the smallest number of elements and 5 h 10 min for the highest number of elements. Due to the significant jump between the extremely fine mesh and the Custom 1 mesh concerning t of about 0.05 mm, a number of elements of 220,384 are chosen for meshing zone 2 (Custom 1). Figure 13 shows the result of the convergence analysis for the time step ∆t. (a) (b) It can be seen in Figure 11a that the target t HAZ converges relatively early. According to Figure 11b, the required calculation time is 54 min for the smallest number of elements and 2 h 11 min for the highest number of elements. A number of elements of 110,022 are selected for this mesh (Custom 1). Figure 12 shows the result of the convergence analysis for meshing zone 2 (PA12). It can be seen in Figure 11a that the target t converges relatively early. According to Figure 11b, the required calculation time is 54 min for the smallest number of elements and 2 h 11 min for the highest number of elements. A number of elements of 110,022 are selected for this mesh (Custom 1). Figure 12 shows the result of the convergence analysis for meshing zone 2 (PA12). It can be seen in Figure 12a that the target value t appears to change extremely late, i.e., at a high number of elements. According to Figure 12b, the required calculation time is 21 min for the smallest number of elements and 5 h 10 min for the highest number of elements. Due to the significant jump between the extremely fine mesh and the Custom 1 mesh concerning t of about 0.05 mm, a number of elements of 220,384 are chosen for meshing zone 2 (Custom 1). Figure 13 shows the result of the convergence analysis for the time step ∆t. (a) (b) It can be seen in Figure 12a that the target value t HAZ appears to change extremely late, i.e., at a high number of elements. According to Figure 12b, the required calculation time is 21 min for the smallest number of elements and 5 h 10 min for the highest number of elements. 
Due to the significant jump between the extremely fine mesh and the Custom 1 mesh concerning t_HAZ of about 0.05 mm, a number of elements of 220,384 are chosen for meshing zone 2 (Custom 1). Figure 13 shows the result of the convergence analysis for the time step ∆t. It can be seen in Figure 13a that the target value t_HAZ does not converge even at a very high mesh resolution, i.e., at an increased number of elements. However, the changes in t_HAZ are still approximately 0.02 mm for the tiniest time steps. According to Figure 13b, the required calculation time is 34 min for the smallest number of elements and 1 h 22 min for the largest number of elements. Due to the relatively small change in the target value with small time steps, a time step of 0.5 s is, nevertheless, selected since it was shown during the CCD that the high values of the fibre nozzle feed rate could be better reproduced with this time step. Table 7 summarises the mesh resolutions for the meshing zones 1-6. In the following, the results of the plausibility check are presented. Table 8 contains the results of the plausibility check comparing the slopes of the main effects of the SPD from [18] and the slopes of the main effects of the FE model. For this purpose, the results of the FE model were evaluated using Minitab (2022 Cloud App). Table 8. Comparison of the slopes of the main effect diagrams between the FE model and the SPD (columns: Factor; Factor Level 1→Factor Level 2; Experiment Slope; Simulation Slope; Concordance). It can be seen from Table 8 that the slope characteristics simulated by the FE model correspond to the slope characteristics of the main effects of the experimentally conducted SPD from [18]. Thus, it can be assumed that, at least for the main (linear) effects, the FE model provides a correct estimation for an optimised operating point range. In Table 9, the slopes identified from the SPD for the significant interactions are compared with those from the FE model for the same interactions. Table 9. Comparison of the slopes of the interaction diagrams between the FE model and the SPD (columns: Interaction Factor 1/Factor 2; Experiment Slope; Simulation Slope; Concordance). As in Table 8, it can be seen in Table 9 that the slope curves of the interactions simulated by the FE model match the slope curves of the experimentally performed SPD from [18]. Thus, in addition to the main effects, it can be assumed that the FE model provides a correct estimate for an optimised operating point range when performing the CCD in Section 3.2.
To determine the model accuracy e_FE, the results of each factor-level combination of the FE model were compared with the measured values for t_HAZ from the SPD. The maximum model deviation is 22%. This results in a model accuracy of e_r = 78%. The most probable cause for the model deviation of 22% is the unfavourable position of the factor-level combination. For this factor-level combination, the transferred heat quantity Q_FN is a value of 28.7 joules. This heat quantity corresponds to an average value of t_HAZ (T_D = 280 °C, h_D = 0.8 mm, v_D = 60 mm/min, κ = concave, T_HM3 = 190 °C, and d_D,o = 2 mm) = 52 µm. When evaluating this factor-level combination, it was found that the generated HAZ was barely visible and, thus, could not be reliably measured. This factor-level combination was repeated several times to obtain a statistically reliable result. With another factor-level combination, an average value of t_HAZ (T_D = 310 °C, h_D = 0.4 mm, v_D = 30 mm/min, κ = planar, T_HM3 = 190 °C, and d_D,o = 2 mm) = 783 µm was obtained. At this setting, a heat quantity of 152.8 joules is transferred to the part. For this factor-level combination, a model accuracy of 82% could be achieved in comparison. Further causes for reduced model accuracy are listed in the following points: • Depending on the position of the parts in the powder bed, the porosity of the parts can vary significantly due to temperature differences caused by the installed IR emitters or the additional heat source on the powder bed surface [13]. This has a direct influence on how the heat propagates within the part. The result is scattered values for the width and depth of the HAZ. • Furthermore, the mixing ratio of the PA12 powder used has a decisive influence on the part properties. Although a mixing ratio of 60% new powder and 40% old powder was used for the SPD, according to [38], the proportion of old powder contributes to the scattering. • Another cause of the reduced model accuracy is the possible convection flows in the air gap between the heated fibre nozzle and the part surface due to the nozzle velocity. These convection currents can influence the transferred heat quantity Q_FN. • The values in the FE model are partly based on literature values. Depending on the powder composition of the manufacturer, the material characteristics assumed in Table 3 may differ from the real material characteristics. Furthermore, the mesh resolutions have an additional influence. In summary, it can be said that, according to the results of the plausibility check, the FE model can reflect the experiments well concerning the slope curves. A model accuracy of e_r = 78% is classified as acceptable. Due to the proven plausibility and considering the FE model's accuracy of e_r = 78%, the FE model is used to identify an operating point range for T_D and v_D, in which t_HAZ ≈ 365 µm occurs with high probability. Operating Point Range for Roving Integration In this section, the simulation-based derivation of an operating point range is carried out within which the rovings can be integrated into the part as reliably as possible, i.e., sufficiently deeply and with the highest possible fibre nozzle feed rate. Since the FE model does not consider the roving, an operating point range is sought with the help of the FE model by achieving a value of t_HAZ ≈ 365 µm. t_HAZ ≈ 365 µm is the average roving thickness.
It is assumed that, at t_HAZ ≈ 365 µm, the roving is integrated sufficiently deeply into the part without forming an interfering contour for the recoater. Adjustment of the Operating Point Range The starting point for the simulation-based identification of an operating point range for v_D and T_D are the factor-level combinations of the CCD listed in Table 6 and initially assumed. With the help of the FE model and the MATLAB link, these factor-level combinations were set, and the regression equation and the coefficient of determination R² were determined. In the following Figure 14a, the generated surface response diagram and the simulated points of the FE model can be seen in red. It can be seen that the determined factor-level combinations from Table 6 include and can map the assumed condition t_HAZ(T_D, v_D) ≈ 365 µm. The regression equation calculated from Minitab (2022 Cloud App) is shown in Equation (7). With a coefficient of determination of R² = 0.98, this value is close to 1. Thus, with the help of the determined regression equation, the simulated target value t_HAZ(T_D, v_D) is very well reproduced. Figure 14b shows the possible parameter constellations for T_D and v_D, where a value of t_HAZ ≈ 365 µm is achieved. The fibre nozzle feed rate in mm/min is plotted on the x-axis, and the nozzle temperature in °C is plotted on the y-axis. The graph in Figure 14b represents the function values calculated by the regression model at constant t_HAZ(T_D, v_D) ≈ 365 µm. To achieve a high fibre nozzle feed rate for t_HAZ(T_D, v_D) ≈ 365 µm and, thus, a reduced process time, the nozzle temperature T_D must be increased at the same time according to Figure 14b so that the same amount of heat is transferred to the part.
According to Figure 14b, the desired operating point range for T_D and v_D, thus, moves to the upper right corner. Since a high fibre nozzle feed rate is sought, an initial setting at a temperature of T_D = 345 °C is defined as the first starting point of the CCD to be adjusted, which will be used for the experimental validation. This temperature value, while maintaining the requirement t_HAZ(T_D, v_D) ≈ 365 µm and according to Figure 14b, corresponds to a feed rate value of approximately v_D = 116 mm/min. This factor-level combination T_D = 345 °C and v_D = 116 mm/min is, thus, determined as the central point of the new, adjusted CCD. According to [35], starting from this starting point (central point), a factor-level combination is selected in the assumed optimisation direction, i.e., in the direction of a higher fibre nozzle feed rate v_D (reduced process time for roving integration). Table 10 shows the factor-level combinations selected for the adjusted CCD with the new, more detailed operating point range. Based on the factor levels for the adjusted CCD in Table 10, PA12 parts were produced using the developed LS machine and the generated HAZ (without roving) was measured. The measured and FE-model-simulated values for the depth of the HAZ are listed in Table 11, together with the prevailing model deviation. The experimental results for the depth of the HAZ are close to the simulated results. The last column shows that the relative maximum model deviation between the experiment and the FE model is max. 18%. Compared to the initial maximum model deviation of 22%, which was determined with the help of the SPD, the model deviation could be reduced with the CCD. A possible reason for this could be the reduced number of factors. Compared to the SPD, where six factors are varied, with the CCD, only two factors are varied, and the rest are kept constant. This reduces the scattering influence of the factors that are held constant. Figure 15 compares the surface response diagrams of the CCD derived using Minitab for the FE model (top) and the experiments carried out (bottom). Equation (8) shows the regression function generated by Minitab (2022 Cloud App) to describe the depth of the HAZ for the experimentally performed CCD without the influence of the roving integration (lower surface response diagram). The standard error of the regression, S, for the depth of the HAZ was low at 32.41 µm. Therefore, the predicted deviation from the actual value was only 36.68 µm. The coefficient of determination is R² = 0.78 and is rated as acceptable. Furthermore, Equation (9) shows the regression function describing the depth of the HAZ for the simulation-based CCD (upper surface response diagram). The standard error of the regression, S, for the depth of the HAZ was also low at 32.41 µm. Therefore, the predicted deviation from the actual value was only 36.68 µm. The coefficient of determination is R² = 0.98, close to 1 and rated as very good.
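Equations (7)-(9) themselves are not reproduced in this excerpt. As orientation only, and not the authors' actual fitted equations: a two-factor central composite design is normally used to fit a full quadratic response-surface model, so a regression of the assumed form (with placeholder coefficients b_0, ..., b_12) would read

t_HAZ(T_D, v_D) = b_0 + b_1·T_D + b_2·v_D + b_11·T_D² + b_22·v_D² + b_12·T_D·v_D,

with the coefficients estimated by least squares (here via Minitab) and the fit quality summarised by the standard error S and the coefficient of determination R² quoted above.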
It can be said that the two surface response diagrams are relatively close. The surface response diagrams are further apart for high feed rate values v_D than for lower feed values. Furthermore, it can be seen in Figure 15 that the surface response diagram for Equation (9) drops for both high and lower temperatures. The causes for deviations between the surface response diagrams could be the same as in Section 3.1. In addition, the longer sample geometry could be another reason for a reduced coefficient of determination for Equation (9). Due to the longer length of the samples compared to the samples from the SPD, the samples from the CCD may lie in areas on the powder bed where the heat distribution is no longer constant. The result is an increased pore content of the parts and a reduced value for the depth of the HAZ. Influence of Roving Integration on the Depth of the HAZ The identical sets of factor levels from the modified CCD were selected to examine the impact of roving integration on the depth of the HAZ, as outlined in Table 10 [21]. Equation (10) represents the regression function derived from the CCD outcomes, used to model the HAZ depth with consideration of the roving's influence. The standard error of the regression (S) for the HAZ depth was notably low, measuring 35.69 µm. Although the coefficient of determination R² of 0.72 was lower than that of the model presented in Equation (10), it is considered acceptable. In Figure 16a, a surface response plot based on Equation (9) of the FE model (excluding the influence of the roving) was compared with Equation (10) (considering the influence of the roving). With the integration of rovings, it is evident that the depth of the HAZ becomes more substantial compared to scenarios without roving integration. Furthermore, roving integration primarily influences the depth of the HAZ, increasing it by approximately 100 µm. In comparison, the width of the HAZ experiences a marginal increment of about 32 µm, consistent with findings in [18]. This increase in depth primarily results from the heat transferred from the heated fibre nozzle to the roving. The stored heat of the roving contributes to additional melting, resulting in an average increase of 110 µm in the HAZ depth. Figure 16b presents the contour diagram for Equation (10) (CCD with roving influence). According to the contour lines in Figure 16b, the minimum depth is observed in the top-left region. In this case, it corresponds to a nozzle temperature between 335 °C and 345 °C and a feed rate between 130 mm/min and 140 mm/min. The predicted HAZ depth within this range falls below 540 µm and above 520 µm.
To assess the reliability of the roving overlap above the integrated part level, it was observed that the nozzle feed rate had no significant influence within the range of 92 mm/min to 140 mm/min. Consequently, additional specimens were employed to explore the limitations of the nozzle feed rate. A nozzle temperature of 345 °C was set for these particular specimens, while the nozzle feed rate was systematically adjusted. The supplementary feed rates used were 160, 180, 200, 220, 240, and 300 mm/min, each repeated twice for reliability. Furthermore, only those derived from the modified CCD using a nozzle temperature of 345 °C were used for consistency among the specimens. Figure 17a presents the results concerning the measured roving overlaps (blue dots), the recoater's movement level (red), and a linear trend line (black) to represent the estimated measurement points. Figure 17b illustrates the rovings' orientation in the selected specimens' parts. The recoater's movement level depicted in Figure 17a remained constant at 100 µm, aligning with the predetermined layer thickness during printing. Figure 17a provides a visual representation of the measured values' variability. However, an increase in the nozzle feed rate resulted in the roving being less deeply embedded in the heat-affected zone (HAZ), which raised the risk of the recoater encountering the roving or potentially becoming entangled with it.
Notably, while there were fewer data points for specimens with feed rates exceeding 140 mm/min, three values (representing 25% of these specimens) surpassed the 100 µm limit. Examining Figure 17b reveals significant variations in the shape and orientation of the rovings among the specimens. Some specimens exhibited flat and wide rovings, while others had tall and narrow ones. This underscores the substantial impact of orientation as a significant source of variation affecting the successful integration of rovings. The arbitrary orientation of the rovings has a significant effect on the overlap of rovings and, consequently, the reliability of the process. Process limits are established based on the conducted tests and the previously established relationships to address this issue. Increasing the nozzle feed rate while reducing the nozzle temperature led to the shallower embedding of the roving within the part. This heightened the risk of the recoater encountering and entangling the roving. Furthermore, this condition increased the likelihood of powdered material adhering to the nozzle, potentially disrupting the printing process. The powder adhering to the nozzle was observed at 335 °C with a feed rate of 160 mm/min and 345 °C with a 200 mm/min feed rate. In the experimental CCD study, a detailed examination was conducted within the temperature range of 335 °C to 355 °C for the nozzle and within the feed rate range of 92 mm/min to 140 mm/min. Notably, no disturbances were encountered during the production of 52 specimens within this range, leading to the conclusion that a process-reliable integration of fibres can be achieved within these parameters. The following factors are potential explanations for the lower coefficients of determination and the lack of control over roving orientation: • The entire structure of the fibre integration unit remains within the process chamber of the developed LS machine during the printing process, leading to thermal expansion and potential changes in manually set values (e.g., nozzle distance) compared to the system's cold state. Additionally, thermal expansion of the feed spindles that position the fibre integration unit can introduce errors and alter nozzle feed rate values. • The PLC temperature setting accuracy is approximately ±1 °C, which can contribute to variations in the measurement results. • The inner diameter of the fibre nozzle exceeds the thickness of the roving, resulting in an increased play of the roving within the fibre nozzle. This play can lead to uncontrolled roving placement within the part. • Other factors, such as the placement of specimens in the powder bed, the condition of the powder's ageing, and the condition of roving delivery, may also contribute to result deviations. Determination of an Optimal Operating Point All target parameters must be considered to determine the optimal operating point for the developed LS machine. Notably, the settings for nozzle temperature and nozzle feed rate exert opposing effects on the target variables. A higher nozzle feed rate proves advantageous for the width and depth of the HAZ (heat-affected zone) and processing speed. Likewise, lowering the nozzle temperature results in reduced HAZ dimensions. Conversely, a higher nozzle temperature and a lower nozzle feed rate enhance process reliability. The adjusted CCD analysis identified an optimal range between 335 °C and 345 °C at a nozzle feed rate of 130 mm to 140 mm per minute for the HAZ depth.
Further reductions in either nozzle temperature or increases in nozzle feed rate should be avoided for process safety. Considering these factors, a nozzle feed rate increase enhances all three remaining target values. Therefore, a process limit of 140 mm per minute was selected for optimisation. At the same time, nozzle temperature variations have a relatively minor impact within the range of 335 °C to 345 °C; a limit of 340 °C was set to ensure greater process reliability. Considering process reliability, speed, and HAZ size, the optimised operating point corresponds to a nozzle temperature of 340 °C and a nozzle feed rate of 140 mm per minute. According to the FE model, this setting is expected to yield a HAZ width of 2638.72 µm and a HAZ depth of 523.36 µm. An overview of all settings for the optimal operating point is provided in Table 12. Table 12. Identified operating points evaluated as optimal for reproducible and process-reliable roving integration. In the future, several approaches can be considered to enhance process reliability and reduce processing time: • Modifying the fibre nozzle's inner diameter to match the roving's shape or making it smaller can provide better control over the orientation of the rovings within the part. • Implementing additional twisting of the rovings before coating could result in a rounder shape. This, combined with adjustments to the inner diameter of the fibre nozzle, may improve deposition accuracy within the part. Manufacturing of a Battery Tab Suction Gripper For the experimental validation of the identified and optimised operating points from Table 12, this section involves the production of a battery tab suction gripper with a function-integrated spring, i.e., without subsequent assembly, for agile fuel cell production. Figure 18a shows the 3D model of the suction grip with integrated roving paths (red) inside the struts to be reinforced, which was created using generative design.
In the struts, within which the rovings are to be integrated, the part has a minimum part diameter of 3.5 mm. The risk of the HAZ protruding from the part, thus affecting the struts' overall appearance and reducing the struts' surface quality, is, therefore, relatively high. Using the optimised operating point from Table 12, three rovings could be integrated into each strut according to Figure 18b, and, thus, the part could be successfully manufactured and reinforced. The surface of the struts differs only marginally from the rest of the part. Conclusions The LS machine developed for the additive manufacturing of CCFRP parts aims to combine the specific advantages of the LS process with continuous roving reinforcement. This technology enables the future production of intricate, near-net-shape functional parts with favourable matrix properties. The approach allows for load-path-oriented reinforcement with continuous rovings, offering economic benefits by eliminating the need for support structures and reducing post-processing efforts. This study pursued the systematic optimisation of roving integration within a newly developed LS machine. This involved utilising an FE model in COMSOL and conducting experiments using a CCD. The optimisation focused on crucial aspects such as processing time, process reliability, and the HAZ's shape. The critical findings of this study are summarised as follows: • Using a convergence analysis and plausibility check, the developed FE model could be verified concerning model plausibility. The FE model shows the same physical behaviour as the split-plot design (SPD) in [18]. When comparing the results from the SPD and the FE model, an initial model accuracy of the FE model of 78% is achieved. • A large percentage of the deviations between the developed FE model and the conducted experiments most likely originate in the pure LS process and the course of roving integration. Depending on the position of the parts in the powder bed, the porosity of the parts can vary significantly on the powder bed surface due to temperature differences caused by the installed IR emitters in the LS machine or the additional heat source of the fibre integration unit [20]. This has a direct influence on how the heat propagates within the part. The result is scattered values for the width and depth of the HAZ. Furthermore, the mixing ratio of the PA12 powder used has a considerable influence on the part properties. Another cause for reduced model accuracy is the occurrence of possible convection flows in the air gap between the heated fibre nozzle and the part surface due to the nozzle feed rate. These convection currents can influence the transferred heat quantity Q_FN. The material parameters or the mesh fineness used in the FE model in COMSOL can also affect the target values and, thus, the model accuracy. • With the help of the derived FE model and a CCD with initially widely spaced factor-level combinations, an operating point range could be identified in which t_HAZ(T_D, v_D) ≈ 365 µm occurs. Based on a selected operating point at T_D = 345 °C and v_D = 116 mm/min, a new, more detailed CCD was derived as the basis for experimental validation of the FE model. • The adapted CCD was carried out experimentally and simulatively. In a result comparison, the maximum model deviation of the FE model could be reduced to 18%. The reasons for this are the reduced number of varying factors and, thus, a reduced scattering effect.
The experimental regression model for the detailed CCD has a coefficient of determination of 0.78 and is close to 1. The target variable t_HAZ(T_D, v_D) can thus be described relatively well by the influencing factors. • Additional factors contributing to the lower coefficients of determination include the thermal expansion of the fibre integration unit, potential deviations in temperature settings controlled by the PLC, the interaction between the roving and the inner diameter of the fibre nozzle, and scattering effects linked to the laser-sintering process (such as material properties and specimen placement within the powder bed). • Integrating rovings into the part results in an expanded HAZ. Specifically, the depth of the HAZ is notably increased compared to scenarios without roving integration. Roving integration primarily influences the depth, which sees an average increase of 100 µm. This heightened depth can be attributed to the heat transferred from the heated fibre nozzle to the roving. Moreover, the roving's intrinsic heat contributes to additional melting, increasing the average HAZ depth by 110 µm. • This study successfully demonstrated a substantial 233% increase in the nozzle feed rate, achieving a 140 mm/min rate for roving integration. Consequently, more cost-effective production of CCFRP parts in the developed LS machine becomes feasible. Furthermore, the width and depth of the HAZ were effectively reduced to 2638.72 µm (−56%) and 523.36 µm (−44%), respectively. This reduction enables the integration of rovings closer to the part edges, facilitating higher fibre volume content (FVC) settings. The study, thus, provides optimised operational parameters for future research endeavours. • However, certain limitations persist, particularly in terms of processing time. Although a 233% increase in the nozzle feed rate may seem substantial, the manufacturing duration for CCFRP parts with a high FVC can still be considerable. The increase in manufacturing time is only marginal for CCFRP parts that require only localised reinforcement in highly stressed areas with a lower FVC. Additionally, it should be noted that, when rovings are placed near part edges, this may lead to protrusions of melted material or the HAZ from the part surface. Achieving a uniform surface requires the removal of these protruding materials. Subsequent research efforts will focus on systematically augmenting the FVC and its correlated mechanical properties. In tandem with positioning rovings close to part edges, careful consideration will be given to the relative distances between rovings in both the vertical (build-up direction within the LS process) and horizontal (across the powder bed surface) axes. Tensile specimens will be fabricated to elucidate the mechanical attributes and, ultimately, reveal the potential of this LS process integrated with continuous rovings. Moreover, an investigation into the influence of nozzle temperature on roving properties is slated for exploration. Furthermore, the established FE model can be leveraged to determine an operational point conducive to successful roving integration with alternative materials in the LS process (such as PA11, PA6, TPU, etc.). The potential applications of this LS process for CCFRP parts extend into the domain of production engineering.
Notably, the cost-effective and time-efficient production of lightweight tools, purposefully tailored for industrial robot applications (e.g., gripper fingers featuring internal air channels for parallel jaw grippers or suction grippers equipped with integrated springs), stands as a promising avenue for reducing both moving masses and energy demands.
21,081.2
2023-10-01T00:00:00.000
[ "Engineering", "Materials Science" ]
The 2010 expansion of activity-based hospital payment in Israel: an evaluation of effects at the ward level Background In 2010, Israel intensified its adoption of Procedure-Related Group (PRG) based hospital payments, a local version of DRG (Diagnosis-related group). PRGs were created for certain procedures by clinical fields such as urology, orthopedics, and ophthalmology. Non-procedural hospitalizations and other specific procedures continued to be paid for as per-diems (PD). Whether this payment reform affected inpatient activities, measured by the number of discharges and average length of stay (ALoS), is unclear. Methods We analyzed inpatient data provided by the Ministry of Health from all 29 public hospitals in Israel. Our observations were hospital wards for the years 2008–2015, as proxies for clinical fields. We investigated the impact of this reform at the ward level using difference-in-differences analyses among procedural wards. Those for which PRG codes were created were treatment wards; other procedural wards served as controls. We further refined the analysis of effects on each ward separately. Results Discharges increased more in the wards that were part of the control group than in the treatment wards as a group. However, a refined analysis of each treated ward separately reveals that discharges increased in some, but decreased in other wards. ALoS decreased more in treatment wards. Difference-in-differences results could not suggest causality between the PRG payment reform and changes in inpatient activity. Conclusions Factors that may have hampered the effects of the reform are inadequate pricing of procedures, conflicting incentives created by other co-existing hospital-payment components, such as caps and retrospective subsidies, and the lack of resources to increase productivity. Payment reforms for health providers such as hospitals need to take into consideration the entire provider market, available resources, other, potentially conflicting, payment components, and the various parties involved and their interests. Background Payments to healthcare providers entail a set of economic incentives that influence provider behavior and decision-making [1,2]. Israel adopted activity-based payments to replace per diems (PDs) and created codes for 30 common procedures as early as the 1990s [3]. The main objective of the change was to shorten waiting times for expensive procedures involving brief hospital stays, for which the PD payment was insufficient so that hospitals were discouraged from performing them [4]. Due to data and policy constraints, Israel chose procedure-related groups (PRGs) rather than Diagnosis Related Groups (DRGs) as the basis for measuring activity. PRGs differ from DRGs in that they are defined based on type of treatment (surgical procedure) rather than diagnosis, and they are not adjusted for case-mix or disease severity [5]. In the past two decades, many OECD countries have shifted to hospital payments based on activity and adopted diagnosis-related groups (DRGs) as payment units but, unlike the Israeli case, their main objectives were to increase efficiency and transparency [6]. DRGs are still being adopted by mid-income countries [7]. In 2002, continuing the move towards activity-based payments, the Israeli Ministry of Health (MoH) created PRG codes for more procedures, at the same time as DRGs were introduced in some OECD countries such as Estonia, Germany and the Netherlands [6].
Since 2010, the MoH has further expanded the application of PRGs to several clinical specialties, in three main waves. The objectives of the 2010-2015 "PRG reform" mainly concerned transparency and a fair distribution of funds. The specific objectives were to refine the unit of payment and establish consistent costing and pricing mechanisms in order to reduce cost-price gaps, improve MoH ability to set policy and priorities, influence the supply of hospital services by adjusting prices, and conduct supervision and control [5]. Furthermore, PRG payments were expected to change the incentives for hospitals. If PD payments create incentives for longer stays, PRGs create incentives to perform more procedures and shorten the length of stay (LoS), to minimize operating costs and maximize profits. Many studies have evaluated the impact of DRG-based payments in high- and middle-income countries on volume of activity, LoS, and quality of care [8][9][10]. In Israel, Shmueli and colleagues [11] examined the effects of the early introduction of PRG payments for five major procedures, one year after implementation in 1990. They found that the volume of activity increased for two procedures, remained unchanged for two others, and decreased for the last one. Regarding LoS, there was a modest decrease in three procedures and a significant decrease in the other two. A later study evaluated the effect of incorporating the time interval between hospitalization and treatment (of hip fractures) in the PRG tariff (maximum fees are paid for patients operated on within 48 h; for those operated on later, payments are significantly lower); it found that the LoS decreased following this change in payment method [12]. Since then, no study has evaluated the effects of the later adoption of PRGs on hospital activity. The effects of the 2010 reform thus remain largely unknown, preventing evidence-informed discussion of its benefits and challenges. The current study adds to the previous literature by analyzing the changes that have occurred since then, and extends it by examining all the hospital data and including all the activities performed at the ward level. Background on the Israeli case and the hospital market Since 1995, Israel has had a national health insurance (NHI): four competing, non-profit health plans (HPs) are responsible for providing and managing a broad benefits package determined by the government. The HPs provide care in the community and purchase hospital services for their members. Of the 44 general hospitals in Israel, 35 are non-profit and owned by the Ministry of Health (MoH), the municipalities, the Clalit HP or NGOs. These are considered "public hospitals." The other nine are smaller, for-profit hospitals, and operate 3% of the beds. The main source of income of Israeli public hospitals is the sale of services to HPs and the National Insurance Institute (NII) (see left-hand column in Fig. 1). Hospital reimbursement rates are determined by a joint MoH and Ministry of Finance (MoF) pricing committee, stipulated in the "Price List for Ambulatory and Inpatient Services." This maximum list-price (tariff) also determines the type of payment, which can be PD; per activity (PRG); or fee-for-service (FFS) (see right-hand column in Fig. 1). There are currently 24 PD rates according to ward type and length of stay (the tariff of the first three days is higher than the tariff of the subsequent days), about 320 PRG codes, and more than 1600 ambulatory service codes.
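To make the incentive difference concrete, a purely illustrative calculation with hypothetical tariffs may help (the actual figures in the Israeli price list are not given in this excerpt). Suppose the PD tariff for a surgical ward were NIS 2,000 per day and a given procedure were listed at a flat PRG tariff of NIS 9,000:

Revenue under PD for a 5-day stay: 5 × 2,000 = NIS 10,000, so each additional day adds revenue.
Revenue under the PRG tariff, for any length of stay: NIS 9,000, fixed.

Under the PRG tariff, discharging the patient after 3 days instead of 5 leaves revenue unchanged while saving two days of operating cost and freeing the bed for another paid admission, which is exactly the shift in incentives, toward more procedures and shorter stays, described above.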
In 2015, 25% of the gross revenue of hospitals was for inpatient care paid as PRGs, 37% for inpatient care paid as PDs, 21% for ambulatory care paid as FFS or PRGs, 8% for births paid as PRGs, 6% for emergency care paid as FFS, and 3% from other sources such as the Ministry of Defense or the military [13]. The sale of services covers hospital marginal costs and some fixed costs such as physician salaries. Public hospitals also receive "prospective subsidies" in the form of global budgets from the MoH to cover part of the other fixed costs such as infrastructure and equipment. Furthermore, the government provides "retrospective subsidies" for public hospitals with a financial deficit at the end of each year. Both subsidies are negotiated with both the MoH and MoF. Overall, hospitals received about NIS 1500 million in government subsidies, which roughly represents 12% of their income (yellow box in Fig. 1) [13]. Israeli public hospitals are subject to two major income constraints. The first, put in place in 2005, is a cap mechanism; the MoH sets annual caps on hospital revenues from each HP to each hospital (see vertical arrows at right-hand side of Fig. 1). In recent years, caps have been set as a floor (lower cap bound) and a ceiling (upper cap bound), and are updated every three years. The floor is a minimum payment amount, set in 2016 as 93% of the previous year's expenditure for each HP to each hospital. If an HP consumes services that, at "list prices", would have an aggregate cost of less than the lower cap, the HP pays 93% of the previous year's expenditure to the hospital in any case. The ceiling is a maximum payment amount, and when an HP spends more than this threshold, it pays only a percentage (less than 100%) of the full price [14]. Towards the end of the financial year, once the upper bound of the cap is reached, there is an incentive for HPs to refer patients, when possible (e.g., for elective procedures), to the hospital, as they do not pay the full price for these services. In 2016, the hospitals' net income was 15% lower than the potential gross income due to discounts related to the cap mechanism [13]. The second constraint is a negotiated alternative reimbursement contract between an HP and a hospital that may supplant the official cap, with such contracts entailing discounts that vary across HPs and hospitals [4]. In 2015, individual discounts represented 4% of the hospitals' gross income [13]. Acute hospital care in Israel has a high rate of overcrowding, one of the highest among OECD countries. Compared with the OECD average, Israeli hospitals function with half the rates of acute-care beds and nurses per population. In 2017, the average length of stay (ALoS) in Israeli hospitals was 4 days, one of the shortest, and the occupancy rate of acute-care beds was one of the highest among OECD countries, reaching almost full capacity, 93%. Nonetheless, the number of discharges per 100,000 population in Israel is almost the same as the OECD average [15,16]. Objectives Our objective was to examine changes in the volume of activity, measured by the number of discharges and ALoS, in hospitals following the PRG reform. The focus was on changes on the macro/system level, aiming to draw generalizable conclusions about the payment-policy change rather than an examination of the impact on specific procedures or hospitals. Since PRG codes were created in waves by clinical area, we hypothesized increasing volumes and decreasing ALoS in the clinical areas for which PRG codes were created.
Our analysis focuses on hospital wards as a proxy for such clinical areas. Economic theory suggests that hospitals react to economic incentives derived from payment mechanisms [17]. Peleg and colleagues [12] show that immediately after the adoption of the refined PRG codes, Israeli hospitals reacted by shortening the wait between injury and surgery for hip fracture procedures. Based on international experience [9], it is quite plausible that this was a reaction to the PRG reform. However, in contrast to other OECD countries, where DRGs were adopted to improve efficiency, Israeli hospitals operated with relatively limited resources even before the adoption of PRGs in 2010, potentially limiting the hospitals' capacity for increased activity. The analysis of the effect of the PRG reform on hospital volume and ALoS, in so different an environment, thus provides an interesting case. Methods The analysis is based on data on inpatient care provided by the MoH for all public general hospitals. Our observations were of hospital wards for the years 2008-15, two years prior to the first wave of reform (2008-10) and two years after each wave (2011-13, period 1; 2014-15, period 2). The data were aggregate; they did not apply to individual patients or the level of procedures. Derived from HP-hospital accounts reported to the MoH, the data included the following variables: ward type, hospital code, dummy for hospital location in the periphery, ownership (government, HP or NGO), number of annual discharges per ward, and annual ALoS per ward. The data related to procedural acute-care wards. They excluded medical wards (such as internal medicine, neurology and pediatrics) since in medical wards there are few procedures and PRG payments. We further excluded long-term care wards such as psychiatry, rehabilitation and geriatrics; intensive-care and observation wards, since they are not good proxies for specific medical areas; and obstetric wards, because deliveries are paid for by the NII rather than HPs, and the reimbursement mechanism described above does not apply (there are no caps, subsidies or negotiation of discounts with hospitals). To investigate changes in the number of discharges and in ALoS, we chose a difference-in-differences (DiD) approach that compares treatment and control groups for the period before and after waves 1 and 2 of the PRG reform. Although there were PRG codes before 2010, most of them were created in the 1990s and in 2002. Thus, their impact occurred before 2008 and they should not blur the effects of the 2010-14 waves. The treatment group was composed of procedural wards for which blocks of PRG codes were created between 2010 and 2014. The control group was composed of procedural wards for which no PRG codes were created in the same period. We analyzed the effects of the PRG reform on the number of discharges and the ALoS for waves 1 and 2 separately (Eqs. 1 and 2 below), and then the effects on each ward (Eqs. 3 and 4), since the reform might have affected each ward in a different manner, direction or intensity. The first wave refers to codes created in 2010-12 for orthopedic procedures; the second wave, to codes created from July 2013 to January 2014 for procedures in general surgery, urology, ophthalmology, and head and neck surgery. The 2015 wave is not analyzed in this work as not enough time has passed to observe its effects.
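The regression equations referenced above (Eqs. 1-4) are not reproduced in this excerpt. Purely as a hedged reconstruction, based on the description here and in the next paragraph (log-transformed outcomes, wave and period dummies, their interactions, and hospital, ward and year controls), Eqs. (1)-(2) would take approximately the form

ln(Y_hwt) = α + β1·wave1_w + β2·wave2_w + γ1·period1_t + γ2·period2_t + δ1·(wave1_w × period1_t) + δ2·(wave1_w × period2_t) + δ3·(wave2_w × period2_t) + C_i + T_i + ε_hwt,

where Y_hwt is either the number of annual discharges or the ALoS in ward w of hospital h in year t, and Eqs. (3)-(4) would replace the wave dummies with indicators for the individual treated wards. The authors' exact notation may differ.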
The control group consists of pediatric surgery, cardiovascular surgery, vascular surgery, plastic surgery, gynecology, neurosurgery, and oral and maxillofacial surgery wards. The data were analyzed using SPSS version 24 (SPSS-IBM). We calculated the number of annual discharges and the ALoS for each treatment and control group. The ALoS was weighted for size of ward (measured by the number of discharges). The weighting was performed to balance the relative influence of each ward on the ALoS. For example, the relative importance of a small ward with a longer ALoS is smaller than that of a large ward with a shorter ALoS. The changing trends in the volume of discharges and in ALoS are depicted in graph form. To verify the independent impact of each wave of PRG reform on the dependent variables (the number of annual discharges and the ALoS per ward), we performed a DiD analysis. We conducted the analysis using ordinary least squares (OLS) regressions. To mitigate skewness of the dependent variables, we transformed them with a natural logarithm. We controlled for hospital, ward and year fixed effects. We clustered the data by wards, given that the same type of ward in different hospitals should exhibit more robust homogeneity than different wards within each hospital. We built one regression for each dependent variable following the models in Eqs. (1) and (2). For both equations, α represents the intercept that captures the model's unexplained variance. The wave variable is a dummy for the two sets of treatment and control groups (wave 1 for orthopedics vs. the others; and wave 2 for general surgery, urology, ophthalmology, head and neck surgery vs. other procedural, non-participant wards). We examined the short- and long-term impact of wave 1 (orthopedics) and the short-term impact of wave 2, represented in the equation by period 1, which refers to 2011-13, and period 2, which refers to 2014 and 2015. The coefficient of interest is δ, the DiD estimator, as it captures the treatment groups of wards in the period after each reform wave: δ(wave*period). δ1 and δ2 capture the short- and long-term effects of the first wave, respectively. δ3 captures the short-term effects of the second wave of PRG expansion. The control variables, C_i, are the fixed effects for hospitals and wards, and T_i, for time trends (year). In a more refined analysis, we examined the reform's effects in each ward separately, according to the models in Eqs. (3) and (4). Results Table 1 summarizes the changes in the number of discharges and ALoS in the study period, by treatment and control group. Figures 2 and 3 also show trends over time. Since we excluded some wards from the analysis, the number of discharges is smaller than the national data reported by the MoH, ranging from 376,480 in 2008 to 410,160 in 2015, an increase of 9%; the ALoS remained constant at 4.1 days. Our findings show that the trends and changes in ALoS and the number of discharges over time, in our dataset, are the same as those recorded by the MoH. When analyzing the changes in the number of discharges by treatment and control group, we see that they increased more markedly in control (non-participant) wards (12%) compared with treatment (participant) wards (7%). However, while refining the analysis to focus on specific treatment wards, we observed an increase in volume in the general surgery, orthopedics and urology wards, but a sharp decrease in ophthalmology, and no change in the head and neck surgery ward, despite the high rate of population growth of 1.8% annually.
The ALoS decreased more sharply in participant wards (6%) than in non-participant wards (1%). These results are in line with our hypothesis that the adoption of PRGs increases volume and shortens length of stay. A more in-depth focus on each participant ward shows that the ALoS decreased sharply in urology and in head and neck surgery (14 and 16% respectively), but remained almost unchanged in orthopedics and urology. Multivariable DiD analysis The results of the multivariable DiD analysis are presented in Table 2. Our coefficients of interest, δ, are the DiD estimates of the interaction between the treatment dummies (waves, coded with received treatment = 1) and our period variables (2011-13 and 2014-15). The table shows results for model 1 (comparing treatment and control wards) and model 2 (analysis at the ward level) for both the natural logarithm of number of discharges and the ALoS. In all models, the DiD estimates were small and not significant with the exception of the head and neck surgery ward where the discharges and ALoS decreased by 24% and 9%, respectively. Discussion In our study, despite the changes seen in the descriptive statistics, the DiD analysis could not demonstrate causality between the PRG reform and the changing volume of hospital activities or the ALoS, at least not when comparing inpatient activities at the ward level. It is likely that the adoption of PRGs created incentives to increase the volume of specific procedures or to change the quality of care. However, an examination of such an impact was beyond the scope of this study. Rapid population growth and aging may explain part of the volume increases while technological innovations that allow for shorter hospital stays may be related to the decrease in ALoS. One plausible explanation for the counterintuitive finding of no significant PRG-reform effect, which deserves further analysis, is the difference between PRG payments and the previous PD payments for the same procedure. If the PRG tariff is lower than the original PD payment (calculated as the PD rate times the length of stay), the reform might not create a strong incentive to increase volume. This might be particularly true in Israel where the pricing mechanism is constrained and somewhat distorted due to a budget-neutral requirement that may lead to inaccurate prices [5]. In a 2018 qualitative study, hospital managers, ward directors, and surgeons reported that indeed, most PRG-paid procedures were underpriced [18]. A second possible explanation is that Israeli hospitals already worked under pressure before adopting PRGs, leaving little room for further increase of activities or reduction of length of stay. It is possible that hospitals simply do not have the necessary resources to treat more patients. As mentioned, the rates of hospital beds per population is one of the lowest among OECD countries. Rates of physicians have declined and are expected to drop below the OECD average in the coming decade [19,20]. There is a particular shortage of anesthesiologists and surgeons, causing a bottleneck for various procedures, at least in the short-term [21]. Finally, a third explanation is that other components of the hospital payment system, such as caps and subsidies, modify the incentives created by PRG payments. Caps on hospital income can deter hospitals from increasing activities beyond the ceiling. Retrospective subsidies avert hospital collapse, but also reduce their fiscal responsibility, transferring the risk to the MoH. 
Subsidies may blur the effects of the PRG reform on hospital activities if they are not financially responsive. Feldhaus and Mathauer [22] also conclude that mixed or blended provider-payment mechanisms may restrain economic incentives. They stress that the effects of payment reforms are highly context-specific. Our study adds to the literature on activity-based payment and its economic incentives. Diagnosis-related groups (DRGs) were originally developed in the US to incentivize hospitals to provide care more efficiently [23]. In the past two decades, many countries have shifted to hospital payment based on activity and adopted DRGs as a payment mechanism to improve efficiency while limiting incentives for patient selection [6]. In general, DRGs created economic incentives to cut costs and shorten ALoS. In Western European countries, DRGs also created incentives to maximize income by increasing the volume of (profitable) activities. Yet, there is evidence that DRGs also led to decreased volume (USA) or unchanged volume (Eastern European and Central Asian countries) [10]. Norton and colleagues [24] found, too, that overall, hospital ALoS did not decrease in the US when Medicaid introduced a flat episode payment for psychiatric patients in the 1990s, replacing PD. Our findings add another case of a country where the shift to activity-based payments did not seem to contribute to changing volume or a shorter ALoS. Notwithstanding, the study has limitations that should be taken into account: 1. Analysis at the ward level may not be sufficiently refined to capture the effects of the reform. Currently, about a third of the activity in procedural wards is paid for by PRGs, which represent some 40% of the ward income. Possibly, an increase of patients with PRG-paid procedures in these wards was compensated for by a reduction in the number of patients treated with non-PRG-paid procedures. 2. Wards are not a perfect proxy for medical areas because about 15% of discharges are transfers between wards within a hospital, further diluting the potential effect of the PRG reform on a specific ward. Since the data were aggregated at the ward level, it was not possible to exclude the transfers from the dataset. Yet, the rate of transfers has remained constant over the study period, so the DiD analysis should overcome this limitation. 3. The study does not control for changes in population needs, preferences over time or the technological "menu" offered to patients due to ageing and changes in the case-mix. However, we believe that the DiD methodology overcomes the problem of ageing and case-mix changes as it affects all wards similarly. Conclusions This is the first study to evaluate the impact of PRG reform in Israel on hospital activities, as measured by the number of inpatient discharges and the ALoS on the national level. The study did not find any significant effect of the PRG reform on ward-aggregate hospital inpatient volume or the ALoS. However, our inability to demonstrate a significant effect does not necessarily mean that the reform did not have any effect. As noted in the discussion, there are plausible explanations for the finding. These include conflicting incentives created by budget caps and subsidies, comparatively low PRG prices, and the limited capacity of hospitals to increase their volume because of limited resources.
Despite the counterintuitive evidence generated by this study, the possible absence of a reform effect is interesting and warrants closer examination, because it has implications for both researchers and policymakers in Israel and in other countries. First, researchers conducting cross-country analyses should avoid simplistic assumptions about the effects of DRG-like payment components on volume and length of stay; the incentives of such payments are often modified by multiple, co-existing payment components of a given national hospital payment system. Second, policymakers engaging in hospital payment reforms need to take into consideration additional factors, such as the national hospital market, available resources, other (potentially conflicting) payment components, and the various parties involved and their interests. More broadly speaking, unless payment reforms are accompanied by further measures that allow providers to respond to the changed incentives, e.g., by making additional resources available or allowing greater provider autonomy, the reforms are unlikely to lead to the intended changes.
5,663
2019-05-08T00:00:00.000
[ "Medicine", "Economics" ]
Privacy-Preserving Location-Based Service Scheme for Mobile Sensing Data † With the wide use of mobile sensing application, more and more location-embedded data are collected and stored in mobile clouds, such as iCloud, Samsung cloud, etc. Using these data, the cloud service provider (CSP) can provide location-based service (LBS) for users. However, the mobile cloud is untrustworthy. The privacy concerns force the sensitive locations to be stored on the mobile cloud in an encrypted form. However, this brings a great challenge to utilize these data to provide efficient LBS. To solve this problem, we propose a privacy-preserving LBS scheme for mobile sensing data, based on the RSA (for Rivest, Shamir and Adleman) algorithm and ciphertext policy attribute-based encryption (CP-ABE) scheme. The mobile cloud can perform location distance computing and comparison efficiently for authorized users, without location privacy leakage. In the end, theoretical security analysis and experimental evaluation demonstrate that our scheme is secure against the chosen plaintext attack (CPA) and efficient enough for practical applications in terms of user side computation overhead. Introduction Recently, mobile sensing devices have been widely used in data sensing [1,2], including location data [3,4]. For example, when a person takes photos by a smart phone, the equipped location sensor (GPS modules) can always acquire the locations where the photos are taken [5]. Additionally, the locations are embedded into the photos for remembrance. Then, these location-embedded data will be published in the mobile cloud automatically [1,2,6], such as iCloud, Samsung cloud, etc. These location-embedded data bring great convenience for cloud service providers (CSP) to provide location-based services (LBS) for users [3,7]. However, the mobile cloud is untrustworthy. Additionally, the location itself contains much personal information [8][9][10]. CSP is curious to infer and analyze location data to harvest additional information to gain illegal profits. Thus, the publisher (i.e., data owner) requires a solution that can protect location privacy from unauthorized users and CSP. As shown in Figure 1, Alice takes some food photos by iPhone, and the photos are embedded with location information. She stores them in iCloud and shares them with her friends. Since the location where the photos are taken is her home, she hopes that the embedded location is visible only to her friends, while invisible to strangers. A naive solution is to store the location data on the mobile cloud in an encrypted format. To achieve secure sharing on encrypted data, some researchers have studied the attribute-based encryption (ABE) scheme [4,[11][12][13][14]. In this scheme, the publisher encrypts the confidential location information using a symmetric encryption scheme (AES or DES) and defines the access policies, then uploads the encrypted location and access polices into the cloud for storage. Only authorized queriers (whose attributes satisfy the access policies) can read the location data. During the whole procedure, the cloud undertakes only storage overhead. However, in LBS, the queriers also require the CSP to provide the services of location distance compute and compare. By applying ABE directly, the authorized queriers cannot process the location until downloading and decrypting the encrypted location data. It takes the queriers too much local storage and computation cost, which is unacceptable considering the weak power of smart phones. 
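To make the cost of that naive approach concrete, here is a minimal sketch of the "encrypt, download, decrypt locally" baseline (assuming the AES-based Fernet construction from the Python cryptography package; the location format and key handling are illustrative and not part of any cited scheme): the publisher encrypts the location before upload, and an authorized querier must download the ciphertext, decrypt it locally, and only then compute a distance on the phone.

import json
import math
from cryptography.fernet import Fernet  # AES-based symmetric encryption

key = Fernet.generate_key()              # shared out-of-band with authorized queriers only
f = Fernet(key)

# Publisher side: encrypt the (illustrative) location before uploading it to the cloud.
publisher_location = {"x": 120, "y": 340, "z": 0}
ciphertext = f.encrypt(json.dumps(publisher_location).encode())

# Querier side: the whole ciphertext must be downloaded and decrypted locally,
# and the distance is then computed on the phone -- exactly the cost criticized above.
loc = json.loads(f.decrypt(ciphertext))
querier_location = (150, 300, 0)
dist = math.dist((loc["x"], loc["y"], loc["z"]), querier_location)
print(dist)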
To support functional processing over encrypted data, homomorphic encryption (HE) [15][16][17][18] was proposed. Xie and Wang [4] applied the RSA (for Rivest, Shamir and Adleman) algorithm to achieve computational functions on encrypted location data. In addition, Paillier's cryptosystem [19][20][21][22][23] is widely used as HE, due to its high efficiency and simplicity. It involves only one multiplication for each homomorphic addition and one exponentiation for each homomorphic multiplication. Li and Jung [5] combined the ciphertext-policy attribute-based encryption (CP-ABE) with Paillier's cryptosystem to exert fine-grained access control over LBS. One cannot gain any information from the query if his/her identity attributes do not satisfy the access policy defined by the data publisher. In addition, the location distance computed over encrypted location is supported. However, the publisher must stay online to interact with queriers once requested. This is not practical, considering the limited power of smart phone. To overcome the above problems, we propose a privacy-preserving LBS scheme for mobile sensing data. Here, a new encryption method on the basis of the RSA algorithm (The RSA algorithm is a commonly adopted public key cryptography algorithm. It is named after the three mathematicians who developed it: Rivest, Shamir and Adleman. The security of this encryption algorithm is based on the hardness of the factoring problem. We will present this algorithm in Section 3.2.) and CP-ABE scheme is designed. Our proposed scheme has two advantages as follows: • Secure sharing over location information with certain queriers. Our scheme achieves that location information is visible to specific queriers, while kept secret from others. • Efficient and privacy-preserving location distance compute and compare. The location distance compute and compare are two of the most common functions in LBQ, i.e., what is the distance between the publisher's and querier's locations, or whether the distance is less than 100 m. Compared with the privacy-preserving location query protocol (PLQP) scheme [5], we make better use of the powerful energy in the mobile cloud, by the mobile cloud undertaking most of the computing overhead, such that the computation cost at the querier is very low. The main contributions of our paper are outlined as follows: 1. A novel mobile sensing service system is constructed for privacy-preserving LBS. 2. This paper designs a novel encryption method on the basis of the RSA algorithm and CP-ABE scheme, so that the mobile cloud can process LBS over encrypted location information and only authorized queriers can get the query results. The rest of this paper is organized as follows. Section 2 discusses the related work. Section 3 presents some preliminaries. Section 4 describes our system models. Section 5 gives the detailed design of our privacy-preserving LBS scheme for mobile sensing data. Section 6 analyzes the security of our proposed scheme. Section 7 shows the performance evaluation by experiments. Finally, Section 8 concludes this paper. Related Work Our paper designs a privacy-preserving LBS scheme for mobile sensing data. The related work mainly includes two aspects, i.e., privacy-preserving LBS and the access control technique. Privacy-Preserving LBS The k-anonymity technique has been widely used to achieve user location privacy in LBS. The basic idea is to remove some features, such that each item is not distinguishable among other k − 1 items. 
It can ensure that a user can be identified with a probability of at most 1/k. Kido et al. [24] proposed an anonymous communication technique for LBS to protect location privacy using dummies. Duckham and Kulik [25] presented a privacy-preserving location query algorithm by using the obfuscation method and vague location information of the user. Chow et al. [26] proposed a distributed k-anonymity model and a peer-to-peer spatial cloaking algorithm for the anonymous location-based services. Mokbel [27] proposed a location-obfuscation method that allows the server to record the real identifier of the user, but decreases the precision of the location information to protect the location privacy. Bamba et al. [28] proposed fast and effective location cloaking algorithms for location k-anonymity and location l-diversity in a mobile environment. Gedik and Liu [29] applied the personalized k-anonymity model to protect the location privacy of the user. Shankar et al. [30] proposed a fully-decentralized and autonomous k-anonymity-based system for location-based queries. Xue et al. [31] introduced the concept of location diversity, which improves spatial k-anonymity by ensuring that each query can be associated with at least l different semantic locations. All of the above solutions can be applied to LBS. However, their techniques do not allow the cloud to search encrypted data. Therefore, they cannot be used for outsourced LBS where LBS data in the cloud are encrypted. Li and Jung [5] designed a suite of fine-grained privacy-preserving location query protocols (PLQP) by applying Paillier's cryptosystem [32,33]. It can solve the privacy issues in existing LBS applications. However, once there is an LBS request, the PLQP needs very frequent interaction between the publisher and the querier and much computation cost. In mobile sensing service systems, most queriers access the social networks via smart phones. The smart phones have weak power. Hence, it is unacceptable for the publishers to stay online always. Shao, Lu and Lin [8] proposed a FINEframework based on the CP-ABE scheme. In this framework, LBS data are outsourced to a cloud server after encryption. Although the framework can ensure the confidentiality of LBS data, their search patterns will lead to the leakage of user location privacy, because their trapdoors generated from the locations are steady, which means trapdoors are always the same for the same location. It is easy for an attacker to count the frequency of a specific trapdoor and identify the known locations. In addition, this method is not efficient due to the low efficiency of the public encryption. Access Control Recently, the ABE scheme has been widely used to exert access control for LBS in the mobile cloud. Li and Jung [5] introduced CP-ABE to exert fine-grained access control over the location queries. One cannot gain any information from the query if his/her identity attributes do not satisfy the access policy defined by the data publisher. Shao, Lu and Lin [8] also employed CP-ABE in designing their FINE framework to achieve fine-grained access control over location-based service data. The ABE scheme enables fine-grained access control over encrypted data using access policies and associates attributes with private keys and ciphertexts [34][35][36][37]. It was first proposed by Sahai and Waters [38], later extended to the key-policy ABE (KP-ABE) by Goyal et al. [39] and the CP-ABE by Bethencourt et al. [40]. 
In KP-ABE, the ciphertext is associated with an attribute set, and the user secret key is associated with an access policy over attributes. The user can decrypt the ciphertext if and only if the attribute set of the ciphertext satisfies the access policy specified in his/her secret key. The encryptor exerts no control over who has access to the data that he/she encrypts. In CP-ABE, the ciphertext is associated with an access policy over attributes, and the user secret key is associated with an attribute set. The user can decrypt the ciphertext if and only if the attribute set of his/her secret key satisfies the access policy specified in the ciphertext. The encryptor is able to decide who should or should not have access to the data that he/she encrypts. In our system model, CP-ABE is more suitable than KP-ABE because it enables the data publishers to determine an access policy over the outsourced location data, as studied by Li and Jung in [5]. Preliminaries This section briefly describes some preliminaries used in our work, including the bilinear map, the RSA algorithm, CP-ABE and the access tree. Bilinear Map Let G 0 be a multiplicative cyclic group of prime order p and g 0 be its generator. The bilinear map e is defined as: e : G 0 × G 0 → G T , where G T is the codomain of e. The bilinear map e has the following properties: • Non-degeneracy: e (g 0 , g 0 ) = 1. Definition 1 (discrete logarithm assumption). The discrete logarithm assumption in group G 0 of prime order p with generator g 0 is defined as follows: for any probabilistic polynomial-time (PPT) algorithm A, the probability that Pr A g 0 , g a 0 = a is negligible, where g 0 , g a 0 ∈ G 0 , and a ∈ Z p . Definition 2 (decisional Diffie-Hellman (DDH) problem). The decisional Diffie-Hellman (DDH) problem in group G 0 of prime order p with generator g 0 is defined as follows: on input g 0 , g a 0 , g b 0 , g c 0 = g ab 0 ∈ G 0 , where a, b, c ∈ Z p , decide whether c = ab or c is a random element. Definition 3 (decisional bilinear Diffie-Hellman (DBDH) problem). The decisional bilinear Diffie-Hellman (DBDH) problem in group G 0 of prime order p with generator g 0 is defined as follows: on input g 0 , g a 0 , g b 0 , g c 0 = g ab 0 ∈ G 0 and e(g 0 , g 0 ) z = e(g 0 , g 0 ) abc ∈ G T , where a, b, c ∈ Z p , decide whether z = abc or z is a random element. The security of many ABE schemes relies on the discrete logarithm assumption. The research also assumes that no PPT algorithm can solve the DDH and DBDH problems with non-negligible advantage. This assumption is reasonable since in a large number field, it is widely recognized that discrete logarithm problems (DLP) are as hard as described in Definition 1. Therefore, a is not deducible from g a 0 , even if g 0 is publicly known. RSA: Public Key Cryptography Algorithm RSA is a commonly-adopted public key cryptography algorithm [41]. It is the first and still most widely-used asymmetric algorithm. RSA is named after the three mathematicians who developed it, Rivest, Shamir and Adleman. The public/private key pair of RSA is computed in Algorithm 1, where GenModulus 1 N is a function used to output a composite modulus n along with its two N-bit prime factors; φ is Euler's totient function; gcd is a function used to compute the greatest common divisor for two numbers. The RSA encryption scheme includes three algorithms as follows: 1. 
KeyGen 1 N → pk, sk: takes security parameter 1 N as input and outputs a public/private key pair, denoted as pk = (n, e) and sk = (p, q, d), respectively, by executing Algorithm 1. 2. Enc (m, pk) → c: on input, a public key pk = (n, e) and a message m ∈ Z * n compute the ciphertext as c = m e mod n. 3. Dec (c, sk) → m: on input, a private key sk = (p, q, d) and a ciphertext c ∈ Z * n compute the message as m = c d mod n. The security of the RSA encryption scheme relies on the hardness of the factoring problem. If an adversary can factorize n, then he/she can compute φ (n) = (p − 1) (q − 1) and obtain the secret key d by utilizing the Euclidean algorithm. However, factoring a large number is still a hard problem. The proper choice of the modulus n = pq can guarantee the security of RSA encryption scheme. CP-ABE In the CP-ABE, the private key is distributed to users by the trusted authority (TA) only once. The keys are identified with a set of descriptive attributes, and the encryptor specifies an encryption policy using an access tree, so that those with private keys the satisfy it can decrypt the ciphertext. Access Tree T P In CP-ABE, the encryption policy is described with a tree called access tree T p . Each non-leaf node of the tree is a threshold gate, and each leaf node is described by an attribute. An example is shown in Figure 2. In this paper, a publisher's location information is set visible to certain kinds of users. For example, in Figure 1, Alice's location information is only accessible to her friends. If a user's attributes satisfy T P , he/she is granted with the access privilege. Simultaneously, he/she also can obtain the results of LBS provided by the mobile cloud. By doing so, we can control the visibility of the publisher's location information. Given a node u in the T P , |Children(u)| is the number of the node u's children nodes, and k u is its threshold value 0 < k u ≤ |Children(u)|. The node u is assigned a true value if at least k u child nodes have been assigned a true value. Specially, the node becomes an ORgate when k u = 1 or an ANDgate when k u = |Children(u)|. The access tree is described by a set of polynomials, as shown in Algorithm 2. In the access tree T P , the node value of the gate is recovered if and only if the values of at least k u child nodes are recovered, which is performed in a recursive manner. The notations for the access tree is explained in Table 1. Input: T P : an access tree; Output: {s, q l f (0)|l f is a leaf node of T P }: s ∈ Z p is a randomly-picked secret integer; 1: for all u in T P do 2: define a polynomial q u (x) = a ui x i , where the coefficients a ui are undetermined; 3: end for 4: Pick a random integer s 5: Set a R0 = q R (0) = s, where R is the root node of T P ; 6: Set other coefficients of q R (x), i.e., a Ri , i = 1, 2, . . . , k R − 1, by randomly-picked secret integers; 7: From top to bottom, set all of the coefficients of other nodes (except for the root node) that satisfy the following equation; q u (0) = q parent (u) (index (u)) . return {s, q l f (0)|l f is a leaf node of T p }. System Model Our mobile sensing service system mainly consists of four entities, as shown in Figure 3: mobile cloud, a publisher, many queriers and TA. The mobile cloud provides LBS via mobile applications or social network applications based on the collected location data. Its main work is to store and process ciphertext. A publisher contributes his/her location data to the mobile cloud, via smart phone, iPad, etc. 
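Returning to the textbook RSA scheme summarized above, a toy sketch of KeyGen, Enc and Dec is given below. It uses deliberately tiny fixed primes and no padding, so it is illustrative only and not secure; a real deployment would use large random primes generated as in GenModulus and a proper padding scheme.

from math import gcd

def keygen(p=61, q=53, e=17):
    # KeyGen: n = p*q, phi = (p-1)(q-1), d = e^(-1) mod phi(n)
    n = p * q
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1, "e must be coprime with phi(n)"
    d = pow(e, -1, phi)
    return (n, e), (p, q, d)   # pk, sk

def enc(m, pk):
    n, e = pk
    return pow(m, e, n)        # c = m^e mod n

def dec(c, sk, n):
    _p, _q, d = sk
    return pow(c, d, n)        # m = c^d mod n

pk, sk = keygen()
c = enc(42, pk)
assert dec(c, sk, pk[0]) == 42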
Before uploading the data, the publisher first obtains the public key from the TA and determines the access tree. Then, he/she uses the public key and access tree to encrypt his/her location information. Afterwards, the encrypted location information is uploaded to the mobile cloud for storage and sharing. In addition, we assume the origin of a packet is successfully hidden, which is out of this paper's scope, and can be trivially prevented by employing anonymized network protocols [42]. Many queriers submit LBS query requests to the mobile cloud over the collected cloud data. However, only authorized queriers can obtain plain query results. TA is assumed to have powerful computation abilities. At the setup phase, the TA computes its own master key and the system-wide public parameter. The master key is used to generate the private key for the queriers, and the public key is used to process system-wide operations. Threat Model We assume that the mobile cloud is "honest but curious". Specifically, it acts in an "honest" fashion and correctly follows the designated protocol specification. However, it is "curious" to infer and analyze the stored data and queriers' query requests to harvest additional information to gain illegal profits. The queriers are curious about the confidential information, which is outside of their privileges. They may also collude with the mobile cloud. Location Assumption As described in [5], the ground surface can be assumed to be a plane, and every user's location is mapped to an Euclidean space with integer coordinates (with meters as the unit). The Euclidean distance between two locations X = (x 1 , x 2 , x 3 ) and Y = (y 1 , y 2 , y 3 ) is computed as: As for a real location on the surface of the Earth, we know that the relationship between the surface distance SD(X, Y) and the Euclidean distance dist(X, Y) is as follows: where the Earth is assume to be a sphere with radius R meters. Hence, it is easy to compute SD(X, Y) from dist(X, Y). To check if the surface distance satisfies certain conditions, we can convert it to check if the Euclidean distance is satisfying the corresponding conditions. For example, dist(X, Y) ≤ τ is equivalent as: SD(X, Y) ≤ 2R arcsin(τ/2R). For brevity, in this paper, we will focus on the Euclidean distance instead of the surface distance. Problem Formulation Assume that the querier Q's location information is X = (x 1 , x 2 , x 3 ), and the publisher P's location information Y = (y 1 , y 2 , y 3 ) is embedded in the published data. According to Q's attributes set S Q , the publisher P determines whether the querier Q can enjoy the LBS related to P's location. Afterwards, the authorized querier will obtain the corresponding LBS results provided by the mobile cloud. In this paper, we design a privacy-preserving LBS scheme for mobile sensing data, where the location publisher can determine who can decrypt the ciphered LBS results provided by the mobile cloud. Moreover, no confidential location information is leaked to the mobile cloud and unauthorized users during the LBS processing. Here, the LBS mainly includes two basic types: location distance compute and compare. The proposed scheme includes five main algorithms, as follows: This algorithm takes a security parameter 1 N as input. The TA executes this algorithm to compute its own master key MK and a system-wide public parameter PK. 
This algorithm takes as input the public parameter PK, the publisher's location information Y = (y 1 , y 2 , y 3 ), an access tree T P determined by the publisher and an encryption key K Y . It will encrypt the location Y, so that a querier can enjoy the LBS over location Y if and only if his/her attributes satisfy the access tree T P . • KeyGenerate (MK, PK, S Q ) → SK Q This algorithm takes as input the TA's master key MK, the public parameter PK and a querier's attribute set S Q . It enables the querier to interact with the TA and to obtain a secret key SK Q . • Veri f y (PK, SK Q , S Q , Y e ) → W s or ⊥ This algorithm enables an authorized querier to obtain a critical secret parameter W s , which is the key to decrypt the ciphered query results provided by the mobile cloud. • Operate(Y e , W s , X) → answer In this protocol, firstly, a querier encrypts his/her location X as X e using W s , then the mobile cloud operates over the encrypted locations Y e and X e to compute a ciphered query result. In the end, the querier uses W s to decrypt the ciphered result as answer. Our Proposed Scheme In this section, we will present the scheme design in detail. A. Setup(1 N ) → MK, PK This algorithm takes a security parameter 1 N as the input, and gives the TA's master key MK and a system-wide public parameter PK as the output, as shown in Algorithm 3. By Algorithm 3, the TA chooses and publishes a bilinear group G 0 of prime order p with generator g 0 , then randomly and secretly picks v 0 ∈ Z p . Finally, the TA computes the master key MK and the public key PK. Before uploading the location data, the publisher executes this algorithm to encrypt the sensitive location information Y = (y 1 , y 2 , y 3 ). In addition, she/he determines the access tree T P to exert access control on the location information. The ciphertext Y e includes two parts: Y I e and Y I I e . The encryption procedure consists of three main steps. 1. Pick a symmetric encryption key K Y = x, m , where 0 < x, m < n, n is generated by GenModulus 1 N , as shown in Section 3.2. 2. Compute: Here, g is co-prime with n; φ (n) is Euler's totient function of n. We will omit "modφ (n)" in the following expressions with an assumption that the exponent of the above formula is computed in modular φ (n). 3. Execute Algorithm 2, and obtain {s, q l f (0)|l f is a leaf node of T p }. Then, Y I e and Y I I e are computed by Equations (2) and (3). In Y I I e , C u and C u represent the attribute values in the specified access tree. , C = g s 0 . (3) Finally, the publisher stores Y e = Y I e , Y I I e in the mobile cloud for sharing them with some queriers. In this system, Y e can be downloaded by every querier. However, only authorized queriers can obtain the plain location information Y, and the query results of location distance compute and compare. C. KeyGenerate (MK, PK, S Q ) → SK Q When a new querier Q, with attribute set S Q , requests to join the system, TA executes this algorithm to generate Q's secret key. This algorithm is composed of two steps, as follows: 1. Attribute key generate: TA randomly picks d 0 ∈ Z p and computes: For any attribute i ∈ S Q , TA randomly picks r i ∈ Z p and computes the partial private key as: where H(i) is the hash value of attribute i. 2. Key aggregate: The secret key is generated by aggregating D, D i , and D i as: The above procedures are described in Algorithm 4. Input: MK: TA's master key; PK: the public parameter; S Q : a querier's attribute set; Output: SK Q : a secret key; D. 
Veri f y (PK, SK Q , S Q , Y e ) → W s or ⊥ By executing this algorithm, only authorized querier can obtain the secret parameter W s , which will be used to decrypt the ciphered query results. Otherwise, it will output ⊥. Figure 4 shows the main overview of this algorithm. Firstly, a recursive algorithm DecryptNode Y I I e , SK Q , S Q , u is defined as Algorithm 5, where u stands for a node in the access tree T P . Then, the querier recursively calls DecryptNode Y I I e , SK Q , S Q , R from the root node R, and obtains par R = e(g 0 , g 0 ) s·d 0 . At last, he/she can get the secret output W s by computing Equation (8). E. Operate(Y e , W s , X) → answer In this protocol, firstly, the authorized querier uses W s to encrypt his/her location X as X e , then the mobile cloud operates over Y e and X e to compute a ciphered location distance or to test whether the distance between these two locations is far or not. If necessary, the querier will use W s to decrypt the ciphered result as answer. In our scheme, we consider these two types of operations: location distance compute and location distance compare, i.e., what the distance between these two locations is or whether the distance is far or not. They are two basic LBS. Input: Y I I e : defined in Equation (3); SK Q : the querier Q's secret key; S Q : the querier Q's attribute set; u: a node in the access tree T P ; Output: par u : a secret parameter; or ⊥ 1: if u is a leaf node then 2: Set i = att(u); 3: if i ∈ S Q then 4: Compute else return par u = ⊥; 6: end if 7: else 8: Define F u = null; 9: for all z ∈ Children(u) do 10: Compute par z = DecryptNode Y I I e , SK Q , S Q , z ; 11: if par z = ⊥ then 12: Update F u = F u ∪ {par z }; 13: end if 14: end for 15: if |F u | < k u then return par u = ⊥; 16: else 17: Compute par u = e(g 0 , g 0 ) q u (0)d 0 using F u by polynomial interpolation method; 18: end if 19: end if return par u . Here, the locations of the publisher and the querier are X= (x 1 , x 2 , x 3 ) and Y= (y 1 , y 2 , y 3 ), respectively. Additionally, the publisher's location data Y= (y 1 , y 2 , y 3 ) have been encrypted as Y e = Y I e , Y I I e by Equations (2) and (3) and stored in the mobile cloud. The querier encrypts his/her location X using Equation (9). Next, we will present how to perform these two operations. Location distance compute: We know that the distance between the publisher and the querier can be computed as: The querier encrypts his/her location X= (x 1 , x 2 , x 3 ) as X e by Equation (9) and sends X e to the mobile cloud. Then, the mobile cloud executes Algorithm 6 to compute the ciphered location distance between X e and Y e . For simplicity and convenience of presentation, we will denote W s · g W s as ∆ in the following. Algorithm 6 DistanceCompute. Input: X e : the ciphertext of a querier's location X; Y e : the ciphertext of a publisher's location Y; Output: dis e : the ciphertext of the distance between X and Y; 1: Obtain m = m · W s and x = x + W s from the Y I e ; 2: Compute K Y = m · g x mod n; return dis e ; From Algorithm 6 and Equation (1), we know that for i = 1, 2, 3, After executing Algorithm 6, the mobile cloud sends dis e to the querier. Finally, the querier can get the plain distance dis by computing Equation (12). During the execution, all that the mobile cloud processes is the ciphertext. Location distance compare: The querier wants to know whether the location distance is within a threshold value τ. 
He/she encrypts the τ as τ e , using Equation (13): Then, the querier sends X e and τ e to the mobile cloud. The mobile cloud executes Algorithm 7 to compute whether the distance between X and Y is less than τ. From Equation (11), we know that dis e = dis · ∆. Since ∆ is always positive, it will not change the compare result between dis and τ. Thus, the mobile cloud can give out the comparison results through Algorithm 7 directly. Security Analysis In our system, the publisher can authorize the queriers that he/she knows or not, such as his/her friends or someone who has similar interests. Hence, the queries may include attackers. If so, it is easy for the attacker to get a certain plaintext/ciphertext pair. Thus, our scheme has to be secure against the chosen plaintext attack. Next, we will prove it. Theorem 1. Our scheme is secure against CPA. Proof of Theorem 1. Assume an attacker obtains Y = {y 1 , y 2 , y 3 } and its ciphertext Y e . From Equation (1), we get that: Assume that B = mg x . It is easy to compute B. However, it is difficult to get the proper x, m from B. If m is the power of g, it comes down obviously to the discrete logarithm problem. If m is not the power of g, i.e., m = m g x , where m is co-prime with g and x ≥ 0, then B = m g x +x . Even though the attacker can solve this equation, there are multiple solutions x , x for x + x. Let alone, it is even hard to solve m, x + x from B, which is as intractable as the discrete logarithm problem. As a consequence, the attacker cannot deduce the encryption key, as well as the secret parameter W s from Y I e . As described in Section 5, we know that secret parameter W s is the key to decrypt the query result. Thus, the attacker cannot decrypt extra confidential information apart from the already known Y = {y 1 , y 2 , y 3 }. In conclusion, our scheme is secure against CPA. Experiment As described in Section 2.1, the PLQP [5] is a suit of protocols supporting privacy-preserving LBS in mobile application. It has high efficiency and achieves fine-grained control by exerting the CP-ABE scheme, which is similar to our work. Thus, in this section, we will compare our scheme with the PLQP scheme [5] for evaluating the performance of our proposed scheme. The algorithms are implemented using the BigInteger library on a Windows 8.1 system with Intel CORE i7-4500U<EMAIL_ADDRESS>GHz and 8.00 G RAM. We have 10 tests in this experiment. Additionally, in each test, we use 1000 pairs of random locations for the publisher and the querier, respectively. We present the average results for each test in the following figures. Figure 5 shows the detailed time cost for once location distance compute and location distance compare, respectively. It is obvious that the time cost at the publisher is always zero, which meets the Operatealgorithm in Section 5. The query processing work can be done with no need for the publisher's help. The Comparison of The Time Cost between Our Scheme and the PLQP Scheme In Figure 6, we show the comparison results of the total time cost for the aforementioned two operations, respectively. It can be seen that our scheme is much more efficient than PLQP. Figure 7 shows the comparison results of the detailed time cost for the aforementioned two operations at the publisher, querier and mobile cloud, respectively. To be more clear, we also present the comparison results in Figures 8 and 9. From these figures, we get three points: • At the querier, the time cost in our scheme is much less that that in PLQP. 
• At the publisher, the time cost in our scheme is zero. • At the mobile cloud, the time cost in PLQP is zero. In sum, our scheme takes better advantage of the mobile cloud than PLQP and outperforms the PLQP scheme in terms of query efficiency. Numerical information about the computation cost is also shown in Table 2. Conclusions In this paper, we propose a privacy-preserving LBS scheme for mobile sensing data by combining the RSA algorithm with the CP-ABE scheme. Our proposed scheme enables the mobile cloud to perform efficient, privacy-preserving location distance computation and comparison over encrypted locations. Moreover, owing to the use of CP-ABE, our scheme achieves fine-grained access control over the sensitive location data: only authorized queriers, whose attributes satisfy the corresponding access tree, can decrypt the ciphered query results provided by the mobile cloud. As a consequence, both the publisher's and the querier's location information is kept secret from the mobile cloud and from unauthorized users. Finally, the security analysis proves that the scheme is secure against CPA, and the performance evaluation demonstrates, through experiments, that it is more efficient than the PLQP scheme.
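As a closing illustration of the comparison-on-masked-values idea behind the distance-compare protocol, the sketch below shows why scaling both the distance and the threshold by the same positive secret preserves their ordering. The variable names are illustrative, the distances are squared for simplicity, and the sketch omits the ciphertext-level computation performed by the real scheme.

import secrets

def squared_distance(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

W_s = secrets.randbelow(2 ** 32) + 1      # positive secret mask held by authorized queriers

publisher = (100, 250, 0)                  # plaintext locations, shown here only for illustration
querier = (140, 220, 0)
tau = 60                                   # "is the distance within 60 metres?"

dis_sq = squared_distance(publisher, querier)
dis_e = dis_sq * W_s                       # masked (squared) distance, as seen by the cloud
tau_e = (tau ** 2) * W_s                   # masked (squared) threshold

# Cloud-side comparison: multiplying both sides by the same positive W_s preserves the
# ordering, so the cloud can answer "near or far" without learning dis_sq or tau.
print("within range" if dis_e <= tau_e else "too far")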
8,229.2
2016-11-25T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Image mosaicking using SURF features of line segments In this paper, we present a novel image mosaicking method that is based on Speeded-Up Robust Features (SURF) of line segments, aiming to achieve robustness to incident scaling, rotation, change in illumination, and significant affine distortion between images in a panoramic series. Our method involves 1) using a SURF detection operator to locate feature points; 2) rough matching using SURF features of directed line segments constructed via the feature points; and 3) eliminating incorrectly matched pairs using RANSAC (RANdom SAmple Consensus). Experimental results confirm that our method results in high-quality panoramic mosaics that are superior to state-of-the-art methods. Introduction The automatic construction of large, high-resolution image mosaics is an active area of research in the fields of photogrammetry, computer vision, image processing, and computer graphics [1]. It is considered as important as other image processing tasks such as image fusion [2], image denoising [3], image segmentation [4] and depth estimation [5]. Image mosaicking finds applications in a wide variety of areas. A typical application is the construction of large aerial and satellite images from collections of smaller photographs [1,6]. More applications include scene stabilization and change detection [7], video compression [8], video indexing [9] and so on [1]. Some widely used commercial software packages for image mosaicking are available, such as AutoStitch [10], Microsoft ICE [11], and Panorama Maker [12]. The key problem in image mosaicking is to combine two or more images by stitching them seamlessly together into a new one that distorts the original images as little as possible [13]. Image mosaicking techniques can be mainly divided into two categories: grayscale-based methods and feature-based methods. Grayscale-based methods are easy to implement, but they are relatively sensitive to grayscale changes especially under variable lighting. Featurebased methods extract features from image pixel values. Because these features are partially invariant to lighting changes, matching ambiguity can be better resolved during image matching. Matching robustness can be further improved by using feature points that can be detected reliably. Many methods have been shown to be effective for the extraction of image feature a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 points, for example, Harris method [14], Susan method [15], and Shi-Tomasi method [16]. Feature-based image mosaicking methods afford two main advantages: (1) the computation complexity of image matching will be significantly reduced since the number of feature points is far smaller than the number of pixels; (2) the feature points are very robust to unbalanced lighting and noise, resulting in better image mosaicking results. A wide variety of feature detectors and descriptors have been proposed in the literature (e.g. [17][18][19][20][21]). Detailed comparisons and evaluations of these detectors and descriptors on benchmark datasets were performed in [22,23]. Among various methods, SIFT [18] has been shown to give the best performance [22]. Recent efforts (e.g. SURF [24], BRISK [25], FREAK [26], NESTED [27], and Ozuysal's method [28]) have been focused on improving SIFT-based matching accuracy and reducing computation time. Arguably SURF [24] is among the best methods. Fei Lei et al. proposed a fast method for image mosaicking based on a simple application of SURF [29]. Jun Zhu et al. 
proposed an image mosaicking method that uses the Harris detector and SIFT features of line segments [30]. For performance and efficiency, this method uses the Harris corner detection operator to detect key points. Features of line segments are then used to match feature points owing to their effective representation of local image information, such as textures and gradients. However, the Harris corner detector is very sensitive to changes in image scale, so it does not provide a good basis for matching images of different sizes. Motivated by this observation, we propose an image mosaicking method that is based on SURF features [24] of line segments. First, the method uses the SURF detection operator to locate feature points and then constructs a directed graph of the extracted points. Second, it describes directed line segments with SURF features and matches them to obtain a rough matching of points. Finally, it adjusts the matching points and eliminates incorrectly matched pairs through the RANSAC algorithm [31]. The framework of our method is summarized in Fig 1. SURF SURF, like the SIFT operator, is a robust feature detection method that is invariant to image scaling, rotation, illumination changes, and even substantial affine distortion. Both of these descriptors encode the distribution of pixel intensities in the neighborhoods of the detected points. SURF is computationally more efficient than SIFT owing to the use of integral images [32] and box filters [33] that approximate second-order partial derivatives of Gaussian convolutions. Similarly to many other approaches, SURF consists of two consecutive parts: feature point detection and feature point description. SURF feature-point detector Similarly to the SIFT method, the detection of features in SURF relies on a scale-space representation combined with first- and second-order differential operators. The key feature of the SURF method is that these operations are approximated using box filters computed via integral images. The procedure of SURF feature detection therefore involves first computing an integral image, then establishing an image scale space with box filters, and finally locating feature points in the scale space. The SURF detector is based on the determinant of the Hessian matrix, which is defined at point x = (x, y) and scale σ as H(x, σ) = [L_xx(x, σ), L_xy(x, σ); L_xy(x, σ), L_yy(x, σ)], where L_xx(x, σ) is the convolution of the Gaussian second-order derivative ∂²g(σ)/∂x² with the image I at point x, and similarly for L_xy(x, σ) and L_yy(x, σ). As mentioned before, in order to reduce computation, SURF approximates L_xx, L_xy and L_yy with box filters whose responses, denoted D_xx, D_xy and D_yy, are sums of pixel intensities over rectangular regions. This can be performed very efficiently using an integral image I_Σ, which for a given input image I is calculated as I_Σ(x, y) = Σ_{i≤x} Σ_{j≤y} I(i, j). The determinant of the approximated Hessian is det(H_approx) = D_xx D_yy − (w D_xy)², where w ≈ 0.9 is a weight that balances the box-filter approximation. Thus, the interest points, including their scales and locations, are detected in the approximate Gaussian scale space. The size of the box filter is varied with octaves and intervals [34]; the filter sizes for various octaves and intervals are illustrated in Fig 2. Only pixels with greater responses than their surrounding pixels are classified as interest points. The maximal responses are then interpolated in scale and space to locate interest points with sub-pixel accuracy. SURF descriptor The goal of a descriptor is to provide a unique and robust description of the intensity distribution within the neighborhood of the point of interest. In order to achieve rotational invariance, the orientation of the point of interest needs to be determined.
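Before turning to orientation assignment, the integral-image trick that makes the box-filter approximations above cheap can be sketched as follows (a minimal numpy illustration, not the authors' implementation): once the integral image is built, the sum over any axis-aligned box costs four lookups, independent of the box size.

import numpy as np

def integral_image(img):
    # I_sum[y, x] = sum of img[0:y, 0:x]; a zero row and column are prepended for convenience.
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(I_sum, y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1] from four corner lookups, regardless of box size.
    return I_sum[y1, x1] - I_sum[y0, x1] - I_sum[y1, x0] + I_sum[y0, x0]

img = np.random.rand(480, 640)
I_sum = integral_image(img)
assert np.isclose(box_sum(I_sum, 10, 20, 40, 60), img[10:40, 20:60].sum())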
Orientation is calculated in a circular area of radius 6s centered at the interest point, where s is the scale at which the interest point is detected. In this area, Haar wavelet responses in x and y directions are calculated and weighted with a Gaussian centered at the point of interest. By computing the sum of the horizontal and vertical responses within a sliding orientation window of size π/3 and traversing the entire circle every 5 degrees, 72 orientations can be obtained. The two summed responses then yield a local orientation vector. The longest of such vector over all windows defines the main orientation. Once position, scale and orientation are determined, a feature descriptor is computed. The first step consists of constructing a square region centered around the feature point and oriented along the orientation determined previously. The region is divided uniformly into smaller 4 × 4 sub-regions. For each sub-region, Haar wavelet responses are computed at 5 × 5 regularly-spaced sample points. The x and y wavelet responses, denoted by dx and dy respectively, are computed at these sample points weighting with a Gaussian centered at the interest point and summed up over each sub-region to form a first set of entries to the feature vector. In order to obtain information on the polarity of the intensity changes, the sums of the absolute values of the responses, |dx| and |dy|, are also extracted. Therefore each sub-region is associated with a four-dimensional vector Combining the vectors, v's, from all sub-region yields a single 64-dimensional descriptor, which is normalized to unit-norm for contrast invariance. Rough matching The best candidate match for each keypoint is found by identifying its nearest neighbor in the set of keypoints generated from a reference image. The nearest neighbor is defined as the keypoint with the minimal Euclidean distance determined based on the invariant descriptor vector described above. However, many features from an image do not have any matching counterparts in the reference image because they arise from background clutter or cannot be detected in the reference image. Therefore, we use a global threshold on the distance to discard keypoints without good matches. Fig 3 shows the Euclidean distance of 10000 keypoints with correct matches for real image data. This figure was generated by matching images with different scales, rotation angles, changes in illumination, and affine distortions. As shown in Fig 3, most of the matched pairs have small Euclidean distances ranging from 0 to 0.15. We set the global threshold to 0.1 in our experiments, eliminating more than 90% of the false matches while discarding less than 5% of the correct matches. Line segment features Features of line segments are effective representation of local image information, such as textures and gradients. Given two images I and I 0 to be matched, the feature points are detected for each image using SURF to construct two directed graphs, G = (V, E) and G 0 = (V 0 , E 0 ), where V = {a 1 , a 2 ,Á Á Á,a n } and V 0 = {b 1 Nearest neighbor matching We use the nearest-neighbor matching criterion proposed in [30] for rough matching of line segments. Assuming image I has n 1 directed line segments, L = [l 1 , l 2 ,Á Á Á,l n 1 ], and image I 0 has n 2 directed line segments, L 0 ¼ ½l 0 1 ; l 0 2 ; . . . 
; l 0 n 2 , the nearest-neighbor pairs can be encoded using an adjacency matrix K 2 R n 1 Ân 2 : The distance between a pair of line segments l i and l 0 j , with feature matrices S i and S 0 j respectively, is defined using the F-norm of the feature matrices: dðl i ; l 0 j Þ ¼ kS i À S 0 j k F . The matching is further refined as follows. With the sets of key points in two given images, V = {a 1 , a 2 ,Á Á Á,a n } and V 0 = {b 1 , b 2 ,Á Á Á,b m }, we use the statistical voting method reported in [30] to obtain the matching frequency of each point. A matrix G 2 R n×m is initiated as a null matrix. If based on K two straight lines match each other, we vote for the starting point pairs and the ending point pairs of the two lines once. This is carried out by incrementing the corresponding element in G by 1. A larger element in matrix G indicates higher probability of matching of two points. The procedure for computation of matrix G is detailed in Algorithm 1. To avoid matching to too many points to one point, the criteria to select matching points are as follows: Incorrectly matched pairs are further removed by using RANSAC (RANdom SAmple Consensus) [31] and then a homography matrix M is estimated for image alignment. Image mosaicking using SURF features of line segments Experimental results In this section, the experimental results of the proposed method are presented. Evaluation was performed with gray level images with different rotation angles, scales, illumination, and affine distortions are used. Representative results are shown here. In order to compare our proposed method with a recent state-of-the-art method presented in [30], images downloaded from the website [35] were used. Representative image pairs are shown in Fig 4. The lighting conditions in the two images are largely different in Fig 4(A). The left image has longer exposure time than the right one. The two images in Fig 4(B) were taken by ordinary camera in different orientations. The two images have different resolutions in Fig 4(C). The left one is a blurred low-resolution image and the right one has higher resolution. In Fig 4(D), the left image is taken with the lens of the camera zoomed relative to the right one. Therefore, the buildings in the left image appear larger than the ones in the right. Results of matching by different methods are shown in Figs 5-7. Fig 5 indicates that SURF cannot even stitch the images correctly due to incorrectly matched points. Figs 6 and 7 demonstrate that both SIFT and our method obtain good results. However, Fig 6(B) indicates that SIFT still results in wrongly matched points. Our method incorporates robust statistical voting and rough matching strategies that could eliminate incorrectly matched pairs. We can see that the panoramic image stitched by our method is cleaner than the one given by SIFT. To evaluate the proposed method quantitatively, we used some representative test image pairs from website [36], taken for the textured and structured scenes, as shown in Fig 13. The following metric is used: Note that a correct match is a match where two keypoints correspond to the same physical location, and a false match is one where two keypoints come from different physical locations. Table 1 presents the comparison of the matching results, including the number of correct matches over the number of total matches and 1-precision. The results in the table indicate that our proposed algorithm is superior in terms of 1-precision. 
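For concreteness, the rough-matching and voting procedure described above can be sketched roughly as follows; the feature-matrix shapes, the endpoint bookkeeping and the acceptance cutoff are illustrative assumptions, not the exact implementation.

import numpy as np

def rough_match(S1, S2, ends1, ends2, n_pts, m_pts, cutoff=0.5):
    # S1[i], S2[j]: feature matrices of directed line segments (hypothetical shape:
    # one SURF descriptor row per sampled point on the segment).
    # ends1[i] / ends2[j]: (start_point_index, end_point_index) of each segment.
    votes = np.zeros((n_pts, m_pts), dtype=int)   # the voting matrix G
    for i in range(len(S1)):
        d = np.linalg.norm(S1[i][None, :, :] - S2, axis=(1, 2))  # F-norm distances
        j = int(np.argmin(d))                      # nearest-neighbour segment
        if d[j] < cutoff:                          # illustrative acceptance threshold
            votes[ends1[i][0], ends2[j][0]] += 1   # vote for the start-point pair
            votes[ends1[i][1], ends2[j][1]] += 1   # vote for the end-point pair
    return votes                                   # larger entries = more likely point matches

# Toy usage with random data
rng = np.random.default_rng(0)
S1 = rng.random((5, 4, 64))
S2 = S1 + 0.01 * rng.random((5, 4, 64))
ends = [(i, i + 5) for i in range(5)]
print(rough_match(S1, S2, ends, ends, n_pts=10, m_pts=10))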
Conclusion In this paper, we have introduced a novel image mosaicking method based on SURF features of line segments. The method first uses the SURF detection operator to detect feature points. Second, it constructs directed line segments, describes them with SURF features, and matches those directed segments to obtain a rough point matching. Finally, the RANSAC (RANdom SAmple Consensus) algorithm is used to eliminate incorrect pairs for robust image mosaicking. Experimental results demonstrate that the proposed algorithm is robust to scaling, rotation, lighting, resolution changes and a substantial range of affine distortion. Recently, Ji et al. [37] proposed a novel compact bag-of-patterns (CBoP) descriptor with an application to low-bit-rate mobile landmark search. The CBoP descriptor offers a compact yet discriminative visual representation, which significantly improves search efficiency. In the future, we will explore such recent methods [37][38][39] from the fields of mobile visual location recognition and mobile visual search to further improve the performance of our algorithm.
3,424.4
2017-03-15T00:00:00.000
[ "Computer Science" ]
Effects of the Comparative Continuation on L2 Writing Performance This study is a follow-up study of the continuation task, aiming to investigate the long-term alignment effects of the comparative continuation on L2 writing performance. The research lasted for a period of 16 weeks and employed a pretest-treatment-posttest research design. Two comparable groups of fifty-five Chinese undergraduate EFL learners participated. Both groups were assigned the same writing tasks i.e. writing an argument essay of the same topic within 30 minutes. One group was given an input text with comparative ideas before writing, while another group did not have any reading materials. After 8-week treatment, both groups received a posttest, in which the data were compared and analyzed with those of pretest. Results showed that (i) the comparative continuation task resulted in greater improvement in EFL learners’ writing performance than topic-writing task. (ii) the comparative continuation task was superior to the topic-writing task in incurring less meaning-based errors, but there was no difference in form-based errors between the two groups. The results can provide some enlightenment for teaching and research of foreign language writing. Introduction Interaction and learning are inseparable, and interaction can promote learning (Eskildsen, 2012). Interaction plays a critical role in promoting foreign language learning. In the context of foreign language learning in China, learners lack the real context to use foreign languages and the opportunities to interact with native speakers. At the same time, large-class teaching leads to relatively limited opportunities for teacher-student interaction and student-student interaction in class, which greatly affects the efficiency and quality of foreign language learning of Chinese students. Based on interactive alignment model, Wang C. (2012) proposed the continuation task which requires learners to read and continue an incomplete text. It differs substantially from other commonly used second language writing tasks (Plakans & Gebril, 2013) and offers unique opportunities for language learning. When continuing an incomplete text, second language learners tend to repeat some of its expressions and language structures (Wang & Wang, 2015). This process is dubbed alignment. By combining the input and output closely, the continuation task can effectively solve the problem of the lack of foreign language context in China. speaker's language ability can be developed by reusing his or others' language structures (Pickering & Ferreira 2008). Therefore, interactive alignment is not only the cognitive mechanism of interpersonal dialogue, but also an important mechanism to promote language development. As a universal psychological mechanism, the alignment effect in language use has aroused great interest among researchers in the fields of psycholinguistics and second language acquisition. Some scholars have conducted in-depth theoretical discussions and exploratory empirical studies on it (Atkinson et al., 2007;Barr & Keysar 2002;Costa et al., 2008;Pickering & Garrod,2004). Their research results have deepened the academic understanding of alignment and highlighted its important role in second language acquisition. Atkinson (2002) explained the phenomenon of alignment from the perspective of social cognition, believing that alignment is a process in which human beings coordinate and interact with the outside world and dynamically adapt to the external environment. 
Alignment occurs not only between people, but also between people and interaction variables such as environment, situation and tools. According to the social cognition theory of second language acquisition, learning is a process of continuous alignment between learners and social cognition environment, and alignment is a necessary factor for language use and acquisition. Based on the interaction alignment model and social cognition theory, Wang C. (2009) proposed that there is also alignment in written language. He comprehensively analyzed the relevant mechanism of language learning potential of alignment and proposed the continuation task theory. It is a new form of reading-writing integrated task in which test-takers read an incomplete story and continue it coherently. The continuation task provides an effective path for foreign language learning: interaction → understanding → alignment → output → acquisition. According to the path, input and output are closely combined, which encourages learners to interact with the original text. Although it is a one-way process, the interaction between learners and input text is very intense so that it can help learners to notice the gap between their interlanguage and the input text, producing "leveling effect" (Wang C., 2014:46). Language needs to be imitated through interaction in a rich context so as to strengthen the combination of language output and language understanding. When meaning is understood and expressed in the specific context of interaction, the language form, as the carrier of meaning, is noticed by learners and may be reactivated under certain context (Long 1991). In China, English learning lacks context, and there is often no channel to communicate with native speakers. Therefore, using the continuation task to promote the occurrence of interaction alignment is a good way to learn second language (Wang C., 2012). Since the continuation task was formally put forward, Chinese scholars have carried out a series of relevant empirical studies. Some scholars have studied the alignment effect of the continuation task from the perspectives of linguistic errors, grammatical structures, vocabulary acquisition, textual cohesion and coherence, and rhetoric. Wang & Wang (2014) found that alignment manifested itself in the continuation. Learners, who performed the English-version task, used more lexical items in the original story and committed significantly fewer errors in comparison with their performance on the Chinese-version task. Jiang L. & Chen J. (2015) examined the effect of the continuation task on second language learners' written accuracy, complexity, and fluency by comparing the performance of continuation task and topic-writing task. The comparison revealed that the continuation task generated more gains on accuracy and complexity. Jiang L. & Tu M. (2016) investigated the effectiveness of the continuation task on L2 vocabulary learning and found that both the continuation task and summary task could facilitate L2 vocabulary learning, with the continuation task outperforming the summary task, especially in terms of meaning and use. In addition, some scholars have begun to explore various factors influencing the alignment effect of the continuation task, such as, linguistic complexity and genres of input text, task conditions, and learners' English levels and learning motivation. Xue H. (2013) found that text-based interest has a positive influence on the alignment effect in story continuation. 
Her study showed that learners who write after reading an interesting story would align more and better with the original story than those who write after reading an uninteresting story. Peng J. (2015) examined the relationship between linguistic complexity variable and L2 learners' writing of the continuation. Results showed that linguistic complexity appeared to have no effect on the magnitude of alignment with the language use of the input text. However, while aligning with the given text, participants continuing the simplified version improved significantly in their writing fluency and accuracy. Zhang & Zhang (2017) investigated the continuation tasks of different genres in terms of their effects on alignment. Her study showed that the argumentation writing produces more alignment in vocabulary and phrases and fewer language errors than the narration writing. Xin S. (2017) explored L2 learners' acquisition of subjunctive mood under two different conditions of the continuation task. elt.ccsenet.org English Language Teaching Vol. 13, No. 8;2020 Notably, most of the previous studies on the continuation task generally examined the short-term alignment effect, overlooking the sustainability of alignment effect and could not provide sufficient evidence for language learning potential of the continuation task on second language acquisition. The key to the acquisition of language knowledge is whether learners can effectively apply the language knowledge they have mastered in the current task to the new task (Luria, 1961). It is a reliable way to evaluate the effect of the continuation task to make long-term tracking in learners' language development and measure their language level with new tasks. In addition, most current studies focus on narrative genres with the exception of Zhang & Zhang (2017), whose research adopted an argumentative essay with incomplete end for the subjects to continue. Research Question The research attempts to examine the comparative continuation of argumentative essays among non-English majors at a university in northern China. The comparative continuation, proposed by Wang C. in 2016, refers to a task in which test-takers are asked to write an argumentative essay with comparative viewpoints to the input text after reading a complete essay. In this study, we examine the effect of comparative continuation on the interactive alignment effect, investigate the error types and frequency in the output of contrastive continuation task, and test the effectiveness of contrastive continuation on Chinese college EFL learners' writing performance. Specifically, we seek to address the following research questions: 1) Does comparative continuation task significantly reduce the language errors of second language learners? 2) Does long-term comparative continuation task produce sustainable learning effect on second language learners' writing performance? Participants Fifty-five freshmen non-English majors at a university in northern China participated in this study. They were from two parallel intact classes with 27 and 28 students, respectively. Both classes took Comprehensive English course undertaken by the researcher and the teaching content is consistent and carried out simultaneously. At data collection time, they had no experience with English continuation tasks. One week before the experiment, all participants took a level test equivalent to CET-4. 
Independent sample t test results of English proficiency test shows that the English proficiency of the two groups of subjects is equivalent, there is no significant difference (t= 0. 192, df=73, p = 0. 846>0. 05). In the study, one class was assigned to take the continuation task treatment (i.e. the continuation group, N=27), while the other took the traditional topic-writing task treatment (i.e. the topic-writing group, N=28). Procedure and Materials The study took place in the autonomous learning classes of the participants' College English course. It spanned 16 weeks, consisting of a pretest, 6 writing trainings and a post-test. Pretest was administered on the first week, followed by six writing trainings from 2 nd to 13 th week and posttest conducted on the 16 th week. In order to improve the test validity and effectively reveal the sustainable learning potential of the continuation task, the pre-test and post-test adopted freewriting to measure the writing performance of participants. The requirements for freewriting were consistent with CET 4, that is, to complete an essay of at least 120 words within 30 minutes. During the writing training sessions, the continuation group wrote essays with comparative points to the input text after reading, while the topic-writing group performed the traditional writing task without reading materials. Both groups have the same topic and the requirements were the same. In reading, the subjects may consult the dictionary or the researcher if he/she meets some difficult words. Before writing, the continuation group was asked to read the input text carefully and highlight the wonderful words or sentence structures. In the continuation process, the input text remained accessible to the participants, so that they could imitate its structures and linguistic expressions, and apply them in their own continuation. The input texts were selected from some English newspapers and magazines in four dimensions, that is, interestingness, genre, length and linguistic complexity. All the argumentative essays were suitable for freshmen to read with a length of about 500 words. The linguistic complexity of the input texts matched and exceeded the participants' production ability. The writing quality could be affected by subjects' familiarity and interest in the topics (Applebee 1982;Freedman, 1983); therefore, the writing topics were closely related to their study and life, such as, education, love view, sports, employment, and education. Data Collection Participants' writings were collected and typed into the computer to yield a small-size corpus. Two types of analyses were performed on the data: composition rating and error frequencies counting. Composition Rating Two English teachers experienced in rating CET-4 writing scored the subjects' continuations with reference to the CET4 test rating criteria, grading range of 0 to 15 points. If there is a difference of 3 points between the two raters on the same composition, the composition will be returned to the two raters for re-grading. For the individual composition whose difference is still above 3 points, the two graders consult the grading criteria together. Inter-rater reliability, assessed using Pearson's correlation analysis, was high (r = .78 ---.806, p < .01). When the two scores were no more than five points apart, the average score was taken as the final score. 
The independent sample and paired sample T test of SPSS17.0 statistical software were used to compare and analyze the composition scores of the two groups. Error Frequencies To investigate the alignment effect of the continuation task, the frequencies of errors were counted. The statistical analysis of error types and frequency refer to the six categories of errors typical of Chinese EFL learners made by Wang & Wang (2014), including number agreement errors, misuse of articles, misuse of copula, misuse of non-finite verbs, tense errors, and Chinglish. The first five categories were concerned with language forms, for which they were regarded as form-based errors. Chinglish is that misshapen, hybrid language that is neither English nor Chinese but that might be described as 'English with Chinese characteristics'(Joan Pinkham, 2003). As Chinglish pertains to the meaning expression and is closely related to the conceptual level, they were regarded as meaning-based errors. The six categories are illustrated as follows: • … as a clerk who are mainly responsible for … (number agreement errors) • There is a interesting phenomenon in education in China. (misuse of articles) • …they just too shy to speak out. (misuse of copula) • When walked in the yard, he found …(misuse of non-finite verbs) • In primary school, my teacher give me some chances to learn by myself. (tense errors) • … the Christmas song rang out …(Chinglish) Errors in participants' writings were identified and coded manually by two researchers. The inter-rater reliability obtained by Pearson Correlation SPSS 17.0 reached .90 (significance level is 0.05). All the disagreements were resolved through negotiation. Since the length of the writings varied across participants, raw frequencies of errors were converted into proportions, that is, mean frequencies of errors of different categories per 100 words to make the statistics comparable. Writing Performance In order to see whether the comparative continuation can improve the writing performance of learners, the data of pretest and posttest of two groups were compared and analyzed. As shown in Table 1, the posttest results of continuation group and topic-writing group were both higher than the pretest results. Paired sample T test showed that there were significant differences in the scores of the two groups (continuation group, t=-5.064, df=61, p=.000; topic-writing group, t=-5.053, df=59, p=.000). It can be seen that after medium and long term writing training, comparative continuation task and topic-writing task showed great improvement in participants' writing performance. In addition, the mean difference of the posttest (0.7129) between continuation group and topic-writing group was larger than that of the pretest (0.0062). The conclusion can be drawn that comparative continuation can exert greater influence on the learners' writing performance than topic-writing task. Alignment Effect on Error Types and Frequencies The present study statistically analyzed the language errors in the pretest and the posttest of continuation group and topic-writing group, in order to further investigate the alignment effect of comparative continuation on learners' language errors. Table 2 listed the frequency of various language errors per 100 words of the two groups in the pretest and the posttest. It can be seen that the total number of errors measured in the posttest was lower than that of pretest. 
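The per-100-word normalization of error counts and the reliability and group-comparison statistics described above are straightforward to reproduce; a minimal sketch is given below. SciPy's `pearsonr`, `ttest_rel` and `ttest_ind` are used here as stand-ins for the SPSS procedures mentioned in the text, and all numbers are illustrative, not the study's data.

```python
# Minimal sketch of the scoring statistics described above (illustrative data only).
from scipy import stats

def errors_per_100_words(error_count, essay_length):
    """Convert a raw error count into errors per 100 words."""
    return 100.0 * error_count / essay_length

# Hypothetical raw error counts and essay lengths for a few writings.
raw_errors  = [7, 4, 9, 3]
word_counts = [152, 138, 171, 124]
normalized = [errors_per_100_words(e, n) for e, n in zip(raw_errors, word_counts)]

# Inter-rater reliability: Pearson correlation between the two raters' scores.
rater_a = [10, 12, 8, 11, 9]
rater_b = [11, 12, 9, 10, 9]
r, p_r = stats.pearsonr(rater_a, rater_b)

# Pretest vs. posttest within one group: paired-sample t test.
pretest  = [8.1, 7.5, 9.0, 6.8, 8.4]
posttest = [9.2, 8.1, 9.8, 7.9, 9.0]
t_paired, p_paired = stats.ttest_rel(pretest, posttest)

# Continuation group vs. topic-writing group on the same test: independent-sample t test.
group_continuation = [9.2, 8.1, 9.8, 7.9, 9.0]
group_topic        = [8.5, 8.0, 8.9, 7.6, 8.7]
t_ind, p_ind = stats.ttest_ind(group_continuation, group_topic)

print(normalized, r, t_paired, t_ind)
```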
Independent sample T test showed that there were significant differences in the total number of errors and errors measured before and after the two groups (P = 0.046 < 0.05). The form-based errors in continuation group were almost the same as those in topic-writing group, but the meaning-based errors or Chinglish were less than those of topic-writing group. In continuation group, the frequency of form-based errors was 2.38 and the frequency of meaning-based errors was 0.36. Meanwhile, the frequencies of form-based and meaning-based errors of topic-writing group were 2.43 and 1.32, respectively. The independent sample t test showed that there was no significant difference in the frequency of form-based errors between the two groups (p > 0.05), while the frequency of meaning-based errors was significantly different (p = 0.035 < 0.05). This manifested that language input cannot reduce linguistic form-based errors, but can reduce the occurrence of Chinglish. Compared with topic-writing task, comparative continuation can significantly reduce the frequency of language errors and promote the accuracy of language use. Discussion To sum up, the study yielded the following major findings: (i) comparative continuation task can significantly improve learners' writing performance in medium and long-term. (ii) The amount of meaning-based errors used in continuation group was less than that in topic-writing group, but the difference of form-based errors was not significant. These findings have provided affirmative answers for the aforementioned research questions. The above findings show that the alignment effect of the comparative continuation comes about largely due to the intimate coupling of production with comprehension. It is more consistent with the law of language learning and had sustainable learning effect. In the current study, the participants were non-English majors, and due to low language proficiency, they had to read the input text repeatedly in order to fully understand it. In the process of understanding, they constructed the situation models consistent with the original text, and then continue it, leading to alignment at the language level. The participants also made up the gap of linguistic expressions by repeating the words of input text, so as to produce a strong alignment at language level and promote their language ability. The results not only supported the ideas of Wang (2016), concerning alignment effect of comparative continuation, but confirmed the findings of Jiang & Tu (2016) that learners had clear reading objectives and would consciously repeat the words of input texts in their own continuation tasks. Another finding of this study was that comparative continuation was more effective than topic-writing in reducing language errors and promoting writing accuracy. With language input, the use of Chinglish in continuation group was greatly reduced, and the meaning-based errors were significantly lesser than those of topic-writing group. This finding was consistent with the study of Jiang L. & Chen, J. (2015) who found that both continuation and topic-writing tasks could promote the development of language accuracy, but the former had more obvious long-term effect. Another study conducted by Yang M. (2015) also proved that the activation of learners' L2 related knowledge caused less Chinglish errors among learners of low L2 proficiency. The two groups of participants in this study had the same writing topic, and the situation models should also be similar. 
Why did Chinglish errors appear significantly different? In my opinion, although the two groups constructed the same situation models, topic-writing group had no English input and could only create their own expressions according to the needs of meaning expression. On the contrary, with input text, continuation group could avoid using of Chinglish expression in terms of wording. Continuation after reading English can restrain native language transfer, prevent Chinese context knowledge from being filled up, and made English writing more authentic (Wang C., 2012). To promote learning, language imitation should not be simply repeated or recited mechanically, but organically integrated with content creation. Although topic writing can release learners' creativity, it is easy to activate the knowledge of mother tongue and produce Chinglish due to the lack of alignment with input text (Wang C., 2013). In contrast, the continuation task not only can increase learners' motivation to create, but also provide high-quality language input for imitation, which provides a favorable condition for the development of language accuracy and complexity. Compared with the existing findings, continuation group and topic-writing group showed no significant difference in form-based errors. One reason is genre differences. The linguistic accuracy of argumentation continuation is higher than that of narrative continuation (Zhang & Zhang, 2017). The other reason is that the subjects are non-English majors. Their English level is lower and their awareness of grammar is not strong enough. As a result, language input cannot reduce linguistic form errors. In the process of writing, learners establish situation models prior to linguistic expressions, while the lack of noticing to the form of language, coupled with the lack of external hints, makes it difficult for learners to pay attention and cooperate effectively. The limitation of their cognitive ability leads to language errors (Xin S., 2017). In addition, conscious noticing to these forms of language was not emphasized during the study. Noticing is a prerequisite for acquisition, the key to converting input into absorption (Schmidt, 1990). Due to the insufficient noticing of the subjects, they failed to effectively coordinate with the correct language form of the original text. Conclusion The results of this study verified the sustainable learning promoting effect of comparative continuation and provided empirical support for "continuation theory". Similar to the task of continuing incomplete stories, the comparative continuation can significantly reduce the number of meaning-based errors and improve the writing performance of learners. Like other empirical studies, this one has its limitations as well. One limitation is that we have only examined the learners' data of errors. There is a need to look into alignment at other levels, such as, the lexical and syntactic level to further verify the findings of the present study. Another limitation involves the assessment on the continuation task. Since the continuation group was not monitored by the researcher in the reading process, they may deliberately avoid using the complex language forms of the input text, which greatly weakened the alignment effect of continuation. Researchers should combine the continuation task with explicit teaching to promote its effectiveness. 
To obtain more rigorous and convincing conclusions, future studies should not only monitor the implementation process dynamically but also investigate how long its effects are sustained; they should likewise examine both the short-term and long-term durability of alignment, so as to cover the whole process of "continuation theory" and its learning-promotion mechanism after reading.
5,149.2
2020-07-03T00:00:00.000
[ "Linguistics", "Education" ]
Parity Codes Used for On-Line Testing in FPGA This paper deals with on-line error detection in digital circuits implemented in FPGAs. Error detection codes have been used to ensure the self-checking property. The adopted fault model is discussed. A fault in a given combinational circuit must be detected and signalized at the time of its appearance and before further distribution of errors. Hence safe operation of the designed system is guaranteed. The check bits generator and the checker were added to the original combinational circuit to detect an error during normal circuit operation. This concurrent error detection ensures the Totally Self-Checking property. Combinational circuit benchmarks have been used in this work in order to compute the quality of the proposed codes. The description of the benchmarks is based on equations and tables. All of our experimental results are obtained by XILINX FPGA implementation EDA tools. A possible TSC structure consisting of several TSC blocks is presented. Introduction The design process for FPGAs differs mainly in the "design time", i.e., in the time needed from the idea to its realization, in comparison with the design process for ASICs.Moreover, FPGAs enable different design properties, e.g., in-system reconfiguration to correct functional bugs or update the firmware to implement new standards.Due to this fact and due to the growing complexity of FPGAs, these circuits can also be used in mission-critical applications such as aviation, medicine or space missions. There have been many papers [1,2] on concurrent error detection (CED) techniques.CED techniques can be divided into three basic groups according to the type of redundancy.The first group focuses on area redundancy, the second group on time redundancy and the third one on information redundancy.When we speak about area redundancy, we assume duplication or triplication of the original circuit.Time redundancy is based on repetition of some computation.Information redundancy is based on error detecting (ED) codes, and leads either to area redundancy or time redundancy.Next, we will assume the utilization of information redundancy (area redundancy) caused by using ED codes. The process when high-energy particles impact sensitive parts is described as a Single Event Upset (SEUs) [3].SEUs can lead to bit-flips in SRAM.The FGPA configuration is stored in SRAM, and any changes of this memory may lead to a malfunction of the implemented circuit.Some results of SEU effects on FPGA configuration memory are described in [4].CED techniques can allow faster detection of a soft error (an error which can be corrected by a reconfiguration process) caused by an SEU.SEUs can also change values in the embedded memory used in the design, and can cause data corruption.These changes are not detectable by off-line tests, only by some CED techniques.The FPGA fabrication process allows the use of sub-micron technology with smaller and smaller transistor size.Due to this fact the changes in FPGA memory contents, affected by SEUs, can be observable even at sea level.This is another reason why CED techniques are important. There are three basic terms in the field of CED: l The Fault Security (FS) property means that for each modeled fault, the produced erroneous output vector does not belong to the proper output code word. 
l The Self-Testing property (ST) means that, for each modeled fault, there is an input vector occurring during normal operation that produces an output vector which does not belong to the proper output code word. l The Totally Self-Checking (TSC) property means that the circuit must satisfy FS and ST properties. The basic method for the proper choice of a CED model is described in [5].Techniques using ED codes have also been studied by other research groups [6,7].One method is based on a parity bits predictor and a checker, see Fig. 1. The fault model All of our experiments are based on FPGA circuits.The circuit implemented in an FPGA consists of individual memory elements (LUTs -look up tables).We can see 3 gates mapped into an LUT in Fig. 2. The original circuit has two inner nets.The original set of the test vectors covers all faults in these inner nets.These test vectors are redundant for an LUT.For circuits realized by LUTs a change (a defect) in the memory leads to a single event upset (SEU) at the primary output of the LUT.Therefore we can use the stuck-at fault model in our experiments to detect SEU -only some of the detected faults will be redundant. Our fault model is described by a simple example in Fig. 3.Only one LUT is used for simplicity.This LUT implements a circuit containing 3 gates.The primary inputs from I0 to I1 are the same as the address inputs for the LUT.When this address is selected its content is propagated to the output. We assume the following situation: first the content of this LUT can be changed, e.g., electromagnetic interference, cross-talk or alpha particles.The appropriate memory cell is set to one and the wrong value is propagated to the output.This means that the realized function is changed and the output behaves as a single event upset.We can say that a change of any LUT cell leads to a stuck-at fault on the output according to this example.This fault is observed only if the bad cell is selected.This is the same situation as for circuits implemented by gates.Some faults can be masked and do not necessarily lead to an erroneous output. Due to masking of some faults, the possibility of their appearance can occur at the time when previously unused logic is being used.E.g., if one bit of an LUT is changed, the erroneous output will appear, while the appropriate bit in an LUT is selected by the address decoder. In our design methodology we evaluate FS and ST properties.For ST properties a hidden fault is not assumed. The evaluation of the FS property is independent of the set of allowed input words.If a fault does not manifest itself as an incorrect codeword for all possible input words, it cannot cause an undetectable error for any subset of input words.So we can use the exhaustive test set for combinational circuits. The exhaustive test set is generated to evaluate the ST property for combinational circuits, where the set of input words is not defined.But in a real situation, some input words may not occur.This means that some faults can be undetectable.This can decrease the final fault coverage.Therefore, the number of faults that can be undetectable is higher. The fault simulation process is performed for circuits described by netlist (for example .edif). 
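The fault model above can be illustrated with a small, hypothetical sketch: one flipped configuration bit of a 3-input LUT behaves as a stuck-at fault on the LUT output, and an exhaustive sweep of the input vectors shows that the fault is observable only when the faulty memory cell is addressed. The example function and the flipped cell are arbitrary choices for illustration.

```python
# Hypothetical sketch of the LUT fault model described above: flipping one
# configuration bit of a 3-input LUT acts as a stuck-at fault on its output
# that is observable only when the faulty cell is addressed.

def lut_output(truth_table, inputs):
    """Evaluate a LUT: the input vector (MSB first) forms the address into its memory."""
    address = 0
    for bit in inputs:
        address = (address << 1) | bit
    return truth_table[address]

def inject_seu(truth_table, cell):
    """Return a copy of the LUT contents with one memory cell flipped (an SEU)."""
    faulty = list(truth_table)
    faulty[cell] ^= 1
    return faulty

# Example LUT implementing f = (a AND b) OR c, addressed by (c, b, a).
good = [0, 0, 0, 1, 1, 1, 1, 1]
faulty = inject_seu(good, cell=3)

# Exhaustive test: the fault is exposed only by the address of the flipped cell.
exposing = []
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            vec = (c, b, a)
            if lut_output(good, vec) != lut_output(faulty, vec):
                exposing.append(vec)
print("input vectors exposing the SEU:", exposing)
```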
Parity bits predictor There are many ways to generate checking bits.A single even parity code is the simplest code that may be used to get a code word at the output of the combinational circuit.This parity generator performs XOR over all primary outputs.However, the single even parity code is mostly not appropriate to ensure the TSC goal. Another error code is a Hamming-like code, which is in essence based on the single parity code (multi parity code).The Hamming code is defined by its generating matrix.We used a matrix containing the unity sub-matrix on the left side for simplicity.The generating matrix of the Hamming code (15, 11) is shown in Fig. 4. The values a ij have to be defined. When a more complex Hamming code is used, more values have to be defined.The number of outputs oi used for the checking bits determines the appropriate code.E.g., the circuit alu1 [10] 1 1 1 1 vector is composed of log.1s only.The last vector is composed of log.1s in the odd places and log.0s in the even places.Every vector except the first contains the same number of 1s and the same number of 0s.An example of the possible content of the right part sub-matrix is shown in Fig. 5. The number of vectors in the set is the same as the number of rows in the appropriate Hamming matrix.The way to generate parity output for checking bit xk is described by equation 1: where o 1 … o m are the primary outputs of the original circuit. Area overhead minimization The benchmarks used in this paper are described by a two-level network.The final area overhead depends on the minimization process.We used two different methods in our approach.Both these methods are based on a simple duplication of the original circuit. Our first method is based on a modification of the circuit described by a two-level network.The area of the check bits generator contributes significantly to the total area of the TSC circuit.As an example we consider a circuit with 3 inputs (c, b and a) and 2 outputs ( f and e).The check bits generator uses the odd parity code to generate the check bits.In our example we have only one check bit x. Our example is shown in Table 1.Output x was calculated from outputs e and f.We have to generate the minimal form of the equation at this time.We can achieve the minimal form using methods like the Karnaugh map or Quine-McCluskey.After minimization we obtain three equations, one per output (f, e and x), where x means an odd parity of the outputs f and e.If we want to know whether the odd parity covers all faults in our simple combinational circuit example, we have to generate the minimal test set and simulate all faults in each net in this circuit. The final equations are: x bc = (4) Our second method is based on a modification of the multi-level network.The parity bits are incorporated into the tested circuit as a tree composed of XOR gates.The maximal area of the parity generator can be calculated as the sum of the original circuit and the size of the XOR tree. Experimental evaluation software Fig. 
6 describes how the test is performed for each detecting code. The MCNC benchmarks [11] were used in our experiments. These benchmarks are described by a truth table. To generate the output parity bits, all the output values have to be defined for each particular input vector. Only some output values are specified for each multi-dimensional input vector; the rest are assigned as don't cares and are left to be specified by another term. Thus, in order to be able to compute the parity bits, we have to split the intersecting terms, so that all the terms in the truth table are disjoint. In the next step, the original primary outputs are replaced by parity bits. Two different error codes were used to calculate the output parity bits (the single even parity code and the Hamming code). Another tool was used in the case where the original circuit was modified in multilevel logic. This tool is described in [8]. The two circuits generated in the first step (the original circuit and the parity circuit) are processed separately to avoid sharing any part of the circuit. Each part is minimized by the Espresso tool [9]. The final area overhead depends on the software used in this step. Many tools were used to achieve a small area of the parity bits generator. Only Espresso was used to minimize the final area of the circuit described by the two-level network. At this step the area overhead is known for an ASIC implementation; for FPGAs the area overhead is known only after the synthesis process has been performed. The "pla" format is converted into the "bench" format in the next step. The "bench" format was used because it is required by the test-generation and simulation tools described below. Another conversion tool is used to generate two VHDL codes and the top level. The top level is used for incorporating the original circuit and the parity circuit generator. In the next step, the synthesis process is performed by Synplify [12]. The constraint properties set during the synthesis process affect the area overhead and the fault coverage. If the maximum frequency is set too high, the synthesis process causes hidden faults to occur during the fault simulation. The hidden faults are caused by circuit duplication or by the constant distribution. The size of the area overhead is obtained from the synthesis process. The final netlist is generated by the Leonardo Spectrum [13] software. The fault coverage was obtained by simulation using our software. Software solution description Special tools had to be developed to evaluate the area overhead and fault coverage. In addition to commercial tools such as Leonardo Spectrum [13] and Synplify [12], we used format converting tools, parity circuit generator tools and simulation tools.
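The check-bit predictor of equation 1 above can be sketched in software as follows: each check bit x_k is the XOR (even parity) of the primary outputs selected by one row of the parity part of the generating matrix. The 0/1 selection matrix used here is a placeholder, since the actual a_ij values of the Hamming (15, 11) matrix are not reproduced in the text.

```python
# Hedged sketch of the check-bit predictor of equation 1: each check bit x_k is
# the XOR of the primary outputs o_i selected by the parity part of the
# generating matrix.  The 0/1 matrix below is a placeholder, not the matrix
# actually used in the experiments.

def predict_check_bits(outputs, parity_matrix):
    """outputs: list of primary output bits o_1..o_m.
    parity_matrix: one row per check bit; entry [k][i] == 1 selects output o_{i+1}."""
    check_bits = []
    for row in parity_matrix:
        x_k = 0
        for select, o_i in zip(row, outputs):
            if select:
                x_k ^= o_i            # even parity over the selected outputs
        check_bits.append(x_k)
    return check_bits

# Example: 8 primary outputs, 3 check bits (placeholder selection pattern).
parity_matrix = [
    [1, 1, 0, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 1, 1, 1, 0],
]
outputs = [1, 0, 1, 1, 0, 0, 1, 0]
print(predict_check_bits(outputs, parity_matrix))   # -> [1, 1, 0]
```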
At first, area minimization and term splitting is performed for the original circuit by BOOM [10].The Hamming code generator (or single parity generator) is generated by the second software.These two circuits are minimized again with Espresso.The next two tools convert the two-level format into a multi level format.The first converts a "pla" file to "bench", and the second converts "bench" to VHDL.The second software is used for generating the final circuit in the "bench" format for further usage in the exhaustive test set generator.The format converting software and parity generator software were written in Microsoft Visual C++.The netlist fault simulator was written in Java.The parser source code was used for parsing the netlist that is generated by the two commercial tools described above. Experiments The combinational MCNC benchmarks [11] were used for all the experiments.These benchmarks are based on real circuits used in large designs. Since the whole circuit will be used for reconfiguration in FPGA, only small circuits were used.Real designs having a large structure must by partitioned into several smaller parts.For large circuits, the process of area minimization and fault simulation takes a long time.This disadvantage prevents us examining more methods of designing the check bits generator. The evaluated area, FS and ST properties depend on circuit properties such as the number of inputs and outputs, and the circuit complexity.The experimental results show that a more important property is the structure of the circuit.Two basic properties are described in Table 2. In the first set of experiments our goal was to obtain one hundred percent of the FS and ST property, while we measured the area overhead.In this case, the maximum of the parity bits was used.This task was divided into two experiments (Fig. 7).In the first experiment the two-level network was being modified (Fig. 7a).The results are shown in Table 3 The ST property was fulfilled in 7 cases and the FS property was fulfilled in 4 cases.The area overhead in many cases exceeds 100%.This means that the cost of one hundred percent fault coverage is too high.In these cases the TSC goal is satisfied for most tested benchmarks. We then used an old method, where the original circuit described by a multi-level network is modified by additional XOR logic (Fig. 7b) [8]. The results obtained from this experiment are shown in Table 4.The FS and the ST properties were fulfilled in the same cases as in the first experiment, but the overhead is in some cases smaller. In the second set of experiments we tried to obtain a small area overhead, and the fault coverage was measured.In this case the minimum of parity bits is used (single even parity).The experiments are divided into two groups, a) and b), Fig. 7.The procedure is the same as described above. In the first experiment the two-level network of the original circuit was modified (Fig. 7a).The results are shown in Table 5. The ST property is achieved in four cases, but the area overhead is smaller in five cases.The FS property is satisfied in one case. In the last experiment, we have modified the circuit described by a multilevel network (Fig. 7b).The ST property was satisfied in four cases and the FS property in two cases.The area overhead is higher than 100% for most benchmarks, but the fault coverage did not increase, Table 6. 
Huge design Our previous results show that it is in many cases too difficult to achieve TSC goals with minimal area overhead [8].A way to detect and localize the fault part of the circuit has to be proposed.Assuming that the TSC goals cannot be higher than 90%, the area overhead can be rapidly decreased, and other methods to cover and localize the fault can be used.On-line testing methods can only detect faults.The localization process must exploit some other methods for off-line testing.However, neither on-line nor off-line tests increase the reliability parameters.The reliability mostly decreases due to the larger area occupied by the TSC circuit than by the original circuit. Therefore we propose a reconfigurable system to increase these parameters.Each block in our design is designed as a TSC, and we have been working on a methodology to satisfy TSC goals for the whole design and to design highly reliable systems.The way to connect all TSC blocks is shown in Fig. 8.The main idea is based on detection of the error code word generated in any block.The detecting process is moved from the primary outputs to the primary inputs of the following circuit.The interconnections of all individual blocks play an important role with respect to the TSC property of the whole circuit.A bad order of the connections between the inner blocks leads to lower fault coverage.Additional logic has to be included into the control arrangement of the implemented blocks with respect to the way the automatic tools handle the interconnection. In our structure we can assume six places where an error can be observable.We assume, for simplicity that an error that occurred in the check bit generator will be observable at the parity nets (number 1) and error occurred in the original circuit will be observable at the primary outputs (number 5). The checker in block N will detect the error if it occurs in net number 1, 2, 4 or 5.If the error occurs in the net number 3 or 6, the error will be detected in the next checker (N+1). All our experiments were applied to combinational circuits only.circuit, because these circuits can be divided into simple combinational parts separated by flip-flops.The finite state machine can be divided into two parts: the first part covers the combinational logic from inputs to flip-flops (with feedback), while the second part covers the combinational logic from flip-flops to outputs (and the parts connected directly from the input to the output). Conclusion The paper describes one part of the automatic design process methodology for a dynamic reconfiguration system.We designed concurrent error detection (CED) circuits based on FPGAs with a possible dynamic reconfiguration of the faulty part.The reliability characteristics can be increased by reconfiguration after the error detection.The most important criterion is the speed of the fault detection and the safety of the whole circuit with respect to the surrounding environment. In summary, FS and ST properties can be satisfied for the whole design, including the checking parts.This is achieved by using more redundancy outputs generated by the special codes. A Hamming-like code can be used as a suitable code to generate check bits.The type depends on the number of outputs and on the complexity of the original circuit [9]. 
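A hypothetical sketch of the chained TSC structure described above is given below: each block forwards its data outputs together with their predicted parity, and the checker at the input of the following block verifies the code word, so an error on the interconnect nets between a block and its own checker is caught by the next checker. The block function and the single even-parity code used here are illustrative simplifications.

```python
# Hypothetical sketch of the chained TSC structure: block N passes (data, parity)
# to block N+1, whose input checker verifies the code word produced by block N.

def parity(bits):
    p = 0
    for b in bits:
        p ^= b
    return p

def block(data):
    """A stand-in combinational block: some function of the data plus its parity bit."""
    out = [data[0] ^ data[1], data[1] & data[2], data[0] | data[2]]
    return out, parity(out)

def checker(data, check_bit):
    """Even-parity checker: True when (data, check_bit) is a valid code word."""
    return parity(data) == check_bit

# Error-free chain of two blocks.
d1, p1 = block([1, 0, 1])
assert checker(d1, p1)                 # checker of block 2 accepts block 1's code word
d2, p2 = block(d1)

# Inject an error on an interconnect net between block 1 and the checker of block 2.
d1_faulty = d1.copy()
d1_faulty[1] ^= 1                      # single bit-flip on one interconnect net
print("error detected by the next checker:", not checker(d1_faulty, p1))
```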
More complex circuits need more check bits. We would like to reduce the duplicated circuit and compute the fault coverage again. We have proposed a new solution of the check bits generator design method. Because we want to increase the reliability characteristics of the circuit implemented in FPGAs, we have to modify the circuits at the netlist level. All of our experiments apply to combinational circuits only. Sequential circuits can be decomposed into simple combinational parts separated by flip-flops. Therefore this restriction to combinational circuits does not reduce the quality of our methods and experimental results. Our future improvements will involve discovering closer relations between real FPGA defects and our fault models. Minimization of the whole TSC design to obtain the lowest area overhead has been under intensive experimentation. We are also working intensively on the appropriate decomposition of the designed circuit. Table 1: Example of parity generator. Table 2: Description of tested benchmarks. Table 3: Hamming code - PLA.
4,628.4
2005-01-06T00:00:00.000
[ "Engineering", "Computer Science" ]
Fast Trajectory Optimization for Gliding Reentry Vehicle Based on Improved Sparrow Search Algorithm In order to solve the problem of low convergence accuracy and easy to fall into local optimization when solving the reentry vehicle trajectory optimization problem for existing algorithms, an improved sparrow search algorithm (OTRSSA) is proposed. Firstly, the basic sparrow search algorithm is improved by the methods of opposition-based learning, adaptive T-distribution and random walk to improve the optimization accuracy and stability, and increase the global search ability, and the performance verification is carried out in 12 benchmark functions; secondly, the 3-DOF motion model of reentry trajectory optimization problem is established and transformed into multi-dimensional function optimization problem; finally, OTRSSA is applied to solve the trajectory optimization problem. The simulation results show that the optimization performance and convergence speed of OTRSSA are better, and a reentry trajectory with the farthest range and satisfying constraints can be obtained quickly. Introduction Reentry vehicle has the advantages of high speed, long range and strong maneuverability [1], reentry flight environment is complex, and high-speed reentry flight is greatly affected by heat flux, overload and dynamic pressure, the problem of trajectory optimization has always been a research hotspot. There are two kinds of methods for reentry trajectory optimization: indirect method and direct method. Based on the classical variational method or Pontryagin minimum principle, the indirect method transforms the optimal control problem into a Hamiltonian two-point boundary value problem, which is sensitive to the initial value and difficult to converge, literature [2][3][4] uses indirect method to solve such problems. The direct method is to discretize and parameterize the variables in the optimal control problem, so as to transform the optimal control problem into a nonlinear programming (NLP) problem, and then solve it with numerical optimization method. In direct method, Gaussian pseudo spectrum method uses Gaussian pseudo Spectrum Approximation parameterization to divide the trajectory into several segments, so that all points on the flight path meet the complex constraints [5][6]. In recent years, various heuristic algorithms have been proposed and applied to reentry trajectory optimization. In reference [7], the decimal ant colony algorithm with local search strategy is used to realize the trajectory optimization design of minimizing the total heat absorption under overload constraints; in reference [8], the reentry trajectory optimization problem was solved by the improved chicken swarm algorithm with fixed angle of attack profile and discrete pitch angle; in reference [9], 2 the artificial bee colony algorithm is used to discretize the angle of attack at Legendre Gaussian collocation points to realize the direct configuration of trajectory optimization. Sparrow search algorithm was proposed by Xue et al. In 2020, compared with other algorithms, it has the advantages of good stability and high convergence accuracy [10]. In reference [11], sparrow search algorithm is applied to the path planning of UAV, and the constrained path under time cooperation is obtained. But the basic sparrow search algorithm also has some problems, such as easy to fall into local optimum, long search time and so on. 
This paper proposes a sparrow search algorithm based on opposition-based learning, adaptive distribution and random walk strategy (OTRSSA), which can increase the search space, jump out of the local optimal solution and quickly search for the global optimal solution. The improved sparrow search algorithm 2.1. Basic sparrow search algorithm SSA algorithm is a new swarm intelligence optimization algorithm inspired by Sparrow's foraging and anti predation behavior. It divides the whole population into discoverer, follower and early warning, and they update their positions according to their respective ways. The location of the discoverers is about 10% 20%  of the population, and the location renewal mode is as follows: Where: t is the number of iterations, max iter is the maximum number of iterations, indicates a safe value, q is a random number that obeys normal distribution, L represents a matrix with size 1 d  and elements 1. In addition to the discoverer, the rest of the sparrows are followers, which are updated according to the following formula: Where: p X indicates the best location for the discoverer, worst X indicates the worst position, A is a vector of 1 and -1 . The early warning persons account for about 10% 20%  of the population, and their location is updated as follows: Where: best X represents the current global optimal position,  is the step control parameter, Opposition-based Learning strategy initialization population Opposition based learning (OBL) was proposed by Tizhuosh in 2005, which has been proved to be an effective method to improve the search ability of the algorithm [12]. Definition 1 Opposite Number: If x is any real number between [ , ] a b , then the opposite number of is: Because the feasible scheme and the scheme based on reverse learning are located on both sides of the search space, the reverse learning strategy can expand the search area and enhance the global search ability. Adaptive t-distribution strategy to update sparrow position T-distribution is also called student distribution. Its curve shape is related to the degree of freedom parameter n. The probability density function is as follows: Where: After the sparrow is updated, the adaptive t -distribution is used to update the sparrow position again. Compared with the sparrow before and after the update, the sparrow before is replaced if it is better. The updating method of adaptive t -distribution is as follows: * ( ) Where: t i x is the position of sparrow after variation; i x is the position of the - is a t -distribution with the number of iterations as the parameter. Random walk strategy perturbs sparrow position The mathematical expression of random walk strategy is as follows: Where: ( ) X t is defined as the set of steps of random walk; cussum represents the cumulative sum; t is the number of steps of random walk (in this paper, we take the maximum number of iterations);   r t is a random function, defined as: Since there is a boundary in the feasible region, it is not possible to update the position directly with equation (8), so it is necessary to normalize it: To sum up, the flow chart of OTRSSA algorithm is shown in figure 1. 
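The three improvement mechanisms combined in this flow, opposition-based initialization, adaptive t-distribution mutation and a random-walk perturbation, can be sketched as follows. Because the displayed formulas did not survive in full, the sketch follows the standard forms of these operators (the opposite point x* = lb + ub - x, the mutation x' = x + x * t(df = iteration), and a min-max-normalized cumulative walk of unit steps) and should be read as an assumption rather than a verbatim reproduction of the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition_init(pop_size, dim, lb, ub, fitness):
    """Opposition-based initialization: generate a random population and its
    opposite population (x* = lb + ub - x), then keep the better half."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    base = lb + (ub - lb) * rng.random((pop_size, dim))
    opposite = lb + ub - base
    both = np.vstack([base, opposite])
    order = np.argsort([fitness(x) for x in both])
    return both[order[:pop_size]]

def t_distribution_mutation(x, iteration):
    """Adaptive t-distribution perturbation: x' = x + x * t(df = iteration).
    Early on (small df) the move is heavy-tailed and explorative; later it
    approaches a Gaussian-like local search."""
    return x + x * rng.standard_t(df=max(iteration, 1), size=x.shape)

def random_walk_perturbation(x, lb, ub, steps=100):
    """Bounded random walk: cumulative sum of +/-1 steps, min-max normalized into
    [lb, ub]; the sparrow is pulled toward the walk endpoint (assumed form)."""
    walk = np.cumsum(2 * (rng.random(steps) > 0.5) - 1)
    walk = (walk - walk.min()) / (walk.max() - walk.min() + 1e-12)
    point = np.asarray(lb, float) + walk[-1] * (np.asarray(ub, float) - np.asarray(lb, float))
    return 0.5 * (x + point)

# Tiny usage example on the sphere function.
sphere = lambda x: float(np.sum(x ** 2))
pop = opposition_init(20, 5, lb=[-10] * 5, ub=[10] * 5, fitness=sphere)
trial = t_distribution_mutation(pop[0], iteration=3)
walked = random_walk_perturbation(pop[0], -10, 10)
print(sphere(pop[0]), sphere(trial), sphere(walked))
```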
Figure 1 summarizes the flow of OTRSSA: the population is initialized with opposition-based learning; in each iteration the fitness values are calculated and sorted and the best individual and its location are recorded; the finder, follower and alerter locations are updated according to Equations (1), (2) and (3); the adaptive t-distribution of Equation (7) and the random walk of Equations (8) and (10) are then applied and the fitness values are recalculated; the loop repeats until t reaches the maximum iteration count. As shown in Table 4, for the high-dimensional unimodal test functions F1-F5, OTRSSA greatly improves the optimization accuracy and stability compared with the other five algorithms. For F1-F4, the mean and standard deviation obtained by OTRSSA improve by more than 25 orders of magnitude; for F5, although the improvement is not as large as for F1-F4, it is still about one order of magnitude. For the high-dimensional multimodal test functions F6-F9, OTRSSA and SSA are far better than the other four algorithms on F7-F9 in both optimization accuracy and stability, which shows that OTRSSA can escape local optima, find the global optimum stably and is strongly robust. Although OTRSSA does not outperform SSA on F6, its optimization accuracy and stability remain very high. For the low-dimensional functions F10-F12, the mean and standard deviation over repeated runs of OTRSSA are not significantly better than those of the other five algorithms, but OTRSSA finds an approximate optimum for each of these functions, which again indicates strong robustness and good stability. To further test the advantage of OTRSSA in iteration speed, F6 and F10 are run with OTRSSA and SSA; the convergence curves are shown in Figures 2 and 3. Table 4 and Figures 2 and 3 show that, although the mean and standard deviation of OTRSSA on F6 and F10 are not as accurate as those of SSA, its convergence is faster and a global approximate optimum is obtained stably. To sum up, compared with the other algorithms, OTRSSA offers higher optimization accuracy and stability, faster iterative convergence and stronger search ability; it can escape local optima and stably find the global optimum or a global approximate optimum. Motion model of reentry process The Earth is treated as a stationary homogeneous sphere and the vehicle's bank angle is zero. In the reentry motion equations, r is the radial distance from the Earth's center to the vehicle, and v, γ, θ and g are the velocity, flight-path angle, range angle and gravitational acceleration, respectively; L and D are the aerodynamic lift and drag accelerations, computed from the dynamic pressure and the lift and drag coefficients. The path constraints limit the dynamic pressure to a maximum value q_max, the overload to n_max and the heating rate to its maximum value, and the angle of attack is bounded between α_min and α_max. Terminal constraints The terminal constraints of the reentry process mainly concern the terminal height and velocity. The optimization variables are specified at discrete time nodes and their values at other times are obtained by linear interpolation; the flight trajectory that satisfies the constraints and optimizes the performance index is obtained by integrating the motion equations for the optimized variables. This is therefore a multi-constraint optimization problem.
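The motion-model equations referenced above are not reproduced in the text; the sketch below integrates the standard zero-bank, three-degree-of-freedom longitudinal reentry equations over a stationary spherical Earth (dr/dt = v sin γ, dθ/dt = v cos γ / r, dv/dt = -D - g sin γ, dγ/dt = L/v - (g/v - v/r) cos γ), which is an assumed form consistent with the variables defined above. The atmosphere model, aerodynamic coefficients and entry state used here are placeholders, not the paper's vehicle data.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986e14              # Earth's gravitational parameter, m^3/s^2
RE = 6.371e6               # Earth radius, m
RHO0, HS = 1.225, 7200.0   # exponential-atmosphere parameters (placeholder)

def lift_drag(r, v, alpha_deg, s_ref=0.48, mass=900.0):
    """Rough lift/drag accelerations under an assumed aerodynamic model."""
    rho = RHO0 * np.exp(-(r - RE) / HS)
    q = 0.5 * rho * v ** 2
    cl = 0.03 * alpha_deg                   # placeholder aerodynamic coefficients
    cd = 0.05 + 0.002 * alpha_deg ** 2
    return q * s_ref * cl / mass, q * s_ref * cd / mass

def reentry_rhs(t, y, alpha_deg):
    """Planar 3-DOF reentry dynamics with zero bank angle (assumed standard form)."""
    r, theta, v, gamma = y
    g = MU / r ** 2
    L, D = lift_drag(r, v, alpha_deg)
    drdt = v * np.sin(gamma)
    dthetadt = v * np.cos(gamma) / r
    dvdt = -D - g * np.sin(gamma)
    dgammadt = L / v - (g / v - v / r) * np.cos(gamma)
    return [drdt, dthetadt, dvdt, dgammadt]

# Integrate from a placeholder entry state until the 20 km terminal altitude
# or the time limit, whichever comes first.
y0 = [RE + 80e3, 0.0, 6500.0, np.radians(-1.0)]
stop = lambda t, y, a: y[0] - (RE + 20e3)
stop.terminal, stop.direction = True, -1
sol = solve_ivp(reentry_rhs, (0, 4000), y0, args=(10.0,), events=stop, max_step=1.0)
print("range angle (deg):", np.degrees(sol.y[1, -1]), "final v (m/s):", sol.y[2, -1])
```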
Simulation calculation and verification The trajectory optimization problem of the reentry vehicle is a multi-constrained optimization problem. Using OTRSSA to solve it means using OTRSSA to optimize the objective function: the position of a sparrow individual represents a group of optimization variables, and an individual's fitness is the value of the objective function, which is composed of the cost function and the constraint function (a minimal sketch of such a penalized objective is given at the end of this section). The discoverers in the sparrow population are the individuals with better current objective values and the followers are those with worse values; after several iterations, the solution returned is the value of the variables corresponding to the farthest range. Simulation parameters The simulation specifies the vehicle's aerodynamic reference area and the path-constraint limits on heating rate (in kW/m²), dynamic pressure (80 kPa) and overload (3 g); the terminal height is 20 km and the terminal speed must be greater than 1000 m/s. Simulation results and analysis Figures 4 and 5 show the range-angle curves obtained with the six algorithms OTRSSA, SSA, WOA, PSO, BA and GWO. Figure 4 shows that the range angle obtained with OTRSSA is the largest, and Figure 5 shows that OTRSSA converges rapidly to the optimal value at the beginning of the iteration and then stabilizes there, which indicates that OTRSSA has strong global optimization ability. (Figure 4: curve of range angle; Figure 5: curve of iterative convergence.) The altitude and speed changes are shown in Figures 6 and 7. Figure 6 shows that the vehicle follows a skipping, wave-like trajectory after reentry and pull-up, with a small jump amplitude; the reentry glide time is about 3000 s and the terminal height is 20 km. Figure 7 shows that the overall velocity of the vehicle decreases during the reentry glide and the terminal speed is slightly higher than 1000 m/s, which meets the design requirements. 1) In this paper, the basic sparrow search algorithm is improved in three respects: opposition-based learning, adaptive t-distribution and random walk, and the improved sparrow search algorithm (OTRSSA) is compared with five other algorithms. The results on 12 test functions show that, compared with the other algorithms, OTRSSA has higher optimization accuracy and stability and faster iterative convergence, and it can effectively escape local optima and quickly find the global optimum. 2) In this paper, OTRSSA is used to solve the trajectory optimization problem. The simulation results show that OTRSSA can produce a reentry trajectory with the farthest range that satisfies the constraints, which gives it good application value for reentry trajectory optimization. 3) There is still room to improve the stability of OTRSSA on multi-dimensional function optimization, and it can be further applied to the cooperative trajectory optimization design of reentry vehicles.
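As referenced earlier in this section, the mapping from a sparrow position to a fitness value can be sketched as a penalized objective: the decision variables are the discretized control values, the objective is the negative range angle, and violated path and terminal constraints are folded in as penalties. The `simulate` stand-in and the penalty weight below are assumptions for illustration; in practice the 3-DOF integration sketched earlier would supply the constraint quantities, and the heating-rate limit (not recoverable from the text) is left as a named parameter.

```python
import numpy as np

# Hypothetical stand-in for a trajectory simulator such as the one sketched above:
# it maps a control profile to (range_angle, q_max, n_max, qdot_max, h_final, v_final).
def simulate(alpha_nodes):
    a = np.asarray(alpha_nodes, float)
    range_angle = 0.6 + 0.02 * np.sum(np.cos(np.radians(a)))      # dummy model
    return range_angle, 60e3, 2.5, 900.0, 20e3 + 50.0 * a.mean(), 1050.0

# Constraint limits from the problem statement; the heating-rate limit is not
# recoverable from the text, so it is left as an optional named parameter.
LIMITS = dict(q_max=80e3, n_max=3.0, qdot_max=None, h_terminal=20e3, v_terminal_min=1000.0)

def fitness(alpha_nodes, penalty=1e3):
    """Objective for OTRSSA: minimize the negative range angle plus penalties for
    violated path and terminal constraints (penalty-function formulation)."""
    rng_angle, q, n, qdot, h_f, v_f = simulate(alpha_nodes)
    viol = 0.0
    viol += max(0.0, q - LIMITS["q_max"]) / LIMITS["q_max"]
    viol += max(0.0, n - LIMITS["n_max"]) / LIMITS["n_max"]
    if LIMITS["qdot_max"] is not None:
        viol += max(0.0, qdot - LIMITS["qdot_max"]) / LIMITS["qdot_max"]
    viol += abs(h_f - LIMITS["h_terminal"]) / LIMITS["h_terminal"]
    viol += max(0.0, LIMITS["v_terminal_min"] - v_f) / LIMITS["v_terminal_min"]
    return -rng_angle + penalty * viol

# One sparrow position = one control profile at the interpolation nodes.
candidate = [15.0, 12.0, 10.0, 9.0, 8.0]
print(fitness(candidate))
```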
2,949.8
2021-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
A Method of Waypoint Selection in Aerial Images for Vision Navigation , Introduction In recent years, the research on vision navigation based on scene matching technique has attracted more and more attentions for its accuracy and independence [1].In order to assure the matching precision, it is a primary step to select waypoints from aerial images of candidate flying regions for scene matching, which can be implemented via suitability analysis [2][3][4]. Some researchers have made efforts to solve the problem of suitability analysis.Yang et al. [2] used gray-based descriptors and edge-based descriptors as the input of SVM to classify the matching suitability of the image.Jiang and Chen [5] provided a hierarchical way of selecting optimal scene matching areas.Liu et al. [6] presented the method of selecting matching areas using independent pixel numbers and variance.In [7] it was suggested that information entropy and summation of image gradient can be used for evaluating image suitability.However, features used in the above methods are inadequate to describe the suitability and there is redundancy information among the feature descriptors.Research has shown that the suitability is considered to consist of information, obviousness, stability, and uniqueness [1].From the view of visual analysis, the information and obviousness can be analyzed via visual saliency.Visual saliency is a cognitive procedure which can rapidly select a small set of highly informative or visually salient objects from a scene for further processing [8].The stability and uniqueness are the feature attributes of an image.So we can transform the suitability analysis to the combination of visual saliency analysis and feature attributes classification. For visual saliency analysis, in order to avoid the uncertainty influence of statistic features, Li et al. [9] proposed the visual saliency analysis method based on length of incremental coding, which is based on sparse coding [10,11] and centersurround model (C-S); Han et al. [12] proposed saliency analysis method based on weighted length of incremental coding.They are all efficient approaches to select regions with local saliency, but for matching suitability the saliency should be with unique structure information, which means it is sparse in the global image.So low-rank recovery is introduced to analyze the global and local saliency with sparse coding for preparatory selection.For feature attributes classification, SVM is used to analyze stability and uniqueness for optimizing selection.So this paper presents a practical framework for waypoint selection, as illustrated in Figure 1. The proposed framework consists of two major components: a preparatory selection model based on visual saliency analysis and an optimal selection model based on SVM.In the formal model, the initial image is decomposed as the sum of sparse matrix and low-rank matrix, and then saliency of sparse matrix and low-rank matrix is analyzed, respectively, to construct a new saliency map.The preparatory selection results are got with threshold constraint and nonmaxima suppression.In the second component, SVM is used for optimizing selection, and the input vector is composed of four measure parameters based on the edge and cross correlation surface. 
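The decomposition of the image feature matrix into a low-rank background part and a sparse salient part is usually computed by relaxing the nonconvex objective in (1) to a nuclear-norm plus l1-norm problem and solving it with an inexact augmented Lagrange multiplier (ALM) scheme. The routine below is a sketch of that common relaxation, not necessarily the solver used in the paper.

```python
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into a low-rank part L and a sparse part S by relaxing
    rank() and the l0 norm to the nuclear norm and the l1 norm (inexact ALM;
    a common solver for this relaxation, assumed rather than taken from the paper)."""
    D = np.asarray(D, float)
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.abs(D).max() / lam)      # dual variable initialization
    mu, rho = 1.25 / norm2, 1.5
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Singular-value thresholding step for the low-rank component.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft-thresholding (shrinkage) step for the sparse component.
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = D - L - S
        Y = Y + mu * Z
        mu *= rho
        if np.linalg.norm(Z, "fro") <= tol * np.linalg.norm(D, "fro"):
            break
    return L, S

# Toy usage: a rank-1 "background" plus a few salient "foreground" entries.
rng = np.random.default_rng(1)
background = np.outer(rng.random(40), rng.random(40))
sparse = np.zeros((40, 40)); sparse[5, 7] = 3.0; sparse[20, 30] = -2.0
L, S = rpca_ialm(background + sparse)
print(np.linalg.matrix_rank(L, tol=1e-3), np.count_nonzero(np.abs(S) > 0.5))
```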
The rest of the paper is organized as follows.Section 2 describes the preparatory selection model based on visual saliency analysis.Section 3 presents optimal selection of waypoints model based on feature attribute classification.Section 4 reports and analyzes evaluation results.Finally, conclusions are drawn in Section 5. The Preparatory Selection Based on Visual Saliency Analysis Salient objects can be viewed as a small number of foregrounds, which are different from the surrounding backgrounds.So low-rank matrix recovery is introduced to separate foreground from background [13].The low-rank matrix and the sparse matrix are recovered by optimizing the constraint, min , (rank () + ‖‖ 0 ) , s.t. = + , where rank(⋅) represents matrix rank, ‖ ⋅ ‖ 0 represents 0norm, and is the weight parameter which is used to weight the sparse relationship between the rank of and the sparsity of .Given a suitable , we want to be able to get a pair of (, ).However, it is a nonconvex problem for the existence of rank(⋅) and ‖ ⋅ ‖ 0 .Usually, (1) (c). There are two kinds of saliency in aerial images.One is with its unique information of scene structure (shown as green dotted line in Figure 2(c)) and the other is some small man-made areas with high brightness where there is no structure information (shown as red dotted line in Figure 2(c)) . We can see that the second saliency is the same as its surrounding objects in the image whereas the first saliency in both and is different from its surrounding objects.So we can separate the two kinds of saliency by local saliency analysis of and , respectively, based on centersurround model (C-S). Here sparse coding is introduced to describe the local saliency with C-S model.Sparse coding codes the center patch over a dictionary constructed by the surrounding patches.If the center patch is similar to its surroundings, it has sparse coefficients.Denote by the th patch of , and ( ) is dictionary consisting of surrounding patches, which is represented by a set of vectors ( ) = { 1 , 2 , . . ., } as a dictionary.Consider ∩ = 0.So the problem of saliency based on sparse coding is shown as follows: where is the balance factor between sparsity and data integrity.The 1 -norm optimization problem can be solved efficiently by Lasso method [11].The local saliency of image patch is obtained as (4).The process is shown as in Figure 3. Consider All patches of the image can be calculated, so we can get the saliency map Sal(). We calculated the local saliency in both and based on sparse coding.And then the new saliency could be represented by the following function: A certain threshold is used to judge possible waypoints, and the rule is For the region () = 1, the nonmaxima suppression is used to get peaks in Sal(), which are the centers of possible waypoints. Optimal Selection Based on Feature Attribute Classification It is a problem to judge whether there are stability and uniqueness of preparatory results.From the viewpoint of pattern recognition, it can be solved by two-class classification.So, SVM is introduced. 
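The center-surround sparse-coding step can be sketched as follows: each center patch is coded over a dictionary formed by its surrounding patches with an l1-penalized least-squares (Lasso) fit, and its local saliency is scored by the resulting coding cost. The use of scikit-learn's Lasso, the patch size, the balance factor and the exact saliency score (the form of (4) is not fully recoverable from the text) are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def patch_saliency(center, surround_patches, lam=0.01):
    """Code the center patch over a dictionary of its surrounding patches and
    score saliency by the residual energy plus the l1 penalty of the coefficients
    (assumed form): a patch its surroundings reconstruct sparsely scores low."""
    D = np.column_stack([p.ravel() for p in surround_patches]).astype(float)
    y = center.ravel().astype(float)
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    model.fit(D, y)
    residual = y - D @ model.coef_
    return 0.5 * np.sum(residual ** 2) + lam * np.sum(np.abs(model.coef_))

def saliency_map(image, patch=8):
    """Slide a non-overlapping grid over the image; for each cell, use the other
    cells of its 3x3 neighborhood as the surround dictionary."""
    h, w = image.shape
    rows, cols = h // patch, w // patch
    cells = [[image[r*patch:(r+1)*patch, c*patch:(c+1)*patch] for c in range(cols)]
             for r in range(rows)]
    sal = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            surround = [cells[rr][cc] for rr in range(max(0, r-1), min(rows, r+2))
                        for cc in range(max(0, c-1), min(cols, c+2)) if (rr, cc) != (r, c)]
            sal[r, c] = patch_saliency(cells[r][c], surround)
    return sal / (sal.max() + 1e-12)

# Toy usage: a flat image with one bright structured block stands out.
img = np.full((64, 64), 0.2)
img[24:32, 24:32] = np.linspace(0, 1, 64).reshape(8, 8)
print(np.unravel_index(np.argmax(saliency_map(img)), (8, 8)))
```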
where ‖ ⋅ ‖ is 2 -norm of a vector, so ( 7) is equivalent to minimize (1/2)‖‖ 2 with the same constraint.The decision function is International Journal of Optics When the original set will not be linearly separable, it is common to define a soft margin by including variables and parameter > 0 The original sample can be mapped into a highdimensional space (named feature space) by nonlinear transform Φ : → and is expressed as the dual form = ∑ =1 .So classification output can be predicted using the decision function, as where ( , ) is the kernel function.There are three forms of kernel function: radial basis function and the linear and polynomial kernel.Preliminary researches suggest that the radial basis function outperforms the others.So the radial basis function kernel in the following equation will be used in the classification: 3.2.Feature Selection.Suitable descriptors as the input vector of SVM can optimize computational efficiency and gain the better classification results.Here measure parameters based on the edge and cross correlation surface are considered for stability and uniqueness analysis. Stability. The stability is an important feature attribute of an aerial image, which is suitable for scene matching.So we need to select waypoints with stable features.Edge complexity and edge density are selected to be measure parameters of stability. (1) Edge Complexity.Edge complexity is a homogeneity parameter of edge texture distribution.When it is smaller, the image is more smoother, which will lead to mismatching more easily.Edge complexity is calculated by where Γ(, ) is a local neighborhood with the center (, ) and (, ) and (, ) are 1-dimensional derivative and 2-dimensional derivative, respectively. (2) Edge Density.Edge density can show the concentration of features in the original image.It is computed by where Edge Pixel (, ) is the number of points in the neighborhood with the center (, ).(, ) is the number of pixels in the neighborhood. Uniqueness. The global uniqueness of waypoints is analyzed to avoid the selection of repeated scenery areas.The uniqueness is determined by cross correlation plane statistic feature.Cross correlation plane is computed pixel by pixel in the whole image via matching waypoint image . We use two features of cross correlation, Submaxratio and Ngb8maxratio: where is the mean of and is the mean of an area with the same size × as in . (1) Submaxratio.Submaxratio denotes ratio of secondary maximum peak to maximum peak, which is computed using where sub is secondary high correlation peak and max is maximal correlation peak.It means the waypoint has better uniqueness when the value of SMR is closer to zero. (2) Ngb8maxratio.Ngb8maxratio represents ratio of maximum of eight neighbor peaks to maximum peak, which is computed using where ngb is maximum of eight neighbor peaks and max is maximal correlation peak. Training and Classifying. Select 100 sample images as a training set and label each image manually.50 images are waypoints, and the other 50 images are nonwaypoints, which are shown as in Figure 4. 
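The two stability descriptors can be approximated as below. Since the derivative-based definitions are only partly legible in the text, edge complexity is taken here as the mean ratio of second- to first-derivative magnitude and edge density as the fraction of edge pixels in the neighborhood; both should be read as assumed forms of the measures described above.

```python
import numpy as np

def edge_complexity(patch):
    """Rough stand-in for the edge-complexity measure: average ratio of second- to
    first-derivative magnitude over the neighborhood (assumed form)."""
    d1r, d1c = np.gradient(patch.astype(float))
    g1 = np.hypot(d1r, d1c)
    d2r, _ = np.gradient(d1r)
    _, d2c = np.gradient(d1c)
    g2 = np.abs(d2r) + np.abs(d2c)
    return float(np.mean(g2 / (g1 + 1e-6)))

def edge_density(patch, threshold=0.1):
    """Number of edge points divided by the number of pixels in the neighborhood
    (edges taken here as gradient magnitude above a threshold)."""
    d1r, d1c = np.gradient(patch.astype(float))
    edge_pixels = np.hypot(d1r, d1c) > threshold
    return float(edge_pixels.mean())

# Toy usage: a structured patch scores higher edge density than a smooth one.
yy, xx = np.mgrid[0:51, 0:51]
structured = ((xx // 8 + yy // 8) % 2).astype(float)   # checkerboard-like texture
smooth = np.full((51, 51), 0.5)
print(edge_density(structured), edge_density(smooth))
```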
Each image is described by the measure parameter vector based on the edges and the cross-correlation surface, so the vector In = [EC, ED, SMR, NMR] is used as the input of the SVM. The feature vectors are normalized before training. The best parameters C = 6.8 and γ = 0.0769 for the SVM classifier using (11) are obtained via training. In the testing stage, each preparatory result is described by its vector In, normalized, and put into the SVM to be classified as a suitable or unsuitable waypoint.

Experiments

To the best of our knowledge, no automatic waypoint selection method has been reported so far, so there is no public dataset for validation. To evaluate the performance of our method, experiments on aerial images from Google Earth are conducted. Each image pair shows the same scene taken at different times, as shown in Figure 5. One image is used as the reference image for waypoint selection, and the other is used as the sensed image to verify the suitability of the selected waypoints. For simplicity, the size of the reference and sensed images is set to 223 × 223 and the size of the waypoint image is set to 51 × 51 in all experiments.

There are two kinds of reference images. One is called a Class 1 reference, part of which contains unique information of scene structure and can be selected as waypoints. The other is called a Class 2 reference, which contains no unique information of scene structure at all, as shown in Figure 6. There are 150 references of each kind.

Preparatory Selection. A few waypoints can be selected from each reference image, so the number of waypoints is larger than the number of references. Some results on Class 1 are shown in Figure 7. In order to reduce the complexity of analysis, we focus on the areas with unique information of scene structure. Our method extracts the areas with salient structure information and effectively inhibits the disturbance of high-brightness areas; for example, in the fourth-row results, only the traffic intersections are extracted because of their structure information. Therefore, the number of waypoint candidates is smaller than with the former methods. When the reference images from Class 2 were analyzed in the preparatory selection, there were still saliency regions in the images, as shown in Figure 8. We note that the methods are ineffective in analyzing the saliency of the Class 2 references because of the normalization of the saliency coefficients. The results obtained on Class 2 are not suitable as waypoints, so we need to classify the results from Class 1 and Class 2 with the SVM.

Optimal Selection. The SVM is used to optimally select waypoints by classifying their feature attributes. To evaluate the classification results, cross-correlation matching is used for verification (a matching is considered correct when the matching error is smaller than 5 pixels). The results are shown in Table 2.

There are two kinds of mistakes in the classification process. One is called "undetected", which means a waypoint is classified as a non-waypoint. The other is called "false detected", which means a non-waypoint is classified as a waypoint. The former error is tolerable, but the latter is fatal for vision navigation, so it should be avoided or reduced as much as possible. The classification rates are shown in Table 3.
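A minimal sketch of the training and classification workflow described above, using scikit-learn's RBF-kernel SVC, is given below. Mapping the reported values (6.8 and 0.0769) onto scikit-learn's C and gamma parameters, the min-max normalization, and the random stand-in data are assumptions made for illustration only.

```python
# Sketch of the optimal-selection stage: train an RBF-kernel SVM on the
# normalized 4-D feature vectors In = [EC, ED, SMR, NMR] and classify
# preparatory candidates.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler

def train_waypoint_classifier(features, labels):
    """features: (n_samples, 4) array of [EC, ED, SMR, NMR];
    labels: +1 for waypoint, -1 for non-waypoint."""
    scaler = MinMaxScaler().fit(features)          # normalize each feature to [0, 1]
    clf = SVC(kernel='rbf', C=6.8, gamma=0.0769)   # reported values, mapping assumed
    clf.fit(scaler.transform(features), labels)
    return scaler, clf

def classify_candidates(scaler, clf, candidate_features):
    """Return +1 (suitable waypoint) or -1 (unsuitable) for each candidate."""
    return clf.predict(scaler.transform(candidate_features))

# Example with random stand-in data (100 labelled samples, as in the described setup)
rng = np.random.default_rng(0)
X = rng.random((100, 4))
y = np.concatenate([np.ones(50), -np.ones(50)])
scaler, clf = train_waypoint_classifier(X, y)
print(classify_candidates(scaler, clf, rng.random((5, 4))))
```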
From Table 3, we know that although there are many waypoints in the images of Class 1, false detections still exist because the scenes change with time. Undetected errors are produced by the difference between training samples and testing samples. There are no undetected mistakes in Class 2, because there is no waypoint in a Class 2 reference image.

For comparison, the algorithms of [2,5] are also used for waypoint selection. A random sampling investigation was carried out on the Class 1 and Class 2 sets with the same quantity of waypoints as in Table 1; the sampling was repeated 1000 times and the results are shown in Table 4. We can see that, in the analysis of Class 1, our method is better than the other two methods, and, in the analysis of Class 2, our method is better than [2] and almost the same as [5]. This is because the threshold in [5] is set manually.

Conclusions

A method of waypoint selection was proposed in this paper, which first selects salient areas as candidate waypoints and then classifies the candidates based on their feature attributes. The method combines visual saliency analysis and feature attribute classification, and in particular avoids the interference of small man-made areas with high brightness where there is no structure information.

The sensed image and the reference image are both taken from Google Earth, so the suitability analysis depends only on the original reference image itself and does not consider the matching conditions under geometrical transformations such as image scaling and rotation. In the next stage, we plan to extend this work along the following directions. Firstly, the matching conditions will be considered in the suitability analysis, and more aerial images under real flying conditions will be used to test the validity of the approach. Secondly, we will incorporate more powerful features or improve the saliency analysis model and the classification model to improve the effectiveness of the method.

Figure 7: Results of waypoints in Class 1 (the first row shows two reference images; the second row shows the results of Li et al. [9]; the third row shows those of Han et al. [12]; the fourth row shows ours).

Table 1: Results of the preparatory selection. Table 2: Results of the optimal selection by SVM. Table 4: Comparison of results.
3,539.2
2014-10-08T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Decachlorocyclopentasilanes coordinated by pairs of chloride anions, with different cations, but the same solvent molecules The planar decachlorocyclopentasilane rings in the title compounds are coordinated by two chloride ions to generate inverse-sandwich complexes. We have determined the crystal structures of two decachlorocyclopentasilanes, namely bis(tetra-n-butylammonium) dichloride decachlorocyclopentasilane dichloromethane disolvate, 2C 16 H 36 N + Á2Cl À ÁSi 5 Cl 10 Á2CH 2 Cl 2 , (I), and bis(tetraethylammonium) dichloride decachlorocyclopentasilane dichloromethane disolvate, 2C 8 H 20 N + Á2Cl À ÁSi 5 Cl 10 Á2CH 2 Cl 2 , (II), both of which crystallize with discrete cations, anions, and solvent molecules. In (I), the complete decachlorocyclopentasilane ring is generated by a crystallographic twofold rotation axis. In (II), one cation is located on a general position and the other two are disordered about centres of inversion. These are the first structures featuring the structural motif of a five-membered cyclopentasilane ring coordinated from both sides by a chloride ion. The extended structures of (I) and (II) feature numerous C-HÁ Á ÁCl interactions. In (II), the N atoms are located on centres of inversion and as a result, the ethylene chains are disordered over equally occupied orientations. Chemical context The title compounds are the first known halide diadducts of the long-known perchlorinated cyclopentasilane Si 5 Cl 10 (Hengge & Kovar, 1977). Their structures can be seen as inverse-sandwich complexes, in which two chloride ions lie above and below the planar five-membered silicon ring. Supramolecular features The components of (I) and (II) are linked by a plethora of C-HÁ Á ÁCl contacts (Tables 1 and 2, respectively); in particular the chloride ions are surrounded by C-H groups. For an example, see Fig. 3. As a result of the disorder of the N2 and N3 cations in (II), a plot showing the coordination of the Cl ions looks extremely crowded and is therefore omitted. Database survey The present structures are the first examples of a decachlorocyclopentasilane ring coordinated by two anions. There are only two structures of a decachlorocyclopentasilane ring in the CSD (Version 5.38 of November 2016 plus three updates; Groom et al., 2016), namely decachlorocyclopentasilane 4methylbenzonitrile solvate (refcode ELAFON; Dai et al., 2010) and decachlorocyclopentasilane acetonitrile solvate (ELAFIH; Dai et al., 2010). In both of them, the decachlorocyclopentasilane ring is almost planar (0.017 Å for ELAFON and 0.001 Å for ELAFIH) and shows almost no variation in the Si-Si (2.358-2.368 Å for ELAFON and 2.342-2.349 Å for ELAFIH) and Si-Cl (2.030-2.059 Å for ELAFON and 2.034-2.038 Å for ELAFIH) bond lengths. The distance of the N atom to the centroid of the ring is 2.152 and 2.196 Å for ELAFON and 2.234 Å for ELAFIH. This difference could be due to the steric demand of the benzene ring in ELAFIH. The NÁ Á ÁCg distances are in the same range as the ClÁ Á ÁCg distances in (I) and (II). Figure 3 Perspective view of (I) showing the environment of the Cl anion. The contact to the centre of the five-membered ring is drawn as an open dashed bond. HÁ Á ÁCl contacts less than 3.5 Å are drawn as dashed lines. lengths do not vary significantly between the five and sixmembered Si rings, but the ClÁ Á ÁCg distance in the dodecachlorocyclohexasilanes is significantly shorter than for decachlorocyclopentasilane. 
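The geometric quantities quoted in this survey (ring planarity and anion-to-ring-centroid distances) can be reproduced from atomic coordinates with a few lines of linear algebra. The sketch below is illustrative only: the coordinates are placeholders, and real CIF data would first have to be converted from fractional to Cartesian coordinates using the unit-cell matrix.

```python
# How the geometric quantities quoted above (ring planarity, Cl...Cg distance)
# can be computed from Cartesian atomic coordinates. The coordinates below are
# placeholders, not values from the reported structures.
import numpy as np

def ring_planarity_and_centroid(ring_xyz):
    """Return (r.m.s. deviation from the least-squares plane, centroid) of a ring."""
    centroid = ring_xyz.mean(axis=0)
    centered = ring_xyz - centroid
    # the plane normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    deviations = centered @ normal
    return np.sqrt(np.mean(deviations**2)), centroid

def anion_to_centroid_distance(anion_xyz, centroid):
    return np.linalg.norm(anion_xyz - centroid)

# Placeholder five-membered Si ring (roughly planar, ~2.35 A Si-Si bonds)
si_ring = np.array([[2.00*np.cos(2*np.pi*k/5), 2.00*np.sin(2*np.pi*k/5), 0.01*(-1)**k]
                    for k in range(5)])
cl_anion = np.array([0.0, 0.0, 3.1])

rms, cg = ring_planarity_and_centroid(si_ring)
print(f"planarity rms = {rms:.3f} A, Cl...Cg = {anion_to_centroid_distance(cl_anion, cg):.3f} A")
```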
The shorter Cl⋯Cg distance in the six-membered rings might be due to the fact that the Cl ligands form a narrower cone in five- compared to six-membered rings.

Refinement details

Crystal data, data collection, and structure refinement details are summarized in Table 4. H atoms were refined using a riding model, with C(methyl)-H = 0.98 Å or C(methylene)-H = 0.99 Å and with U_iso(H) = 1.5U_eq(C_methyl) or 1.2U_eq(C). The Cl atoms of the dichloromethane solvent molecule in (I) have rather large displacement ellipsoids, but since no valid disorder model for splitting this molecule could be found, refinement with enlarged ADPs was preferred. In (II), atoms N2 and N3 are located on centres of inversion; as a result, the ethylene chains are disordered over equally occupied orientations. The following programs were used: SHELX (Sheldrick, 2008), SHELXL2014 (Sheldrick, 2015) and publCIF (Westrip, 2010).

Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å²) are tabulated for both structures (x, y, z, U_iso*/U_eq). The weighting scheme used P = (F_o² + 2F_c²)/3, with (Δ/σ)_max = 0.001, Δρ_max = 0.87 e Å⁻³ and Δρ_min = −0.84 e Å⁻³.

Special details. Geometry. All e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes.
1,142.4
2017-11-21T00:00:00.000
[ "Chemistry" ]
Performance analysis of peak tracking techniques for fiber Bragg grating interrogation systems In this paper, we propose a spectral correlation-based technique for tracking the wavelength shift of a fiber Bragg grating. We compared this approach, by means of a Monte Carlo numerical simulation, to the typical peak tracking techniques applied in classic interrogation systems. As result, we obtained a considerable gain in terms of noise tolerance (about 20 dB), which can be further incremented by selecting large-bandwidth gratings. This permits to increase the power budget of a fiber Bragg grating interrogator without changing the optical layout, overcoming classical limitations of commercial and custom systems. Penalties due to the non-idealities have been evaluated through the same Monte Carlo approach. Finally, we discuss a practical application of the peak tracking techniques to a fiber Bragg grating-based weight sensor, in which we applied the spectral correlation to track both the Bragg wavelength position, spectral deformations due to high strain, and spectral non-linearity. I. INTRODUCTION Fiber Bragg gratings (FBGs) are receiving a considerable interests in optical sensors [1]- [3]: besides owning the advantageous properties of optical technology, the employment of FBGs yields additional benefits such as in-fiber integration, wavelength-encoded operation, possibility to create multiplexed sensor networks (the so-called "smart structures") and predictable reaction to temperature variation. Several interrogation techniques have been documented in literature, using a wide number of principles of operation, such as spectral reconstruction through scanning source [4] or filter [5], matched-filter demodulation [6]- [7], interferometric techniques [8]- [9], birefringence [10]- [11]. Within the last few years, many commercial interrogators have converged to similar optical and processing architectures, most of them based on the straightforward and reliable principle of operation of spectral reconstruction; interrogators measure the wavelength shift of a FBG array simply by scanning a spectral bandwidth with a tunable laser, a tunable filter, or a spectrometer. In this case, a technique for evaluating the peak wavelength from the measured spectrum has to be implemented; typical interrogators define the peak wavelength either as the wavelength correspondent to the spectral Performance analysis of peak tracking techniques for fiber Bragg grating interrogation systems evaluates the FBG wavelength shift as the maximum of the correlation. We performed a Monte Carlo statistical simulation in order to compare the behavior of all peak tracking techniques, using the theoretical expressions of the FBG spectra and reproducing a typical FBG interrogation system, with the purpose of finding the minimum signal-to-noise ratio (SNR) that assesses a correct detection. As result from our simulation, we demonstrate that a FBG interrogator based on spectral reconstruction is capable of gaining a power budget of about 20 dB only by replacing the peak tracking algorithm, and this margin can be even increased by proper choice of the FBGs. Also, the Monte Carlo technique return quantitative results for FBG design, whereas the optimization of the FBG profile can lead to superior performance in peak tracking. Finally, peak tracking techniques have been compared in an applicative context of FBG sensor, validating the proposed concept. II. 
THEORY The mode-matching technique [13] provides a closed-form expression for the FBG reflectivity that permits to generate grating spectra from the parameters of the refractive index modulation. Hence, the reflectivity of a uniform FBG can be expressed as: where R(λ) is the reflectivity, L is the grating length, κ is a parameter that expresses the reflectance per unit length and σˆ is a self-coupling coefficient; the parameter κL expresses the strength of the grating. Figure 1 shows typical FBG spectra at 1550 nm for different values of κL: as the grating strength increases, the maximum reflectivity R max = tanh 2 (κL) grows and, at the same time, the spectrum enlarges and side lobes become more effective. When an axial strain ε is applied to a FBG or its temperature is changed by the quantity ΔT, the resulting spectrum is modified. If the perturbation is low, the Bragg wavelength shift Δλ B of the FBG can be approximated with a linear relationship [14][15]: where k ε and k T are the strain-and thermo-optic coefficients; typical values are k ε = 1 pm/με and k T R λ,ε,ΔT III. PEAK TRACKING TECHNIQUES The typical structure of a FBG static interrogator is based on a spectral reconstruction technique. A tunable laser source, or, alternatively, a broadband source followed by a tunable optical filter, performs a continuous spectral sweep over the wavelength range λ 1 ,…, λ N , with sampling resolution δλ. The source is coupled to the FBG array, and the backreflected power is received with a photodetector, synchronized to the spectral sweep. Hence, the whole FBG spectrum over the interrogator bandwidth is reconstructed. With this arrangement, a technique for extracting the peak position, and consequently the spectral shift, of each FBG has to be defined [16][17][18]. The typical tracking method is to estimate the peak wavelength as the maximum of the spectral response or, more commonly, to define the FBG pitch as the middle-point of the X-dB bandwidth. A possible alternative approach is to estimate the peak position as the centroid of the FBG spectrum S(λ): The alternative technique proposed in this paper, and illustrated in Fig. 2, is based on the exploitation of the linear strain/temperature-spectrum relationship. The FBG spectrum is measured, with the same setup, at the initial time, obtaining the reference spectrum S ref (λ). Then, at each instant the measured spectrum S m (λ) is compared to the reference one, and the wavelength shift Δλ s is estimated as the maximum of the correlation between the reference spectrum and the shifted replica of the measured one [16]: IV. SIMULATION A. Outline The core of the paper relies on the proposed benchmark for peak tracking evaluation, based on Monte Carlo simulation [19]. The Monte Carlo approach allows simulating each peak tracking technique in presence of arbitrary SNR, and with any FBG profile, while traditional simulation techniques operate only with a single FBG profile [16][17]. Thus, the use of Monte Carlo returns not only a qualitative estimate of the superior performance of correlation-based tracking, but also crafts specific FBG design considerations for maximizing interrogation performance. Thus, the purpose of our simulation is to evaluate the noise tolerance of each peak tracking technique, assuming the typical parameters of a FBG interrogator (either based on scanning filter or spectrometer) and the theoretical behavior of grating spectra. 
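A compact sketch of the peak-tracking estimators described above is given below, applied to a sampled spectrum S on a uniform wavelength grid lam. The interpolation-free integer-sample correlation search, the circular shift, and the search-range parameter are simplifications introduced for this example and are not the authors' implementation.

```python
# Sketch of the peak-tracking estimators compared in this work.
import numpy as np

def peak_max(lam, S):
    """Peak = wavelength of the spectral maximum."""
    return lam[np.argmax(S)]

def peak_xdb_midpoint(lam, S, x_db=3.0):
    """Peak = midpoint of the band where S stays within x_db (power dB) of its maximum."""
    mask = S >= S.max() * 10 ** (-x_db / 10.0)
    return 0.5 * (lam[mask][0] + lam[mask][-1])

def peak_centroid(lam, S):
    """Peak = intensity-weighted centroid of the spectrum."""
    return np.sum(lam * S) / np.sum(S)

def shift_by_correlation(lam, S_ref, S_meas, max_shift=0.5e-9):
    """Wavelength shift (same units as lam) = argmax of the correlation between
    the reference spectrum and shifted replicas of the measured one
    (integer-sample search; circular shift, so edge effects are ignored)."""
    dlam = lam[1] - lam[0]
    max_k = int(round(max_shift / dlam))
    shifts = np.arange(-max_k, max_k + 1)
    corr = [np.sum(S_ref * np.roll(S_meas, -k)) for k in shifts]
    return shifts[int(np.argmax(corr))] * dlam
```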
We apply a statistical simulation derived from the Monte Carlo technique: we generate several instances of additive noise, multiplied for different values of SNR, and then evaluate the error on the peak estimation. The goal is to determine, for each value of the grating strength κL, the threshold SNR, defined as: where P(|Δλ err |<Δλ TH ) is the probability of correct detection, i.e. the error Δλ err between the FBG wavelength λ P and the estimated peak λ Pe is lower than the acceptable error Δλ TH , and P0 is the target probability. We define the SNR as: where σ n 2 is the noise variance; this assumption permits to evaluate the SNR related to the FBG reflectivity. As known from the theory of statistical simulations, the Monte Carlo approach does not assess that the error is limited, unless the noise probability density is limited, but gives information only on the probability of correctness of detection. The operative approach of the simulation is schematized in Figure 3. As first step, the reflection spectrum of a FBG with peak wavelength λ P = 1550 nm and a fixed value of κL is generated, according to (1); the spectrum is evaluated over the wavelength range 1547-1553 nm, with step δλ = 1 pm. The sampling grid is consistent with the typical commercial FBG interrogator [20][21]. Then, for each Monte Carlo iteration, a noise instance with unitary variance and a fixed probability density function is generated, normalized to the desired SNR value and summed to the FBG spectrum. Since we assume, from the linear model described by (3), that the spectrum shifts without distortion, there is no need to include a wavelength shift in the simulation. Then, the FBG peak is estimated with all the previously described techniques, obtaining the error Δλ err ; we set the error threshold for correct peak detection Δλ TH as 10pm, which is close to the typical accuracy of a FBG interrogator. The simulation is repeated M =1000 times, setting the target probability of correct detection P0 = 96% and changing the SNR, in order to evaluate SNR TH for each value of κL. Since the peak tracking performances depend on the noise distribution, we performed the Monte Carlo simulation with both gaussian and uniformly distributed noise. These almost uncorrelated noise profiles tend to advantage the spectral correlation technique; however, realistic systems include a whitening filter that removes the residual correlation bringing the system back to an almost uncorrelated noise shape. B. Results and discussion The results of Montecarlo simulation, obtained with additive white gaussian noise, are reported in is the best case of the -3dB BW curve, the SNR required by the spectral correlation techniques is 18.2 dB lower. For κL>2.5, SNR TH is lower than 0 dB, which means that the system is extremely noisetolerant; for κL=12, the target SNR for the spectral correlation is -5.3 dB, with a difference of 34.0 dB -34.9 dB with respect to -3 dB BW and centroid curves, respectively, at the same κL. As expected, the spectral enlargement occurring when κL increases permits a better alignment of the spectra. The case κL=0.1 (R max = 1%), reported for completeness, is not of practical interest. The variation of the noise probability density from gaussian to uniform distribution does not modify the behavior of the SNR TH (κL) curves, as shown in Figure 5, although performances are slightly improved. 
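The simulation procedure outlined above can be summarized in code as follows. The closed-form uniform-grating reflectivity below approximates the self-coupling coefficient by the pure detuning term, the SNR is taken here as the squared peak reflectivity over the noise variance, and the coarse 1 dB scan with a reduced number of trials replaces the paper's exact search strategy; all of these are assumptions made for illustration.

```python
# Monte Carlo estimate of the threshold SNR for one peak-tracking estimator:
# fixed FBG spectrum, additive Gaussian noise scaled to a target SNR,
# detection declared correct when |error| < 10 pm.
import numpy as np

def uniform_fbg_reflectivity(lam, lam_b=1550e-9, kappa_L=2.0, L=1e-2, n_eff=1.45):
    """Closed-form reflectivity of a uniform grating; the self-coupling
    coefficient is approximated here by the pure detuning term."""
    detune = 2 * np.pi * n_eff * (1.0 / lam - 1.0 / lam_b)
    kappa = kappa_L / L
    gamma = np.sqrt(kappa**2 - detune**2 + 0j)
    gamma = np.where(gamma == 0, 1e-9, gamma)          # avoid 0/0 at |detune| = kappa
    num = kappa * np.sinh(gamma * L)
    den = detune * np.sinh(gamma * L) - 1j * gamma * np.cosh(gamma * L)
    return (np.abs(num) / np.abs(den)) ** 2

def snr_threshold(estimator, lam, S_clean, lam_peak, target_p=0.96,
                  n_trials=200, tol=10e-12, snr_grid_db=range(-10, 41)):
    """Lowest SNR (dB) at which `estimator` stays within +/-10 pm of the true
    peak in at least `target_p` of the noisy trials (coarse 1 dB scan)."""
    rng = np.random.default_rng(1)
    for snr_db in snr_grid_db:
        # SNR taken as (peak reflectivity)^2 / noise variance -- an assumption
        sigma_n = S_clean.max() * 10 ** (-snr_db / 20.0)
        hits = 0
        for _ in range(n_trials):
            noisy = S_clean + rng.normal(0.0, sigma_n, S_clean.size)
            hits += abs(estimator(lam, noisy) - lam_peak) < tol
        if hits / n_trials >= target_p:
            return snr_db
    return None

# Example: 1547-1553 nm grid with 1 pm step, kappa*L = 2, wavelengths in metres;
# the centroid estimator from the previous sketch could be passed in directly.
lam = np.arange(1547e-9, 1553e-9, 1e-12)
S = uniform_fbg_reflectivity(lam, kappa_L=2.0)
# print(snr_threshold(peak_centroid, lam, S, lam_peak=1550e-9))
```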
Peak and -3dB BW curves reproduce the same trend of gaussian noise, but the target probability is achieved with a lower SNR; the gain, with respect to the gaussian probability density, is 7.3 dB -14.1 dB for the peak curve and 3.0 dB -6.5 dB for the -3dB BW tracking, with a small dependence on κL. On the other hand, centroid and correlation curves exhibit the same behavior of The results shown in Figure 4-5 report, for the spectral correlation, the ideal case, based on the assumption that the reference spectra are available without noise. In order to evaluate the impact of the finite SNR of the initial spectrum, we repeated the simulation shown in Figure 4 adding white gaussian noise on both spectra. We report in Figure 6 the SNR penalty, i.e. the SNR variation from the ideal case, when the reference spectrum is measured with SNR equal to 30 dB and 20 dB. In the first case, the penalty lays on a floor (~1.3 dB) for κL>4, and increases up to 8.0 dB for κL=1; in the second case, the SNR penalty exhibits the same trend, but the floor value is ~4.0 dB for κL>5, with a maximum penalty of 14.9 dB for κL=1. As expected, the penalty depends on the FBG bandwidth, since larger spectra are more tolerant to noise; again, this suggests than the application of spectral correlation provides better performances when applied to large FBGs. However, even supposing an inaccurate measure of the reference spectrum and a highly selective grating, the correlation technique is still the most advantageous in terms of SNR TH . We also performed a further simulation in order to evaluate the effect of the laser linewidth on the spectral estimate: this was implemented by filtering the spectra with a sliding gaussian window with different width. No clear impact on the SNR has been noticed for κL>0.5, even for a laser linewidth of 100 MHz; although a large laser spectral width tends to flatten the spectral profile, the correlation techniques is capable of dealing with noise fluctuations much greater than the effect of this spectral flattening. To summarize, the Monte Carlo analysis clearly demonstrates the effect of FBG design on the peak tracking performance: strong large-bandwidth FBGs are preferred for correlation technique, while traditional techniques work better with narrow-bandwidth FBGs. This is an important consideration for practical FBG applications, as the majority of the research papers and industrial installations mount narrow-bandwidth FBGs, with κL=0.9-1.5. Our simulation shows instead that an approximate increase of power budget figure up to 20 dB is feasible, it entirely relies on FBG sensors profile and peak tracking software, without any modification of the optical architecture, and works with any spectral scanning technique like the vast majority of FBG commercial interrogators. V. EXPERIMENTAL RESULTS The previously discussed peak tracking techniques have been applied in a practical case for the static characterization of a FBG-based weight sensor embedded in a road bump illustrated in Figure 7. Three FBGs have been mounted on the rear side of the dump in different positions; this arrangement permits to localize the load from different punctual strain measurements. The reflection spectra are recorded using a sweeping tunable laser (Photonetics Tunics-BT) and a power meter (HP8163A), over a wavelength grid with spacing of 10 pm. 
In order to load the structure, we vertically applied a weight of 0 -100 kg, with step 10 kg, on the bump in different positions; hence, the spectral shift of each FBG depends on the weight applied to the sensor and on its position. As example, we report in Figure 8 the measured spectra of one edge-FBG when the load is applied exactly over the sensor. we defined as reference FBG spectrum the unloaded condition (0 kg). In the first case, the perturbation of the FBG is strong, and the wavelength shift exceeds the linear range even for weights higher than 30 kg, hence the Bragg wavelength measurement is accurate only in the first part of the curve. Assuming the correlation technique as the correct estimate, the error on peak position with the other methods is ±40 pm for 10 -20 kg, hence a considerable uncertainty. As the strain increases, the deformed profile of the spectrum leads to an inaccurate estimate: spectral correlation and centroid techniques tend to find a sub-optimal alignment between the different spectra, while peak and BW tracking tend to follow the peak position of the deformed spectrum. The second FBG, instead, is weakly perturbed by the mechanical pressure, and exhibits a reduced spectral shift (0.15 nm). The spectral shift estimates lay in a range of ±20 pm with respect to the spectral correlation computation. The application of a correlation technique can be exploited for determining whether the FBG wavelength shift exceeds the approximately linear region identified by (3). If that occurs, the FBG spectrum is significantly distorted, and the wavelength shift is no longer related to the mechanical/thermal excitation through a closed and repeatable relationship; consequently, the measure of the FBG peak is unreliable. Figures 8-9 show a typical example of this situation: due to the low elastic isolation between the bump surface and its rear side, the application of a heavy load produces an intense strain on the FBG, deforming its spectrum. A possible way for extracting this information is to evaluate the mean square error (MSE) between reference and measured spectra, after having found the optimum alignment: This permits an immediate evaluation of the quality of the spectral reconstruction: as soon as the measured spectrum is distorted, the MSE grows with respect to the unstrained condition. We applied this technique to the measurement reported in Figures 8-9, in which we evaluated the MSE of the spectral alignment both for the FBG subjected to the heavy load and for the one positioned at the opposite side. The graph clearly shows that MSE achieved for the FBG far from the vertical load is approximately constant, confirming that the system operates in a linear regime. In contrast, for the FBG positioned close to the pressure point, the heavy load induces a perturbation that exceeds the linear regime even for light weight (20 kg), in which the MSE of the spectral reconstruction is approximately 8 times greater than the linear value. This result is compatible with Figure 8, in which we observe that the FBG spectrum for 20 kg weight is clearly much larger than the reference spectrum. We observe an almost linear increase of the MSE curve with respect to the applied weight; this suggests the possibility to exploit also the non-linear region of the strain/wavelength characteristic, expanding the FBG interrogation range. VI. 
CONCLUSIONS

The application of a spectral correlation-based technique to the peak estimation of a FBG considerably increases the performance, expressed in terms of SNR resilience, of a typical interrogation system. Our statistical Monte Carlo simulations show that the gain with respect to the standard wavelength tracking techniques is about 20 dB, depending on the κL parameter of the FBG; the employment of broad-band gratings further improves the SNR tolerance. The increased noise tolerance achieved by this routine can be spent on enhancing the power budget of the FBG interrogator, increasing the maximum number of sensing FBGs as well as the acceptable losses over the fiber link, without changing the optical layout of the system. Finally, a practical case of the application of peak tracking methods has been discussed, pointing out the capability of the spectral correlation to detect whether the sensor exceeds the linear regime of the strain characteristic.
4,039
2012-08-01T00:00:00.000
[ "Physics" ]
Circadian Disruption Leads to Loss of Homeostasis and Disease The relevance of a synchronized temporal order for adaptation and homeostasis is discussed in this review. We present evidence suggesting that an altered temporal order between the biological clock and external temporal signals leads to disease. Evidence mainly based on a rodent model of “night work” using forced activity during the sleep phase suggests that altered activity and feeding schedules, out of phase from the light/dark cycle, may be the main cause for the loss of circadian synchrony and disease. It is proposed that by avoiding food intake during sleep hours the circadian misalignment and adverse consequences can be prevented. This review does not attempt to present a thorough revision of the literature, but instead it aims to highlight the association between circadian disruption and disease with special emphasis on the contribution of feeding schedules in circadian synchrony. The Relevance of Circadian Rhythms for Homeostasis Our physiology is organized around the daily cycle of activity and sleep [1]. In the active phase, when energy expenditure is high and food and water are consumed, organs need to be prepared for the intake, processing, and uptake of nutrients. During sleep, energy expenditure and digestive processes decrease and cellular repair takes place [1,2]. The autonomic nervous system and hormones, especially melatonin and corticosterone, are used to transmit signals from hypothalamic and brain stem nuclei to the body in order to prepare it for the daily changes in activity, food intake, and rest. Hypothalamic structures receive information from the suprachiasmatic nucleus (SCN), the biological clock [1,2]; this nucleus transmits 24 h time information synchronized by the light dark (LD) cycle. It is known that neurons of the SCN, even in vitro, maintain a 24 h rhythm of electrical activity and neurotransmitter, release [3]. Via the secretion of its neurotransmitters, the SCN transmits rhythmicity to hypothalamic structures, to the brain, and to the rest of the body. An example is the secretion of corticotrophin releasing hormone and thus the secretion of ACTH, which is modulated by the SCN via vasopressin projections to the dorsomedial hypothalamus (DMH) and subsequently to the paraventricular nucleus (PVN) [4]. Simultaneously, the SCN modifies the sensitivity of the adrenal gland for ACTH via the preautonomic sympathetic neurons of the PVN, thus creating the most efficient way to transmit its time message to the body [5]. Light influences directly the neuronal activity of the SCN and thus inhibits melatonin secretion from the pineal via preautonomic neurons of the PVN [6,7]. Likewise, via the autonomic system, the SCN influences peripheral organs such as liver, adrenal gland, heart, and even fat tissue for daily adjustment [8,9]; such that the physiology of the body responds optimally to support activity or sleep and to keep it coupled to the external cycles. Light is the main "Zeitgeber" or time signal for the SCN, however, other temporal signals may also exert synchronizing effects on the biological clock and are considered "weak Zeitgebers" because they are obscured by the dominating LD cycle [10]. For the humans, social activities and social schedules function as relevant synchronizing signals that compete 2 Sleep Disorders or can enforce the influence of the LD cycle depending on how they are scheduled [11]. 
As such, work and school schedules, exercise and physical activity, as well as meal time can provide additional time signals to the biological clock [11][12][13]. Thus, the SCN is influenced by a complex combination of temporal signals that require congruency in order to keep all functions and behavior synchronized. Circadian Disruption The development of modern technology has promoted a relative independency of social and work activities from the environmental LD cycle. In the last 100 years, lifestyle has changed radically due to the use of electric light that allowed to extend activities into late hours of the night [14,15]. The proportion of individuals engaged in night work is increasing and has reached a large part of the economically active population. Also, the development of technology has provided new possibilities of amusement and recreational activities towards the sleep hours. This nocturnal life style is extremely attractive especially for young people, resulting that a large number of teenagers and young adults are awake for many hours during the night [16]. Remaining alert in the night promotes physical activity and arousal, in addition, individuals that are awake tend to eat at the moment that the biological clock indicates time to sleep [16]. During week days, school and work schedules require that individuals wake up early, which results in sleep deprivation, while weekends allow to sleep over. The constant shift of the sleep/wake schedule from weekdays to weekends is accompanied by constant shifts of general activity providing a "weak" temporal signal for the SCN. This results in misalignment of the social time from the biological clock, this condition has recently received the term of "social jet lag" [17]. The disturbed activity schedules and the consequential constant sleep deprivation lead to a disruption of the internal temporal order, to anxiety, depression, and altered behavioral performance [18,19]. These alterations emphasize the importance of circadian synchrony for mental health. Because young people are shifting their temporal patterns in the night period, the impact of night activities on behavioral performance and mood requires more research and is becoming a topic of high priority. Circadian misalignment is also the consequence of transmeridian traveling, which causes an abrupt change in the time schedule and a syndrome known as "jet lag." Jet lag is the result of a slow readjustment of physiological and behavioral rhythms that shift with different speed to the new schedule [20]. The transitory loss of circadian synchrony among different tissues and with the biological clock results in loss of homeostasis and a feeling of malaise [21], general discomfort, decrement of physical and mental performance, irritability, and depression [20]. Also, gastrointestinal disorders can be seen as a by-product of food consumption at an unusual schedule [20]. This state of internal desynchrony is transitory and depends on the number of time zones that are crossed, consequently adaptation to a new external cycle can take from 4 to 10 days [22,23]. Circadian disruption is also observed in individuals exposed to shift work or to nocturnal work schedules (night work). In such conditions, circadian fluctuations in behavioral, hormonal, and metabolic parameters are observed but their temporal relation with the external cycles is modified. 
The internal synchrony is affected by environmental signals that are out of phase with the daily activities of the individual; such as the exposure to light during resting hours and the forced activity and food intake when homeostatic processes indicate a need to rest [24]. This condition leads to sleepiness and disturbed performance which may lead to increase of work accidents [25]. Similar to the alterations observed in jet lag, circadian internal desynchrony is associated with gastrointestinal disorders, disturbed metabolic fluctuations, disturbed cardiovascular functions, altered menstrual cycle, and propensity to develop cancer [26][27][28][29]. Despite the fact that persons may work for many during the night, they still may be incapable to adapt to such a night work scheme [30]. Only a minority of shift workers or night workers are able to adjust spontaneously their rhythms of core body temperature, melatonin, cortisol, or prolactin [30,31]. In shift and night workers, who do not manage to adapt, a high propensity to smoke, drink alcoholic beverages, and use stimulants has been reported [32]. An important factor promoting circadian disruption is light at night. The human biological clock is sensitive to light changes including low-intensity light exposure [33,34]. In healthy volunteers, different light intensities, ranging from 0.03 to 9,500 lux, during the night cause in a short term an important impairment of temperature and hormonal rhythms [35]. Such evidence indicates that humans are sensitive to light intensities used for illuminating house interiors and job areas, and that such intensities are sufficient to alter the biological clock, which can cause circadian disruption and propensity to disease [14,35]. The Consequence of Internal Desynchrony Several aspects of modern life, as described above, provide conflicting signals out of synchrony with temporal signals transmitted by the SCN, which mainly follows the LD cycle [2]. The consequence is a disturbed phase relation of circadian fluctuations in behavioral, hormonal, and metabolic variables, leading to circadian misalignment. In the long term, circadian disruption due to shift work or chronic jet lag may result in increased mortality in male and female workers due to cardiovascular, gastric disorders, or cancer [27,[36][37][38][39][40][41]. Recently, disturbed circadian rhythms have been suggested as strong promoters of obesity and metabolic syndrome [42][43][44]. Therefore, it is important to develop an understanding of the impact of circadian disruption due to shifted activity schedules and light pollution on physiological systems and homeostasis. Rodent Models of Circadian Disruption, What We Have Learnt Animal models provide experimental and controlled conditions for further understanding the mechanisms of circadian Sleep Disorders 3 disruption. A strategy used frequently to induce circadian disturbance is exposing rodents to constant shifts in the onset and offset of light. Shifts of 6-8 hours are scheduled once to several times per week and for as long as 3 to 6 months [45]. Shifting the LD cycle resembles the condition of transmeridian traveling and is accepted as a good model for frequent exposure to jet lag. Phase shifts, especially when the LD cycle is advanced, imply a gradual resetting of physiological and behavioral rhythms. The speed of reentrainment after a 6 h phase advance differs among the functional systems. 
While general activity and the skeletal muscle achieve complete adjustment after 6-10 days [46,47] core temperature, lungs and the liver adjust faster [21,47,48], leading to a loss of synchrony. Depending on the frequency of the shifts and the duration of this treatment, some groups have reported disruption of behavioral and physiological rhythms [45,48,49], while others do not observe significant effects [50,51]. In a long term, frequent LD shifts alter cognitive functions and neurogenesis in the hippocampus [52], they result in a weak immune response and accelerated tumor growth in the liver after exposition to a cancer promoter [37]. Even more, a recent study reported that rats exposed to frequent shifts of the LD cycle increased the ratio of body weight gain and increased abdominal fat accumulation [45]. Such detrimental effects, however, were not found when shifts were scheduled in semicircaseptan (half-weekly) cycles; in addition, in such conditions, tumor growth was diminished in male rats [51]. Further studies are needed to better understand the contribution of infradian cycles on homeostasis and disease. Other models of circadian disruption expose rodents to short days of 20-22 h, which are incongruous with the normal endogenous 24 h period. This short photoperiod challenges the capacity of circadian system to adjust and to produce a circadian desynchrony [53]. Under such conditions, rodents exhibit two components of activity, one freerunning under a long period and the second component entrained to the 22 h LD cycle. In the SCN, this protocol produces a disruption of neuronal activity, where the dorsal SCN reflects the free-running rhythm while the ventral SCN reflects the LD synchronized rhythm [53]. Rodents housed under these conditions developed dissociation of the sleepwake cycles from the core temperature [54] as well as changes in metabolic hormones. In the brain, circadian-disrupted mice exhibit decrease of dendritic length and decreased complexity of neuronal dendritic trees in the prelimbic prefrontal cortex, associated with reduced cognitive flexibility and altered emotional responses [55]. In order to model the effects of light pollution, rodents are maintained in constant light conditions (LL). Many studies show that circadian organization can be disrupted by LL. After 2-3 weeks in LL, rodents develop arhythmicity and hamsters exhibit "splitting" of circadian locomotor activity patterns [56,57]. In the SCN, LL uncouples individual neurons, although individually each neuron maintains its capacity to generate circadian oscillations, their cycles are out of phase from each other [58]. In rodents, exposure to constant light leads to irritability, anxiety-like and depressivelike behaviors, and deficient performance in tests for learning and memory skills [59]. At the physiological level, LL inhibits melatonin secretion, a hormone secreted during the night, which is suggested to be a signal controlled by the SCN for transmitting the night information to the rest of the body. Melatonin also signals back to the SCN possibly to fortify the night signal to the biological clock [60]. Also, constant light may affect metabolic activity, glucose utilization, and protein synthesis [61]. The absence of melatonin secretion due to LL conditions may affect and decrease the activity of the immune system, which probably leads to accelerated aging and tumor growth [62]. 
The effect of constant light on tumor growth is, however, not clear, in view that constant light reduced the development of mammary tumors in young female rats [63]. On the other hand, constant light leads to increased visceral adiposity, propensity to obesity, and altered cardiovascular function [64,65] indicating that a disturbance of the function of the SCN by constant light may alter the integrity of metabolic functions. Interestingly, the disruptive effect of constant light is delayed when rats are housed in groups, suggesting that social interaction, which is a secondary "weak" synchronizer [66], partially compensates for the missing light/dark cycle. For rodents, shifting the LD cycle does not completely mimic the conditions of the human shift workers or night workers, because night workers are awake and active during their sleep phase and thus experience conflicting signals between their biological clock and the unchanged LD cycle. In our group, we developed a rat model of "night work" based on forced activity during the light phase, which is the period when rats mainly sleep. To induce activity, rats were placed in slowly rotating drums (33 cm diameter × 33 cm long) with four concentric subdivisions, which allow individual housing. Drums rotate with a speed of one revolution/3 min and due to this speed rats can sit, groom, and even lie down, however, they cannot sleep and are forced to be awake and active. Importantly, they can eat from chow pellets and drink from a small bottle hanging from the middle tube of the drums [68]. Rats are placed in such drums daily during 8 h of the light phase, for 5 days per week (Monday to Friday) without altering their LD cycle. After 8 hours of forced activity rats are returned to their home cages, allowing them (like human shift workers) 16 h for sleep and recovery. During weekends, rats remained undisturbed in their home cages. After 4 weeks in this protocol, rats developed disturbed daily activity rhythms characterized by reduced nocturnal activity and enhanced activity during the day. Interestingly rats shifted their feeding patterns toward the light phase and a high proportion of their daily ingestion occurred during the hours spent in the drums [68]. Consequently, core temperature and metabolic rhythms were shifted to the light phase or were completely disrupted (see Figure 1). Also, body weight and abdominal fat increased in rats exposed to the drums during the light phase, in contrast, all these effects were not observed in rats exposed to activity drums in the night, which corresponds to their active phase. The neuronal activity in hypothalamic nuclei involved in feeding and activity was also shifted in this "night work" model, while the activity of the SCN remained locked to the LD cycle [69]. Consequently, this Figure 1: Moments of peak activity for physiological variables in rats living undisturbed in control ad libitum conditions (a) and rats exposed for 4 weeks to 8 h forced activity during the rest phase (b). Symbols represent the moments of maximal expression of each variable along the 24 h cycle. Data were obtained from three previous reports [60,61,67] and peak values were statistically different from low values of the same group according to a one way ANOVA (P < 0.05). Day and night are represented by white and black horizontal bars below the graphs and the time in the activity drums is represented by the grey striped horizontal bar. 
The "y" axis represents variables measured in metabolism (triangles) in the liver (squares) and in the brain (circles). Horizontal lines indicate loss of rhythmicity and therefore no significant peak value for that variable and condition. The daily scheduled activity induced shifts of several variables and led to circadian misalignment. Abbreviations: Temp: core temperature; TAG: triglycerides; Cort: corticosterone; Per1: clock gene period 1; Per2: clock gene period 2; SCN: suprachiasmatic nucleus; PVN: paraventricular nucleus in the hypothalamus; DMH: dorsomedial nucleus in the hypothalamus; ARC: arcuate nucleus. model revealed circadian desynchrony already at the level of the hypothalamus with an absence of changes in indicators of SCN activity. Also, metabolic and behavioral daily rhythms indicated a circadian misalignment. The Contribution of Feeding Schedules for Circadian Synchrony/Desynchrony Rodent models of circadian disruption have confirmed the relevance of circadian synchrony for homeostasis and the results of these studies suggest that loss of circadian synchrony affects physiological and metabolic congruency leading to disease and overweight. These observations are in consonance with health problems observed in individuals exposed to chronic jet-lag, to shift and night work, and to light at night, confirming that circadian misalignment is the mechanism underlying the loss of homeostasis [25,70]. Several strategies have been tried to prevent circadian disruption and to accelerate resynchrony after a phase disturbance. Melatonin administration simultaneous to phase shifts is highly effective for the treatment of a range of symptoms that result from jet lag [21,71]. Melatonin directly acts on the SCN and via melatonin receptors (MT1 and MT2) may reset, the biological clock, restore disturbed circadian rhythms and thus sleep disorders [72]. Arousal and enhanced locomotor activity, including scheduled exercise, have also been suggested as therapeutic strategies to accelerate circadian adjustment [67]. In addition, animal models, indicate that the beneficial effects are dependent on the time of the day when they are applied [73]. With our model of forced activity, we have demonstrated that activity during the sleep phase not only disrupts the daily activity pattern but also shifts the normal nocturnal pattern of food intake toward the working hours in the light phase. These observations indicate that when forced to be active, rodents choose to eat. This is congruent with observations in night workers, since it is well documented that shift and night works promote changes in feeding patterns, resulting in increased food intake during the working hours that coincide with the normal resting phase [74,75]. This effect has been observed in night workers, who ingest up to 70% of their daily intake during their work hours, and it is common that they choose diets rich in carbohydrates [76]. A follow-up study with university students reported that Sleep Disorders 5 that night-active persons develop low amplitude and desynchronous timing of endocrine metabolic rhythms, which was associated with shifted eating patterns [77]. As observed with our rodent model, several studies have confirmed that in human populations, shifting activity and the main food consumption toward the night results in propensity to obesity and increased accumulation of abdominal fat [78,79]. In agreement with Kreier et al. 
[9], the increase of abdominal fat can be seen as a symptom of unbalance in metabolism and in circadian synchrony. Disturbed circadian rhythms also impact the quality and amount of sleep. Jet lag, as well as shift work, night work, and light pollution, leads to poor or short sleep, which also contributes to food ingestion at the wrong time, especially because individuals tend to ingest food while staying awake. This condition leads to metabolic alterations and overweight [80][81][82]. Moreover, the restriction of food consumption and activity to the active phase should prevent circadian disruption and overweight [78,79,83]. In rodents, circadian disruption and metabolic dysfunction can also be prevented by exposing individuals to hypocaloric and/or low-fat diets that reduce body weight. When rodents are food restricted with a hypocaloric diet, the circadian amplitude is enhanced regardless whether food is restricted to the day or night [84,85]. Also, hypocaloric diets lower the metabolic rate and lead to healthier individuals and a longer life span [85,86]. The mechanism underlying the influence of feeding schedules on circadian rhythms, metabolism, physiology, and life span may be associated with the potent influence of food as synchronizer of brain and peripheral oscillators [87]. In rodents, daily food restriction with a normo-or a low-caloric diet induces metabolic and digestive temporal adjustments to meal time [88,89] as well as neural oscillations in the brain [90][91][92][93]. Glucose, ATP levels, and the redox state in the cell set cellular daily oscillations that may provide support to signals transmitted by the biological clock for peripheral entrainment [87]. In a recent study, we explored the power of food as synchronizing signal for the circadian system. With a protocol of jet lag based on a phase advance of the LD cycle, we reported that when meal time was scheduled to coincide with the new onset of activity, the days required for reentrainment were substantially reduced, especially when meal time was shifted simultaneous to the LD shift [46]. This study provided evidence that in fact feeding schedules give support to temporal signals transmitted from the SCN to the body for circadian synchrony. A similar beneficial effect was observed with our rodent model of night work, where scheduled food to the night, the normal active phase, prevented circadian disruption [88]. The relevance of keeping food intake coupled with the LD cycle as a complementary time signal was confirmed by a recent study reporting that a simultaneous shift of the feeding schedule with the LD cycle facilitated the circadian resetting of clock genes in peripheral organs [94]. On the other hand, uncoupling LD cycles from feeding schedules leads to circadian desynchrony and metabolic alterations [78,79]. Conclusions Aligned circadian rhythms in brain and periphery are necessary for adaptation and for homeostasis. When the circadian order is disturbed, individuals cannot produce efficient responses to daily challenges associated with the day/night cycle and in a long term develop disease. Although the main Zeitgeber adjusting the biological clock is the light/dark cycle, other temporal signals considered "weak" contribute to its daily entrainment, among them arousal and activity, exercise, temperature, and for the human, social schedules. 
Evidence collected in recent years demonstrate the importance of feeding schedules as a powerful entraining signal for diverse functional systems and support the significance of maintaining feeding schedules in harmony with the LD cycle and with the SCN for efficient and correct entrainment. This implies that a correct entrainment can only be achieved when all temporal signals are coupled and when feeding schedules are congruent with the light/dark cycle. The implications of these observations point out that modern life style which promotes predominantly nocturnal activities can have deleterious consequences for human physiology. This requires attention because children, teen-agers, and young adults are shifting their temporal activity pattern toward the night. Nocturnal activity, nocturnal food intake, night work, transmeridian traveling, and light during the night elicit confounding temporal signals to the biological clock and promote circadian misalignment and physiological disturbances. This nocturnal life style also induces poor sleep quality and quantity, anxiety, depression, and modified feeding patterns. When individuals are unable to correct their nocturnal habits, a possible strategy to prevent circadian misalignment is to couple feeding schedules to the day. According to the evidence discussed in this review, this strategy will avoid internal circadian disruption and may be useful to prevent disease. The mechanisms underlying this beneficial effect on physiology, behavioral performance, and mood require more research and should become a topic of high priority in the area of circadian physiology.
5,472.6
2012-01-24T00:00:00.000
[ "Biology", "Psychology" ]
Cepheid Metallicity in the Leavitt Law (C-MetaLL) Survey. V. New multiband (grizJHKs) Cepheid light curves and period-luminosity relations We present homogeneous multiband (grizJHKs) time-series observations of 78 Cepheids including 49 fundamental mode variables and 29 first-overtone mode variables. These observations were collected simultaneously using the ROS2 and REMIR instruments at the Rapid Eye Mount telescope. The Cepheid sample covers a large range of distances (0.5 - 19.7 kpc) with varying precision of parallaxes, and thus astrometry-based luminosity fits were used to derive PL and PW relations in optical Sloan (griz) and near-infrared (JHKs) filters. These empirically calibrated relations exhibit large scatter primarily due to larger uncertainties in parallaxes of distant Cepheids, but their slopes agree well with those previously determined in the literature. Using homogeneous high-resolution spectroscopic metallicities of 61 Cepheids covering -1.1<[Fe/H]<0.6 dex, we quantified the metallicity dependence of PL and PW relations which varies between $-0.30\pm0.11$ (in Ks) and $-0.55\pm0.12$ (in z) mag/dex in grizJHKs bands. However, the metallicity dependence in the residuals of the PL and PW relations is predominantly seen for metal-poor stars ([Fe/H]<-0.3 dex), which also have larger parallax uncertainties. The modest sample size precludes us from separating the contribution to the residuals due to parallax uncertainties, metallicity effects, and reddening errors. While this Cepheid sample is not optimal for calibrating the Leavitt law, upcoming photometric and spectroscopic datasets of the C-MetaLL survey will allow the accurate derivation of PL and PW relations in the Sloan and near-infrared bandpasses, which will be useful for the distance measurements in the era of the Vera C. Rubin Observatory's Legacy Survey of Space and Time and upcoming extremely large telescopes. Introduction Classical Cepheid variables in the Milky Way (MW) provide the absolute calibration of their period-luminosity (PL) relation or the Leavitt law (Leavitt & Pickering 1912) based on their individual distances derived from geometric parallaxes.The absolute calibration of the Leavitt law is crucial in order to measure extragalactic distances and determine the Full Table 1 is available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr(130.79.128.5) or via https:// cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/683/A234 Marie Skłodowska-Curie Fellow. present expansion rate of the Universe through the 'cosmic distance ladder' (Freedman et al. 2001;Riess et al. 2022b).The present expansion rate in the late evolutionary universe, the local Hubble constant, is currently in discord with its early universe measurement from the Planck mission (Riess et al. 2022b;Planck Collaboration VI 2020).This tension between the Hubble constant measurements from the two extreme ends of the Universe may hint at new or missing physics in the standard cosmological model (Di Valentino et al. 2021;Abdalla et al. 2022). The accuracy and the precision of Cepheid-based distance measurements in the traditional cosmic distance ladder are now only limited by the systematic uncertainties in the calibration of their PL relation and its metallicity dependence (Riess et al. 2022b;Bhardwaj et al. 2023).Before Gaia, the absolute calibration of Cepheid PL relations was mostly based on a limited sample of Cepheids with Hubble Space Telescope parallaxes (Benedict et al. 2007;Riess et al. 2014Riess et al. 
, 2018) ) or using distances determined from independent methods, such as the Baade-Wesselink or infrared surface brightness methods (Gieren et al. 1998;Fouqué et al. 2007;Storm et al. 2011;Bhardwaj et al. 2016).This is now changing thanks to increasingly more accurate and precise geometric parallaxes of thousands of Cepheids in the MW from the Gaia mission (Gaia Collaboration 2016, 2023b;Ripepi et al. 2023).Several recent studies have utilized unprecedentedly precise Gaia parallaxes to calibrate PL relations at multiple wavelengths (e.g.Groenewegen 2018;Ripepi et al. 2021Ripepi et al. , 2023;;Breuval et al. 2022;Riess et al. 2022a;Cruz Reyes & Anderson 2023;Narloch et al. 2023).It has been claimed that some of these calibrations of Cepheid luminosities have reached percent-level precision at specific wavelengths (Riess et al. 2022a;Cruz Reyes & Anderson 2023). The increasing precision of Gaia parallaxes also enabled some of the above-mentioned studies to investigate the metallicity dependence of Cepheid PL relations at multiple wavelengths.However, there is currently no consensus on the metallicity dependence of Cepheid period-luminosity-metallicity (PLZ) and period-Wesenheit-metallicity (PWZ) relations at multiple wavelengths.Most of the recent empirical studies have found a negative metallicity coefficient with values between −0.2 and −0.5 mag dex −1 (e.g.Fig. 19 in Trentin et al. 2024), which was also confirmed by models for the Wesenheit relations (Anderson et al. 2016;De Somma et al. 2022).Nevertheless, these coefficients are often either weakly constrained and/or differ by a factor of two among recent empirical studies (e.g.Ripepi et al. 2021;Riess et al. 2021Riess et al. , 2022b;;Breuval et al. 2022;Molinaro et al. 2023;Bhardwaj et al. 2023;Trentin et al. 2024).For example, Trentin et al. (2024) found a metallicity coefficient of −0.458 ± 0.052 mag dex −1 in K s -band for the MW Cepheid PL relation, while Breuval et al. (2022) found a coefficient of −0.321 ± 0.068 mag dex −1 using Cepheids in the Galaxy and the Magellanic Clouds.Recently, Bhardwaj et al. (2023) determined a metallicity term of −0.43 ± 0.18 mag dex −1 in K s -band using individual metallicities of MW Cepheids.This coefficient is better constrained (−0.33 ± 0.07 mag dex −1 ) using MW and Large Magellanic Cloud (LMC) Cepheids within a very narrow range of their mean metallicities (∆[Fe/H] = 0.46 dex).Homogeneous high-resolution spectroscopic metallicities of MW Cepheids having a wide range of metallicity and multiband photometry are needed to complement their Gaia parallaxes to better constrain the zero-points and the metallicity coefficients of Cepheid PLZ relations. Most of the recent studies on the empirical calibration of Cepheid PL or PLZ relations in BV I JHK s bands utilized heterogeneous optical and infrared photometry together with homogeneous Gaia data.The optical PL relations, except those in the Gaia photometric systems, primarily used one of the largest compilations of photoelectric observations of 894 Cepheids in standard Johnson-Cousin-Kron filters (U BV(RI)c, Berdnikov 2008) or the modern CCD photometry of Cepheids in BV I bands (e.g.Berdnikov et al. 2015).Time-domain variability surveys such as the Optical Gravitational Lensing Experiment (OGLE, Udalski et al. 2018) and the All-Sky Automated Survey for Supernovae (ASAS-SN, Jayasinghe et al. 
2018) have also provided optical V and/or I light curves of thousands of Cepheids in the MW, but lack multiwavelength coverage.Moreover, several ongoing large-scale variability surveys are now being carried out in the Sloan (ugriz) photometric system such as the Zwicky Transient Facility (ZTF, Bellm et al. 2019, in gri), enabling the discovery and identification of hundreds of Cepheids in the MW (Chen et al. 2020). However, the pulsation properties of Cepheid variables and their PL relations have not yet been explored in detail in the specific Sloan filters.Hoffmann & Macri (2015) used the 8.1m Gemini North telescope to observe Cepheids and derive their PL relations in the gri filters in NGC 4258, which is one of the anchor galaxies for the extragalactic distance scale (Riess et al. 2022b).Kodric et al. (2018) presented PL relations for Cepheids in the Andromeda galaxy in Sloan-like (gri) filters using the data from the Panoramic Survey Telescope And Rapid Response System survey (Tonry et al. 2012).Recently, Adair & Lee (2023) presented the largest sample of more than 1600 Cepheids in M33 using photometry in the gri bands.Narloch et al. (2023) provided light curves of 96 MW Cepheids and their PL relations in the Sloan (gri) filters.Similar calibrations of gri band PL relations for RR Lyrae, Type II Cepheids, and the anomalous Cepheids in the globular clusters have also been provided using the ZTF data (Ngeow et al. 2022a,b,c).In view of the much anticipated Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST, Ivezić et al. 2019), which will operate at ugrizy wavelengths, it is crucial to provide an empirical calibration of PL relations for pulsating stars in Sloan filters. Homogeneous near-infrared (NIR) JHK s band light curves of MW Cepheids for the largest sample of 131 Cepheids was provided by Monson & Pierce (2011).The empirical calibrations of Cepheid PL relations in JHK s bands in the literature are either based on the above-mentioned dataset combined with older photometric measurements (e.g.Welch et al. 1984;Laney & Stobie 1994;Barnes et al. 1997) or using random phase-corrected Two Micron All Sky Survey (2MASS) observations (Skrutskie et al. 2006).NIR photometry offers several advantages over optical bands, due to lower extinction and smaller temperature variations leading to less scatter in the PL relations (see the review by Bhardwaj 2020).However, smaller variability amplitudes and near-sinusoidal light curves of Cepheid variables in the NIR bands can complicate their identification and classification, which is easier in the optical domain due to asymmetric features and larger amplitudes (Bhardwaj et al. 2015(Bhardwaj et al. , 2017;;Udalski et al. 2018).Given that all upcoming large observational facilities will mostly operate at infrared wavelengths, it is imperative to obtain homogeneous optical and infrared photometric light curves of Cepheids in the MW to fully utilize current and future Gaia parallaxes for calibrating the Leavitt law at multiple wavelengths. This paper presents simultaneous optical (griz) and NIR (JHK s ) light curves of 78 classical Cepheids for the first time.The manuscript is the fifth in the series of the Cepheid Metallicity in the Leavitt Law (C-MetaLL) survey, which aims to provide homogeneous time-series photometry and highresolution spectroscopy of Cepheids in our Galaxy (Ripepi et al. 2021;Trentin et al. 
2023).While previous C-MetaLL publications explored the impact of new spectroscopic observations of Cepheids on their PLZ relations, this paper presents the first results of the ongoing photometric programme.The details regarding the photometric observations, the data reduction and analysis, and the photometric calibration are presented in Sect. 2. Multiwavelength light curves of Cepheids and their PL relations are discussed in Sects.3 and 4, respectively.The impact of metallicity on the Leavitt law is discussed in Sect.5, and the results of this work are summarized in Sect.6. The sample of Cepheid variables The sample of stars was selected to obtain complementary timeseries data for Cepheids that have high-resolution spectroscopic observations as part of the C-MetaLL survey (Ripepi et al. 2021).Our initial sample consists of 80 Cepheids including 49 fundamental mode (FU) and 31 first-overtone mode (FO) stars.The majority of our Cepheids were identified as variables in the Gaia data releases.However, some of our initial targets, which were included in Ripepi et al. (2021), come from the ASAS-SN or the OGLE survey (Udalski et al. 2018;Jayasinghe et al. 2018).The pulsation periods, epochs of maximum brightness, intensity-weighted mean magnitudes in G, G BP , and G RP bands, and G band pulsation amplitudes for 73 Cepheids were taken from Gaia DR3 Cepheid sample (Ripepi et al. 2023).For remaining seven Cepheids, only photometric mean magnitudes in Gaia filters were taken from the source catalogue (Gaia Collaboration 2023b) since they were not included as variables in Gaia.The periods and epochs of maximum brightness, and the V/I band amplitudes for these stars were taken from the OGLE (OGLE-GD-CEP-0307/1210/1227/1286) or ASAS-SN (J163656.55-322102.2,J184816.35+004903.4/V0912Aql, J193837.90+172322.8)catalogues (Udalski et al. 2018;Jayasinghe et al. 2018).Figure 1 displays the location of Cepheid variables used in this work. Figure 2 displays the histogram of period distribution of Cepheids in our sample.The FU mode Cepheids cover a period range of 3.15-70.80days while FO Cepheids have periods between 0.78 and 4.91 days.These Cepheids are bright and cover an apparent magnitude range between G = 8.89 and G = 14.78 mag.The distances to our Cepheids range from 0.5 kpc to 19.7 kpc with a median distance of 3.6 kpc.High-resolution spectroscopic metallicities for 59 Cepheids were already obtained as part of the C-MetaLL survey (Ripepi et al. 2021;Trentin et al. 2023).Among the remaining stars, six more Cepheids have medium-resolution spectroscopic metallicities from Gaia Radial Velocity Spectometer (RVS, Recio-Blanco et al. 2023).However, four of these six Cepheids have poor RVS quality flags and were assigned a large uncertainty of 0.5 dex (Trentin et al. 2024).The middle panel of (Ripepi et al. 2021;Trentin et al. 2023).Bottom: period-Wesenheit relation in Gaia filters for the sample of 80 stars.The circles and squares represent fundamental and first-overtone mode Cepheids, respectively.Two extreme outliers are marked with their Gaia DR3 source ID. Fig. 2 shows the histograms of available metallicities for all 65 Cepheids. To confirm the identification and classification of 80 Cepheids in our sample, we derived their PW relation, W G = 1.90(GBP − G RP ), in the Gaia bands (Ripepi et al. 2019).The distances to these Cepheids from Bailer-Jones et al. 
(2021) were used to obtain absolute Wesenheit magnitudes.We noticed two outlier stars (5325503574371253120/OGLE-GD-CEP-0307, 3318591594625903104/OGLE-GD-CEP-1286 in the PW relation shown in the bottom panel of Fig. 2. OGLE-GD-CEP-1286 is classified as rotating variable in the ASAS-SN survey (J061105.05+045303.5).The parallax uncertainties of these stars are less than 3% and these were not classified as Cepheids in the Gaia data.Therefore, we excluded these two stars from our final sample, which consists of 78 stars with 49 FU and 29 FO Cepheid variables. Observations and data reduction Our multiband observations of Cepheid variables were obtained between February 2021 and September 2022 using the Rapid Eye Mount (REM1 ) Telescope located in La Silla, Chile.REM is a 60 cm diameter fast reacting telescope with two parallel imaging instruments: ROS2, a visible imager with four simultaneous passbands (griz), and REMIR, an infrared (JHK s ) imaging camera.The pixel scale of ROS2 and REMIR is 0.58 and 1.2 arcsec pixel −1 , respectively, for a total field of view of about 10 × 10 arcmin.The time-series observations were obtained in a monitoring mode where each Cepheid was observed once per night, when visible during the semester.Our observations for each epoch consisted of three dithered frames (exposures) in optical and a minimum of five dithered frames in NIR.The exposure times in optical bands were optimized depending on the brightness of the sources, and range from two to 240 s.The individual exposures were relatively short in NIR (between one and ten seconds), such that the background sky variations were negligible.For fainter targets, multiple sequences of five dithered exposures were taken in NIR bands.These Cepheids were observed on average on 24 different nights with the number of observations varying between 10 and 60. The optical and NIR science images and associated calibrations (bias and flat frames) were downloaded from the REM data Archive2 .For optical images, the bias subtraction and flat fielding were performed using monthly calibration frames.For NIR images, pre-processed (dark subtraction and flat fielding) images were downloaded including the sky background and coadded images.However, the co-added NIR images were not used because several bright residual patterns were present which likely resulted from over-subtraction of sky background in several pixels.These residual variations were normalized using SExtractor (Bertin 2006) in the sky background images before applying background subtraction.Therefore, we used individual dithered frames in both optical and NIR bands for photometric data reduction. 
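For reference, the sketch below illustrates the Gaia-band Wesenheit vetting used for the sample selection in Sect. 2.1: it computes the reddening-free Wesenheit magnitude W = G - 1.90(G_BP - G_RP) (colour coefficient from Ripepi et al. 2019), converts it to an absolute magnitude with a geometric distance, and flags period-Wesenheit outliers. This is a minimal sketch, not the exact procedure of the paper (which works with astrometry-based luminosities); the column layout, the simple unweighted linear fit, and the 3-sigma threshold are assumptions for illustration.

```python
import numpy as np

def gaia_wesenheit(G, BP, RP, colour_coeff=1.90):
    """Reddening-free Gaia Wesenheit magnitude W = G - 1.90 (BP - RP).

    The coefficient 1.90 follows Ripepi et al. (2019); it is left as an
    argument so that other reddening laws can be tested.
    """
    return G - colour_coeff * (BP - RP)

def absolute_wesenheit(W_app, distance_pc):
    """Absolute Wesenheit magnitude from an apparent one and a distance in parsec."""
    return W_app - 5.0 * np.log10(distance_pc) + 5.0

def flag_pw_outliers(logP, W_abs, n_sigma=3.0):
    """Fit a linear period-Wesenheit relation and flag n-sigma outliers.

    A single unweighted linear fit is used here for simplicity.
    """
    slope, intercept = np.polyfit(logP, W_abs, 1)
    residuals = W_abs - (slope * logP + intercept)
    return np.abs(residuals) > n_sigma * np.std(residuals)

# Hypothetical usage with three stars (all values are placeholders, not real data):
G = np.array([9.8, 11.2, 13.5])
BP = np.array([10.3, 11.9, 14.4])
RP = np.array([9.1, 10.4, 12.5])
dist_pc = np.array([1500.0, 3200.0, 9800.0])
logP = np.log10(np.array([5.4, 10.2, 3.1]))

W_abs = absolute_wesenheit(gaia_wesenheit(G, BP, RP), dist_pc)
print(flag_pw_outliers(logP, W_abs))
```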
Photometry The photometry was performed on all dithered images separately in each filter using the DAOPHOT/ALLSTAR (Stetson 1987) and ALLFRAME (Stetson 1994) routines.In a given filter, all point-like sources were first identified using SExtractor to determine a full width at half-maximum (FWHM).An aperture photometry was then performed using DAOPHOT with an aperture equal to the median FWHM.Up to ten brightest, nonsaturated stars were used to construct an empirical point-spreadfunction (PSF) excluding sources closer to the detector edges.We selected the first-frame (images of the first-night observations) as reference frame to create a star-list.In cases where there were not enough detections (<10) in the first-frame, we visually selected images taken on another night to create a reference star list.The PSF photometry was obtained using ALLSTAR on all images and frame-to-frame coordinate transformations between the reference image and all epoch images were derived using DAOMATCH/DAOMASTER.This procedure required the most intervention because of the lack of a sufficient number of stars in images taken in poorer weather conditions resulted in inaccurate transformations.Those frames were excluded for which no transformation could be obtained due to poorer statistics or lack of significant overlap with the reference frame.Finally, the reference-star list and the derived coordinate transformations were used as input to ALLFRAME for performing PSF fitting across all the frames, simultaneously.The photometry on different nights was internally calibrated to the reference frame using secondary standards that have small photometric uncertainties and no epoch-to-epoch variability.These thresholds on photometric uncertainties and epoch-to-epoch variability varied between 0.01 and 0.1 mag depending on the number of sources within the field of view and their photometric precision. The photometric calibration was performed using Gaia synthetic photometry (GSP, Gaia Collaboration 2023a) in Sloan filters (griz) and 2MASS (Skrutskie et al. 2006) catalogue in the JHK s bands.We selected homogeneous GSP catalogue for calibrating optical photometry because there were not enough common stars for several sources in the American Association of Variable Star Observers (AAVSO) Photometric All-Sky Survey (APASS) DR10 (Henden 2019).Moreover, most of these bright Cepheids were saturated in the Sloan Digital Sky Survey catalogue.For each Cepheid location, all sources from GSP and 2MASS catalogues were extracted within the field of view of 10 × 10 arcmin.Our photometric catalogues in a given filter were then cross-matched with GSP or 2MASS within an initial tolerance of 1.0 .The common sources were used to determine a zero-point offset to calibrate our instrumental magnitudes to Sloan or 2MASS filters.The number of calibrating stars varied between five and 117 for different star or filter, and there were fewer common stars in gz-bands.The lack of number of stars covering a wide colour range precluded us from solving for a colour-term in the calibration equation.The uncertainties in the magnitude zero-point offset were propagated to the instrumental magnitudes for each Cepheid variable. Figure 3 shows a comparison of our calibrated mean magnitudes of Cepheids (see Sect. 
3.2) with the magnitudes from APASS DR10 and 2MASS for griz and JHK s filters, respectively.The offsets are small in ri bands and increase in gz bands and at NIR wavelengths, where magnitudes from the literature are based on random-epoch observations.In the case of NIR magnitudes, the offsets increase significantly at the bright end (K s < 8.5 mag).We note that the stars with 4 < K s < 8.5 mag saturate in the primary 1.3 s 2MASS exposure and their NIR magnitudes come from aperture photometry on 51 ms exposure images (Skrutskie et al. 2006).The largest offset occurs at the brightest end in the H-band.For H < 6.5 mag, the median errors in the REMIR J/H/K s magnitudes of these Cepheids are 0.05/0.09/0.08 mag, hinting at possible non-linearity and saturation at the bright end in HK s .The typical uncertainties on 2MASS magnitudes are 0.02-0.03mag, and thus the offsets at the bright end are still within 2σ of the combined REMIR and 2MASS uncertainties.However, we also noted increased scatter in the light curves around maximum light for Cepheids with K s < 5 mag.In these cases, the brightest source in the REMIR images is typically the target Cepheid, and the PSF is generally constructed from fainter stars.The aperture photometry may be a better option for these brightest Cepheids, and will be explored in the subsequent photometric studies of the C-MetaLL survey.their pulsation periods using the multiband periodogram of Saha & Vivas (2017).This hybrid algorithm for period determination using sparsely sampled light curve data at multiple wavelengths was employed to search periods in the range from 0.1 to 100 days with a stepsize of 0.001 days.An average difference of 0.002 days with a scatter of 0.012 days was found between our periods and those adopted from the litera-ture after excluding a few spuriously determined periods.The difference in periods exceeded 0.1 days for 17 Cepheids, all of which either had large phase gaps in their light curves or poor light curve quality in at least one filter.Since all the Cepheids in our sample are bright (G < 14.5 mag), we adopted literature periods primarily from Gaia DR3 (Ripepi et al. 2023) that were based on much longer temporal baseline than our observations. 
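The period search above uses the hybrid multiband periodogram of Saha & Vivas (2017), which is not available in standard libraries. As a rough stand-in, the sketch below stacks ordinary astropy Lomb-Scargle periodograms from several bands on a shared frequency grid, which conveys the basic idea of combining sparsely sampled multiband data; the per-band normalization, the grid limits, and the input dictionary layout are assumptions, not the algorithm used in the paper.

```python
import numpy as np
from astropy.timeseries import LombScargle

def combined_multiband_periodogram(light_curves, min_period=0.1,
                                   max_period=100.0, n_grid=200_000):
    """Sum normalized Lomb-Scargle powers over bands on a shared frequency grid.

    light_curves: dict mapping band name -> (time, mag, mag_err) arrays.
    This is a simple stand-in, not the Saha & Vivas (2017) hybrid statistic.
    """
    frequency = np.linspace(1.0 / max_period, 1.0 / min_period, n_grid)
    total_power = np.zeros_like(frequency)
    for band, (t, m, dm) in light_curves.items():
        power = LombScargle(t, m, dm).power(frequency)
        total_power += power / power.max()   # crude per-band normalization
    best_period = 1.0 / frequency[np.argmax(total_power)]
    return frequency, total_power, best_period

# Hypothetical usage with synthetic light curves in two bands:
rng = np.random.default_rng(42)
true_period = 5.366
def fake_band(n):
    t = np.sort(rng.uniform(0.0, 400.0, n))
    m = 12.0 + 0.4 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, n)
    return t, m, np.full(n, 0.02)

lcs = {"r": fake_band(40), "J": fake_band(25)}
_, _, p_best = combined_multiband_periodogram(lcs, max_period=50.0, n_grid=50_000)
print(f"recovered period: {p_best:.3f} d")
```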
Optical and near-IR light curves of Cepheid variables Multiwavelength light curves were phased using the adopted periods and the epochs of maximum brightness.Since we obtained multiple dithered frames in both optical and NIR filters and performed photometry on individual frames, there are often several data points at a given phase albeit with larger uncertainties due to lower signal-to-noise of individual exposures.Furthermore, the scatter at a given phase is larger for stars with a small number of secondary standards within their imaged field of view since these were used to obtain frame to frame transformations and also to calibrate the photometry.Figure 4 displays example light curves of FU and FO Cepheids with all epoch observations (in grey).The scatter in the light curves is evident which comes from larger errors due to low signal-tonoise of dithered frames and uncertainties in relative zero-points of photometry from different nights.Therefore, we decided to bin the light curves in phase improving the accuracy and precision of photometric data points in a given phase bin.We computed sliding mean values with a bin width of 0.1 in phase with 15/10 steps for FU/FO Cepheids.While 10 phase averaged data points were sufficient for sinusoidal nature of FO Cepheid light curves, we used 15 steps for FU Cepheids to fully recover the saw-tooth feature of their light curves.We found these choices recovered the sharp extrema of the light curves and also yielded the least amount of scatter between consecutive points.A weighted mean of all magnitudes was obtained from dithered frames within a given phase bin and the standard deviation of the robust mean was propagated to the photometric uncertainties. Figure 4 also shows the phase-binned light curves of Cepheids overplotted on the original data points.While the variability was quite evident in the original light curves, it is more pronounced in the phase binned light curves.The phase-binning procedure was particularly useful for low-amplitude FO variables for which the larger uncertainties on photometry from individual dithered frames often dominated variability amplitudes in NIR bands.The light curves constructed from the phase binning procedure are used for further analysis in this work.These light curves were visually inspected to assign a quality flag of A,B, and C to the best, good, and poor light curves depending on the scatter and the phase coverage of the light curves.There are 52/22 stars with best and good ('A and B') quality flags, and only four stars have poor light curves.We note that the entire sample of Cepheids was used in the subsequent analysis irrespective of their light curve quality flags. Light curve fitting and mean magnitudes The light curve templates for Cepheids are not available in optical Sloan (griz) filters, but are available in NIR bands (e.g.Inno et al. 2015).Since our light curves are phase-binned, we do not expect to recover small features such as bump near the maximum light, particularly at NIR wavelengths where light curves are more sinusoidal.Therefore, we decided to analyse multiband light curves with the Fourier decomposition method (e.g.Bhardwaj et al. 2015Bhardwaj et al. 
, 2017) ) and not rely on a particular set of templates.The phase-binned light curves were fitted with a A234, page 5 of 17 Fourier sine series fit in the following form: where m λ is the magnitude as a function of phase (φ λ ) at a given wavelength (λ).The mean magnitudes (m 0 ) and Fourier coefficients (A k , φ k ) were determined using a Fourier sine series with varying order of fit (k = 1-5).A minimum order of k = 1 was chosen to obtain purely sinusoidal fits, which are ideal for FO Cepheids, particularly at NIR wavelengths.Since the phase-binned light curves do not necessarily provide enough data points around their extrema, we find that a fifth order Fourier series was sufficient to fit the main saw-tooth characteristics and Hertzsprung progression (Hertzsprung 1926) features.Furthermore, limiting higher order fit to k = 5 also avoided possible numerical ringing in case of larger phase gaps.The best order of fit was determined using the Baart's criteria (Baart 1982). Figure 5 displays the Fourier-fitted light curves in grizJHK s filters for FU and FO Cepheids with varying periods.The sawtooth light curve shape for a short-period (P = 3.86 days) FU Cepheid and flatter maxima for Cepheids with periods close to 10 days can be noted despite the phase-binning.Most Cepheids with periods longer than 20 days typically have similar near-sinusoidal light curve shapes with decreasing amplitude as a function of period.In the case of FO Cepheids, sinusoidal Fourier series with k = 1 fitted well the majority of stars at all wavelengths.The best-fitting Fourier fits were used to derive intensity-averaged magnitude and peakto-peak amplitude for all Cepheids except for four stars in g band and one star in K s band.These Cepheids with lack of measurements in one filter did not have enough light curve phase points for a Fourier-fit or mean-magnitude determinations.The photometric mean magnitudes of all stars are listed in Table 1.In general, there is a good agreement between predicted and observed amplitudes for FU mode Cepheids with a maxima around log P ∼ 1.3 days.The short-period FO Cepheid models exhibit larger amplitudes than observations (De Somma et al. 2022).We note that the predicted amplitudes are sensitive to the adopted composition, mass-luminosity relations, and the convective efficiency in the pulsation models. Period-amplitude diagrams In grizJHK s period-amplitude diagrams, two low-amplitude FU Cepheids (open stars) with log P ∼ 1.4 days can be seen as outliers in Fig. 6.These variables were classified as FU Cepheids in the ASAS-SN survey, but their light curves show large scatter not typical of Cepheids.Given their small V-band amplitudes of less than 0.3 mag, no obvious periodic variations were seen in our photometry despite a good phase coverage.The amplitudes of phase-binned grizJHK s light curves are smaller than 0.1 mag, and these were not classified as variables in Gaia as well.We keep their classification as FU Cepheids since their mean magnitudes are well determined, and are used to probe their position on the PL relations.In NIR bands, four Cepheids with 1.1 < log(P) < 1.4 days exhibit relatively high amplitudes.These Cepheids have good-quality light curves and their independent G-band amplitudes also range between 0.78 and 1.12 mag. 
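Returning to the light-curve analysis of Sect. 3, the sketch below phase-folds a light curve, applies a sliding phase-binning similar in spirit to the procedure described above, and fits a low-order Fourier series m(phi) = m_0 + sum_k A_k sin(2*pi*k*phi + phi_k) by linear least squares. The bin width, number of steps, and fixed order are simplified assumptions; the paper selects the order with Baart's criterion and derives intensity-averaged (not magnitude-averaged) means.

```python
import numpy as np

def phase_fold(time, period, epoch_max=0.0):
    """Pulsation phase in [0, 1) relative to the epoch of maximum light."""
    return np.mod((time - epoch_max) / period, 1.0)

def sliding_phase_bins(phase, mag, mag_err, n_steps=15, width=0.1):
    """Weighted sliding means in phase; bins wrap around phase 0/1."""
    centres = np.arange(n_steps) / n_steps
    binned_phase, binned_mag, binned_err = [], [], []
    for c in centres:
        d = np.abs((phase - c + 0.5) % 1.0 - 0.5)   # distance on the phase circle
        sel = d < width / 2.0
        if sel.sum() < 2:
            continue
        w = 1.0 / mag_err[sel] ** 2
        binned_phase.append(c)
        binned_mag.append(np.sum(w * mag[sel]) / np.sum(w))
        binned_err.append(np.std(mag[sel]) / np.sqrt(sel.sum()))
    return np.array(binned_phase), np.array(binned_mag), np.array(binned_err)

def fourier_sine_fit(phase, mag, order=4):
    """Least-squares fit of m0 + sum_k [a_k sin(2 pi k phi) + b_k cos(2 pi k phi)].

    The (a_k, b_k) pairs are equivalent to amplitudes A_k and phases phi_k.
    """
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols += [np.sin(2 * np.pi * k * phase), np.cos(2 * np.pi * k * phase)]
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, mag, rcond=None)
    return coeffs, design @ coeffs

# Hypothetical usage with a synthetic, mildly asymmetric light curve:
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 120))
phi = phase_fold(t, period=3.86)
mag = 12.0 + 0.35 * np.sin(2 * np.pi * phi) + 0.12 * np.sin(4 * np.pi * phi + 0.8)
mag += rng.normal(0.0, 0.03, t.size)
bp, bm, be = sliding_phase_bins(phi, mag, np.full(t.size, 0.03))
coeffs, model = fourier_sine_fit(bp, bm, order=3)
print(f"mean magnitude from the fit: {coeffs[0]:.3f} mag")
```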
In general, the decrease in amplitudes from the shortestperiod to 10 days, an increase in amplitude with periods up to A234, page 6 of 17 20 days, and a reverse trend for the longer period Cepheids is also seen, which is typical for FU Cepheids (Bhardwaj et al. 2015).The separation of FU and FO Cepheids on the periodamplitude plane is noted for all wavelengths, but is most distinct in K s -band.However, two FU Cepheids (open stars) with P < 10 days and K s -band amplitudes of less than 0.1 mag are also populating FO amplitude cluster in all other bands.These Cepheids (5718760258978008320, 2936274153063501824) were classified by Gaia as FU mode and have periods of 4.70 and 6.39 days, and G-band amplitudes of 0.35 and 0.20 mag, respectively. Gaia parallaxes The Gaia astrometric parallaxes were obtained for all Cepheids from Gaia Collaboration (2023b) to calibrate multiband PL and PW relations.A few of these Cepheids are in the Galactic disk and anti-centre direction, and therefore the parallaxes of these distant targets have large uncertainties.The median error on parallaxes is 9.4%, but 15 Cepheids have parallax uncertainties of >25%.Two Cepheids in our sample have negative parallaxes, and therefore the subsequent analysis is A234, page 8 of 17 carried out in the parallax space.Several studies have found that the Gaia parallaxes are systematically smaller resulting in larger distances (Arenou et al. 2018;Groenewegen 2018;Riess et al. 2021;Bhardwaj et al. 2021;Molinaro et al. 2023).Lindegren et al. (2021) provided a parallax zero-point correction recipe that derives a shift for an individual star based on its magnitude, colour, and ecliptic latitude.The parallax correction varies between −0.075 and 0.023 mas for Cepheids in our sample.However, these parallax corrections are now known to over-correct parallaxes resulting in smaller distances (Riess et al. 2021;Cruz Reyes & Anderson 2023;Molinaro et al. 2023).We adopted Gaia parallaxes after applying the suggested corrections by Lindegren et al. (2021) and include an over-correction offset of 0.014 mas from Riess et al. (2021).The parallax uncertainties were increased by 10% following Riess et al. (2021) and include parallax correction uncertainty of 0.005 mas added in quadrature (Lindegren et al. 2021). In addition to the uncertainties on parallaxes, we also looked at Gaia astrometric quality flags: renormalized unit weight error (RUWE) and goodness of fit (GOF).There are three stars with RUWE > 1.4 and one of these also has a GOF = 13.98 exceeding a threshold of 12.5 adopted by Riess et al. (2021).The largest RUWE among these three Cepheids is 1.55, and their parallax uncertainty are <15%.Moreover, six Cepheids also fall near the sharp inflation point in the Lindegren et al. (2021) formulae at G = 11 mag, and their parallax corrections may have larger uncertainties.Given that our sample size is modest, we do not exclude these Cepheids from our analysis but treat them cautiously when deriving PL relations. Reddening and extinction corrections Groenewegen (2018) provided reddening values for more than 450 Cepheids that were primarily taken from Fernie et al. (1995).However, there are only two Cepheids (X Sct and V5567 Sgr) in our sample in common with Groenewegen (2018).Ripepi et al. (2021) derived new period-colour relations to estimate their intrinsic colours, and therefore reddening values.We used Eq.(4) of Ripepi et al. 
(2021) to derive intrinsic (V − I) 0 colour for all Cepheids.For apparent (V − I) colours, we first used photometric transformations from Pancino et al. (2022) to derive V and I band magnitudes using Gaia photometric data.These photometric transformations based on homogeneous Gaia data have been used to provide accurate magnitudes and colours in Johnson-Kron-Cousins photometric systems (Trentin et al. 2023) and were preferred over the heterogeneous literature compilations.The apparent V and I magnitudes were obtained for Cepheids which have (BP − RP) colours within the recommended range for photometric transformations (Pancino et al. 2022).The colour-excess values in (V − I) were converted using E(V − I) = 1.28E(B − V) (Tammann et al. 2003).Therefore, reddening E(B − V) values were obtained for 76 Cepheids, which vary between 0.092 and 2.186 mag.The uncertainty in the empirical equation used to obtain intrinsic colours of Cepheids were propagated to the errors in the reddening values. Recently, Breuval et al. (2022) used Bayestar19 3 threedimensional reddening maps from Green et al. (2019), and period-colour relations from Riess et al. (2022b) to derive reddening values for 222 Cepheids.There is only one star in common with Breuval et al. (2022) sample.We also obtained E(B−V) G19 reddening values for 70 Cepheids from Bayestar19 3 http://argonaut.skymaps.info/usage maps which varied between 0.145 and 4.017 mag.A comparison of the two sets of reddening values for 67 Cepheids suggests a median difference of E(B − V) G19 − E(B − V) = −0.02mag with a scatter of 0.10 mag after excluding six stars with the largest differences.If we scale the Bayestar19 reddening values by a scale factor of 0.884 as recommended by Green et al. (2019), the median difference increases to −0.11 mag.Recently, Narloch et al. (2023) found that the Bayestar19 reddening maps are inadequate for Cepheids in their sample because the reddening values resulted in unexpectedly large scatter in the resulting PL relations.Therefore, we adopted the reddening values based on period-colour relations. To correct the magnitudes for extinction, we adopted the Fitzpatrick (1999) reddening law assuming an R V = 3.1.The total-to-selective absorption ratios were estimated using the dust_extinction 4 python package.The effective central wavelength corresponding to each filter was adopted from the Spanish Virtual Observatory filter profile service 5 .The absorption ratios in different filters are provided in Table 2.These absorption ratios were used together with the E(B − V) values to apply extinction corrections to the mean-magnitudes at all wavelengths. Period-luminosity relations The parallax uncertainties for Cepheids in our sample are large (>25%) for 17 stars and two stars have negative parallaxes.Moreover, 12 Cepheids have distances larger than 10 kpc.We did not adopt distances from Bailer-Jones et al. 
( 2021) for which priors based on a three-dimensional model of the Galaxy become dominant for these distant targets.Therefore, instead of converting apparent magnitudes to absolute magnitudes using geometric distances, we decided to work in the parallax space to derive 4 https://dust-extinction.readthedocs.io/en/stable/ 5http://svo2.cab.inta-csic.es/theory/fps/A234, page 9 of 17 PL relations.We derived astrometry-based luminosity (ABL, Feast & Catchpole 1997;Arenou & Luri 1999) defined as: ABL = ω (mas) 10 0.2m λ −2 = 10 0.2(α λ +β λ (log P i −log P 0 )) , (1) where m λ is the extinction corrected magnitude at a given wavelength for a Cepheid with period (P i ).The period (P 0 ) at which the zero-point is determined, is adopted at log P 0 = 1.0 days for FU Cepheids and log P 0 = 0.4 days for FO Cepheids.Since our sample size of each subtype is small, we also considered a combined sample of Cepheids by fundamentalizing the periods of FO Cepheids using the equation: P FU = P FO /(0.716−0.027log P FO ) from Feast & Catchpole (1997).The absolute zero-point was also obtained at log P 0 = 1.0 days for this combined sample. While fitting these relations in the form of Eq. ( 1), we iteratively removed the single largest outlier in each iteration until all residuals are within ±3σ, where σ represents the root-mean square (rms) error.We created 10 4 random realizations of ABL fits to derive coefficients and their associated uncertainties.In this procedure, we also included the parallax zero-point offset error of 6 µas (Riess et al. 2021). Figure 7 displays the ABL of Cepheids in multiple bands and the results of the best-fitting relation in the form of Eq. ( 1) are listed in Table 3.A clear decrease in the scatter and PL residuals is seen moving from optical to NIR wavelengths.Among the Cepheids that have metallicity measurements, the residuals are larger for metal-poor stars ([Fe/H] < −0.3 dex).This A234, page 10 of 17 suggests that the metal-poor stars have higher ABL and fainter absolute magnitudes for a given period.This is consistent with recent results of a negative metallicity coefficient of PL relations in the literature (e.g.Breuval et al. 2022;Molinaro et al. 2023;Bhardwaj et al. 2023).However, the metal-poor stars, most of which are more distant, also have larger parallax uncertainties.We note that the PL relation in the g band is not shown in Fig. 7, but exhibits trends similar to other optical filters.The results given in Table 3 show that the slopes of PL relations are steeper at NIR wavelengths than optical bands, a trend that is typical for both Cepheid and RR Lyrae stars (Bhardwaj 2020). Period-Wesenheit relations The lack of accurate and independent reddening values of Cepheids in our sample can impact their PL relations.The Wesenheit magnitudes, which include a colour-term, are constructed to be reddening independent (Madore 1982).Given a reddening law, the coefficient of the colour-term of a twoband Wesenheit magnitude is derived from the total-to-selective absorption parameters at those wavelengths.We adopted the Fitzpatrick (1999) reddening law and derived PW relations for different combination of bandpasses as listed in Table 2. 
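A minimal sketch of the astrometry-based luminosity fit of Eq. (1) is given below: the observed quantity ABL = parallax[mas] * 10^(0.2*m - 2) is formed from the parallax and the extinction-corrected magnitude, and the zero-point and slope are obtained with a non-linear least-squares fit combined with iterative removal of the single largest >3-sigma outlier. The Monte Carlo error propagation, the parallax zero-point offset handling, and the FO fundamentalization of the paper are omitted here; all numbers in the usage example are made up, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def abl_observed(parallax_mas, mag):
    """Astrometry-based luminosity: ABL = parallax[mas] * 10**(0.2*m - 2)."""
    return parallax_mas * 10.0 ** (0.2 * mag - 2.0)

def abl_model(logP, alpha, beta, logP0=1.0):
    """Model ABL = 10**(0.2 * (alpha + beta * (logP - logP0)))."""
    return 10.0 ** (0.2 * (alpha + beta * (logP - logP0)))

def fit_abl(logP, parallax_mas, parallax_err_mas, mag, n_clip=10):
    """Fit (alpha, beta), iteratively dropping the single largest >3-sigma outlier."""
    abl = abl_observed(parallax_mas, mag)
    abl_err = abl_observed(parallax_err_mas, mag)   # ABL error scales linearly with the parallax error
    keep = np.ones(logP.size, dtype=bool)
    popt = (-4.0, -3.0)
    for _ in range(n_clip):
        popt, _ = curve_fit(abl_model, logP[keep], abl[keep],
                            p0=popt, sigma=abl_err[keep], absolute_sigma=True)
        res = abl[keep] - abl_model(logP[keep], *popt)
        rms = np.sqrt(np.mean(res ** 2))
        worst = np.argmax(np.abs(res))
        if np.abs(res[worst]) <= 3.0 * rms:
            break
        keep[np.flatnonzero(keep)[worst]] = False
    return popt, keep

# Hypothetical usage with made-up numbers:
logP = np.array([0.6, 0.9, 1.0, 1.2, 1.5])
plx = np.array([0.80, 0.45, 0.40, 0.25, 0.15])        # mas
plx_err = np.array([0.03, 0.04, 0.04, 0.05, 0.05])    # mas
m_Ks = np.array([7.9, 8.7, 8.8, 9.5, 10.0])           # extinction-corrected magnitudes
(alpha, beta), used = fit_abl(logP, plx, plx_err, m_Ks)
print(f"alpha = {alpha:.2f} mag, beta = {beta:.2f} mag per dex in logP, N = {used.sum()}")
```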
Figure 8 displays ABL for the PW relations for Cepheids in our sample.These relations are apparently tighter than the PL relations suggesting that the reddening values based on period-colour relations are not very accurate.The trend of larger residuals for metal-poor stars can also be seen in the bottom panels similar to PL relations.Table 4 lists the coefficients of PW relations.We tested the PL and PW relations listed in Tables 3 and 4 for different samples after excluding light curves with poor quality flags, Cepheids with RUWE > 1.4 and GOF > 12.5, and applying different outlier rejection thresholds.No statistically significant variations are seen in the coefficients of the PL/PW relations.If we restrict the sample to parallax uncertainties smaller than 15% and/or exclude stars beyond 10 kpc, the coefficients of PL relations vary within 1σ of their quoted uncertainties, which increase with lower statistics.While using reddening values from the maps of Green et al. (2019), the PL relations exhibit significantly larger scatter, as noted by Narloch et al. (2023).The dominant source of scatter in our PL and PW relations is due to large parallax uncertainties, which are expected to improve in the future Gaia data releases.between the slopes and the wavelength across all filters, as noted by Trentin et al. (2024).The slopes of ri band PL relations for the combined sample of Cepheids are in excellent agreement with the slopes of empirical PL relations for FU Cepheids derived by Narloch et al. (2023).The slope of g band PL relation in this work is steeper than that derived by Narloch et al. (2023), but the difference is still within 2σ of their quoted uncertainties.Di Criscienzo et al. (2013) provided theoretical PL relations in Sloan filters based on non-linear pulsation models representative of Cepheids in our Galaxy.The authors found their predicted slopes to be mildly steeper than the empirical slopes of PL relations derived using photometric transformation from traditional Johnson-Cousins to Sloan bands.The theoretical griz band slopes in Di Criscienzo et al. (2013) are also steeper for shorter period variables (log P < 1.0 days) than those for longer period stars.The slopes of riz band PL relations in this work are in good agreement with the theoretically predicted slopes for the entire period range models, but are sig-nificantly steeper in the g-band.The slopes of our NIR PL relations for Cepheids are in good agreement with the results of Breuval et al. (2022), Trentin et al. (2024), and Bhardwaj et al. (2023).The slopes of G, BP and RP band PL relations are in agreement with those from Breuval et al. (2022).In the case of optical PW relations, the adopted filter combinations and/or the color-coefficients are different in this work as compared to Narloch et al. (2023).Nevertheless, the slopes of W gr PW relations are in good agreement when we consider the combined sample of Cepheids.The slope of W G P-W relation is similar to the one derived by Breuval et al. (2022), while it is consistent within 2σ for the W JK s PW relation. Metallicity dependence of the Leavitt law The influence of metallicity on the absolute magnitudes of Cepheid variables needs to be properly quantified because it is a crucial parameter in improving the overall fit of the cosmic A234, page 12 of 17 distance ladder used to measure the Hubble constant (Riess et al. 2022a).Most of the recent studies have measured a negative sign for the metallicity coefficient of PL relation (Gieren et al. 2018;Riess et al. 
2021;Ripepi et al. 2021;Breuval et al. 2022;Bhardwaj et al. 2023).While our sample of stars with spectroscopic metallicities is rather small, it is based on a homogeneously collected high-resolution spectroscopic dataset within the framework of the C-MetaLL survey, thus minimizing systematics in combining different literature [Fe/H] measurements. The residuals of PL and PW relations in Figs.7 and 8 show a clear trend as a function of metallicity such that metal-poor stars exhibit larger residuals.Figure 2 displays metallicity distribution of 65 stars with available [Fe/H] values for our sample of Cepheids.The most metal-poor star ([Fe/H] = −1.66dex) in our sample comes from Gaia and has poor RVS quality flags.There are four more stars with similar quality flags and an assigned error of 0.5 dex (Trentin et al. 2024), which were excluded from this analysis.Therefore, our sample consists of 61 stars including 33 FU and 28 FO Cepheids.The [Fe/H] values of these Cepheids are between −1.11 and 0.6 dex.Given small samples of both subtypes, we only derive PLZ and PWZ relations for the combined sample of Cepheid variables.A metallicity term is added to Eq. ( 1) in the following form: ABL = ω (mas) 10 0.2m λ −2 = 10 0.2(α λ +β λ (log P i −log P 0 )+γ λ [Fe/H]) . (2) In order to better constrain the metallicity coefficients, we assume that the slope of the PL relation does not change with metallicity.This is a basic assumption that is applied when measuring extragalactic distances using Cepheid variables.However, the metallicity can also affect the slope of the PL relations at different wavelengths (e.g.Ripepi et al. 2021;Trentin et al. 2024). Given a small sample size, we assume a fixed slope of PL/PW relation from Tables 3 and 4 in Eq. (2) at a given wavelength.Therefore, we only solve for the absolute zero-point and the metallicity coefficient as free parameters. Table 5 lists the coefficients of PLZ and PWZ relations.The metallicity coefficient varies significantly between −0.62 ± 0.14 mag dex −1 in G band and −0.30 ± 0.11 mag dex −1 in K s band.These metallicity coefficients are systematically larger than those determined by Breuval et al. (2022) and Bhardwaj et al. (2023), but are more in agreement with previous results from the C-MetaLL survey (Ripepi et al. 2021;Trentin et al. 2024).We note that the metallicity range of MW Cepheid sample was rather small in Breuval et al. (2022) and Bhardwaj et al. (2023) with a mean value of 0.09 dex (dispersion of 0.12 dex) and 0.04 dex (dispersion of 0.09 dex), respectively.In contrast, the mean metallicity of the sample used in this work is −0.18 dex (median value of −0.11 dex) and a three times larger dispersion of 0.37 dex than Breuval et al. 
(2022) sample.If we restrict the sample to 38 stars with [Fe/H] values larger than −0.3 dex, the metallicity coefficients decrease (in absolute sense) considerably but exhibit larger uncertainties.For example, the metallicity coefficient in r-band becomes −0.20±0.24mag dex −1 from a value of −0.46 ± 0.14 mag dex −1 quoted in Table 5.These coefficients are almost zero in the PLZ/PWZ relations involving the K s band when most metal-poor stars ([Fe/H] < −0.3 dex) are excluded.While the metallicity coefficients listed in Table 5 are weakly constrained due to lower statistics, it is important to probe this dependence more in detail.These large metallicity coefficients of the order of −0.5 mag dex −1 can lead to significant biases of 0.05 mag in distance modulus determinations if the mean metallicities of the calibrator and target Cepheid PL relations differ by even ∆[Fe/H] = 0.1 dex.and ∆(ABL) = 0.048 for metallicities higher (metal-rich) and lower (metal-poor) than −0.3 dex, respectively.When the metallicity term is included, the median residual of metal-poor stars decreases to ∆(ABL) = 0.014, while it remains the same for metal-rich stars.Similarly, the median residuals of the ABL fits for W JK s Wesenheit relations decrease from ∆(ABL) = 0.021 to ∆(ABL) = 0.012 for metal-poor stars.However, this is also expected since the median metallicities for metal-rich and metalpoor stars are 0.10 and −0.51 dex, respectively.Therefore, the contribution to average residuals due to metallicity coefficients is smaller for metal-rich stars than for metal-poor stars.Nevertheless, it is difficult to separate the contribution of metallicity effects and parallax uncertainties to the scatter in the residuals seen for metal-poor stars in Fig. 10.If we exclude stars with larger parallax uncertainties, there are not enough metal-poor Cepheids in the sample for a proper quantification of metallicity coefficient of PL/PW relations. Metallicity coefficient as a function of wavelength We compared the metallicity coefficients of PLZ/PWZ relations with recent determinations in the literature.Figure 11 displays metallicity coefficient as a function of wavelengths.Breuval et al. (2022) fitted a linear regression between γ and 1/λ and concluded that the metallicity effect is uniform over a wide A234, page 13 of 17 range of wavelength.We do not see a strong linear correlation between metallicity coefficient and wavelength.The PLZ relations at longer wavelengths seem to suggest a marginally smaller metallicity term than at shorter wavelengths, but the uncertainties on γ values are larger for our sample of stars.The metallicity coefficients derived in this work are consistent with earlier results of Ripepi et al. (2021) and Trentin et al. (2024) within the C-MetaLL survey.We note that the approach of using indi-vidual metallicities of MW Cepheids is the same in these studies.In contrast, the results of Gieren et al. (2018), Breuval et al. (2022), andBhardwaj et al. 
(2023) are based on a comparison of intercepts of PL relations for Cepheids in the MW and the Magellanic Clouds.The average value of the metallicity coefficient (−0.28 ± 0.07 mag dex −1 ) derived in these studies is systematically smaller than those listed in Table 5.In contrast, the mean value of the metallicity coefficient (−0.47 ± 0.08 mag dex −1 ) A234, page 14 of 17 2024) is significantly larger.However, these metallicity coefficients are also based on a sample of Cepheids covering a wide range of metallicities.Nevertheless, the metallicity coefficients of our PLZ/PWZ relations are in agreement with most of these measurements given their large uncertainties due to a small sample of Cepheid variables. Parallax correction and the uncertainty in the zero-point As shown in the previous subsections, the sample presented in this work is not optimal for the accurate and precise calibration of PLZ and PWZ relations due to low statistics, large parallax uncertainties, and the lack of accurate reddening measurements.Nevertheless, we investigate the reliability of the large metallicity coefficients and the zero points of the calibrated PLZ and PWZ relations.When compared with the results of Bhardwaj et al. (2023), the zero-points of K s band PLZ, and W J,K s /W G PWZ relations are nearly ∼0.25 mag brighter in this work.We note that both studies adopted the same parallax zero-point offset correction of −14 µas (Riess et al. 2021), but Bhardwaj et al. (2023) sample is based on MW Cepheid standards that have accurate parallaxes, low reddening, and a limited metallicity range.Therefore, the readers interested in determining Cepheid-based distances are recommended to use the calibrated relations provided in Bhardwaj et al. (2023) for metallicities closer to solar value, and those in Trentin et al. (2024) for a wide-range of metallicities. In the recent work within the C-MetaLL survey, Trentin et al. (2024) suggested that a larger parallax zero-point offset correction results in a larger distance measurement.The authors found that both the slope and the intercept of PLZ/PWZ relation increases by 1−4% and the metallicity coefficient decreases (in absolute sense) if a larger parallax offset correction is adopted.For the W J,K s Wesenheit, Trentin et al. (2024, Table 3) found the coefficients of PWZ relation: α = −6.09± 0.02 mag, β = −3.29 ± 0.03 mag dex −1 , γ = −0.45± 0.05 mag dex −1 , with no parallax zero-point offset correction.When compared with the results in Table 5 based on a parallax zero-point offset correction of −14 µas, indeed the slope and the intercept increase and the metallicity coefficient decreases (in absolute sense), in agreement with Trentin et al. (2024).However, the zero-point is significantly brighter by ∼0.25 mag, but this difference becomes smaller when no parallax zero-point offset correction is adopted. For this latter case, we find the coefficients of W J,K s PWZ relation: α = −6.25 ± 0.04 mag, β = −3.42± 0.07 mag dex −1 , γ = −0.43 ± 0.10 mag dex −1 .We note the slope is fixed in our analysis, and the metallicity coefficient is now in excellent agreement with Trentin et al. (2024).However, the zero-point is still significantly brighter by ∼0.16 mag, due to larger parallax uncertainties.In terms of the distance to the LMC, Trentin et al. (2024, Table 3) found a value of 18.41 ± 0.02 mag using W J,K s Wesenheit.For the LMC Cepheids, we adopt the zero-point of 12.46 ± 0.05 mag for W J,K s Wesenheit from Bhardwaj et al. (2016) based on the data from Macri et al. 
(2015), and a meanmetallicity of −0.41 ± 0.02 (Romaniello et al. 2022).We find a LMC distance of 18.53 ± 0.06 mag for the no parallax zero-point offset correction.However this value of LMC distance increases to 18.72 ± 0.06 mag for the calibrated PWZ relation listed in Table 5 due to a smaller metallicity term and a brighter absolute zero-point.With the sample presented in this work, it is not possible to separate the contribution of parallax uncertainties (and parallax zero-point offset correction) and metallicity effects on the absolute calibration of PLZ and PWZ relations. Summary We presented new homogeneously collected light curves of 78 MW Cepheid variables at optical Sloan (griz) and NIR (JHK s ) wavelengths.These observations were obtained simultaneously using the REM telescope and present the light curves in z band for FU Cepheids and in griz band for FO Cepheids for the first time.Multiband light curves of Cepheids were obtained primarily to complement the high-resolution spectroscopic metallicities within the framework of the C-MetaLL survey (Ripepi et al. 2021).The sample of 78 Cepheids includes 49 FU and 29 FO mode variables.The light curves of Cepheids were fitted with Fourier sine series to determine accurate mean magnitude and peak-to-peak amplitudes, which were used to investigate their pulsation properties.The period-amplitude diagrams were presented for the time in Sloan filters for both subtypes of Cepheids.The mean magnitudes were used to derive PL and PW relations for Cepheid variables using their astrometry-based luminosity at multiple wavelengths. Cepheid variables in the present study are located at a wide range of distances between 0.5 and 19.7 kpc, and therefore their Gaia parallaxes exhibit varying uncertainties with larger errors for distant objects.Moreover, some of these Cepheids are located A234, page 15 of 17 in the Galactic disk and anti-centre direction and are significantly reddened with colour-excess values exceeding 1 mag.There are no reddening values available in the literature for most of these Cepheids, and thus the colour-excess values were obtained by determining their intrinsic colours using the empirical periodcolour relations for Cepheid variables (Tammann et al. 2003;Ripepi et al. 2021).The larger uncertainties in parallaxes and reddening values and a modest sample size limits the accuracy and precision of the PL relations for these variables.The reddening uncertainties can be mitigated by employing the Wesenheit magnitudes, which results in tighter PW relations, in particular, at shorter optical wavelengths where extinction is more severe. In addition to parallax uncertainties, metallicity variations can also contribute to the scatter in the empirical PL and PW relations.Homogeneous high-resolution spectroscopic metallicities for 59 of 78 Cepheids have already been collected as part of the C-MetaLL survey.In addition, six Cepheids have medium-resolution spectra from Gaia-RVS (Recio-Blanco et al. 
2023), two of which have reliable quality flags.We investigated the residuals of astrometry-based luminosity with PL/PW relation fits as a function of their metallicities, if available.The residuals exhibit a clear trend with metallicity such that metal-poor stars have higher astrometry-based luminosity and fainter absolute magnitudes.This trend becomes significant for [Fe/H] < −0.3 dex, where the parallax uncertainties are also larger.When deriving PLZ and PWZ relations, the metallicity coefficient of these relations varies between −0.30 ± 0.11 mag dex −1 in K s band to −0.55 ± 0.12 mag dex −1 in z band.While these empirical metallicity coefficients are weakly constrained due to the small sample size, they are systematically larger than previous determinations at optical and NIR filters.We compared the residuals of PL/PW and PLZ/PWZ relations to further investigate the impact of including the metallicity term, and found that the metallicity contribution predominantly affects the residuals for the most metal-poor Cepheids ([Fe/H] < −0.3 dex).The metallicity effect becomes smaller if we exclude these metal-poor Cepheids from the sample, but the uncertainties on the metallicity coefficient increase due to low statistics.The small sample size prevents us from separating the contribution to the scatter in the PL/PW relations due to the metallicity term and parallax uncertainties.These large metallicity coefficients and bright zero-points should be treated cautiously, and a larger sample of metal-poor Cepheids with more accurate parallaxes is needed to confirm these relatively large metallicity coefficients of the PLZ/PWZ relations. The aim of the ongoing C-MetaLL survey is to increase the sample of Cepheids with homogeneous photometric and spectroscopic data to a few hundred stars, which is ideal for the absolute calibration of the Leavitt law and a proper quantification of metallicity effects at multiple wavelengths.In addition, the extension of the photometric data presented in this paper will also be useful for several other scientific goals, for example investigating the light curve structure of these variables at multiple wavelengths.The simultaneous optical and NIR light curves will be useful for a quantitative comparison with the theoretically predicted light curves, thus providing strong constraints for the input parameters to the pulsation models.Multiwavelength data will also be used to constrain the reddening values for Cepheid variables in a future study.Moreover, light curves in the optical Sloan filters will serve as templates for the identification and classification of Cepheid variables in the Vera C. Rubin Observatory's Legacy Survey of Space and Time. Fig. 1 . Fig. 1.Spatial distribution of classical Cepheids used in this work in Galactic coordinates.The smaller symbol sizes represent distant Cepheids.Open circles represent stars with no metallicity measurements.The colour bar represents metallicity. Fig. 2 . Fig.2.The period and metallicity distributions, and period-Wesenheit relation for MW Cepheids.Top: period distribution of 80 candidate Cepheids in the sample studied in this work.The filled and lined histograms correspond respectively to FU and FO Cepheids.Middle: histogram of metallicities for 65 Cepheids, of which 59 have homogeneous high-resolution spectroscopic metallicities from the C-MetaLL survey(Ripepi et al. 2021;Trentin et al. 
2023).Bottom: period-Wesenheit relation in Gaia filters for the sample of 80 stars.The circles and squares represent fundamental and first-overtone mode Cepheids, respectively.Two extreme outliers are marked with their Gaia DR3 source ID. 3. 1 .A234Fig. 3 . Fig. 3. Comparison of the calibrated mean magnitudes of Cepheids in this work with the magnitudes from APASS DR10 (in griz) and 2MASS (in JHK s ).The dashed and dotted lines respectively represent zero and mean offset.The mean (and the standard deviation) offsets for the bright (K s < 8.5 mag) and faint (K s ≥ 8.5 mag) Cepheids are also mentioned at the top left and bottom right of each panel.The filled circles and squares represent FU and FO Cepheids, respectively. Fig. 4 . Fig. 4. Example light curves of FU (top) and FO mode (bottom) Cepheids based on all the photometric data (in grey) from the dithered frame images.The light curves are shown in two optical (gi) and two NIR (JK s ) filters.The black circles represent the weighted mean values in a given phase bin and the errors represent their standard deviations.The range of y-axis is the same in each band for a given Cepheid.The Gaia source ID, pulsation mode, and the periods are listed at the top of each panel. Figure 6 Figure6displays period-amplitude diagrams for Cepheids at multiple wavelengths.For reference, the Gaia G band amplitudes, if available, are shown in the first panel.The predicted G band amplitudes for Cepheid models representative of MW variables with Z = 0.02, Y = 0.28 are also shown for different masses (M = 4−11 M ).These models were computed adopting canonical mass-luminosity relation and a fixed mixing length parameter (α = 1.5) for both FU and FO mode Cepheids (see DeSomma et al. 2022, for details).In general, there is a good agreement between predicted and observed amplitudes for FU mode Cepheids with a maxima around log P ∼ 1.3 days.The short-period FO Cepheid models exhibit larger amplitudes than observations(De Somma et al. 2022).We note that the predicted amplitudes are sensitive to the adopted composition, mass-luminosity relations, and the convective efficiency in the pulsation models.In grizJHK s period-amplitude diagrams, two low-amplitude FU Cepheids (open stars) with log P ∼ 1.4 days can be seen as outliers in Fig.6.These variables were classified as FU Cepheids in the ASAS-SN survey, but their light curves show large scatter not typical of Cepheids.Given their small V-band amplitudes of less than 0.3 mag, no obvious periodic variations were seen in our photometry despite a good phase coverage.The amplitudes of phase-binned grizJHK s light curves are smaller than 0.1 mag, and these were not classified as variables in Gaia as well.We keep their classification as FU Cepheids since their mean magnitudes are well determined, and are used to probe their position on the PL relations.In NIR bands, four Cepheids with 1.1 < log(P) < 1.4 days exhibit relatively high amplitudes.These Cepheids have good-quality light curves and their independent G-band amplitudes also range between 0.78 and 1.12 mag.In general, the decrease in amplitudes from the shortestperiod to 10 days, an increase in amplitude with periods up to Fig. 5 . 
Fig. 5. Representative phase-binned light curves of FU and FO Cepheids with varying periods at all wavelengths (grizJHK_s). The magnitudes in the g, r, J, H bands were offset by −0.5, −0.2, −0.1, +0.1 for visualization purposes. The best-fitting Fourier series fits are also shown as dashed lines. The Gaia Source ID, pulsation mode, and period are listed at the top of each panel. The light curve quality flag is given at the bottom left of each panel. The uncertainties in magnitudes at a given phase point also include the scatter at that phase in the original light curves.

Fig. 6. Period-amplitude diagrams for Cepheids in multiple bands. The filled circles and squares represent FU and FO Cepheids, respectively. In the case of the G band, the theoretically predicted amplitudes for metallicities (Z = 0.02) representative of Cepheids in the MW are also shown for different masses (De Somma et al. 2022). The solid and dashed lines represent FU Cepheid models, while asterisks show FO models. In grizJHK_s bands, overplotted Cepheids in open star symbols are discussed in Sect. 3.3.

Fig. 7. Astrometry-based luminosity of FU (circles) and FO (squares) Cepheids as a function of pulsation period with magnitudes in rizJHK_s bands is shown in the top part of each subpanel. The dashed and dotted lines represent the best-fitting relation for FU and FO mode Cepheids, respectively. The middle plot in each subpanel displays the residuals of the best fit as a function of period. The dashed lines show the ±3σ scatter for FU and FO mode Cepheids. The bottom plot in each subpanel shows the residuals as a function of metallicity, if available. The dashed lines in the bottom two subpanels represent zero-residual variation. The partially filled symbols represent outliers, while empty symbols represent no metallicity measurement. The colour bar represents metallicity.

Figure 9 displays a comparison of slopes of PL relations in optical and NIR filters with previously reported values in the recent literature. The typical trend of steeper slopes at NIR wavelengths is seen. However, we did not find a distinct linear relation

Fig. 9. Comparison of slopes of PL relations at different wavelengths. The slopes of the Sloan band PL relations are compared with empirical (gri) and theoretical (griz) PL relations from Narloch et al. (2023) and Di Criscienzo et al. (2013), respectively. The slopes of the G, BP, RP, V, I, J, H, and K_s band PL relations are from Breuval et al. (2022), Trentin et al. (2024), and Bhardwaj et al. (2023). The shaded regions display the ±1σ standard deviation around the mean value of the slopes for the wavelengths under consideration.

Table 1. Metallicities (Trentin et al. 2023) and multiband photometric intensity-averaged magnitudes of Galactic Cepheid variables. Notes. The Gaia DR3 Source IDs, periods, and pulsation modes (M) (fundamental, FU, or first-overtone, FO) are given in the first three columns. QF is the quality flag of the light curve (A, B, C for best, good, poor). E_BV = E(B − V). (a) The seven Cepheids that were not classified as variables in the Gaia DR3 (Ripepi et al. 2023); see Sect. 2.1 for details. The full table is available at the CDS. References. For metallicities: R21 (Ripepi et al. 2021), T22 (Trentin et al. 2023), RVS (Recio-Blanco et al. 2023).

Notes. The zero-point (α), slope (β), dispersion (σ_ABL) of ABL fits, and the number of stars (N) in the final PL relations are listed.
Notes. The zero-point (α), slope (β), dispersion (σ_ABL) of ABL fits, and the number of stars (N) in the final PW relations are listed.
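As a purely illustrative aside, the short sketch below shows how a PLZ-type fit in astrometry-based luminosity (ABL) space, of the kind summarized above, could be set up with standard Python tools. It assumes the conventional definition ABL = ϖ · 10^(0.2 m0 − 2) (parallax ϖ in mas, m0 the dereddened apparent magnitude) and models it as 10^(0.2 (α + β (log P − 1) + γ [Fe/H])); the synthetic input data, the pivot at log P = 1, and the weighted least-squares fit are assumptions made only for this sketch and do not reproduce the analysis of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy data standing in for a real Cepheid sample (all values are made up).
rng = np.random.default_rng(0)
n = 60
logP = rng.uniform(0.4, 1.6, n)                   # log10(P / days)
feh = rng.uniform(-1.0, 0.3, n)                   # [Fe/H] in dex
M_true = -5.5 - 3.2 * (logP - 1.0) - 0.4 * feh    # absolute magnitude of the toy stars
dist_kpc = rng.uniform(2.0, 8.0, n)
mag0 = M_true + 5.0 * np.log10(dist_kpc * 1e3) - 5.0   # dereddened apparent magnitude
plx_mas = 1.0 / dist_kpc + rng.normal(0.0, 0.02, n)    # noisy parallax in mas
plx_err = np.full(n, 0.02)

def abl_model(X, alpha, beta, gamma):
    """PLZ relation written in ABL space: 10**(0.2 * M_model)."""
    logP, feh = X
    return 10.0 ** (0.2 * (alpha + beta * (logP - 1.0) + gamma * feh))

# Observed ABL = parallax[mas] * 10**(0.2*m0 - 2); its error is dominated by the parallax.
abl_obs = plx_mas * 10.0 ** (0.2 * mag0 - 2.0)
abl_err = plx_err * 10.0 ** (0.2 * mag0 - 2.0)

popt, pcov = curve_fit(abl_model, (logP, feh), abl_obs,
                       sigma=abl_err, absolute_sigma=True, p0=(-5.0, -3.0, -0.3))
print("zero-point, slope, metallicity term:", popt)
print("1-sigma uncertainties:", np.sqrt(np.diag(pcov)))
```

Fitting in ABL space rather than in absolute magnitude keeps stars with small or even negative parallaxes usable and keeps the measurement errors approximately symmetric, which is one reason the dispersions quoted in the table notes above are given for ABL fits.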
13,869.8
2024-01-07T00:00:00.000
[ "Physics" ]
Strong continuity for the 2D Euler equations We prove two results of strong continuity with respect to the initial datum for bounded solutions to the Euler equations in vorticity form. The first result provides sequential continuity and holds for a general bounded solution. The second result provides uniform continuity and is restricted to Hölder continuous solutions. Introduction Let us consider the Euler equations for an incompressible fluid: In two space dimensions, the vorticity ω = curl u is a scalar and satisfies the continuity equation where the velocity u can be recovered from the vorticity using the Biot-Savart law: where K(x) = x^⊥/(2π|x|^2) is the Biot-Savart kernel. We refer to [12,13] for a comprehensive presentation of the existence and uniqueness theory for the Cauchy problem for the Euler equations. For the purposes of the present paper, it is sufficient to mention that [18] provides existence and uniqueness in the class of bounded vorticities if the initial vorticity ω̄ belongs to L^1 ∩ L^∞(R^2), while [10] establishes existence of a solution of (1) for initial vorticities in L^1 ∩ L^p(R^2), where p > 1 (the uniqueness question is however an open problem). A fundamental question in fluid dynamics is the continuity of the solution with respect to the initial datum. In the context of bounded vorticities, it is not difficult to prove continuity with respect to the Wasserstein norm (a weak norm arising in the theory of optimal mass transportation). The proof is based on the almost-Lipschitz continuity of the fluid trajectories, and the continuity estimate involves a double exponential function of the time (see [11,13]). It is more difficult to prove continuity estimates with respect to strong norms. To the best of the authors' knowledge, the first result in this context is [17], where stability in L^1 of a circular vortex patch was proven. Notice that a circular vortex patch is a stationary solution of the Euler equations. This result was generalized (and the proof simplified) in [14]. The only known extension to non-stationary solutions involves elliptic vortex patches [16,15], solutions with a rigid geometry that move with constant angular speed. Nothing is known about general vortex patches. In this note we provide two continuity results with respect to strong norms for non-stationary solutions to the Euler equations, without any geometric requirement on the shape of the solutions. Our first theorem holds for general bounded solutions ω of the Euler equations and provides strong convergence in space at time t > 0, provided that the initial data converge strongly. Theorem A. Let ω̄ ∈ L^1 ∩ L^∞(R^2) and let {ω̄_n}_n ⊂ L^1 ∩ L^2(R^2) be a sequence with ω̄_n → ω̄ strongly in L^2(R^2). Fix T < ∞ and for every n let ω_n ∈ C([0, T]; L^1 ∩ L^2(R^2)) be a solution of the Euler equations (2)-(3) with initial datum ω̄_n. Then First, we want to point out that the continuity in time with values in L^1(R^2) ∩ L^2(R^2) follows from the fact that for any n the vorticity ω_n is a renormalized solution of (2) and the velocity u_n is in the setting of [9], see Step 1 in the proof of Theorem A. Moreover, notice that in the above theorem we do not require boundedness of ω̄_n, therefore the Euler equations with such initial data may have more than one solution. Nevertheless, thanks to the uniqueness for the (limit) problem with bounded initial datum ω̄, the result holds for any sequence of solutions, and does not require the passage to a subsequence.
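As an illustration only (not connected to the proofs in this paper), the following minimal sketch evaluates the Biot-Savart law recalled above, u = K * ω with K(x) = x^⊥/(2π|x|²), for a smooth, effectively compactly supported test vorticity by direct quadrature on a uniform grid; the grid, the Gaussian test vorticity, and the simple midpoint rule with the singular self-term skipped are our own assumptions.

```python
import numpy as np

def biot_savart_velocity(omega, x, y):
    """Approximate u = K * omega, K(z) = z_perp / (2*pi*|z|^2), by midpoint quadrature."""
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    u = np.zeros_like(omega)
    v = np.zeros_like(omega)
    for i in range(len(x)):
        for j in range(len(y)):
            rx, ry = X[i, j] - X, Y[i, j] - Y
            r2 = rx**2 + ry**2
            r2[i, j] = np.inf                      # drop the singular self-contribution
            u[i, j] = np.sum(-ry / (2.0 * np.pi * r2) * omega) * dx * dy
            v[i, j] = np.sum(rx / (2.0 * np.pi * r2) * omega) * dx * dy
    return u, v

# Smooth test vorticity concentrated near the origin.
x = y = np.linspace(-2.0, 2.0, 64)
X, Y = np.meshgrid(x, y, indexing="ij")
omega = np.exp(-4.0 * (X**2 + Y**2))
u, v = biot_savart_velocity(omega, x, y)
print("max speed on the grid:", np.max(np.hypot(u, v)))
```

The kernel components follow from z^⊥ = (−z_2, z_1); refining the grid improves the quadrature away from the (integrable) singularity.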
The proof relies on the DiPerna-Lions theory of continuity equations with Sobolev velocity field [9], see also [4] for a general account of this research area. This proof will be presented in §2. Remark 1. If the limit vorticityω only belongs to L 1 ∩ L 2 (R 2 ) we only have convergence of a subsequence of ω n (t, ·) to some solution of the limit problem, since the latter has no uniqueness: Step 4 in the proof of Theorem A does not apply. See also [5] for a proof of the existence of solutions to the Euler equations via Lagrangian techniques. Remark 2. If we consider in Theorem A a sequence of initial data {ω n } n ⊂ L 1 ∩ L p (R 2 ) converging toω strongly in L p (R 2 ), with p > 2, then it is possible to prove the strong convergence in L p (R 2 ) of ω n (t, ·) to ω(t, ·). On the other hand, if we relax the integrability assumption on the sequenceω n to L 1 ∩ L p (R 2 ) for some p < 2, our proof breaks down in its full generality. Indeed, a vorticity in L p advected by a velocity in W 1,p loc with p < 2 does not fall in the context of [9], therefore existence of a flow as in Step 1 of the proof of Theorem A is not guaranteed (in fact, if p < 4/3 equation (2) does not even make distributional sense). One should consider a sequence of approximate solutions ω n that are a priori required to be Lagrangian. Note that [8] guarantees that solutions obtained via vanishing viscosity approximation are indeed Lagrangian. The above theorem provides sequential continuity (with respect to the L 2 norm) of the map that associates to the initial datum the solution at time t. This continuity property holds at every bounded solution. The rate of continuity may however depend on the solution itself. Our second result provides uniform continuity with an explicit convergence rate, provided we restrict our attention to slightly more regular solutions to the Euler equations (2)-(3): we require that the (compactly supported) initial data (and therefore the solution at any time) belong to some Hölder class C α c (R 2 ), where 0 < α < 1 is arbitrary. The proof of this theorem involves an interpolation argument in homogeneous fractional Sobolev spaces. Essentially, the Hölder regularity of the solution allows to "upgrade" weak estimates (as in [11,13]) to strong estimates. The proof will be presented in §3. Remark 3. Inequality (4) can be extended to L p norms with 1 ≤ p < ∞, although with a different value for the constants and the exponent. Acknowledgment. This research has been partially supported by the SNSF grants 140232 and 156112. Proof of Theorem A Step 1. Let us consider the velocity u n associated to the vorticity ω n as in (3). Decomposing the Biot-Savart kernel as K = K 1 + K 2 = K1 |x|≤1 + K1 |x|>1 and noting that K 1 ∈ L 1 (R 2 ) and K 2 ∈ L ∞ (R 2 ), we obtain with Young's inequality that In particular, formula (3) is well-defined in this summability context. Moreover, u n is divergence-free and (by elliptic regularity, since ω ∈ L ∞ ([0, T ]; L 2 (R 2 ))) belongs to L ∞ ([0, T ]; W 1,2 loc (R 2 )). The bounds above imply that we are in the setting of [9]: there exists a unique forwardbackward regular Lagrangian flow (i.e., in this context, an incompressible flow defined almost everywhere in space) X n = X n (s, t, x) associated to the velocity field u n , and the vorticity ω n is transported by such a flow, in the sense that ω n (t, x) =ω n (X n (0, t, x)) . Step 2. 
From the representation (5), together with the convergence of ω̄_n to ω̄, it follows that ω_n ∈ L^∞([0, T]; L^1 ∩ L^2(R^2)) uniformly in n. Therefore, along a subsequence we have ω_n(k) ⇀ w weakly* in L^∞([0, T]; L^1 ∩ L^2(R^2)). Moreover, all the bounds on u_n listed in Step 1 are uniform in n. Arguing as in [10, Theorem 1.2] we find a further subsequence (that we do not relabel) u_n(k) converging strongly in L^2_loc([0, T] × R^2) to a limit velocity v (notice that the convergence is strong also with respect to the time: this makes use of Aubin's lemma). One can readily check that v enjoys the same bounds as in Step 1 for the sequence u_n and that the couple (v, w) solves (2)-(3). Step 3. We can therefore apply the stability theorem from [9] (see also [7,6] for a purely Lagrangian proof of such a stability theorem, and [1,2,3], specific to the two-dimensional context). We obtain that the flows X_n(k) from Step 1 converge locally in measure in R^2, uniformly in t, s ∈ [0, T], to the unique forward-backward regular Lagrangian flow X associated to the velocity field v. Therefore strongly in L^2(R^2), uniformly for t ∈ [0, T] (here one can argue using Lusin's theorem and exploiting the incompressibility of the flows, see for instance the argument in [6, Propositions 7.2 and 7.3]). Hence, the weak limit w of ω_n defined in Step 2 is in fact a strong limit and coincides with ω̄(X(0, t, x)). Step 4. The representation in (6) entails that w is a bounded function that solves (2)-(3), therefore by uniqueness it coincides with the solution ω in the statement of the theorem. By uniqueness of the limit the whole sequence ω_n(t, ·) (and not only the subsequence ω_n(k)(t, ·), as in (6)) converges to ω(t, ·). This concludes the proof of Theorem A. Proof of Theorem B First of all, we observe that, since ω_1, ω_2 ∈ L^∞([0, T]; L^1 ∩ L^∞(R^2)), the velocities u_1 and u_2 are uniformly bounded. This in turn implies that ω_1 and ω_2 are compactly supported in space, uniformly for t ∈ [0, T]. In the course of the proof, we will use the notation Ḣ^s(R^2) to denote the homogeneous Sobolev space in R^2 of real order s. Step 1. Let us fix 0 < β < α. The classical interpolation inequality for homogeneous Sobolev spaces gives It is known (see for instance [12, Page 326]) that Hölder regularity of the vorticity is propagated in time by (2), and it is immediate to check that C^α_c(R^2) ↪ H^β(R^2) for α > β. Since moreover ∫_{R^2} (ω_1(t, ·) − ω_2(t, ·)) dx = 0 for all times, the second factor in the right-hand side of (7) is bounded by a constant uniformly in time.
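As background for the estimate labelled (7), whose exact exponents are not visible in the excerpt above, the display below recalls the standard interpolation inequality between homogeneous Sobolev norms; the precise indices used in the proof should be taken from the paper itself, so this is only a hedged reconstruction of the type of inequality being invoked.

```latex
% Generic interpolation between homogeneous Sobolev norms:
% for s_0 < s < s_1 and \theta \in (0,1) with s = \theta s_0 + (1-\theta) s_1,
\[
  \| f \|_{\dot H^{s}(\mathbb{R}^2)}
  \le
  \| f \|_{\dot H^{s_0}(\mathbb{R}^2)}^{\theta}\,
  \| f \|_{\dot H^{s_1}(\mathbb{R}^2)}^{1-\theta}.
\]
```

A natural reading is that the L² distance of two vorticities is controlled by a negative-order (weak) norm raised to one power times a positive-order norm raised to another, so that weak, transport-type estimates combined with the propagated Hölder regularity (via the embedding C^α_c ↪ H^β) are upgraded to strong L² estimates; the exact bookkeeping of the exponents is carried out in the paper.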
2,577.4
2015-07-01T00:00:00.000
[ "Mathematics" ]
Insights into Ultrasonication Treatment on the Characteristics of Cereal Proteins: Functionality, Conformational and Physicochemical Characteristics Background: It would be impossible to imagine a country where cereals and their byproducts were not at the peak of foodstuff systems as a source of food, fertilizer, or for fiber and fuel production. Moreover, the production of cereal proteins (CPs) has recently attracted the scientific community’s interest due to the increasing demands for physical wellbeing and animal health. However, the nutritional and technological enhancements of CPs are needed to ameliorate their functional and structural properties. Ultrasonic technology is an emerging nonthermal method to change the functionality and conformational characteristics of CPs. Scope and approach: This article briefly discusses the effects of ultrasonication on the characteristics of CPs. The effects of ultrasonication on the solubility, emulsibility, foamability, surface-hydrophobicity, particle-size, conformational-structure, microstructural, enzymatic-hydrolysis, and digestive properties are summarized. Conclusions: The results demonstrate that ultrasonication could be used to enhance the characteristics of CPs. Proper ultrasonic treatment could improve functionalities such as solubility, emulsibility, and foamability, and is a good method for altering protein structures (including surface hydrophobicity, sulfhydryl and disulfide bonds, particle size, secondary and tertiary structures, and microstructure). In addition, ultrasonic treatment could effectively promote the enzymolytic efficiency of CPs. Furthermore, the in vitro digestibility was enhanced after suitable sonication treatment. Therefore, ultrasonication technology is a useful method to modify cereal protein functionality and structure for the food industry. Introduction The global population is increasing and is forecasted to reach 10 billion by 2050. To meet the growing population, cereals, one of the most basic foodstuffs around the world, provide energy and nutrients to the global population [1]. Wheat, rice, barley, rye, oat, maize (corn), millet, and sorghum are the most common staple cereals [2]. The fact that cereals is a larger share of sustainable agriculture adds to the importance of cereal and cerealbased products. Cereal provides considerable carbohydrates, protein, dietary fiber, and bioactive nutrients that our bodies require for growth and metabolism [3]. Cereal proteins (CPs), a prior micronutrient of cereal, are gaining more attention from scholars working in the food industry because of their nutritional benefits and functional components. CPs are primarily applied in food processing owing to their nutritious values and functional characteristics [4]. They are available for solubility, foaming, emulsifying, and gelling applications [5]. CPs, particularly gluten, are utilized in many products and byproducts, such as leavened and unleavened bread, noodles, and cookies [6]. However, the characteristics of proteins must be stable during mixing, heating, desiccating, storage, Solubility The solubility of CPs, as one kind of functionality, plays a major role in the quality of food products. The solubility of proteins refers to the extent to which proteins were denaturized and aggregated. This may impact other functional characteristics, such as foaming and emulsifying properties [15]. 
Therefore, the other technofunctional characteristics of proteins and their application may be influenced by their solubility within the food industry. Reasonable ultrasonic conditions are used to enhance solubility up to a certain point. The influences of ultrasound on CP solubility can be ascribed to conformational variations in the (partial unfolding of the) protein structure caused by the implosion of cavitation bubbles. Physical perturbations help in breaking up hydrogen and hydrophobic bonds, exposing the buried hydrophilic groups to the surrounding water. Thus, the interaction between protein molecules and water is promoted, enhancing protein solubility [16,17]. A study by Wang et al. [18] demonstrated that, in comparison with rice bran protein without ultrasonic treatment, the solubility of rice bran protein treated with ultrasound (20 kHz; 6 mm diameter titanium probe at a depth of 1 cm; URBP) was markedly increased as the ultrasonic power was raised. The solubility of URBP reached its maximum when the ultrasonic power was 200 W and the ultrasonic time was 10 min. Other studies on the ultrasonic treatment of CPs reported a similar trend of increased protein solubility, for example for protein isolates (BPIs) [19], rice dreg protein isolates (RDPIs) [20], sorghum kafirin [21], and oat protein isolates (OPIs) [22], using various types of ultrasonic devices. The decrease in the particle size and partial unfolding of proteins during ultrasonication increased the charged groups on the surface of the protein.
This phenomenon also has a significant influence on protein-water and protein-protein interactions via hydrogen bonds, hydrophobicity, and electrostatic forces, and as a result, causes protein dispersion and increases protein solubility. Figure 2 presents an illustration of the cavitation effect. It is also essential to maintain a consistent temperature during ultrasonication when investigating protein dissolution [23]. Similar results utilizing sonication are reported in Table 1. Note: -, the corresponding functionality and surface hydrophobicity was not mentioned in the paper. Nevertheless, studies have demonstrated that the solubility of CPs significantly decreases when they are exposed to excessive high-intensity ultrasound. For example, Chen et al. [30] observed the impact on the solubility of rice protein (RP)-dextran conjugates treated with ultrasound-assisted glycation (URPDCs) (25 kHz; power: 400, 500, and 600 W). When the ultrasonic power increased (under 700 W), the solubility increased significantly. The solubility reached up to 88.5% at 600 W and then decreased at an ultrasonic power of 700 W. Cavitation generated by ultrasonic treatment likely enhanced the proteins' solubility, whereas excessively high ultrasonic power could cause protein aggregation, restraining the reaction. Proteins were more susceptible to aqueous solutions because of the broken covalent bonds and more hydrophobic groups caused by acoustic cavitation. Therefore, larger aggregates are formed through hydrophobic bonds, destabilizing dissociated proteins and thus reducing solubility [40,41]. In the majority of the available literature on ultrasonication, which covers the frequency range of 20-50 kHz and moderate power under 700 W, ultrasonic treatment may effectively promote the solubility of CPs. Emulsifying Properties Emulsifying properties play a crucial role in the application of proteins as surfactant substances. Because of the amphiphilic nature of proteins, a solid homogeneous emulsion of a protein can be established in an oil-water system via surface active agents [42], thus improving the emulsification attributes, including the emulsifying activity index (EAI) and emulsion stability index (ESI). In addition, the modification of CPs' emulsification properties dramatically influences the protein size, conformation, surface hydrophobicity, and molecule flexibility under an ultrasonic field [24,29]. Modifications in the emulsifying characteristics of isolated cereal proteins caused by sonication have been evaluated. Cavitation efficiency may affect emulsification during sonication. Zhang et al. [43] reported significantly increased EAI and ESI of wheat gliadin (WG) and green wheat gliadin (GG) treated with sonication at various power inputs. The increase in the emulsification characteristics of WG and GG via sonication was related to the smaller particles produced by the acoustic cavitation of sonication. With the extension of acoustic time or/and ultrasonic power, the dispersed phase volume and bubble population increased, increasing the shear forces transferred through the rapid collapse of bubbles.
Further, the disruption of oil droplets became more favorable, thereby strengthening emulsion stability [12,44]. In another study, Hu et al. [45] identified an association between protein secondary and tertiary structural variation, and improved the emulsifying properties. When α-helix and β-sheets are influenced by ultrasonication, the ultrasound-treated protein has a better effective potential for adsorption capacity on the interface of oil/water. The particle size was reduced with ultrasonication, which increased the ratio of the surface area to the volume, thus increasing the emulsification characteristics [46]. Further, the proteins' surface hydrophobicity that is increased with ultrasonic treatment can also act as a driving force for a reduction in the tension on the oil-water interface. Hence, the protein absorption rate was increased, which helped in rendering the films rigid through hydrophobic/hydrophilic groups. As a result of these alterations, proteins treated with sonication may be more easily emulsified. Furthermore, high-intensity ultrasound can enhance emulsion homogenization by pretreating the proteins; it is widely applied in food processing. We show previous results from other authors in Table 1. The results show that ultrasonic treatment such as at a 20 or 25 kHz frequency and with power of 100 or 300 W could lead to a significant increase in emulsification efficiency. In contrast, excessive ultrasonic power input might cause a loss in the emulsification properties of the protein. Wang et al. [29] found that the emulsification characteristics of rice bran protein decreased slightly as ultrasonic power increased between 450 and 600 W. The opposite might be attributed to the intensified sonochemical effects induced by excessive power disrupting the protein's secondary structure, resulting in the flocculation of interfacial proteins and the aggregation of emulsion droplets. Therefore, it is important to select the appropriate ultrasonic power and frequency levels for different CPs to maintain an equilibrium between the exposure of hydrophobic/hydrophilic groups and protein aggregation to achieve excellent emulsification performance. Foaming Properties Foaming characteristics are largely determined via molecular movement, penetration, and rearrangement at the air/water interface, mainly applied in food processing. Foaming capacity (FC) depends on the protein dispersion, unfolding, and repositioning at the gas/solvent interface to decrease tension at the interface, while foaming stability (FS) is mostly dependent on the formation of a cohesive, robust layer around air bubbles. There are verified relationships among the foaming properties, structural flexibility, and surface hydrophobicity of proteins [44]. After sonication, the rapid diffusion of molecules in the air/liquid interface and molecular rearrangement allow for cohesive viscoelastic films to entrap air, which can modify the foaming properties. On this basis, there are also close relationships between foaming properties and other properties, such as particle size, surface hydrophobicity, molecular weight, and structural flexibility [47]. Appropriate ultrasonic modes can improve the FC of CPs. Akharume et al. [34] reported that FC improved significantly under a long treatment for both the prolamin and glutelin fractions via sonication (20 kHz; 12.70 mm probe; 100%, 75%, 50% amplitude; 5, 10 min). FC was enhanced from 3.83 to 10.33% for the Dawn prolamin and from 22.50 to 34.33% for the plateau glutelin. 
In addition, FS increased from 57.50 to 100% at 52.72 W for 5 min. Comparable observations were reported in earlier studies: rice bran protein [19], millet protein concentrate (MPC) [14], and foxtail millet concentrates [25]. Study outcomes are listed in Table 1. Ultrasonication treatment enhanced the foaming properties with a frequency of 20 kHz and power range of 100-600 W. The observed improvement in the foaming characteristics was attributed to the cavitation effect of sonochemical action. The surface hydrophobicity and molecular flexibility of protein molecules were improved. The particles of proteins were also distributed more evenly, and particle size was reduced. These changes might have led to a rapid enhancement of the adsorption ability on the gas/liquid interface and thus resulted in greater foaming capacity. Furthermore, ultrasonic treatment could induce changes in the conformation of protein molecules, namely, the partial exposure of the protein structure and hydrophobic amino acid residuals to polymerize viscoelastic films at the air/water interface [48]. However, excessive ultrasonic treatment should be noted considering that the reaggregation of proteins induces the desorption of protein molecules at the air-water interface [24]. Meanwhile, the foaming properties of protein might be influenced by other sonication parameters, such as intensity (W/cm 2 ) and power density (W/mL). Surface Hydrophobicity (H 0 ) Hydrophobic interactions play an essential role because they are highly associated with the content of hydrophobic amino acids in the food system [37]. Hydrophobicity (H 0 ) is partly responsible for the conformational structures of proteins and protein correlations, such as polar-nonpolar group interactions and complex formation [45,49]. The H 0 of the protein significantly increased as hydrophobic groups were exposed to ultrasonic cavitation, which impacted the functionality of the proteins. With the prolongation of ultrasonic treatment, the spatial structure of the protein changed, exposing more hydrophobic amino acid residues and increasing the surface hydrophobicity because ultrasonic waves can break the hydrogen bonds, electrostatic interactions, and hydration between protein molecules, exposing the hydrophobic groups buried inside the protein molecules. For example, Zhou et al. [50] evaluated the functional impact of ultrasonication (25 ± 1 • C initial temperature; 30 min; power 600 W) on the surface hydrophobicity of defatted wheat germ protein (DWGP) and indicated that the fluorescence peak intensity (420-540 nm) of DWPG increased gradually from 63.7 to 573.25 W/cm 2 via sonication. Wang et al. [18] also examined a similar finding and reported an increase in H 0 with ultrasound-treated rice bran protein. Later, Yang et al. [51] found that the increase in H 0 could have been attributed to the cavitation action of ultrasound that had generated the turbulent shear force, microflow, and other effects. The initially buried hydrophobic regions in the interior of the molecule were effectively revealed to the hydrophilic surrounding medium through strong cavitation effects. Further, cavitation actions can destroy the protein molecules, shrinking the particles from large aggregates into smaller fragments, hence improving the hydrophobic surface of the protein [48]. Thus, as demonstrated in the studies shown in Table 1, ultrasonic treatment with a frequency range of 20-40 kHz and power range of 80-400 W could enhance the H 0 of CPs to varying degrees. 
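Since the studies cited in this and the previous subsections report ultrasound settings in several different units (nominal power in W, intensity in W/cm², power density in W/mL or W/L), a small conversion sketch may help when comparing them; the probe diameter and sample volume below are hypothetical, and the usual conventions intensity = power / probe-tip area and power density = power / treated volume are assumed.

```python
import math

def intensity_w_per_cm2(power_w: float, probe_diameter_mm: float) -> float:
    """Acoustic intensity, assuming all delivered power passes through the probe tip."""
    radius_cm = (probe_diameter_mm / 10.0) / 2.0
    tip_area_cm2 = math.pi * radius_cm ** 2
    return power_w / tip_area_cm2

def power_density_w_per_ml(power_w: float, sample_volume_ml: float) -> float:
    """Volumetric power density of the sonicated dispersion."""
    return power_w / sample_volume_ml

# Hypothetical example: 200 W delivered through a 6 mm probe into 150 mL of dispersion.
print(round(intensity_w_per_cm2(200.0, 6.0), 1), "W/cm^2")     # roughly 707 W/cm^2
print(round(power_density_w_per_ml(200.0, 150.0), 2), "W/mL")  # roughly 1.33 W/mL
```

In practice the electrical power drawn by the generator, the calorimetrically measured acoustic power, and the amplitude setting are not interchangeable, which is one reason the reported conditions vary so widely between studies.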
Sulfhydryl (SH) and Disulfide Bond (SS) Content The sulfhydryl (SH) and disulfide (SS) groups are widely acknowledged as critical functional groups in protein molecules. Both play essential roles in maintaining the stability of protein structures, and their ratio can also influence the functionality of proteins [37]. The modification of the free sulfhydryl content of protein molecules could be directly related to the denaturation degree of proteins. For example, Yang et al. [52] found that, at ultrasonic power intensity from 40 to 100 W/L, the SH of defatted wheat germ protein increased significantly (p < 0.05). The highest increment in SH at 60 W/L was 53.20 µmol/g, an increase of 43.21%, which remained steady with increasing ultrasonic power density. The increase in SH can be attributed to the stretching of the protein and its internal sulfhydryl group being exposed to its external surface. The possible reason was that the buried sulfhydryl groups could be unfolded, accompanied by a reduction in protein size, disrupted by the high pressure and shear force caused by sonication [51]. Later, another study by Qin et al. [37] reported that the number of free SH groups in soy protein isolate/wheat gluten mixture increased during the dual modification of protein under high-intensity ultrasound and microbial transglutaminase (MTGase) cross-linking. Similarly, this trend is consistent with the work by Zhang et al. [43] for wheat gliadin. Their study indicated that SH content reached the maximum (10.99 µmol/g) with ultrasonic power of 400 W. Another study, by Zhang et al. [20], examined the effect of sonication (20,28,35,40, and 50 kHz; power density 400 W/L) on the SH and SS of rice dreg protein isolates. The results showed that sonication treatment caused an increase in the total sulfhydryl group and free sulfhydryl content, and a decrease in SS bonds, especially at a frequency of 20/40 kHz. Later, Liu et al. [35] studied the effect of sonication time (5, 10, 15 min) and power (130, 160, 200 W) on the SH and SS of yellow dent corn-separated protein. The works indicated that the SH content increased by 37.21%, and the SS content decreased by 43.66% via sonication (15 min, 200 W). This was attributed to ultrasonic cavitation, which induced the sulfhydryl group to be exposed to the protein surface. In other words, CPs treated with sonication may exhibit an increase in SH due to the oxidation of hydroxyl radicals generated by cavitation. Meanwhile, the disintegration of SS destroyed the protein conformation and hydrophobic groups initially inside, and the protein molecule was exposed to the external surface more [53]. Particle Size The particle size of proteins is an essential factor influencing protein functionality. A (controlled or moderate) sonication treatment could reduce the particle size of CPs through protein aggregations. The smaller particles of proteins could be attributed to the disruptive effects of sonication acoustic cavitation. Cavitation damages the electrostatic interactions and hydrogen bonds between CPs molecules, resulting in protein molecules aggregating into smaller fragments [29,54]. Similar study outcomes of CPs via sonication displayed a decrease in the particle size under various ultrasonic conditions. For example, as reported by Sharma et al. [25], ultrasonic treatment (amplitude: 5 and 10%; duration: 5, 10, and 20 min) significantly decreased (p < 0.05) the particle size of foxtail millet protein, and the decrease continued with the increase in ultrasonic time. 
Moreover, Zhang et al. [43] indicated that, as the ultrasonic power increased from 0 to 400 W, the average particle size of wheat gliadin and green wheat gliadin treated by sonication was reduced by 42.1% and 32.2%, respectively. Further, O'Sullivan et al. (2016) [31] also found that ultrasonic treatment (20 kHz, 34 W/cm² for 2 min) could significantly reduce the particle size of wheat protein. The tendency was in concordance with the reports by Qin et al. [37], Sun et al. [19], Wang et al. [18], and Jin et al. [38]. In addition, when ultrasonic treatment is combined with other treatments, the particle size of proteins might be altered. For instance, Zhang et al. [20] concluded that the particle size of rice dreg protein isolates decreased from 330.8 to 219.6 nm after ultrasound-assisted alkali treatment. However, various trends in the particle size of CPs were reported in several studies. Wang et al. [29] found that, as the ultrasonic power increased to 500 W, there was an increase in the particle size of rice bran protein. The phenomenon may have been due to the reaggregation of small particles through the thermal effect generated by the ultrasound. This result agreed with Jiang et al. [55], who confirmed an increase in the particle size of black bean protein isolates via ultrasonication (20 kHz, 450 W). The enhancement in particle size might be related to the repolymerization of aggregates through noncovalent and covalent interactions. In general, when ultrasound is applied, appropriate frequency and power parameters could reduce the particle size of CPs, whereas the particle size might be enhanced if the ultrasonic power is excessive [31]. Conformational Structures In addition to modifying the functional characteristics of the protein, ultrasonication can alter its structural characteristics. In accordance with the progressive state of the spatial arrangement of polypeptide chains, the protein structure is categorized into primary, secondary, and tertiary structures. The protein structure is affected differently by ultrasonic treatment depending on the type and conditions of the ultrasonic treatment. As shown in Figure 3, when the treated protein was exposed to ultrasonic cavitation, alterations only happened in the secondary and higher-order structures of the protein, not in the primary structure [56]. Once the noncovalent interactions between proteins and polysaccharides are modified by cavitation and shearing forces, and hydrophobic and hydrogen bonds are broken, a variety of corresponding conformational changes (such as unfolding, denaturation, and reaggregation) and sequential alterations in technofunctional and nutritional properties follow [19,57]. Primary Structure Proteins are composed of amino acid sequences as their primary structure. Sodium dodecyl sulphate-polyacrylamide gel electrophoresis is the main method used to examine the changes in subunits of CPs following sonication. According to Wang et al.
[11], there was no noticeable alteration in the subunits of rice dreg protein isolates via sonication (20, 28, 35, 40, 52 kHz). This phenomenon was attributed to alterations in the molecular weight of the proteins that were not reflected in the electrophoretic spectrum of the protein. Li et al. [22] found that sonication did not alter oat protein composition when the oat protein was subjected to high-intensity ultrasound (20 kHz, 80 W) for 5 min at 70% amplitude. However, some studies are in disagreement with these viewpoints. For example, Jhan et al. [59] showed that the main polypeptide band was around 20 kDa for all native proteins, and that the molecular weight (27-10 kDa) of sorghum protein nanoparticles was reduced following sonication-assisted nanoreduction. This might be attributed to the breakdown of intermolecular hydrogen and hydrophobic bonds, decreasing the molecular weight of the proteins. Furthermore, Nazari et al. [14] indicated that there was a reduction in the molecular weight (40-50 kDa) of millet protein concentrate after ultrasonic treatment (20 kHz, 73.95 W/cm² for 12.5 min). So, sonication has various effects on the primary structure of proteins under various ultrasonic conditions. Figure 3. Sequential alterations in the primary, secondary, tertiary and quaternary structures of a protein exposed to ultrasonic treatment [58]. Secondary Structure The secondary structure of the protein is formed by primary polypeptides on a nascent protein in a distinctively coiled aqueous environment.
The secondary structure of proteins is characterized through the ratios of α-helix, β-sheet, β-turn, random coil, and unordered groups. Those movements could be transformed by each other to some extent after ultrasonic treatment. Therefore, sonication could alter the protein secondary structure. For instance, Sullivan et al. [21] reported that there was an increase in the amount of unordered or random coils of purified kafirins, while the α-helix content decreased after ultrasonic treatment (20 kHz ± 50 kHz at 40% amplitude for 10 min). In addition, the work of Zhang et al. [60] concluded that the α-helical content of gluten protein was decreased, while the β-sheet and β-turn contents were increased after multifrequency sonication treatment (28,40, and 80 kHz). A similar phenomenon was studied by Wang et al. [27] regarding corn gluten meal protein, and Liu et al. [35] regarding yellow dent corn separated protein. In the former study, low-power density ultrasound (20/40 kHz, 100 W/L, 20 min, 5:2 s/s) altered the secondary conformational structure of corn gluten meal protein, leading to a reduction in α-helix and β-turn, and an increase in β-sheet and random coil. In the latter study, there was a decrease in α-helix and β-turn, and an enhancement in β-sheet and random coil via sonication (200 W, 15 min), resulting in transforming the yellow dent corn separated protein structure from order into disorder. Furthermore, these observations are in line with earlier studies on wheat gluten by Qin et al. [37], and Zhang et al. [61]. On the basis of the aforementioned studies, even though the secondary structure might be influenced by the sonication device, intensity, time, and frequency, a protein could undergo some changes in the secondary structures, and exhibit a looser and more flexible structure following ultrasonication induced by cavitation. Tertiary Structure The protein's tertiary structure is preferred three-dimensional arrangement of the folded polypeptide chains. Compared with a protein secondary structure, a tertiary structure directly affects the functional characteristics of the proteins. Tryptophan residues are essential indicators for characterizing the tertiary structure [13,62]. Alterations in the tertiary structures of wheat gliadin (GG) and green wheat gliadin (WG) with different ultrasonic conditions were investigated by Zhang et al. [43]. By means of fluorescence spectroscopy, they found that the fluorescence intensity of GG and WG decreased significantly after ultrasound. This might be ascribed to the bubbles transferring through the sound wave, and the location was instantaneously heated. Cavitation generated by the bubble burst could have unfolded and exposed the protein structure during ultrasonic treatment. Moreover, a slight red shift was observed in the maximal fluorescence emission wavelengths of GG and WG: 3 and 2 nm, respectively. In this experiment, an increase in the polarity of the tryptophan residue microenvironment was demonstrated, thereby indicating that ultrasonic treatment enhanced the tertiary structure formation of the GG and WG molecules. A similar conclusion was reached by Su et al. [63], who found that mass transfer effects enhanced by ultrasound might cause transient bubbles. When the bubbles collapsed, vast reactive free radicals could have led to the modification of amino acid side chains, which would change three-dimensional folded structures. 
Therefore, ultrasound has the ability to irreversibly alter and adapt the three-dimensional structure of CPs to emulsify oils, yielding a structure that remains stable for up to years. Qu et al. [64] also mentioned that ultrasound-treated (20/28 kHz, 150 W/L) CPs had shown increased absorption intensity at 275 nm, and that a new structure had been shaped. Thus, ultrasonic working parameters such as ultrasonic power, time, and temperature might influence the tertiary structure of proteins. Microstructure Having undergone ultrasonic treatment, the structure of CPs was reduced from large and interconnected aggregates to small and dense fragments with a loose and granular distribution [65]. The microstructure of CPs was influenced according to the cereal protein type and source, besides the ultrasonic parameters and processing conditions. For instance, Jin et al. [38] reported that the turbulent and cavitation forces could fracture the macroparticles and further change the surface state of the proteins. According to this study, ultrasonication caused cavitation bubbles and microstreaming effects that disrupted the formation of protein aggregates. The aggregation of fragments might be attributed to the destruction of cross-links between amino acid residues in proteins containing S-S bonds, hydrophobic bonds, and Van der Waals interactions generated by ultrasonic radiation [11,59]. A similar phenomenon was observed by Zhang et al. [66]. The researchers studied the effects of alternate dual-frequency ultrasound on the structure of wheat gluten, and found that sonication caused alterations in the structure, which displayed irregular agglomerates. The findings from their work indicated that the values of roughness (R_a, R_q) greatly increased with sonication (20/35 kHz, 150 W/L, 10 min). Likewise, Wang et al. [11] demonstrated that rice dreg protein isolates treated with multifrequency countercurrent S-type sonication displayed more uniform protein fragments and a looser structure under ultrasonic conditions (20/40 kHz, 60 W/L, 20 min). Their findings also agreed with the results for rice proteins obtained by Yang et al. [51], and Wang et al. [27]. Apart from the modification of CPs via ultrasonic treatment alone, ultrasound combined with alkali treatment, one of the synergistic methods of ultrasonic modification aided by other chemical and biological means, has gained considerable attention. For example, Zhang et al. [20] observed that, after sonication-assisted alkali treatment, the structure of rice dreg protein isolates became loose and disordered, exhibiting more irregular fragments and microparticles. Li et al. performed a similar study [67]. The alterations in the microstructure may be ascribed to the generation of cavitation bubbles that disrupt protein aggregation, thereby destroying the cross-linking reaction between protein molecules. Meanwhile, variation in the morphological structures might be attributed to the unfolding and partial denaturation of the tertiary protein structure, liberating more hydrophobic bonds and thereby leading to different changes in the CPs' functionality. Enzymatic Hydrolysis Enzymatic hydrolysis is broadly applied to enhance the functional quality of proteins, which is beneficial to the human body. Compared to traditional enzymatic hydrolytic technology, ultrasound-assisted enzymolysis offers advantages such as easier control, shorter processing time, and more convenient operation [68].
Ultrasonic pretreatment could generate special acoustic cavitation that produces high-intensity shearing force, free radicals, and shock waves, leading to protein denaturation and the release of hydrophilic groups, thereby affecting the solubility, bioactivity, and enzymatic efficacy of proteins [67,68]. A study by Qu et al. [69] examined the influence of sweep-frequency and pulsed ultrasound (24 ± 2 kHz, 24 W) on the enzymatic hydrolytic efficiency and activity of ACE inhibitory peptides from wheat germ protein. In that study, ultrasound-assisted enzymatic hydrolysis could change the molecular conformation and improve the efficiency of enzymatic hydrolysis. This observation indicated that ultrasound could increase enzymatic activity on the basis of reasonable conformational alterations in the protein. These results agree with those of Li et al. [70], who indicated that sonication (28 kHz; ultrasonic probe inserted 2 cm deep) could shorten the enzymolytic action time of rice protein (RP) and improve the efficiency of enzymatic hydrolysis. Compared with untreated RP, the structural surface of enzymolytic residues became uniform with many small fragments. Alterations in the structures of hydrolytic residues via sonication resulted in the improvement of enzymatic efficacy. The results agreed with those by Li et al. [39,71] for rice protein. Similarly, a recent study by Wang et al. [27] found that the enzymolytic efficiency of corn gluten meal was enhanced via sonication (sequential double frequency of 20/40 kHz). Enzymolytic efficiency reached a maximum of 15.99% with a protein dissolution rate of 61.69%. The mechanism could be ascribed to the collapse of cavitation bubbles formed by sonication at a frequency of 20 kHz, providing new cavitation nuclei for the ultrasound at 40 kHz, ultimately resulting in an increase in cavitation bubbles. Figure 4 is a diagram of the whole process of ultrasonically treated corn gluten meal. Jin et al. [71] used an ultrasonic sweeping frequency (28 ± 2/68 ± 2 kHz, 80 W/L, 40 min) to study the enzymolysis of corn gluten meal, and their results indicated that ultrasound could accelerate enzymolysis, leading to an increase in the affinity between enzyme and substrate. Further, a similar phenomenon was shown after ultrasonic treatment in other CPs, such as defatted corn germ protein [72], defatted wheat germ protein [52], and oat-isolated protein [73]. Therefore, whether the ultrasonication conditions involve a single, dual, or triple ultrasonic frequency, ultrasound can be considered a promising treatment technique for enhancing the enzymatic efficacy of CPs. In Vitro Digestion The digestibility of protein mainly reflects the extent to which protein in food has been absorbed and utilized by the human body in the digestive tract. Food protein digestibility is related to the susceptibility of a protein to proteolysis. Increased digestibility means that the protein can be better hydrolyzed by pepsin and trypsin, and has higher nutritional value.
According to previous studies, ultrasound-assisted treatment is commonly used to optimize CPs and CP-based products so as to increase their digestive value within the human gut. For instance, Hassan et al. [74] investigated the influence of ultrasonically treated sorghum grain protein on in vitro digestibility. In that study, ultrasound (40% amplitude for 5 min) showed significantly higher digestibility (64.70 ± 0.50%) than that of other ultrasonic conditions (60% amplitude for 10 min). Compared with the control, germination was significantly promoted by sonication, thereby improving protein digestibility. Moreover, Li et al. [39] investigated the influences of various ultrasonication working modes on the in vitro digestibility of rice protein. According to that study, mono-, dual-, and triple-frequency ultrasonication significantly improved the simulated gastrointestinal digestion product. An increase in hydrolyzed protein content promoted the contact activity of an enzyme involved in proteolysis, subsequently boosting the simulated gastrointestinal digestion process. In another study, wheat gliadin and green wheat gliadin treated with ultrasound (400 W, 300 W) improved in vitro digestibility by 12.75% and 11.03%, respectively [43]. Similarly, Jin et al. [38] showed that sonication (20 kHz, pulsed on-time 10 s/off-time 5 s, amplitude 60% for 10 min) enhanced the in vitro digestibility of buckwheat protein, which increased by 41% compared to the untreated samples. This trend could be attributed to the alterations in protein structure by acoustic cavitation effects. Nevertheless, high-intensity ultrasonic treatment could decrease the in vitro digestibility of proso millet bran protein when the ultrasonic conditions were a frequency of 24 kHz, power of 400 W, and amplitude of 100% for 20 min. The decrease in the in vitro digestibility was caused by the temperature increase during high-intensity ultrasonic treatment. Generally, moderate sonication could influence the digestibility and nutritional quality of CPs. Conclusions and Future Trends Sonication treatment has many advantages in terms of enhancing the functionality, conformation, enzymolysis, and digestive properties of CPs.
As discussed in this review, the modification of CPs with ultrasonication technology mainly improves the functional, conformational, and physicochemical characteristics of CPs. In this regard, sonication could alter protein structural characteristics, including surface hydrophobicity, particle size, conformational (primary, secondary, and tertiary) structures, and microstructures, to improve the functionality of CPs. Moreover, alterations in the protein molecules caused by ultrasonic acoustic cavitation can influence the physicochemical properties, for example by improving enzymatic efficacy and in vitro digestibility. In addition, the impact of sonication on the CP structure is related to the type of ultrasonic device/equipment, the ultrasonic treatment conditions, and the processing intensity. Therefore, the specific mechanisms by which ultrasonic processing technology and various types of ultrasonic equipment act on CPs should be further studied. At the same time, it is necessary to track the dynamic changes in CP structures during sonication with new technologies, and to precisely control the spatial structure, which is conducive to improving the functionality of CPs. Lastly, research on large-scale industrialization should also be strengthened.
8,785
2023-02-24T00:00:00.000
[ "Agricultural and Food Sciences", "Materials Science" ]
A Cognitive Framework to Secure Smart Cities . The advancement in technology has transformed Cyber Physical Systems and their interface with IoT into a more sophisticated and challenging paradigm. As a result, vulnerabilities and potential attacks manifest themselves considerably more than before, forcing researchers to rethink the conventional strategies that are currently in place to secure such physical systems. This manuscript studies the complex interweaving of sensor networks and physical systems and suggests a foundational innovation in the field. In sharp contrast with the existing IDS and IPS solutions, in this paper, a preventive and proactive method is employed to stay ahead of attacks by constantly monitoring network data patterns and identifying threats that are imminent. Here, by capitalizing on the significant progress in processing power (e.g. petascale computing) and storage capacity of computer systems, we propose a deep learning approach to predict and identify various security breaches that are about to occur. The learning process takes place by collecting a large number of files of different types and running tests on them to classify them as benign or malicious. The prediction model obtained as such can then be used to identify attacks. Our project articulates a new framework for interactions between physical systems and sensor networks, where malicious packets are repeatedly learned over time while the system continually operates with respect to imperfect security mechanisms. Introduction The world is at the brink of a new digital revolution and Cyber Physical Systems (CPS) based on the Internet of Things (IoT) networks mark the next frontier. IoT allows companies to increase productivity, city services to converge, vehicles to become autonomous, and homes to become smarter. There has been much research on the design, evaluation, testing, and verification of CPS and its associated IoT. Nonetheless, research on the development of security models and frameworks for IoT networks is very limited. A key challenge is that security solutions for IoT should not hinder the openness of the network, nor should they introduce additional latency or overhead to communications across the network. These requirements are achieved by incorporating security into the design of IoT infrastructures. This project is focused on two main principles: "adaptive security architecture" and "IoT-based CPS or ICPS" both of which are listed on Gartner's 2016 top 10 strategic technology trends. Dozens of hardware platforms of embedded systems are gaining popularity as prototypes of IoT [1][2]. Smart objects and embedded sensors are currently secured based on the same best practices of traditional networks without considering the limitations imposed by the proliferation of smart nodes in terms of processing power and memory. This is mainly due to limited research in this field. Encapsulation of protocol stack layers is done on a single hardware processor and thus, leaving the lower layers unprotected has detrimental effects. With so many new forms of data, new forms of threats will come to existence targeting them. Firewalls, Intrusion Detection Systems (IDS), and Intrusion Prevention Systems (IPS) can be found as standalone platforms, or as modules integrated into other hardware, or even as software applications with the two categories of IDS being Network-based and Host-based IDS. New generations of devices bring along newer and more sophisticated generation of threat agents and attacks. 
This concern is addressed by integrating security in design and thus, preventing the problem from happening. ICPS lack a secure design for implementation. Because IoT systems utilize diverse protocols and technologies encompassing a wide array of technology concepts such as Application Programming Interfaces (APIs), sensor-equipped edge devices, and messaging protocols, they are prone to different attacks. Additionally, lack of standardization to support IoT increases heterogeneity of these networks and introduces inoperable components which will create vulnerabilities. Because of utilizing a wide array of heterogeneous and often unreliable smart objects, there is a need for a reliable design model capable of supporting bandwidthintensive applications. The design objectives of this framework are twofold: first, to address security concerns; and second, to provide on-demand security guidelines for the next generation of CPS. The research questions are: a) What are the security vulnerabilities and challenges presented by the emerging technologies (e.g. 802.11.5, ZigBee, GPRS, LTE) in providing IoT connectivity? b) Can Deep Learning (DL) be as successful on IoT security as it has been in computer vision and speech recognition? c) Can security by design guideline and frameworks outperform the existing security patches and protocols? and d) How different are the security gaps for smart city sensors and gateways from those of traditional networks. IoT Security The IoT is composed of many layers of technologies, each with its own set of security challenges. Smart devices are more capable of gathering and curating sensed data which makes them more susceptible to being targeted by a variety of attack types from single target impersonation, rogue nodes, and privileged access to batched ones such as botnets and DDoS. It has been reported by FCC's Technological Advisory Council (TAC) that hackers have the lead in breaching the IoT security. The reasons are threefold: i) Conventional network security wisdom is not applicable to the IoT realm. IoT is an ecosystem driven by business gaps, rather than just a myriad of devices; ii) IoT manufacturers don't prioritize security and lack a security culture. IoT vendors compromise security to gain functionality and openness for a broader target market. IoT manufacturers follow agile manifesto for their development process which opens up many security gaps; and iii) There are inherent vulnerabilities in individual IoT nodes: a) For many types of IoT devices, physical access cannot be restricted, thus devices that expose critical information on internal nodes can be compromised; b) Although chip manufacturing innovations have led to the emergence of embedded chips with hardware-based security (e.g. ARM TrustZone) and hardware with cryptography support (e.g. ARMv8), the inclusion of such chips in every device is cost prohibitive. Thus, it makes sense to look for network security solutions that do not require modification of existing and emerging IoT devices; and c) IoT nodes generally don't support advanced networking capabilities and in particular security protocols. The proposal aims to advance insight to IoT and identify its vulnerabilities, while attempting to develop methodologies to guard against cyber-attacks that can penetrate the IoT layers through a wide range of heterogeneous devices. 
Securing systems from a network design perspective defines security zones and layers based on the data requirements of each network segment, independently of device type and location. This is different from encrypted IoT chips and restricted physical access to IoT nodes, and enhances protection against zero-day attacks and well-known threats. Smart City Cities are rapidly converging toward digital technologies to provide advanced information services, efficient management, and resource utilization that will positively impact all aspects of our life and economy. This has led to the proliferation of ubiquitous connectivity to critical infrastructures (electrical grid, utility networks, health care, finance, etc.) that are used to deliver advanced information services to homes, businesses, and government. On the other hand, such smart systems are more complex, dynamic, and heterogeneous, and have many vulnerabilities that can be exploited by cyberattacks. Protecting and securing the resources and services of smart cities becomes critically important due to the disruptive or even potentially life-threatening nature of a failure or attack on smart cities' infrastructures. A resilient architecture that protects smart cities' communications, controls, and computations based on autonomic computing and Moving Target Defense (MTD) techniques was proposed in [3]. The key idea was to make it extremely difficult for attackers to figure out the current active execution environments used to run smart city services by randomizing the use of these resources at runtime. An important part of the Smart City is its wireless communication networks, which are pervading the IoT realm due to their fast, easy, and inexpensive deployment. Pervasive wireless technologies have higher security requirements. Even though the existing security protocols for wireless communications address the privacy and confidentiality issues, various unaddressed vulnerabilities exist. Such vulnerabilities target the cyber and physical availability of the systems, spoof data link and network layer addressing protocols, or even hijack upper layer sessions. Smart city data analytics A smart city can be illustrated as a complex network with different types of relationships. These relationships can be as simple as a one-direction data connection or as complicated as a weighted, prioritized two-way connection between a gateway and a data logger. Smart nodes are placed in communities of similar-purpose devices. Based on the Confidentiality, Integrity, Availability (CIA) mechanisms and addressing such vertices using Authentication, Authorization, and Accounting (AAA), smart city security adds different layers of access for every user of the network. Finding communities of similar devices with similar purposes is possible by evaluating the relationships between devices, which are known as nodes in the network. Finding these communities can help level out and separate different levels of domains for various types of relationships and access. In today's world, networks are as big as billions of nodes, and smart cities are no different. To secure them, we need to put them in partitions and secure each partition both separately and as a group. To find these partitions, also known as communities, big data community detection algorithms can be used. Also, ranking the partitions could facilitate finding out which partitions can achieve a higher level of security. 
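To make the community-detection step concrete, here is a minimal sketch, assuming a toy device graph built with the networkx library; the device names, edge weights, and the choice of greedy modularity maximization are illustrative assumptions, not taken from the text.

```python
# Minimal sketch: partitioning a smart-city device graph into communities.
# Device names and edge weights are hypothetical; a real deployment would
# build the graph from observed gateway/logger/sensor relationships.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_weighted_edges_from([
    ("gateway_1", "traffic_cam_a", 3.0),
    ("gateway_1", "traffic_cam_b", 2.5),
    ("gateway_2", "smart_meter_a", 1.0),
    ("gateway_2", "smart_meter_b", 1.2),
    ("gateway_1", "gateway_2", 0.2),   # weak inter-community link
])

# Greedy modularity maximization groups densely connected devices together.
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"partition {i}: {sorted(community)}")
```

Each resulting partition could then be assigned its own access layer, in line with the AAA scheme described above.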
Security can be better handled if an appropriate set of partitions is identified within the networks. With sub-partitioning, systems such as Hadoop can make parallel data handling possible [4]. Existing methods Alipour et al. [5] analyzed intrusion detection systems for Media Access Control (MAC) and Physical Layer (PHY) specifications using an anomaly-based behavioral analysis to detect abnormal behaviors, which are likely to be triggered by threat agents. They did this by monitoring the n-consecutive transitions of the protocol state machine. Then, sequential Machine Learning techniques were applied to model the n-transition patterns in the protocol. The probabilities of these transitions were normalized, reaching a low false positive rate of less than 0.1%. Spoofers impersonate legitimate users to exploit the user services and privileges. The Semi-Global Alignment algorithm (SGA) is an efficient technique to detect spoof attacks. The limitation of SGA is that it cannot be applied to large-scale, multiuser systems due to high false positive rates. Kholidy et al. [6] proposed the Data-Driven SGA, which improves the scoring systems using distinct alignment parameters per user. It also adapts to changes in user behavior by updating the signature of a user according to his/her current behavior. The main objective of this proposal is to design a secure architectural framework for the implementation of IoT-based, small to large-scale CPS in the Smart City. This is important because of the inevitable migration to IoT networks and the unsafe and insecure nature of the underlying sensor-embedded smart objects, which interact with the physical world. Traditional security solutions might address the security needs of IoT in part, but there are challenges such as platform security limitations, ubiquitous mobility, mass quantities, and cloud-based operations that are not addressed. The proposed approach This research proposes a tunable underlying framework for IoT networks of different sizes which will, in turn, open many new research opportunities in IoT security. In addition, this research will facilitate and expedite the adoption of small to large-scale IoT-based CPS. But in the CPS context, security takes new forms, and some of the previously used solutions, such as Host-based IDS, are not practical due to limited hardware resources on endpoint sensors. Adding to the issue is the fast-growing number of such sensors and their faster adoption by the public, resulting in their widespread use without taking into account the many security gaps. Together with scientific advances in sensing and communications technologies, many consumers are using body sensors, connecting their generated data to their online profiles, or storing them on their smartphones or laptops. This project employs four technologies or methods as discussed below. The logical relation among these pieces is presented in Figure 1. Fig. 1. Framework-Development Process Anomaly-based, also called behavior-based, methods assume that attackers behave differently than normal users. The advantage of this method is the ability to spot a threat without first knowing its signature. Historically, this advantage has been offset by high false positive rates, the difficulty of training a system in a highly dynamic environment, and computational expense [7]. Some instances of the targeted vulnerabilities are presented in Table 2. 
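As an illustration of the n-consecutive-transition monitoring summarized above, the following toy sketch (not the cited implementation; the state names and traces are hypothetical) estimates normalized transition probabilities from benign traces and scores a new trace by how improbable its transitions are.

```python
# Toy sketch of anomaly scoring over protocol state transitions (n = 2).
# Benign traces are used to estimate transition probabilities; unseen or
# rare transitions receive a low probability, raising the anomaly score.
from collections import Counter, defaultdict

benign_traces = [
    ["probe", "auth", "assoc", "data", "data", "disassoc"],
    ["probe", "auth", "assoc", "data", "disassoc"],
]

counts = defaultdict(Counter)
for trace in benign_traces:
    for prev, curr in zip(trace, trace[1:]):
        counts[prev][curr] += 1

def transition_prob(prev, curr):
    total = sum(counts[prev].values())
    return counts[prev][curr] / total if total else 0.0

def anomaly_score(trace):
    # Lower product of transition probabilities => more anomalous trace.
    score = 1.0
    for prev, curr in zip(trace, trace[1:]):
        score *= max(transition_prob(prev, curr), 1e-6)
    return 1.0 - score

print(anomaly_score(["probe", "auth", "assoc", "data", "disassoc"]))  # low score
print(anomaly_score(["probe", "data", "deauth"]))                     # high score
```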
It should be noted that most new attacks are typically minor mutations of the known ones, which leads us to believe that the DL approach can be successful in detecting imminent attacks. DL methods are successfully incorporated in various domains because DL relies on local proximity (typically spatial and/or temporal) among patterns to find and construct higher-order patterns (Figure 2). Fig. 2. Overarching Scheme of Threat Prediction The factors moving Machine Learning tools and techniques from the research lab to the operational domain include both the phenomenal growth in inexpensive compute power and bandwidth and the overwhelming amount of data generated and dumped into Security Information and Event Management (SIEM) tools daily. Although Machine Learning tools can be very effective, they produce very different results depending on the source and quality of the data being analyzed. Specific domain knowledge related to security, as opposed to clinical research or finance, for example, is needed to design a threat detection system using appropriate Machine Learning mathematical and statistical algorithms. A data scientist must apply security domain knowledge to identify primary and secondary sources of data, determine how to clean and transform acquired data, and select the best Machine Learning analytical method or algorithm for the problem at hand. Primary sources for the security domain include network packets, Machine Learning-based analysis of which reveals otherwise invisible communication patterns from an attacker inside the network. Secondary sources are logs routinely collected from other devices, which may provide additional depth to the analysis but not direct evidence of activity, due to the nature of logs' role in providing security defences [7]. Results and discussions DL [13] is a field of machine learning that can be used to learn intricate patterns from large volumes of data. It is generally an architecture formed out of neural network activation functions. Supervised and unsupervised learning refer to labeled and unlabeled data, respectively [14]. For supervised learning, available techniques include Recurrent Neural Networks (fast and efficient), Convolutional Neural Networks (time consuming, but suitable for spatial data, such as images), and Long Short Term Memory networks (which address the vanishing gradient problem [15] that occurs near the end of training, where gradients become very small), apart from traditional neural networks such as Deep Boltzmann Machines or Deep Belief Networks, as well as the fully connected, slow-to-train Multi-Layer Perceptron [14]. Each layer of the deep models shown below can consist of linear or complex activation functions depending on the overall complexity of the problem. For instance, for the malware detection problem, we stacked two layers with linear activations, with two layers of Rectified Linear Units in between them. This was implemented to get the best prediction accuracy for the given data [16]. In this study, the CSIC 2010 HTTP Dataset was used to detect web attacks using session IDs and indices. This data set has been widely used for abnormal behavior detection. Regarding web traffic, some of the problems of this data set are that it is out of date and that it does not include enough actual attacks; hence, it is criticized by security researchers. This is a proven benchmark initially set for researchers to compare different methods of detection and classification of network attacks. 
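One plausible reading of the stacked architecture described above (linear layers with Rectified Linear Units in between) is sketched below using PyTorch; the framework choice, layer widths, and 41-feature input are assumptions for illustration only.

```python
# Illustrative sketch of a stacked model: linear blocks with ReLU layers in
# between, ending in two-class (benign vs. malicious) logits.
# Layer widths and the 41-feature input are arbitrary assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(41, 64),   # first linear block over the input features
    nn.ReLU(),
    nn.Linear(64, 64),   # second linear block
    nn.ReLU(),
    nn.Linear(64, 2),    # logits for benign / malicious
)

x = torch.randn(8, 41)                      # a batch of 8 feature vectors
probabilities = torch.softmax(model(x), dim=1)
print(probabilities.shape)                  # torch.Size([8, 2])
```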
It was built as an improvement over the earlier KDD Cup 1999 dataset in the form of a reduction in redundant records and a proportionate number of records in each difficulty-level group [16]. Experts believe that new attacks can mostly be identified by the signatures of the known attacks. According to this principle, we train the model on the features given in this dataset, and some derived from them. They include, but are not limited to, the duration, protocol type, destination network service, source and destination lengths in bytes, flags, number of wrong fragments, the number of high-QoS packets, etc. The results show that logistic regression classification, where the dependent variable is categorical, can perform anomaly detection efficiently. The ROC curve characterizing the preliminary results is outlined in Figure 3. As illustrated in Figure 3, a simple logistic regression classification with two parameters can achieve a performance of 64%. Utilizing DL techniques with a multitude of features results in higher accuracy. Logistic regression classifies data into two categories, and the area under the Receiver Operating Characteristic (ROC) curve signifies the percentage of accurate classification. According to the results on the dataset, the accuracy percentage is 86%. Another standard metric to evaluate logistic regression classification is precision, which is calculated by dividing true positives by the sum of false positives and true positives. Fig. 3. ROC Curve of Logistic Regression The learning model consists of an input vector X. Logits are outputs of linear functions, which are continuous and differentiable. Logits need to be converted into a scale of probabilities [0,1]. The weight and bias parameters need training. This linear block can be cascaded with multiple different linear blocks that sum up to learn different features of the input. However, to increase the complexity to define finer features of the input, we need a combination of non-linear elements that can do so. This can be achieved by combining rectified linear units that scale inputs. Once the Softmax function converts logits into probabilities, these values can be given to a series of Rectified Linear Units. A Rectified Linear Unit (ReLU) can be expressed as f(x) = max(0, x), where x is the input vector and f(x) is the output of the rectified linear unit. A series of ReLUs [17] cascaded together can form a computationally expensive, although fairly efficient, non-linear differentiable model to capture the complexities of a function. ReLUs can be replaced by functions such as Sigmoid, Tanh, etc. Figure 4 presents the comparison diagram of such non-linear functions for the preliminary results. Sigmoid functions are used for logistic regression. ReLUs outperform sigmoid and give better classification accuracy only by a slight margin. Thus, both are widely used and give comparable results. They introduce non-linearity when layers are stacked on top of one another. Components of the learning model are depicted in Figure 5. In general, the linear blocks or layers can be stacked upon each other, with a non-linear interface between them, as shown below. DL Model (forward propagation) [18] is the very basis of learning, in which features from the first layer are carried forward to the next layer. For training, a widely popular algorithm is Back-Propagation [19], in which gradients, or the relative differences between iterations of the weight calculations, are minimized by a backward-looking architecture as shown below. 
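As a concrete stand-in for the back-propagation step referenced above, here is a minimal sketch, assuming a single linear layer and a mean-squared-error loss; real deep models apply the same gradient computation layer by layer.

```python
# Minimal sketch: gradient descent via back-propagated gradients on an MSE
# loss for a single linear layer y_hat = X @ w.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))                  # batch of inputs
y = X @ np.array([1.0, -2.0, 0.5, 0.0])       # target outputs
w = np.zeros(4)                               # weights to be learned

for _ in range(200):
    y_hat = X @ w
    grad = 2 * X.T @ (y_hat - y) / len(y)     # dMSE/dw
    w -= 0.1 * grad                           # gradient-descent update

print(np.round(w, 3))                         # approaches [1.0, -2.0, 0.5, 0.0]
```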
Back-propagation works by minimizing a differentiable loss function, such as the mean-squared error, with respect to the model weights. Fig. 4. Classification Results This work is supported in part by the Doctoral Graduate Research Assistantship from UNLV Graduate College and in part by the NSF award #EPS-IIA-1301726 (EPSCoR NEXUS).
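For reference, the logistic-regression baseline and ROC evaluation reported in the results can be sketched as follows; synthetic features stand in for the actual dataset, so the 64% and 86% figures quoted above are not reproduced by this toy example.

```python
# Minimal sketch of a logistic-regression baseline with ROC and precision
# evaluation. Synthetic features stand in for real network-traffic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, precision_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))               # e.g., duration, bytes, flags...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]      # probability of "attack"

print("ROC AUC:", roc_auc_score(y_test, scores))
print("Precision:", precision_score(y_test, clf.predict(X_test)))
```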
4,442.2
2018-01-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
UAV Forensics: DJI Mini 2 Case Study : Rapid technology advancements, especially in the past decade, have allowed off-the-shelf unmanned aerial vehicles (UAVs) that weigh less than 250 g to become available for recreational use by the general population. Many well-known manufacturers (e.g., DJI) are now focusing on this segment of UAVs, and the new DJI Mini 2 drone is one of many that falls under this category, which makes it easy to purchase and use without any Part 107 certification or Remote ID registration. The versatility of drones and drone models is appealing for customers, but they pose many challenges to forensic tools and digital forensics investigators due to numerous hardware and software variations. In addition, different devices can be associated and used for controlling these drones (e.g., Android and iOS smartphones). Moreover, according to the Federal Aviation Administration (FAA), the adoption of Remote ID is not going to be required for people without the Part 107 certification for this segment at least until 2023, which makes finding personally identifiable information a necessity in these types of investigations. In this research, we conducted a comprehensive investigation of the DJI Mini 2 and its data stored across multiple devices (e.g., SD cards and mobile devices) that are associated with the drone. The aim of this paper is to (1) create several criminal-like scenarios, (2) acquire and analyze the created scenarios using leading forensics software (e.g., Cellebrite and Magnet Axiom) that is commonly used by law enforcement agencies, and (3) present findings associated with potential criminal activities. Introduction According to [1], there have been dramatic advancements in the drone industry, and it is estimated that the market will reach nearly USD 60 billion by 2025. Moreover, it is predicted that shipments of consumer drones will grow to 29 million units by 2021. Additionally, the increased demand for drones in the public and private sectors is predicted to be worth around USD 100 billion in the future. To put it in perspective, in 2020, consumer drone sales in the United States were over USD 1.25 billion [1]. On the other hand, 367 illegal activities were reported between October 2020 and December 2020 in the USA alone by the Federal Aviation Administration (FAA) [2]. Considering that this number includes only what was reported to the FAA, it is safe to assume that the actual number is much greater. As there are many types of drones that are involved in many of the incidents, the FAA has reported that they are working closely with law enforcement to investigate and identify these incidents [2]. Previous research has looked into multiple variations of DJI drones (e.g., DJI Phantom series and Spark) as well as some other well-known manufacturers. The studies were comprehensive and provided a body of knowledge with an adequate amount of relevant content, which contributed to the unmanned aerial vehicle (UAV) forensic community. However, there was a lack of literature supporting drones weighing less than 250 g (e.g., DJI Mini 2). Our research aims to address the lack of comprehensive forensics methods on the DJI Mini 2 drone and its necessary operational software and controllers by performing extensive experiments with multiple criminal-like scenarios. Additionally, our conducted experiments consist of related mobile software (e.g., DJI Fly app) used to operate the drone, which is populated and then forensically analyzed. 
The contributions of this research are as follows: • Providing a comprehensive forensic investigation across multiple devices (e.g., drone body, SD cards, and mobile devices) associated with the drone. • Analyzing several criminal-like scenarios for the DJI Mini 2 drone in the real world, utilizing multiple devices. • Testing the carrying capabilities of the DJI Mini 2 to evaluate its transporting performance for criminal activities, such as terrorist attacks and smuggling. This paper is structured as follows: Section 2 discusses other research and work that has already been done in the field of UAV/drone forensics. Section 3 demonstrates our methodology and the experiment design that was followed throughout the study. Section 4 highlights our findings, while Section 5 presents the discussion on the importance of our results. Finally, Section 6 concludes this research and provides direction for future work. Related Work As highlighted in [3], an ever-growing concern regarding UAVs would be the easy confiscation or loss of technology, as seen by drones being confiscated and thus privacy violations occurring due to the extraction of data from the device. Drones have been used as a mode of recreation, but there are many uses that companies and governments have found. Just as UAVs have the capability for good, such as collecting information to be used as evidence in lawsuits, they are easily used for malicious purposes, such as violating no-fly zones, illegal usage by criminals, or the launch of aerial missiles [3]. At the conclusion, the researchers were able to access the data on the Parrot AR Drone 2.0 used in their study and access the file system via multiple connection methods. This resulted in retrieving the controller's phone ID [3]. Based on the results shown in the paper, it was proposed to take the Parrot AR Drone 2.0 and other drones and analyze them further to extrapolate the differences in the accessibility of data. Moreover, researchers in [4] experimented using a DJI Spark drone in a criminal-like scenario while it was being controlled by and linked to a mobile device. In this study, the mobile device utilized by the researchers was Android OS-operated, and they used the DJI GO 4 application (app) v4.3.11, which was downloaded from the Android Play Store to control the drone. They found that the mobile device kept the flight data files in two recorded formats: the first is .DAT, and the other is .txt files. Both formats, along with media files taken during the flight, were preserved under the DJI GO 4 package folder named DJI/dji.go.v4. They also were able to find a connection between all components they used for the study, including the drone, SD card, and smartphone, using temporal rules to determine their linkage [4]. Another recent study [5] has demonstrated a comprehensive forensic study on the DJI Mavic Air 2, utilizing multiple devices and case scenarios, in which the last was a crash landing. Researchers have dealt with the damaged drone and performed digital acquisition from different components that were used in the case study (e.g., smartphone, laptop, and drone body). In addition, the researchers took into consideration two possible acquisition methodologies, the first being chip-off and the second being chip-on techniques. As a result, they were able to discover a board serial number that is treated as Personally Identifiable Information (PII), which can be linked to the drone owner/operator [5]. 
Security issues continue to rise with the popularity of UAVs, and it is ever necessary for investigators to be able to forensically analyze devices used for malicious purposes [6]. As presented in [6], one of the primary obstacles comes from the complex and individual-ized structure of the drone when performing forensics analysis. One proposed solution to the challenge was creating a framework that could aid forensics investigators by providing twelve steps of systematic analysis [6]. During the creation of the framework, five different models from multiple manufacturers were utilized. Once created, the researchers tested their proposed framework and found that it could be used for each model in the study to aid in an investigation. In addition, researchers of [7] have highlighted some UAV forensic challenges that are associated with the tools and guidelines/frameworks used by investigators. To tackle some of the challenges they discussed, they have proposed a process that consists of 20 steps while demonstrating it using DJI Phantom 3 as a case study. Moreover, the proposed framework integrates the probability that the UAV is used in an illegal operation, whereas researchers of [8] have a proposed drone technical forensic investigation process that can be used in some cases. The authors have demonstrated the proposed investigation process using a Yuneec Typhoon H drone. A recent study [9] that further explains the analysis of drones is known as the Drone Forensics discipline. This discipline is necessary due to the need for specialized tools and analysis as drones have many enhancements that create barriers during digital forensic analysis. Information can be lost if the examiners are not properly educated or equipped. The study [9] focused only on one manufacturer (DJI) but utilized various models to test and compare forensics tools. The methodology behind the approach was to discover which standards and procedures are best used in an investigation with DJI drones [9]. Tools of analysis were limited to Autopsy, Paraben's E3: Universal, and CsvView/Datcon. Despite the paper utilizing two web tools for additional data extraction at the end, it was not recommended due to the vulnerabilities and lack of reliability. It was found that file systems in different models of DJI drones contain the same standards for media files, EXIF data, spatial movement, metadata, and location [9]. Another very recent study [10] conducted on common drone models utilized in criminal activities demonstrates what information could be gathered to better inform a law enforcement investigation. From the six brands, investigators gathered such information as media files in the form of pictures and videos, flight patterns, locations, and owner identifications. This study continued to indicate the use of drones autonomously or by an operator. This information can dictate the form of data gathered. Illegal activities via drone use come in many forms, such as drug drops over foreign borders, contraband drops over prison fences, and unsolicited surveillance. The scenario provided in the study is based upon the before-mentioned instances of illegal use of drones and law enforcement, engaging forensic analysis to determine such information as ownership and link to crime [10]. The study maintains that state and federal legislation has not managed to regulate drone technology and thus leaves most stakeholders vulnerable. 
In the end, it was concluded that due to the lack of a universal drone structure or forensic tools, continued research must be completed to determine the method of data extraction. Lastly, the researchers left more invasive methods, such as chip-off, as a last resort due to the possibility of damage. Yousef and Iqbal [11] used the DJI Mavic Air drone and the iPhone 6 mobile device for the basis of the experiment in pursuit of determining examiner procedures, extrapolating files that could be used as court evidence. The researchers denoted limitations regarding their process being applied to various kinds of drones and found a challenge in the duration of time allotted for drone examination. Due to time constraints, they could not find the DJI's .DAT and .txt files that would have provided flight information. Hamidi et al. [12] focused primarily on the DJI Phantom 4 Standard drone to determine a specific procedure for analyzing the .DAT file and extracting data from storage sources that could be used in a criminal case. The study resulted in the researchers finding considerable relevant information, which accomplished this study's goal, but in conclusion, it was recommended to continue research on other drone types and a greater research emphasis on .DAT and .txt file structures based on their complexity [12]. On the other hand, researchers in [13] have developed Drone Open source Parser (DROP), which is an open-source parser for DJI Phantom 3 flight logs that are encrypted in the special format of .DAT files. It takes each of the unreadable .DAT files format, which represents a flight log, and decrypts them into .CSV readable format. Moreover, to validate their results, they have compared their tool with the outcome of the decrypted and parsed .txt using http://healthydrones.com/ (accessed on 31 May 2021), which is now https://airdata.com/ (accessed on 31 May 2021). As a result, they found that the decrypted outcome of the DROP tool in a CSV file format is almost identical to the outcome of the flight logs that are decoded and decrypted from the .txt format. In contrast, researchers of [14] used DJI Phantom 3 Advanced as a case study to test the ability to recover GPS data as digital evidence using DatCon tool v2.3.0, enabling them to decrypt the .DAT files recovered from the digital forensic image of the Android smartphone used in the study. Furthermore, in a recent case study [15], researchers have assessed and highlighted important differences in the capabilities of widely used UAV forensic software and tools (e.g., Autopsy, Cellebrite, and Datcon). Although they found that the DatCon tool was able to decrypt the .DAT file and convert it to a readable CSV file format, other tools used in the study have varied results regarding decryption of the .DAT files even though some were using the DatCon module. The study highlights the importance of validation and cross-checking results using multiple tools. Many developments have been integrated into forensics software to detect and acquire drone equipment. For instance, Cellebrite now supports the acquisition of many DJI types (e.g., Inspire 2, Mavic Pro, Mavic Pro 2, Mavic Air, Phantom 3 and 4, and Spark) but not the Mini 2. Despite the efforts and previous research, many questions remain unanswered regarding drone forensics, particularly the new models, such as DJI Mini 2. Methodology Malicious drone usage can be conducted in many ways, and based on the given scenario, the investigation will differ. 
This is due to drones varying in manufacturer, technical capabilities, version, and even size. In addition, when it comes to drone investigation techniques during real-life crimes, the drone might not be the only place to look for evidence. Applications used to operate, set up, and even update the drones can supplement and complement the existing evidence found on the device or be used as primary evidence if deemed appropriate. This is to aid in finding possible connections between the software and the drone to help future investigations. Experiment Design During real-life crimes, when digital evidence is involved and the devices are seized, only rarely will the information populated on those devices relate exclusively to the crime. Filtering through the information presents challenges, especially when it comes to large storage capacities and unknown file structures. For this study, we decided to eliminate unwanted information not related to the case study by controlling the environment as much as our resources allowed. This ensured that the data populated during the study would be easier to filter through and that unwanted cross-contamination from unrelated actions was eliminated, providing comprehensive research from the beginning of the case to the end without any additional steps. The logic behind the selected devices for the study came from the extensive prior research conducted. The research has shown that these devices are most commonly used, available to the general public without any restrictions or prior requirements, and reasonably inexpensive. An additional reason for particularly choosing the DJI Mini 2 drone was the lack of research in the literature and digital forensics investigations. A full list of the devices utilized to create the case study scenarios and conduct the experiment for the research is listed in Table 1. Moreover, to find valuable information, this study followed best practices to acquire and examine the forensic images of the associated devices. For the two smartphones used in this study, we followed the data population and examination guidelines suggested by the National Institute of Standards and Technology (NIST) [16]. In addition, Figure 1 shows the complete methodology involving all the processes and components utilized. Each of the phases will be discussed in detail to outline the steps that were taken into consideration. Application Installation and Preparation Prior to working with the mobile phones and the drone, a brand-new WiFi Service Set IDentifier (SSID) was created for the devices to use during the study. Setting up a new wireless network further ensures control of the environment. The device used as a wireless router was a TP-Link (TL-WR1043N v5) with OpenWRT (19.07.5 r11257-5090152ae3). To set up the case study scenario and ensure no previous data were present, SD cards and smartphones were wiped and restored to factory settings. The DJI Mini 2 did not require this process, since the drone and its associated remote controller were new and had never been used previously. Additionally, Google Inc. Gmail service was used as the email provider and a primary account to set up iCloud, Apple Appstore, Google Play Store, and an associated user account for DJI-related applications (apps). Next, the software necessary to operate the drone during the flight (i.e., DJI Fly) was installed on both phones running Android and iOS. Various digital forensics software tools were used to acquire and analyze the data. 
In this research, we used resources that included open-source tools (i.e., Autopsy [17]) that were complemented with proprietary tools such as Magnet AXIOM, and Cellebrite UFED 4PC, which are used worldwide by law enforcement agencies and practitioners [18,19]. These tools were also used for cross-validation. Table 2 lists all tools that are used in all phases of the experiment (i.e., data population, acquisition, and analysis). The iOS mobile device used for the study was an iPhone 7 with iOS version 13.3.1. After connecting to the WiFi network we created, we logged into the App Store using a previously activated Gmail account and downloaded the DJI Fly app (Version 1.3.1 (440)). Upon download, we opened the app and followed the default options. The DJI Fly application initially presented a video and then asked for enabling Bluetooth, location, and notification service. We selected "only while using the app". The next prompt was if we wanted to participate in the DJI product improvement, and we selected "not now". Lastly, upon logging into the DJI Fly application using the Gmail account, we were notified about geo-zone restrictions and selected "agree". After the previous process, the app was ready for the connection with the drone. Android Setup The Android device used in this study was a rooted Samsung Galaxy s7 running Android 8.0.0. The DJI Fly app had no official version available in the Android Play Store, so v1.3.0 was downloaded as an APK file and then installed on the phone. As a connection to the internet, the phone was connected to the WiFi access point created for this study, acquiring the time and accurate GPS location. Upon opening the DJI Fly app, the prompted messages asked for several permissions (e.g., notification and location services, and access to photos), which were all agreed to. The app on the Samsung Galaxy S7, similarly to the iPhone 7, asked us to either log in or create an account, where we chose to log in, since we already had the account credentials used when we downloaded the iPhone's app. Laptops Three laptops were used during the experiment, Lenovo G570 (Model 4334), HP Pavilion dv6 (dv6-7003em), and MacBook Pro (15-inch, 2018 running 2.9 GHz 6-core Intel i9). The Lenovo laptop had a Windows 10 Home (10.0.19042) operating system, used to set up the accounts, manage the WiFi network, and root the android phone. The HP Pavilion laptop with Linux Kali (Release: 2020.4) was set up to look for any interaction between the drone and the controller. For this, we utilized the monitor mode capability allowing the wireless card (Intel AC7260) to see 802.11 management frames and Wireshark (v3.2.7) capturing the frames on different channels. Although there was no need for the iPhone 7 to be jailbroken when acquiring the iPhone device using Cellebrite, there is a need for the device to be jailbroken using Magnet ACQUIRE because it does not offer jailbreaking for the investigator. Therefore, we needed the MacBook Pro laptop to jailbreak the iPhone using the Checkra1n app (beta 0.12.2). Drone The DJI Mini 2 drone used for this study was a brand new unit. Therefore, we made sure that it was fully charged and equipped with an empty external SD card. To simulate an attacker opening the drone and testing it before the deployment, we logged on to the app and operated once before the data population scenarios using the iPhone 7. All associated applications to operate the drone were installed on the phone used in this study. Devices used in this study are presented in Figure 2. 
Moreover, the figure includes additional items, which will be explained in detail further in the paper. Machines Used for Investigations In this study, two forensic workstations equipped with the same tools were used across all phases of the study to validate the results and eliminate any known software limitations. Additionally, to prevent any software bias, all acquired data were examined with at least two different software solutions if applicable. The first machine was equipped with Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, with 16 GB RAM, Windows 10 Education 20H2, while the second was equipped with Intel i9-10900K processor, NVIDIA GeForce RTX™ 3070 graphics card, and 32GB of RAM memory running X version of Windows 10 Pro. Note that this study does not consider the performance of the forensic processes performed on different workstations. Data Population Following the initial setup of the DJI Mini 2 and confirming the proper functionality (i.e., operating and flying it once), the drone was taken out to a location for the data population. The data population consisted of flying the drone in various patterns and altitudes while gathering images, videos, and other data. The process was completed on both iPhone 7 and Samsung Galaxy s7 utilizing the DJI Fly application for drone controlling and flying. During travel from the setup location of the drone to the location of the data population, we left the drone powered on. In addition, two phones were utilized to navigate to the location using native map apps (i.e., Google Maps for Android, Apple Maps for iPhone). Upon reaching the location, we flew the drone in 4 different scenarios on the first test day and additional flights for testing weight carrying capabilities on another day (i.e., Scenario 5). We discuss these scenarios in detail in the following Sections 3.3.1-3.3.5. Moreover, Table 3 illustrates the scenarios in detail. Note that all times used in this study are based on Eastern Time. This scenario was designed to portray a criminal-like action where a malicious user would purchase a drone that does not require any prior paperwork, and it is widely used. The user sets up the drone using a personal mobile device (i.e., iPhone 7) to make it operational. The user first tests the drone to make sure that it is working as expected for the action. Once the test is completed, the user leaves for the location where the drone will be flown from. Once arrived, the user prepares the drone and confirms the home location before the flight. The user flies the drone to the desired location and flies it back. During the flight, the user was able to take videos and photos of the flight. This type of scenario can be applied to many different real-life situations where malicious users utilize the drone to drop off payloads and perform reconnaissance, etc. Scenario 2 In this scenario, the same device was used as in Scenario 1 but only to perform a one-way flight. The reason behind designing a one-way flight is that we tried simulating the drone being captured or destroyed, hence not returning back. While catching the drone in this scenario, we tried shutting the drone by force clicking and holding the turn-off button; however, we could not turn it off while flying. During the process of capturing, the drone was turned off immediately by rotating it 180 degrees (i.e., top of the drone facing the ground). Scenario 3 The scenario follows a similar process and the idea behind the malicious use as in Scenario 1 with few differences. 
First, instead of iPhone 7 as a mobile device connected to the controller, the device used was Samsung Galaxy 7. Second, the timestamps of the flight were different, as well as the flying path. Scenario 4 This scenario follows the methodology of Scenario 2 except using Samsung Galaxy S7 instead of iPhone 7 as a mobile device. Just like other scenarios, this one has different timestamps of the flight and flying path. Scenario 5 In this last scenario, we decided to test the carrying capabilities of the DJI Mini 2 drone using the iPhone 7 with multiple flights. This scenario was conducted at a separate location and on a different day than the previous four scenarios. The reasoning behind the test was to see the maximum carrying capacity and any difference between carrying payloads using multiple methods. The exact times of this flight scenario were not recorded in Table 3 due to multiple short flights; while trying to determine the maximum carrying capacity, only the date of the flight was recorded. This scenario shows the capabilities of the drone and what potential malicious items that are similar to the size and weight the drone can carry. Maximum carrying capacity can play a big role during the malicious act where dangerous items can be transported, dropped, and deployed. Acquisition To ensure that we made the most out of the two smartphone devices and gain privileged status, we performed jailbreak and rooting operations for the iPhone and Samsung devices, respectively, as previously discussed. This was required to create physical forensic images using Cellebrite UFED, Magnet AXIOM Process, and Magnet Acquire. The SD card used in the case scenario was acquired using Cellebrite's Memory Card Reader with write blocker capability (see Figure 3). Additionally, Table 4 presents the devices and tools used, the version of the tools, and image creation date and time in detail. Analysis and Findings In this phase of the experiment, we gathered, analyzed, and compared the previously populated data. The comparison of the collected data was achieved by utilizing multiple digital forensics tools. We started by analyzing the images created by the acquisition tools. Table 5 explains the symbols used in Table 6, which illustrates the findings from our analysis using the three initial forensics tools utilized for this study. Table 6. Tool evaluation assessment. Moreover, we were able to recover significant PII that can help in investigations. First, we were able to recover the location of the first time that the iPhone was used to connect to the drone (i.e., in our study, it was the setup location) from Application\38FA31DB-9A11-4365-9083-8657C089F83D\Library\.space_db\flysafe_dji_flight_dynamic_areas.db database inside a table named dynamic_geofence_amba_record. We believe that this location was acquired from the phone and not the drone because the first time we operated the drone was inside a building, and the drone was not able to detect GPS signals. Figure 5 illustrates the user geolocation finding. Other PIIs, such as the email that is used for app login, activation timestamp, DJI model name, last connected time, uptime, and aircraft camera serial number were recovered from Application\38FA31DB-9A11-4365-9083-8657C089F83D\Library\Preferences\com.dji.golite.plist. Figure 6 shows flight controller serial number, camera serial number, product name, last connect time, and the drone serial number. 
Moreover, Figure 7 displays the last connected email used to fly the drone, and Figure 8 shows the first time the iPhone was connected to the drone after the user finished with the activation process. iOS Analysis Another finding worth mentioning, in the scenario where the drone was captured, is that the camera on the drone kept recording for another 20 s even though it was flipped upside down, and the propellers were not spinning. When we compared the acquisition after the first day, which consisted of the first two scenarios for the iPhone (see Sections 3.3.1 and 3.3.2), with the acquisition after Scenario 5 (see Section 3.3.5), the uptime did not change. On the other hand, the last connection recorded changed and was updated following the second acquisition that was performed after Scenario 5, reflecting when the drone was last connected to the application using this iPhone. In addition, in the \private\var\mobile\Containers\Data\Application\38FA31DB-9A11-4365-9083-8657C089F83D\Documents\Tmp\DJISyncLog_2021-03-26_ [13-36-13].txt file, we found stored records conveying the DJI drone's synchronization with the phone/server, with timestamps matching the scenarios. Android Analysis Similar to the iOS analysis, we were able to discover both .txt and .DAT file types in an encrypted form. The .txt files containing flight logs were recovered from the \data\media\0\DJI\dji.go.v5\FlightRecord folder, and the .DAT files were recovered from the \data\media\0\DJI\dji.go.v5\FlightRecord\MCDatFlightRecords\ folder. Regarding PII, the location of the first time that the phone connected to the drone was recovered from the table named dynamic_geofence_amba_record inside the following database: \data\data\dji.go.v5\databases\flysafe_dji_flight_dynamic_areas.db. Other PII similar to those recovered from the iPhone (e.g., DJI model name and aircraft camera serial number) can be recovered from the \data\data\dji.go.v5\databases\dji.db database. Figure 9 demonstrates the geolocation values (longitude and latitude) recovered from the dynamic_geofence_amba_record table, while Figure 10 shows the drone serial number along with other valuable information. Moreover, we were able to recover a thumbnail image showing an image captured when the iPhone was controlling the drone; the image was recovered from the following path: \data\media\0\DJI\dji.go.v5\CACHE_IMAGE\ImageCaches\. On the other hand, the video captured during Scenario 4 (see Section 3.3.4) was cut off and not complete, whereas for the iPhone (see Section 3.3.2), it was complete. Additionally, it appears that Cellebrite was not able to extract any images embedded inside the .txt flight logs on Android. SD Card Analysis Analyzing and examining the external SD card revealed that the SD card stores media files taken during the flight and some encrypted logs. These logs are not the flight path logs, and we were unable to decrypt them despite our efforts. On the other hand, all recovered videos have locations associated with them (see Figure 11). In addition, we were able to access media files from both phones and all four scenarios where the SD card was used. Even the video recorded during Scenario 4 (see Section 3.3.4) was found to be complete. Both the pictures and videos stored in the SD card can be recovered from the \img_Dump_001.bin\vol_vol2\DCIM\100MEDIA folder. In addition, we were able to use ExifTool to recover the camera serial number from the pictures taken. Figure 12 demonstrates the recovered camera serial number and plenty of other information. 
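To illustrate how such database records can be pulled during analysis, here is a minimal sketch using Python's built-in sqlite3 module against the flysafe_dji_flight_dynamic_areas.db database named above; the column names are assumptions, since the exact schema is not listed here, and should be checked against the actual table first.

```python
# Sketch: pulling geolocation records from the recovered DJI Fly database.
# The database and table names come from the analysis above; the column
# names are assumed and must be verified (e.g., via PRAGMA table_info).
import sqlite3

conn = sqlite3.connect("flysafe_dji_flight_dynamic_areas.db")
cur = conn.cursor()

# Inspect the real schema first.
cur.execute("PRAGMA table_info(dynamic_geofence_amba_record)")
print(cur.fetchall())

# Hypothetical column names; replace with those reported by the PRAGMA above.
cur.execute("SELECT latitude, longitude, update_time FROM dynamic_geofence_amba_record")
for latitude, longitude, update_time in cur.fetchall():
    print(latitude, longitude, update_time)

conn.close()
```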
Carrying Capacity Analysis The first step of the carrying capacity drone analysis was to determine the overall weight of the drone. The drone was placed on the scale (Perfect Portion) and measured at 240 g (±3 g) compared to the 249 g of marketed weight. Next, the fishing line measuring 60 cm was tied to the bottom of the drone with an attached plastic box where the weights were added. The weight of the plastic box with the fishing line added up to 36 g, making the overall weight of the drone and box 276 g. Weights were added between each flight until the drone was not able to take off. The maximum capacity that the drone was able to lift was around 300 g (including the plastic box, weight, and the line). Although the drone was able to lift that amount of weight during the take-off and flight, it could not maneuver easily. Therefore, the next step was to determine the optimal capacity that the drone could carry and maneuver without any problems. For this experiment, we started removing weights (starting at 300 g) until we were satisfied that the drone operations were restored to normal. The weight we determined in which the drone can restore its normal flight was around 250 g. It is important to note that during the flight, we were experiencing high winds and that it is quite possible that the carrying capacity can vary during different flying conditions. As our last experiment, we decided to place the weights and the box on top of the drone. The drone was able to carry and fly with 250 g attached to it. Moreover, we observed more responsiveness from the drone when the weights were attached on top of the drone compared to hanging and swinging below it. Figure 13 depicts the setup used during this analysis. Data Analysis As discussed earlier, .DAT and .txt flight logs/records were not recognized by the forensics tools used in this study because they were encrypted. Therefore, we ran an entropy analysis using the Binwalk tool (v2.2.0) [20] to calculate each file's entropy score. Figure 14 illustrates the outcome of the tool after running it on both files for Scenario 1 (see Section 3.3.1). As shown in Figure 14, the results showed very high entropy scores throughout both files. Following the literature review, many resources suggested using different tools in an attempt to decrypt the .DAT files generated during the scenarios. The most used freeware tool to decrypt .DAT files is the DatCon [21]. Even though the website stated the model of our drone was not supported, we tested it regardless. It turned out to be true, and the .DAT files could not be decrypted. As an alternative solution to examine the contents of the .DAT files, we found the website airdata.com. Airdata helps pilots and others (e.g., researchers) by providing fleet drone management, as well as crash-prevention information [22]. After the registration of a free account, we were able to upload our .DAT files and decrypt them successfully. Despite the ability of airdata.com to decrypt the .DAT files, we also tried to decrypt .txt files; however, the process was not successful. Moreover, airdata.com allows the user to download the decrypted .DAT files in KML, GPX, CSV, and original formats. Therefore, to ensure data integrity, we compared the outcome of the downloaded CSV file once after the decryption, the second time after deleting the file and enabling the web tool to decrypt it again. 
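The entropy analysis described above can also be reproduced without Binwalk using a few lines of standard-library Python; the sketch below (an independent re-implementation, not Binwalk itself) computes block-wise Shannon entropy, where values close to 8 bits per byte are consistent with encrypted or compressed content. The file name is a hypothetical placeholder.

```python
# Sketch: block-wise Shannon entropy of a flight-log file. Entropy near
# 8 bits/byte across the whole file suggests encryption or compression.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def block_entropy(path: str, block_size: int = 4096):
    with open(path, "rb") as f:
        while block := f.read(block_size):
            yield shannon_entropy(block)

# Hypothetical file name; substitute a recovered .DAT flight record.
for i, h in enumerate(block_entropy("FLY001.DAT")):
    print(f"block {i}: {h:.2f} bits/byte")
```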
Interestingly, we found that the checksum values using the MD5 hashing algorithm of the two downloaded CSV files match, meaning that the tool provides consistent decryption results. While the main motive behind the website (airdata.com) was not to provide digital forensic services to the end-users but to give pilots crash-prevention information and manage large fleets of drones, we found this website very beneficial during our study [22]. Its intuitive user interface and detailed analysis of the log files allowed us to visualize flight patterns, altitudes, battery levels, controls sent to the drone, error codes, and many more. A screenshot in Figure 15 shows the user interface of the airdata.com web tool during the analysis process of Scenario 1 and Scenario 2. Moreover, Figure 16 illustrates the weather tab that contains wind speed faced during one of the flights in Scenario 5. One important finding is we could not have a matching location (i.e., drone flight location and iPhone cached location) from the cached locations recovered from the iPhone in the first acquisition using Magnet Acquire. However, we were able to find matching locations for the drone and the iPhone after the second acquisition using Magnet Acquire. This is due to the fact that the iPhone keeps the cached locations for a certain time (approximately one week), and our first Magnet acquisition was more than seven days apart from the day we flew the drone. However, we were able to find the matching location in the second acquisition because it was performed within seven days. In addition, in the first Cellebrite acquisition for Scenarios 1 and 2, we were able to match the user location with the cached locations on the drone flight records because it was less than a week from the scenario. We recovered the iPhone cached locations from the ZRTCLLOCATIONMO table located in the \private\var\mobile\Library\Caches\com.apple.routined\Cache.sqlite database. Moreover, we wanted to confirm that the drone or the controller was not beaconing any management frames during the flight. We matched the channels by setting the same channel and frequency on the app and the wireless card on the laptop. We then opened Wireshark (v3.2.7) and filtered DJI's Media Access Control (MAC) address. Since we did not want the filter for a specific MAC address, we created a byte-offset capture filter for the first three octets unique to each manufacturer, such as DJI. The filters did not show any MAC addresses related to the DJI. The probable reason for this is because the drone is using OcuSync 2.0, which is a wireless transmission protocol used in these newer types of drones when transmitting video feeds over long distances. Although the user interface on both iOS and Android devices looked similar, one major difference we spotted was the password character limitation when accessing the accounts before the flight. Lastly, during the capturing scenarios of the drone operations on both iOS and Android mobile devices, we tried shutting off the drone first while in flight by pressing and then holding the power button. However, this method did not work. When we flipped the drone upside down, the propellers stopped, and we were able to suspend the drone operations. This illustrated how this drone can be best captured without causing damage to the drone. Discussion During the course of this study, we experienced several difficulties with the digital forensic tools when it came to decrypting the flight records. 
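The OUI-based filtering described above can also be applied offline to a saved capture; the sketch below uses scapy (an assumption, since the study only names Wireshark) to keep 802.11 frames whose transmitter address begins with a given three-octet manufacturer prefix. The OUI shown is a placeholder, not a verified DJI prefix, and the capture file name is hypothetical.

```python
# Sketch: offline filtering of a monitor-mode capture for frames whose
# transmitter MAC starts with a manufacturer OUI (first three octets).
from scapy.all import rdpcap
from scapy.layers.dot11 import Dot11

TARGET_OUI = "aa:bb:cc"            # placeholder manufacturer prefix
packets = rdpcap("capture.pcap")   # hypothetical capture file

for pkt in packets:
    if pkt.haslayer(Dot11) and pkt[Dot11].addr2:
        if pkt[Dot11].addr2.lower().startswith(TARGET_OUI):
            print(pkt.summary())
```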
Therefore, we can safely conclude that there is no one tool that would be superior in drone forensics, as discussed in Section 4 in detail. It is important to note that during these types of investigations, the investigators need to utilize multiple techniques and sources to find and examine data. As we have shown, the tool that was able to decrypt the .DAT files was not intended for digital forensics investigations; however, it was able to decrypt and then analyze the flight logs. Moreover, current digital forensics tools could not recognize the drone when connected directly to the forensics machine. This is because this model of the drone is new, and it is found that this model is not equipped with storing data internally. However, that does not imply that the internal memory of some sort does not exist. This leaves the door open for further analysis, such as chip-off analysis, which is out of the scope of this research. As it was discussed in Section 4.4, the carrying capability of the DJI Mini 2 drone was more than we expected. The ability to carry more than its weight should be very concerning when thinking about malicious activities that can be performed easily without being detected. Furthermore, another consideration is that this type of drone does not require any previous paperwork or registration, which leaves law enforcement agencies with no ability to track back to the owner or a malicious actor. At the time of this writing, there are no major changes expected in FAA rules regarding micro drones (e.g., DJI Mini 2) at least until 2023 [23], leaving room for malicious activities. Security concerns arose when the DJI Fly app was not in the official Android Play Store, and it had to be downloaded as an APK file and then installed on the mobile device. This presents great security concerns since the owner of the device does not always have the latest version (i.e., updated version) of the app. In addition, the app can contain malicious code placed by accident or on purpose. It is also possible that malicious code can be introduced or hard-coded into the APK file, which leads to vulnerable mobile devices used by the end-users. During the initial setup of the drone, we were able to fly the drone without a GPS signal successfully, further indicating anti-forensic challenges to hide any PII related to the location. As a result, these challenges will make the investigation challenging, even if they consist only of the phone and the drone with the controller. The error codes were found during the examination of the .DAT files via the airdata.com website. These errors showed a possibility to discover if the drone was carrying a load while operating. The error code: "Not Enough Force/ESC Error" was repeated continuously during the weight testing flights. Figure 17 shows the definition of this error code according to the airdata.com wiki page for notification [24]. Moreover, Figure 18 demonstrates the error code while the drone is being captured in Scenario 2 (see Section 3.3.2). Conclusions and Future Work Drone usage over the last few years, especially in recreational uses, has dramatically increased in production. This increase greatly affects law enforcement agencies that are trying to battle malicious drone usage. In this case study, we had complete control over all stages of the conducted comprehensive experiment on the DJI Mini 2 drone to utilize it in creating criminal-like scenarios. 
Similar scenarios are likely to be carried out for nefarious purposes in the real world by malicious users. While the findings of this study are limited to a single UAV (the DJI Mini 2), it is important to note that the drone's availability, price, and lack of regulation make it easy to use in criminal-like activities. That is why a comprehensive study of the drone can help forensic investigators faced with similar scenarios. As discovered in our analysis, the carrying capacity of the DJI Mini 2 exceeds the weight of the drone itself, which could be exploited by malicious actors in scenarios such as dropping contraband into prisons or smuggling drugs across borders. For future work, we plan to perform chip-off data extraction and analysis. This may be very useful because it allows data to be acquired from the drone body itself, which, as previous research has shown, can reveal insightful information. Moreover, a further experiment can be performed to examine how battery usage is affected when the drone carries various weights. Finally, the OcuSync 2.0 transmission technology can be examined to study the communication between the devices by applying a UAV kill chain to identify possible intrusion vulnerabilities.
9,688
2021-06-01T00:00:00.000
[ "Computer Science" ]
Machiavellian Variations, or When Moral Convictions and Political Duties Collide Commenting on Michael Walzer’s essay, the author adopts a perspective that traces back to Machiavelli. In this view, ‘dirty hands’ is a true problem faced by politicians, not a philosophical fiction or a moral quandary resulting from wrong reasoning. ‘Dirty hands’ results from the collision of two spheres of human action -morality and politics- which entail different duties; it concerns actions which have extremely serious public consequences and therefore applies eminently to politicians and the public sphere. The author examines different scenarios to elicit a clear view of the specificity of this problem, which is not analogous to the conventional issue of immorality in politics. ‘Dirty hands’ is a problem that cannot be avoided by politicians, because they have responsibility over the ultimate decisions; it follows that people who wish not to dirty their hands should thus refrain from entering the political realm. In this essay I will argue that Walzer meritoriously reproposed the issue of the conflict between moral values and political ends, which I will interpret in a more restricted way, as comprising only actions which affect a large population or an entire State and which have dramatic public consequences.The problem of dirty hands is exquisitely political because politics is the realm where decisions affecting all citizens are made, their happiness and misery and, sometimes, their life or death.It arises from the collision of two spheres of human action which entail different duties and it is dramatic because the true politician realizes that saving the State, the common good of citizens, has priority over one's moral convictions.I will conclude that the problem of dirty hands reveals the dramatic side of politics because doing what is politically right does not morally exonerate the politician: it leaves us with a good statesman and a bad human being. Walzer famously took the name of the problem he wanted to examine from Jean-Paul Sartre's 1948 play Les mains sales [Dirty Hands].In the play, the Communist leader Hoerderer concludes his response to the anarchist Hugo by rhetorically asking "Do you think you can govern innocently?"This is actually a very old question, possibly as old as politics itself -witness Solon's rejection of the alluring offer of tyranny in the 6th century BCE. 1 In France, the staunch revolutionary Saint-Just had already answered it negatively, famously stating that "Nobody can rule guiltlessly". 2Walzer sees the problem of dirty hands as a moral issue concerning all human beings and not only philosophers or politicians; for it has to do with the very possibility of living a moral life while discharging the duties of one's office.Walzer acknowledges one can get dirty hands in private life but focuses on political action as he is only interested in the dilemma as faced by politicians.He explains very well "that a particular act of government (in a political party or in the state) may be exactly the right thing to do in utilitarian terms and yet leave the man who does it guilty of a moral wrong" (Walzer: 161).On the other hand, failure to make the right but difficult decision would result in not fulfilling the duties of one's office.Walzer thus concludes that "The notion of dirty hands derives from an effort to refuse 'absolutism' without denying the reality of the moral dilemma" (162). 
I find it significant, and even amusing, that Walzer begins his answer by offering "a piece of conventional wisdom to the effect that politicians are a good deal worse, morally worse, than the rest of us (it is the wisdom of the rest of us)" (162). 3Perhaps this sounded appropriate in Richard Nixon's America and perhaps it suits people who deal with la politique politicienne, but I am not convinced this is a general rule.Politics, admittedly in rare and exalted moments, is also the realm of elevated ideals and of great realizations: one Martin Luther King Jr. or one Gandhi redeem thousands of petty carpetbaggers and even a Joseph McCarthy.Machiavelli, the author whose 1 The Athenian statesman and lawgiver Solon was elected archon, one of the supreme offices in Athens, in 594 BCE.He was given the specific task of pacifying the factional strife which plagued the city and he did so while remaining within the legal boundaries of his office.Solon himself in his poems recalls that he rejected the offer to become tyrant of Athens and was derided for his choice; however, he knew that "Justice always comes in the end".See Edmonds (1982). 2 On Louis Antoine Léon de Saint Just, 'the Archangel of the Terror', see Abensour (1990). 3 Susan Mendus makes many interesting observations while examining the question whether politicians are morally worse than the rest of us: see Mendus (2009).perspective I will adopt, considered politics the most important sphere of human endeavour; for it is only in politics, and through true politicians, that great changes affecting millions of people take place. Walzer states that there have been three ways of thinking about dirty hands: Machiavelli is the first and best representative of one of these traditions.He did not question existing moral standards, but he argued that statesmen sometimes must trample upon them in order to attain power and glory -the two supreme rewards for a politician.Success is the standard by which the statesman is evaluated: if he succeeds, he is a hero and a good man -in politics we judge using a consequentialist perspective.Walzer hints at another crucial point: Machiavelli probably believed that politicians surrender salvation in exchange for glory -I will return to this point.The second tradition is exemplified by Max Weber and his view of politicians as tragic heroes, beset by anguish for the decisions they have to make.In a very Machiavellian vein (noticed by Walzer), Weber argues that the statesman should adopt the "ethics of responsibility" and make his decisions thinking about the consequences, discarding moral absolutes when these lead to political ineffectiveness or failure.Politics is a vocation and a serious matter.As representatives of the third tradition Walzer picks Albert Camus' protagonists in his 1949 play Les Justes [The Just Assassins].Based on historical events of the Russian Revolution of 1905, this play is inspired by, and a reply to, Jean-Paul Sartre's Les mains sales.The protagonists are the members of the Revolutionary Socialists who plan to kill the Grand Duke Serge and the debate between Kalyaev and Stepan mirrors the disagreement between Camus and Sartre about utilitarian morality, sparing the innocent and committing evil deeds to achieve a greater good.Walzer believes that the 'Catholic Model', as he calls this last tradition, enables the crimes of dirty hands actions to be socially recognised and punished.This is desirable since the Protestant (Weberian view) places too much emphasis on the conscience of the 
politician and requires that she limit her actions and sense of guilt according to her own assessments of the crimes done.The 'Catholic Model' allows some form of expiration of guilt and blame for the politician.Walzer concludes that this last description is the most convincing, for the agents know that what they are doing is morally wrong and are ready to pay the price for their actions. I believe that a good starting point for our discussion consists in the realization that the problem of dirty hands is a true problem, not a philosophical abstruse fiction: to have dirty hands is not a good thing, and (almost) all people would prefer to abstain from actions which dirty their hands.It is simplistic to argue that certain deep moral dilemmas are simply the result of a failure of rational reasoning. 4A quick look at biographical literature would confirm that many statesmen who had to make dramatic decisions had such dilemmas and moral qualms after the decision was made.This is due -as Walzer rightly put it-to the existence of a moral world in which actions take place; this fact leads to the necessity sometimes to override deep moral constraints. Walzer characterizes the dilemma of dirty hands by asking: "And how can it be wrong to do what is right?Or, how can we get our hands dirty by doing what we ought to do?"I think that the question should be rephrased, allowing the problem to appear in a clearer light: "How can it be (morally) wrong to do what is (politically) right?". 5 The two spheres -morality and politics-entail different duties and those who wish to be always moral should not enter the political realm.For politics, serious politics, means placing the common good above one's preferences and even above one's moral considerations.This fact, however, is valid only in extreme circumstances.Ordinarily, we want our politicians to be moral people: they should keep their promises, they should not embezzle money, they should always think of what is advantageous to the country and not merely to their party.We chose and voted for certain people exactly because we believed them to be moral people and we trusted they could be good representatives.Walzer nicely adds the realistic consideration that we want our politicians to be good -but not too good, hinting at their willingness to dirty their hands when it is absolutely necessary. Some Conceptual Distinctions and Clarifications Walzer posed the problem of dirty hands in analytic as well as historical terms.He saw in Machiavelli the author who first identified the issue and singled out his statement that "the prince should learn how not to be good".Following Machiavelli's perspective, Walzer conceives of this as an eminently political problem (the title of his essay is revelatory): I think this is quite correct and I will argue too that the scope of the problem of dirty hands should be restricted to the political sphere, and more specifically to extreme circumstances.It is only in this realm that we can appreciate the dramatic quality of the problem, when decisions must be made which affect the lives of millions of citizens.In addition, I take 'dirty hands' to refer only to political circumstances in which some "supreme emergency" -to quote Walzer-is involved: the problem of dirty hands regards only these situations, when killing the innocent, or mass killing, or other gravely immoral acts are involved. 
6The trite notion that politics is 'the art of compromise', where the best is the enemy of the good, to the effect that politicians must inevitably make trade-offs, sometimes compromising even their own morality, should not concern us here: sordid transactions, petty misdemeanours, agreements with despicable people obviously 'dirty' a politician's hands but they are an inevitable outcome of the imperfection of human nature: judges and priests exist to take care of those deeds.Those actions concern the relationship between personal morality and politics; on the contrary, dirty hands should be taken to mean bloody hands and to imply a dramatic, catastrophic scenario; they refer to extreme situations, when the survival of a nation or of a political arrangement, with all that this involves especially in terms of loss of liberty and lives, is at stake.The problem of dirty hands, therefore, discloses why, in certain extreme circumstances which are not rare in politics, political considerations should take precedence over moral convictions. The problem of dirty hands is inherent to politics and has thus always existed ever since human beings started living together and dealing 'politically' with public matters; and it will always be there to haunt and challenge statesmen and their conscience. 7Politics presents alternatives to politicians and requires them to make decisions; sometimes these alternatives are dramatic, because they challenge, or even clash with, the politicians' moral beliefs and values.Aristotle described the wise person and good statesman as "the person who is capable of making the right decision" (Nicomachean Ethics VI 5, 1140a25-30; cf.I 1, 1094b11) but did not conceive of a possible clash between moral values and political virtue because in his view ethics is a part of politics: for only the virtuous person judges correctly the situation and makes the right decision accordingly.Machiavelli discovered, described and dramatized this clash, although it was always there.I would make here another important distinction.As I said, I take the problem of dirty hands to refer only to the political sphere and only to extreme situations. 
8This means that all circumstances which involve a clash between moral values and political action are not a question of dirty hands: to be so, they require a state of emergency and the possible destruction of a political arrangement or of a way of life (from democracy to tyranny, for instance, from freedom and autonomy to slavery); and the loss of many human lives.Most of the conflicts between morality and politics regard political choices in ordinary circumstances, so it is important not to confuse the choice of a (immoral) policy with a problem of dirty hands.Let's take a vivid example from an actual politician and master moralist of the past: Plutarch.In his Life of Themistocles 20, 1-2 Plutarch recounts that after the surprising Greek victory over the Persians at Salamis, Themistocles made an incredibly bold proposal to increase and secure Athenian power over all Greece.When the panhellenic fleet was wintering at Pagasae, Themistocles addressed the Athenians saying that "he had a certain scheme in mind which would be useful (ophelimon) and salutary (soterion) for them, but which could not be broached (aporrheton) in public".At this point in Plutarch's moralizing story Themistocles' long-time political opponent, Aristides, enters the scene: the former embodies shrewdness and expediency, the latter justice and fairness.The Athenians tell Themistocles to inform Aristides alone, and if he should approve of the scheme, it will be put into execution.Here is the conclusion: "Themistocles accordingly told Aristides that he purposed to burn the fleet of the Greeks where it lay; but Aristides addressed the people and said of the scheme which Themistocles purposed to carry out, that none could be either more advantageous (lusitelesteran) or more iniquitous (adikoteran).The Athenians therefore ordered Themistocles to give it up."Themistocles' proposal was most unjust because it treacherously exploited the circumstances to increase Athenian power: it did not concern the survival of Athens or the freedom of her citizens; it was a matter of foreign policy not a question of dirty hands. Likewise, I would not consider Plato's 'noble lie' (Republic 3, 414b-415d) an issue of dirty hands: lying to your fellow-citizens to prevent social unrest and, ultimately, to enable all to flourish according to their idea of happiness, is not a matter of life and death; it is a policy issue. 9If we do not confine the problem to the political realm, and more specifically to extreme situations which could lead to the destruction of the State and the misery of most citizens, we cannot grasp its complexity and tragic character.Surely, ordinary human beings in their everyday life may experience dramatic moral dilemmas; but the scope of their consequences is inevitably limited because even the most powerful private individual cannot but affect only few people with their actions.If they are capable of affecting millions, then the question becomes political.Mark David Chapman's murder of John Lennon was not a political act, whereas Gavrilo Princip's killing of Archduke Franz Ferdinand of Austria and his wife Sophie was -one simply needs to look at the consequences and the people affected. 
What is unique about politics and politicians, then -one may ask?Politics is the sphere where decisions which affect thousands, sometimes millions of people, are taken.Politics concern the State, namely associated human life and hence the 'common good': a stateman's decision affects the life and well-being of an entire people; it may produce happiness or misery, life or death for millions -witness the actions of Saddam Hussein and Muhammad Ghaddafi, or more recently Vladimir Putin.While morality focusses on the behaviour of individuals; economic decisions may affect thousands of people when, for instance, a top-level banker bets on high-risk financial tools or the CEO of a big corporation causes bankruptcy for their careless behaviour.But even these financial disasters do not impact on an entire population nor, typically, entail death on a large scale.If they do, these decisions become political issues.If we apply the label 'dirty hands' to individual moral issues or minor political questions, and we can surely use the expression in an evocative way in these realms, we ultimately trivialise the problem.Carl Schmitt spoke of the "utmost degree of intensity" of "the political" and he grimly, but correctly, explained that it is the only sphere of human life where one can ask people to sacrifice their lives or can ask them to shed blood and kill other human beings. 10Even if we do not conceive of the political as residing in the distinction/opposition between "friend and enemy", we must acknowledge the supreme importance this sphere of human action has; for, by 9 Plato's Socrates argues that rulers should tell their fellow-citizens a "noble lie" concerning their birth and origin: they are all born from the earth and are therefore brothers; but the god moulded those who are suited to rule mixing gold in them; those who are good auxiliaries and soldiers mixing silver in them; and moulded farmers and artisans mixing bronze and silver in them.This lie is 'noble' because it serves as a foundational myth to support the established order.No need to add that all Platonic interpreters with liberal leanings found this use of lies in politics deplorable and rebarbative.See for all Popper (1945). 10See Schmitt (2007: 35).He accordingly added that if religious leaders are able to persuade their followers to sacrifice their lives and kill others for their cause, they have entered the sphere of 'the political'.Likewise, pacifist bleeding hearts enter the political realm the moment they can "declare a war against all wars": 36-37.concerning the common good, it is the pre-condition of all other human endeavours -art, morality, economics and so on. 
Democratic Politics and Dirty Hands Returning to the story that Plutarch offers us in his Life of Themistocles, it is worth noting that it is interesting for two main reasons.It depicts a scene where two political options, two courses of action, are possible: one has expediency (and power politics) as the top consideration, the other justice and loyalty to allies.The immorality of the former, as well as the morality of the latter, are obvious and evident in any epoch, because loyalty has always and everywhere been praised over betrayal.The Athenian people opted for morality.And this leads to the second point of interest: how much praise or blame should the people in a democracy receive from the actions of their leaders?For, what characterises democracy is common, public decision-making, which entails sharing the merit and the responsibility for political choices.Walzer himself, and many authors afterwards, have argued that democracy entails a sort of collective responsibility; as Martin Hollis put it: "Political actors, duly appointed within a legitimate state, have an authority deriving finally from the People.[…] When their hands get dirty, so do ours".11I will return to this question after examining in more detail the context of political decision-making. Along the same lines, we may wonder whether a democratic leader, and especially a principled one, perceives differently the problem of dirty hands.In my view, if we understand the problem correctly, in the terms I have specified, there is no great difference.A principled democratic leader has only two additional burdens as compared to other politicians: she is accountable to the public opinion as well as to the democratic institutions;12 and she is accountable to her conscience.But the two fundamental elements of the problem of dirty hands remain the same: it is not only a matter of choosing the right alternative, which implies the capacity to correctly understand the situation; it is a matter of having the courage to do it because, once the right option appears evident, acting requires trampling upon one's personal moral values. Let's take two apparently similar examples which, however, lead to different conclusions.The decision of the American President Harry Truman to drop two atomic bombs on the cities of Hiroshima and Nagasaki is an example of the dramatic and fateful decisions that sometimes a politician is required to make.The cost in civilian lives was so high that many philosophers and political theorists (including Walzer himself) have argued that it was unacceptable and tantamount to a war crime (see e.g.Anscombe 1981a, b).Walzer forcefully maintained that this decision, like the choice to carpet-bomb German cities after 1942, was based on untenable utilitarian calculations.Truman's decision has all the elements of a tragic dilemma, meaning that any choice inevitably entails guilt.As we know, he was the Vice-President of the United States and replaced President Roosevelt when he died in April 1945; he found himself in the condition of having to make choices which he probably did not envisage when he campaigned alongside Roosevelt.At the same time, he knew very well what his position entailed: being the 'Commander-in-chief' means taking care of the well-being of one's fellow-countrymen, placing their lives at the top of one's priorities and bearing the ultimate responsibility for all political decisions. 
13If one visits the Truman Library in Independence (Missouri), one may observe a sign that was placed on the President's desk, stating 'The buck stops here'.Apparently, Truman liked the expression very much, which he thought to perfectly catch the role of the President: in his address in January 1953, he stated that "The Presidentwhoever he is-has to decide.He can't pass the buck to anybody.No one else can do the deciding for him.That's his job".We may imagine the agony and the pro-and-con reasoning preceding his decision but in the end one argument proved to be resolutory -the utilitarian calculation of the casualties on both sides had an invasion of Japan been tried.Behind it, in all evidence, there was Truman's sense of responsibility towards humankind (any human life is valuable), but especially towards his country and fellow-citizens: the duty of the statesman is first towards his citizens.Truman acted on this principle and had the courage to make the decision which he defended to the end. 14e should analyse in a similar perspective the decision of President Barack Obama to authorize the killing of American citizens suspected of terrorism.They include Anwar al-Awlaki and (apparently by mistake) his 16-year-old son Abdulrahman al-Awlaki.The father, a radical Muslim cleric, was accused of posing "an imminent threat of violent attack against the US" because of his virulent proselytising, involvement in al-Qaida terrorist plots and incitement to violence.He was killed in a drone strike in Yemen on September 30, 2011, together with another American citizen. 15any organizations, including the American Civil Liberties Union, decried these killings but court rulings supported the President's decision.Obviously, the courts evaluated only the lawfulness of these acts, not their morality.We should do more than that.We should first question whether these killings can be subsumed under the 13 Some authors, such as S.L. Sutherland, have argued that the conventional dirty hands problem lays too much emphasis on "the condition of the soul of the supra-ethical or maverick leader": Sutherland (1995). 14See for instance Harry Truman's Address in Milwaukee, Wisconsin of 14 October 1948, where he credited President Roosevelt for "the courage and foresight" to authorize the Manhattan Project and continued: "As President of the United States, I had the fateful responsibility of deciding whether or not to use this weapon for the first time.It was the hardest decision I ever had to make.But the President cannot duck hard problems-he cannot pass the buck.I made the decision after discussions with the ablest men in our Government, and after long and prayerful consideration.I decided that the bomb should be used in order to end the war quickly and save countless lives-Japanese as well as American." 
15 For an account of the controversial figure, and killing, of al-Awlaki and the details of the operation see Shane 2015.For insightful comments see de Wijze (2009) and Lenze and Bakker (2014).De Wijze has a nuanced position: he argues that targeted killings may be morally reprehensible but also morally justifiable (and sometimes even obligatory) to protect citizens from great harm; they reveal "the messy moral position of politicians and military strategists" who end up with dirty hands, for they do wrong in order to do right.Since he examines the targeted killing of a foreign combatant, I am in complete agreement with him.Lenze and Bakker too present the problem in a nuanced fashion; I only disagree with their conclusion that President Obama did what was necessary and therefore his act is justified from a moral perspective. notion of the 'problem of dirty hands' since they were not carried out in a situation of emergency that put the entire nation at stake.I do not think the notion is applicable here.These actions did not occur during wartime against a legitimate, declared enemy since the notion of 'War on Terror' is merely an evocative expression.Wars have rules that have been developed over many centuries by political and legal theorists.The idea of 'imminence' in al-Awlaki's case was evidently applied very loosely and the Fifth Amendment's guarantee of 'due process' was completely neglected.These considerations would not matter if the actions had occurred in extreme circumstances which could have potentially killed many American citizens -after all, the President's first duty is towards his own fellow-citizens.But this threat did not occur in a condition of extreme and imminent danger and the US government had many other options to counter it before resorting to the targeted killing of al-Awlaki.I think that the diriment consideration here is al-Awlaki's nationality: he was an American citizen; I find morally contradictory and politically ominous the killing of one's own fellow-countrymen, for the primary role of the State is to protect one's citizens.I thus believe these killings should be subsumed under the category of 'government policy' and, as such, be considered immoral, illegal, and unwarranted.They are the beginning of a slippery slope that eventually ends in totalitarianism. We may pose the question again: Why should political considerations trump moral, and other, considerations?Machiavelli had already understood this problem and gave an innovative answer, which horrified many of his contemporary and subsequent readers: we may call his discovery 'the pre-eminence of politics'.Machiavelli believed that without the State, without law and order, moral agency and morality are not possible nor is any other decent human activity: this is why he saw the State as the common good, which is to be preserved at all costs. 
16Aristotelian talk about 'the good life' is inane if there are no laws, institutions, government, in a word the State, to make them possible, to enable citizens to act as moral people and thrive.We need not look far for examples and demonstrations: the recent collapse of two political systems, in Albania and in Libya, vividly show what happens when the government and the enforcement of law and order are absent: there is chaos, killings and the obvious dominance of the stronger over the weak, leading to the terrible social and political ethos that 'might makes right'.Life itself, let alone the good life, is imperilled in these situations.This is why for Machiavelli it is necessary to maintain the State: it is the pre-condition of everything, of practicing politics, of exercising morality and living a truly human life, of pursuing one's image of happiness. 1716 I think this point was well caught by Thomas Jefferson in a letter in which he defended his decision to authorize the purchase of Louisiana from Napoleon (the Constitution did not give him this power): "A strict observance of the written law is doubtless one of the high duties of a good citizen, but it is not the highest.The laws of necessity, of self-preservation, of saving our country when in danger, are of higher obligation.To lose our country by a scrupulous adherence to written law, would be to lose the law itself, with life, liberty, property and all those who are enjoying them with us; thus absurdly sacrificing the end to the means": Letter to John Monticello dated 20 September 1810. 17I am here arguing for a specific image of Machiavelli as the discoverer of the tragic side of politics.For he forcefully argued that morality and politics entail different duties and these may clash in certain dramatic situations: and in these circumstances, true politicians must remember that their first priority is to save the State, which equals the common good.We should therefore discard two traditional interpretations of Machiavelli's thought: the family of interpretations which argues that Machiavelli was a political realist The Notion of 'Necessity' in Politics It is at this stage that we encounter the notion of 'necessity' in politics.Some distinctions are, again, to be made.In his Just and Unjust Wars (1977) Walzer reported the speech of Chancellor von Bethmann Hollweg to the Reichstag on August 4, 1914, where he stated "Gentlemen, we are now in a state of necessity, and necessity knows no law". 18Walzer comments, and I concur, that if one observed the situation objectively, there was no emergency or necessity and thus Bethmann Hollweg's words were mere rhetoric, a call to action -an action, to be sure, that Germany was about to initiate.However, in politics, we noticed, certain circumstances actually force the statesman to act in a way that is against justice, against morality and (for the believers) against religion, and it is thus legitimate to use the notion of 'necessity', which is connected to 'emergency' and often borders with that of 'reason of State'. 
Machiavelli deserves that honour.In the sound and fury of Italian wars in the early 16th century, in a society imbued with Christian morality, he realized and dared to write that the first and supreme goal of a politician-saving the State-requires him to be ready to damn his own soul.This is a fine point, well caught by Walzer in his article.Machiavelli was not an innovator in the field of morality.He used the words 'good', 'evil', 'cruelty', in their ordinary, accepted meaning of the age; he never argued that what is evil in the sphere of morality may become good in that of politics: politics, contrary to Croce's famous statement, who attributed this discovery to Machiavelli, is not autonomous from morality.This is exactly what makes certain political choices so dramatic.Machiavelli discovered the 'seriousness of politics'-as Nicola Matteucci put it-the fact that politics has an inner dimension of duty which is sometimes in contrast with that of morality (for the notion of 'seriousness of politics' see Matteucci 1984: 31-67).Moral convictions and political duties sometimes clash, and the politician must rise to the challenge and realize that the common good, the State, must be preserved at all costs, including giving up eternal salvation.It is not that 'the end justifies the means', as the popular interpretation goes, but rather that one end justifies all means (Prince: 18): this end is the preservation of the State, or the creation of a new one, because without the State no moral life, no good life, indeed no bare life is possible.Building on this assumption, Machiavelli argued that the statesman should "not depart from good, if possible, but be able to enter evil, when necessitated" (Prince: 18).For him this choice does not entail a moral dilemma: evil remains evil, it is not redeemed by political considerations; and the statesman does not have any special moral dispensation when he acts.Machiavelli may accordingly conclude that the statesman must be ready to damn his own soul to protect his fatherland.For instance, he praised Cesare Borgia for his behaviour concerning his lieutenant Remirro dell'Orco: a "cruel and ready man", Remirro had in short time disposed of unruly aristocrats as well as highwaymen in Romagna, and thus "pacified and therefore thought that morality has no place in politics; or the first 'political scientist' who separated the art of politics from morality; or that he was a 'teacher of evil'.We should also reject the 'republican', or 'oblique', interpretation of Machiavelli, which sees him as a supporter of republican (or even democratic) regimes; the author who secretly, or disguisedly, discloses the evil doings of princes to the people.I have argued for this interpretation of Machiavelli in Giorgini (2017). 
18See Walzer (1977: 240).and unified" it, making it possible for ordinary people to live well.However, since Remirro's cruelties had earned him (and Borgia) a certain amount of hatred, Cesare had him executed in a theatrical way: Remirro's body was found cut in two pieces on the piazza at Cesena and -Machiavelli comments-"the ferocity of this spectacle left the people at once satisfied and stupefied" (Prince 7).It is an obvious notion that using a lieutenant to serve your political purposes and then having him killed for the same purposes when this is more convenient is a disloyal, treacherous and murderous behaviour; Machiavelli, like all his contemporaries, agreed on that.In a Christian perspective, Cesare Borgia was surely destined to Hell and Machiavelli would not have objected to this fate.But here lies the drama of politics: the statesman's duty towards the common good forces him to sometimes make immoral choices and face eternal damnation (for a more complete treatment of this topic I wish to refer to Giorgini 2019). In Machiavelli's perspective the fact that the context for the immoral action was created by someone else is not important; the immorality of the circumstances is not a requirement in his view (for a different opinion see de Wijze 2007).Machiavelli takes for granted that there are moral values on which people agree: loyalty is better than disloyalty; generosity is better than avarice; forthrightness is better than sneakiness.This is why he goes such a long way to show to his prospective prince that there are situations in politics when one must be disloyal, stingy and sneaky in order to fulfil the duty to save the State (Prince: 15-18).I follow Machiavelli in believing that the problem of dirty hands arises because two different spheres of action, comprising different values and ends, sometimes collide.It is not a moral dilemma or a conflict of moral values; it is an alternative between goods and ends and I believe Machiavelli was right in pointing out that saving the State (and its citizens) is the politicians' first priority.We call it the problem of 'dirty hands' because we all acknowledge that certain actions are good and others are evil; so dropping the bomb on civilians was surely evil but it was not wrong from a political perspective.And we expect the person who authorized it, because we imagine them to be moral persons, to have the same sense of deep remorse that Colonel Paul Tibbetts had after he dropped the bomb on Hiroshima as the commander of the B-29 Enola Gay.(Perhaps it is not just a tragic irony of history that one of the three B-29 airplanes that participated in the mission on Hiroshima was named Necessary Evil.) 
Politics has no special exemption from the moral order for Machiavelli and this is well shown by Machiavelli's constant appeal to the notion of 'necessity' and by his ubiquitous use of the word in the infamous Chaps.15-18, where he examines the qualities that the new prince should have.Walzer was drawn to the famous statement "the prince must learn how not to be good".Let's examine Machiavelli's exact phrasing and its context.In Prince 15 Machiavelli sets forth to examine "the things for which men and especially princes are praised or blamed".He prefaces his analysis by saying that "since my intent is to write something useful to whoever understands it, it has appeared to me more fitting to go directly to the effectual truth of the thing than to the imagination of it".After this profession of realism, Machiavelli gives his view of the human condition: "It is so far from how one lives to how one should live that he who lets go of what is done for what should be done learns his ruin rather than his preservation.For a man who wants to make a profession of good in all regards must come to ruin among so many who are not good."And this is the human predicament: "Hence it is necessary to a prince, if he wants to maintain himself, to learn to be able not to be good, and to use this and not use it according to necessity" (Machiavelli 1998, Chap.15, emphasis mine).It is a necessity of the human condition that a prince must learn "how not to be good", how to overcome the common morality that he himself shares in his ordinary transactions.In the subsequent chapters Machiavelli reiterates this lesson by examining the canonical virtues that a prince should have according to the specula principis and overthrowing their teachings.He sums up his thought in Chap.18, where he explains that sometimes using the laws is not enough and the prince must therefore use force, which is typical of beasts; and he adds that "a prince is compelled of necessity to know well how to use the beast".This is his famous conclusion: This has to be understood: that a prince, and especially a new prince, cannot observe all those things for which men are held good, since he is often under a necessity, to maintain his state, of acting against faith, against charity, against humanity, against religion.And so he needs to have a spirit disposed to change as the winds of fortune and variations of things command him, and as I said above, not depart from good, when possible, but know how to enter into evil, when forced by necessity (Prince: 18). It is barely necessary to point out the dramatic, frequent occurrence of the notion of 'necessity' in these lines.They emphasise the dimension of duty inherent in the political realm.Ordinary morality is still in place, also for the prince, but his political duty forces him to contravene it: he, and his soul, will pay the penalty personally for that, but the common good will be safe. 
There is thus no specific morality appropriate to political activity. The only difference between a statesman and an ordinary citizen lies in the fact that politics concerns the common good of the citizens and therefore statesmen must have this as their first priority, accepting the fact that it may collide with moral imperatives. This is why dirty hands is a true problem: if there were a specific morality appropriate to statesmen, there would not be any clash with the moral imperatives of ordinary people; but there is not. This realization discloses the tragic side of politics: there may arise extraordinary situations in which the statesman must trample upon these moral imperatives. It is a conflict of allegiances but the statesman, by entering politics, made a choice and opted for placing the common good first. This conflict of allegiances was known to ancient authors but for them it was a fact and, as such, it simply illustrated the tragic side of politics. For Machiavelli politics was a life-choice, and by making this choice the statesman willingly accepts the rules of the game. Consider the tragic alternative faced by Agamemnon: the success of the expedition against Troy required the sacrifice of his own daughter. His duty as the leader of the Hellenic army clashed dramatically with his duty as a father; it is a conflict of allegiances and a clash of imperatives -personal and political. No choice is obviously correct and both entail guilt -hence the tragedy. Agamemnon's fateful decision will eventually bring revenge and death at his wife's hands upon him, another dramatic turn which will set Orestes' revenge in motion in an almost endless drama.19 From this drama I elicit another lesson. 'Dirty hands' is not a matter of choosing the lesser evil: for morality and politics have different dimensions of duty which are incomparable. If we applied Hume's logic to Agamemnon's dilemma, the solution would be easy and immediate: the destruction of the entire Hellenic army is preferable to a scratch to Iphigenia's finger.20 However, Agamemnon bears the responsibility of being the leader of the army that intends to avenge his brother's honour, and he feels that his public duty must trump his private affection. Machiavelli would have commented that Agamemnon made the right political choice; and that he paid the penalty for his moral outrage. Machiavelli's addition to this picture lies in his insistence that the statesman should know all this in advance and should be prepared to rise to the occasion. Politics is the most rewarding sphere of human endeavour because a person can be the author of his fellow-countrymen's flourishing and can thus reap that eternal glory which is the reward of great statesmen (an idea Machiavelli took from Cicero's somnium Scipionis, another connection with the classics).21 But in a Christian world (Machiavelli never questioned certain Christian moral imperatives), the well-being of the citizens and the salvation of the State may come at the price of the statesman's soul: for, by committing certain deeds, he will renounce eternal salvation. The statesman must thus be ready to accept to pay the penalty for his moral crimes.
Conclusion The fascination that emanates from Machiavelli's works, and especially The Prince, stems from the exalted position he grants to the statesman combined with the responsibilities that go with it: Machiavelli could subscribe to Plato's definition of 'politics' as the most architectonic of all arts, because it directs all the others to produce the common good. Likewise, Machiavelli's statesman has this elevated position because he can accomplish deeds that are reachable uniquely in politics -this is why he writes that creators and saviours of States are always lauded. However, Machiavelli bluntly warns his prospective statesmen of the responsibility they carry -the wellbeing, and sometimes the life and death, of their fellow-countrymen. If they are not ready to do everything that this requires, they should stay out of the political arena. Once you opt in, the only honourable way out is by performing your duty. The problem of dirty hands cannot be solved; it can only be avoided by refusing to be a politician. However, subsequent politicians, and in their wake moral and political theorists, discovered the eternal truth of Machiavelli's insight, and sometimes they found out the hard way. 19 See Aeschylus' trilogy Oresteia: Agamemnon, The Libation Bearers and The Eumenides, performed in 458 BCE. Always interesting on this topic Nussbaum (1986). 20 I am referring here to Hume's famous saying to the effect that "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger": see Hume (2007: 2.3.3.6). 21 Stuart Hampshire rightly insisted that Machiavelli conceived of the good for man as virtù, exemplified by "glorious worldly achievements which will be recognised in history": see Hampshire (1989: 165). Accordingly, "the virtues that are essential to an admirable private life, such as loyal friendships and a sense of personal honour and of integrity, have their cost in political powerlessness".
9,872.8
2023-09-13T00:00:00.000
[ "Philosophy", "Political Science" ]
Local immune responses to tuberculin skin challenge in Mycobacterium bovis BCG-vaccinated baboons: a pilot study of younger and older animals Individuals over the age of 65 are highly susceptible to infectious diseases, which account for one-third of deaths in this age group. Vaccines are a primary tool to combat infection, yet they are less effective in the elderly population. While many groups have aimed to address this problem by studying vaccine-induced peripheral blood responses in the elderly, work from our lab and others demonstrate that immune responses to vaccination and infectious challenge may differ between tissue sites and the periphery. In this pilot study, we established an in vivo delayed-type hypersensitivity model of Mycobacterium bovis BCG vaccination and tuberculin skin test in two adult and two aged baboons. Vaccination generates BCG-specific immune cells that are recruited to the skin upon tuberculin challenge. We tested short term recall responses (8 weeks post-vaccination) and long term recall responses (25 weeks post-vaccination) by performing skin punch biopsies around the site of tuberculin injection. In short term recall responses, we found increased oxidation and decreased production of immune proteins in aged baboon skin at the site of TST challenge, in comparison to adult skin. Differences between adult and aged animals normalized in the long term response to tuberculin. In vitro, aged peripheral blood mononuclear cells had increased migration and functional responses to antigen-specific stimulation, suggesting that age-related changes in the tissue in vivo impairs aged immune recall responses to antigenic challenge. These findings highlight the impact of age-associated changes in the local tissue environment in memory recall responses, which may be more broadly applied to the study of other tissues. Moreover, these findings should be considered in future studies aimed at understanding and improving aging immune responses to vaccination and tissue challenge. Background By 2060, individuals aged 65 and older will constitute approximately one-fourth of the U.S. population [1]. This shift in demographics will have significant financial and public health consequences. As we age, so do many aspects of our immune system resulting in decreased vaccine protective efficacy and increased susceptibility to infection [2,3]. This has been recently highlighted by the increased susceptibility of the elderly to SARS-CoV-2, in addition to other infections, such as influenza and bacterial pneumonia [4,5]. To improve health outcomes in our aging population, we must gain a better understanding of age-associated changes in immune cell and tissue function in the elderly. With increasing age immune cells in the periphery shift to more differentiated CD4 and CD8 T cell subsets with altered function and acquisition of senescence markers [6][7][8][9]. Additionally, increased systemic levels of inflammation with advanced age contribute to age-related diseases [8,10]. While peripheral blood studies have led to a greater understanding of the immunological aging process, much less is known about the impact of age on memory responses in tissue compartments. Studies, including those from our lab, have shown that immune responses in the periphery and at the site of infection can differ [11][12][13]. The skin is a site of frequent exogenous challenge and contains resident and recruited cells of both the innate and adaptive arms of the immune system [14]. 
Ageassociated changes in the skin tissue, i.e. increased inflammatory mediators and reactive oxygen species (ROS), decreased chemokine production by resident cells, and altered skin vasculature, reduces cutaneous immunity and increases risk of skin infections [14][15][16][17]. In humans, studies have shown that development of delayed-type hypersensitivity responses to antigen challenge, including TST, are reduced and/or delayed in the elderly due in large part to high levels of inflammation that blunt recall memory responses in the skin [18][19][20]. The skin is therefore an ideal model to study age-related changes in tissue immunity. Because elderly tissue samples in humans are difficult to obtain and limited, it is necessary to develop novel models to study aging immune responses in the tissue. Here we took advantage of the high homology between the baboon and human immune system to establish an in vivo tissue model to study immune responses to vaccination [21,22]. A group of two adult and two aged baboons were vaccinated with Mycobacterium bovis BCG (BCG) and the classical delayed-type hypersensitivity (DTH) reaction of the tuberculin skin test (TST) was utilized to evaluate antigen-specific recall responses to the skin in timed biopsies [23]. We tested short term (ST) recall responses (8 weeks post-vaccination) as well as long term (LT) recall responses (25 weeks postvaccination) to evaluate the impact of age on vaccineinduced tissue immunity. Results and discussion Establishing an in vivo model of tuberculin recall response in BCG-vaccinated baboons To evaluate the impact of age on vaccine-induced recall immunity to the skin, we vaccinated 4 baboons (two adult and two aged, according to age ranges previously described [24], (Table 1) We tested ST recall responses to BCG with TST, or 0.9% saline skin injection (control), at 8 weeks postvaccination (Fig. 1a) by performing two TST and two saline skin injections on the chest of each animal (Fig. 1b). This time period of 8 weeks allows for development of immune memory to BCG [25]. Immune recall responses were determined by performing skin punch biopsies surrounding the site of TST or saline injection (Fig. 1b, c) at two time points post-TST (ST 3-day biopsy, and ST 7-day biopsy) (Fig. 1a). We chose these two time points post-TST for biopsy collection to account for age-related differences in kinetic recall response to TST. In support of this, clinical responses to TST manifest within 3 days of TST; however, maximal cellular infiltration doesn't occur until 7 days [23]. Moreover, TST responses in the elderly are commonly delayed or reduced [18,26]. A second challenge was performed 25 weeks after BCG-vaccination to determine LT recall immune responses (Fig. 1a), with two TST and two saline skin injections performed on a different region of the chest. For determination of LT recall responses, LT 3-day biopsies and LT 7-day biopsies post-TST were also obtained (Fig. 1a). Saline injection did not induce any changes in skin biopsies obtained from adult and aged BCG-vaccinated baboons at any time point ST or LT (included in figures as a baseline measure; shown in Fig. 1c, left), supporting that the injection procedure did not induce non-specific responses. Additionally, animal weight in both age groups was unchanged throughout the duration of the study (See Supplementary Fig. 1B, Additional File 1). 
Increased oxidation and altered immune mediator production in skin from aged baboons, in response to TST Work from our labs has shown that the elderly lung environment is pro-oxidative and inflammatory, leading to increased susceptibility to Mycobacterium tuberculosis (M.tb) infection [27][28][29]. Moreover, cutaneous infections are more commonly observed in older individuals due to age-related changes in the skin, such as decreases in structural integrity and cellularity and increases in ROS [14,19,30,31]. A predominant source of ROS in the skin is superoxide anion [15]. We evaluated superoxide levels in the skin of BCG-vaccinated adult and aged baboons in response to TST using electron paramagnetic resonance (EPR). (Table 1 lists the sex, date of birth, and age in years of the adult and aged baboons in the study.) In the ST response, we observed increased superoxide in aged skin from 3-day biopsies (Fig. 1d, left). Oxidation in adult skin increased at the ST 7-day biopsy time point, reaching levels comparable to those of aged skin (Fig. 1d, right). In the skin, glutathione is one of the most important antioxidants capable of reducing ROS levels [16]. Higher levels of reduced glutathione were found in aged skin from ST 3-day biopsies in response to TST (Fig. 1e, left). Similar to oxidation levels, reduced glutathione was higher in adult skin from ST 7-day biopsies, reaching the levels observed in aged skin (Fig. 1e, right). Lastly, we measured protein carbonyls in the skin, which reflect the degree of tissue oxidative damage [16]. We saw no differences in protein carbonyls between our age groups in ST biopsies (Fig. 1f). Skin obtained at the LT challenge time point showed a trend toward increased superoxide levels and protein carbonyls in aged skin from LT 3-day biopsies (Fig. 1g & h, 3-day), with oxidation levels decreasing in both age groups in LT 7-day biopsies (Fig. 1g & h, 7-day). Reduced glutathione levels showed a trend toward an increase in adult skin in response to TST in LT 3-day and 7-day biopsies post-TST (Fig. 1I). These results suggest that older animals mount an early, enhanced oxidative response not observed in adult animals. Elderly skin has been shown to have higher basal levels of inflammatory cytokines [19]. When challenged, early and non-specific inflammatory responses in the skin have been observed [32], blunting subsequent antigen-specific responses. We next evaluated the cytokines, chemokines, and growth factors present in aged and adult skin from BCG-vaccinated baboons. In response to TST, we observed significantly decreased production of cytokines, chemokines, and growth factors in the skin of aged baboons in the ST 3-day biopsies (Fig. 2a). In both age groups, immune protein levels were decreased in ST 7-day biopsies (see Supplementary Fig. 2A, Additional File 1). During the LT recall response, TST-challenged 3-day biopsies from aged skin responded similarly to TST-challenged adult 3-day biopsies, although aged skin had higher levels of IL1β and a fold change increase in several chemokines (Fig. 2b). Overall, the relative magnitude of immune protein levels in LT biopsies was less than that observed in the ST response to TST, suggesting that recall responses wane from the short term to the long term challenge. Cell infiltration and skin histological analysis is similar between aged and adult skin Histological analysis of skin from BCG-vaccinated aged and adult animals, using hematoxylin and eosin staining, was performed to determine the extent of cellular infiltration.
Inflammation was visually assessed and quantified as percentage inflammation by a boardcertified pathologist. In the ST recall response, TSTchallenged adult skin from 3-day biopsies had more percent affected inflammation in comparison to aged animals supporting the elevated cytokines detected at this same time point (Fig. 2c,e). Inflammation decreased to comparable levels in ST 7-day biopsies in response to TST (See Supplementary Fig. 3, Additional File 1). We observed equal levels of inflammation in TST-challenged skin from the LT 3-day and 7-day biopsies in our age groups (Fig. 2d,f; See Supplementary Fig. 4, Additional File 1). Skin from aging individuals has known structural changes including decreased extracellular matrix components and reduced epidermal thickness [19,33], so we also evaluated structural alterations. Adult ST 3-day skin biopsies showed minimal increase in epidermal thickness by visual analysis, which is supported by the slight increase in inflammation in adult skin in response to TST, relative to aged skin (Fig. 2c,e). Aged PBMCs have increased migration in response to skin tissue homogenates To test if advanced age impacts recall responses through changes in cell migration, we tested peripheral blood mononuclear cell (PBMC) chemotaxis in response to mediators present in skin tissue. Adult or aged PBMC migration was evaluated in response to adult skin or aged skin homogenates, respectively, termed homogenous chemotaxis (Fig. 3a). Moreover, heterogeneous chemotaxis was evaluated by testing adult PBMC migration to aged skin homogenates and aged PBMC migration to adult skin homogenates (Fig. 3b). Data are shown as fold-increase in the response of PBMCs from aged baboons relative to adult. In response to TST skin tissue, aged PBMCs had increased migration in both homogenous and heterogeneous migration assays, relative to adult PBMCs ( Fig. 3a & b, Adult TST vs Aged TST). This increase of aged PBMC migration was observed regardless of biopsy time point (ST vs LT 3-day and 7-day biopsies). To account for non-specific migration in response to immune mediators present in resting skin, homogenous and heterogeneous cell migration was also tested in response to 3-day and 7-day biopsies from saline injected skin (Fig. 3a&b, Adult NaCl & Aged NaCl). In response to saline skin tissue, aged PBMCs had higher levels of cell migration than adult PBMCs. These findings suggest that in vitro aged BCGvaccinated PBMCs have better capacity to migrate in response to TST biopsy homogenates. Also, these findings suggest that the PBMC response was increased independent of the tissue-specific milieu (TST vs saline). Based on these findings, we believe that age-associated changes in the tissue structure in vivo impacts cell migration that is not detected using an in vitro cell assay that requires tissue disruption. It is important to note that these migration studies were performed using PBMCs, a mixture of cell types, as the input cell source, and future phenotypic analysis of migrated cells may shed light on aged-related differences in migrating cell populations. While it is possible that a yet unidentified chemoattractant is driving the enhanced migration we observed in aged PBMCs, this is a less likely explanation because the chemoattractant would need to be found in both adult and aged skin, and at both basal levels and in response to TST. 
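As a rough illustration of the two summary statistics used for the migration read-out above (fold change relative to the adult response and a Student's t test), the sketch below uses placeholder replicate counts rather than data from the study; scipy's two-sample t test is assumed here as a stand-in for the test named in the figure legends.

```python
import numpy as np
from scipy import stats

# Placeholder migrated-cell counts per replicate; not data from the study.
adult_tst = np.array([120.0, 135.0, 128.0])
aged_tst = np.array([210.0, 198.0, 225.0])

# Fold change of each aged replicate relative to the mean adult response.
fold_change = aged_tst / adult_tst.mean()
print("Aged fold change vs adult:", np.round(fold_change, 2))

# Two-sample Student's t test, Adult TST vs Aged TST.
t_stat, p_value = stats.ttest_ind(adult_tst, aged_tst)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```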
PBMC from aged baboons have functional responses to antigen stimulation
Following our observations of increased migration of PBMC from aged baboons, we tested functional responses of PBMCs from our vaccinated animals in response to in vitro stimulation with mycobacterial antigen. In the ST (Fig. 3c-e, Weeks 8, 8 + 3-day, 8 + 7-day) and LT (Fig. 3c-e, Weeks 25, 25 + 3-day, 25 + 7-day) response to TST, no differences in antigen-specific T cell cytokines (IL2 and IFN-γ) were observed between age groups (Fig. 3c). Adult PBMCs had transiently increased production of the pro-inflammatory cytokines IL1β, TNF and IL6, although this was from blood collected prior to LT skin challenge and therefore not induced by TST (Fig. 3d). Adult PBMCs from the LT TST response had increased CXCL10, although this was only observed at one time-point and not sustained throughout the LT response (Fig. 3e). At all time points tested, aged and adult PBMC baseline responses (media alone) were below the limit of detection.
Fig. 3 Increased aged PBMC migration in response to skin homogenates and functional PBMC responses to antigen. a Homogeneous migration of adult and aged PBMCs in response to skin tissue homogenates from TST and NaCl ST 3-day and 7-day biopsies and LT 3-day and 7-day biopsies. A is adult; O is aged. Each dot represents one animal. b Heterogeneous migration of adult and aged PBMCs in response to skin tissue homogenates from TST and NaCl ST 3-day and 7-day biopsies and LT 3-day and 7-day biopsies. Each dot represents a replicate, performed in triplicate, from adult or aged PBMCs by age group. Shown is fold change vs adult for the homogeneous (a) or heterogeneous (b) experimental setup. Student's t test Adult TST vs Aged TST and Adult NaCl vs Aged NaCl, *p < 0.05, **p < 0.01, ***p < 0.001. c-e PBMCs from adult and aged vaccinated baboons from the time points indicated were stimulated with CFP for 5 days. Supernatants were collected and antigen-specific responses were detected by Luminex for production of (c, d) cytokines and (e) chemokines. Student's t test Adult vs Aged PBMCs, *p < 0.05
Given the pilot nature of this study, this work is limited by a small sample size of four total animals and a modest age difference between our adult and aged groups. In our study, adult animals were defined as less than 12 years old; however, other aging baboon studies define adult animals as 5 years and older [34]. Despite these limitations, we observed robust tissue changes between our age groups and established the use of this model for future studies in a larger group of animals. Our findings demonstrate that age-related changes in the skin tissue (increased oxidation, decreased immune proteins, and decreased inflammation) result in reduced early immune responses to antigenic challenge at the tissue site. This suggests that studying the impact of increased age on tissue immune responses to vaccination or infection may be a more successful approach to understanding immunity in the elderly.
Animal procedures
Studies were conducted in four (two adult and two aged) baboons from the conventional colony at The University of Oklahoma Health Sciences Center (OUHSC) (Fig. 1). All animals were housed in Animal Biosafety Level 1 (ABSL1) facilities at OUHSC for the duration of the study. All procedures were approved by the University Institutional Animal Care and Use Committee at OUHSC and the OUHSC Biosafety Committee.
Animals were vaccinated with Mycobacterium bovis BCG (Pasteur strain, ATCC, Manassas, VA) via the intradermal route in the upper arm at a dose of 5 × 10^5 CFU. Skin tests were performed on animals at the times indicated in Fig. 1 by injecting 100 μL of tuberculin (Colorado Serum Company, Denver, CO) containing 5 tuberculin units of purified protein derivative (PPD). Saline injections (100 μL of 0.9% NaCl, Baxter International Inc., Deerfield, IL) were performed in animals to serve as negative controls. All skin tests were performed in the chest skin of animals according to the diagram in Fig. 1. All animals had positive TST responses in the study as determined by visual observation (Fig. 1c). At 72 h and 7 days post-skin test challenge, 8 mm skin punch biopsies (ThermoFisher Scientific, Waltham, MA) were obtained from the sites of tuberculin and saline injection (schematic in Fig. 1). All biopsies were shipped overnight to Texas Biomedical Research Institute (Texas Biomed) for downstream processing. Blood was collected in sodium heparin vacutainers according to the timeline in Fig. 1 and shipped overnight to Texas Biomed for processing. All procedures were performed under anesthesia (10 mg/kg ketamine and 0.05-0.5 mg/kg acepromazine, Covetrus, Portland, ME), and included monitoring of weight, body temperature, heart rate, respiration rate and capillary refill time.
Preparation and storage of skin punch biopsies
Immediately following collection at OUHSC, each 8 mm skin biopsy was cut in half using a pathology blade and biopsy halves were stored in 10% neutral buffered formalin (ThermoFisher Scientific), snap-frozen in liquid nitrogen, or prepared for electron paramagnetic resonance (EPR) analysis. Remaining skin tissue was banked for future analysis. For EPR analysis, skin biopsies were further divided into two pieces, weighed, and incubated with the following: 1) 400 μM CMH-hydrochloride (Enzo Life Sciences, Farmingdale, NY) or 2) 400 μM CMH + 500 nM rotenone (MP Biomedicals, Santa Ana, CA) + 100 μM antimycin A from Streptomyces sp. (MilliporeSigma, Burlington, MA) (CMH + RA) for 30 min at 37°C. After the 30-min incubation, the CMH or CMH + RA solution was removed from the tissue and stored in a cryovial. Both tissue and CMH solutions were snap-frozen in liquid nitrogen. Biopsies were shipped on dry ice (snap-frozen tissues) or at room temperature (tissues in formalin) to Texas Biomed for downstream processing.
Preparation of tissue homogenates from skin biopsies
Skin punch biopsies (snap-frozen in liquid nitrogen) were homogenized in Lysing Matrix D tubes (MP Biomedicals) in Tissue Extraction Reagent I (ThermoFisher Scientific) with cOmplete Mini Protease Inhibitor Cocktail (MilliporeSigma). Protein content was determined by Pierce BCA protein assay kit (ThermoFisher Scientific), according to kit instructions.
Measurement of tissue oxidation levels
For superoxide determination by EPR, CMH and CMH + RA tissue supernatants were thawed and loaded into a quartz cell. Superoxide levels in the samples were determined by EPR. EPR spectra were obtained on the Bruker EMXnano ESR system (Bruker Corporation, MA, USA) using the following parameters: frequency, 9.636541 GHz; center field, 3435.30 G; modulation amplitude, 2.000 G; power, 0.3162 mW; conversion time, 40.00 ms; time constant, 1.28 ms; sweep width, 100.0 G; receiver gain, 40 dB; and 1 scan. Samples were baseline-corrected relative to the CMH-only control. The area under the curve for each sample was then determined.
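As a minimal sketch of this analysis step (not the vendor software routine), the baseline correction and area-under-the-curve computation can be expressed with NumPy as below; the array names and the synthetic spectra are purely illustrative assumptions.

import numpy as np

def superoxide_auc(sample_spectrum, control_spectrum, field_axis):
    """Subtract the CMH-only (control) scan from the sample scan and integrate
    the corrected signal over the magnetic-field sweep (area under the curve).
    A simple subtraction is used here as an illustration of baseline correction."""
    corrected = sample_spectrum - control_spectrum
    return np.trapz(corrected, field_axis)

# Illustrative synthetic data spanning the 100.0 G sweep centered at 3435.30 G
field = np.linspace(3385.3, 3485.3, 1024)
control = np.zeros_like(field)                         # placeholder CMH-only scan
sample = np.exp(-((field - 3435.3) ** 2) / 20.0)       # placeholder CMH + tissue scan
print(f"AUC = {superoxide_auc(sample, control, field):.2f} (arbitrary units)")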
Protein carbonyls in skin tissue homogenates were determined using an OxiSelect Protein Carbonyl ELISA kit (Cell Biolabs, Inc., San Diego, CA), according to kit instructions. Carbonyl levels were normalized to protein content in homogenates, as determined by BCA assay. Reduced glutathione (GSH) was determined in skin tissue homogenates using a GSH/GSSG Ratio Detection Assay kit (Abcam, Cambridge, MA), per kit instructions. Glutathione levels were normalized to protein content in homogenates, as determined by BCA assay.
Isolation of PBMCs from whole blood
Blood was diluted with 1X PBS, and Lymphocyte Separation Media (Corning Life Sciences, Tewksbury, MA) was slowly dispensed underneath the blood, followed by centrifugation at 950 x g for 20 min at room temperature with the brake disconnected. After centrifugation, the interface containing PBMCs was transferred to a new tube and washed once with 1X PBS, and red blood cells were lysed (freshly prepared lysis solution containing 0.15 M NH4Cl, 10 mM KHCO3, 0.1 mM Na2-EDTA). Cells were washed twice to remove lysis solution and re-suspended in complete medium: 1X RPMI 1640 supplemented with 25 mM HEPES (MilliporeSigma), 10% heat-inactivated fetal bovine serum (Atlas Biologicals, Fort Collins, CO), 1% HyClone, 1% L-glutamine, and 1% MEM Non-Essential Amino Acids (all from ThermoFisher Scientific).
PBMC stimulation
For cell stimulations, PBMCs were plated at a final concentration of 250,000 cells/well. Cells were stimulated for up to 5 days in the presence of media or Mycobacterium tuberculosis culture filtrate protein (CFP), which contains BCG cross-reactive antigens (BEI Resources, Manassas, VA). For 24-28 h incubations, CFP was used at a concentration of 20 μg/mL. For 5-day incubations, CFP was used at a concentration of 10 μg/mL. Supernatants were collected at the end of the incubation and stored at −80°C.
Freezing and thawing of PBMCs
To prepare PBMCs for freezing, cells were re-suspended at a concentration of 10 × 10^6 cells/mL in freezing medium: 85% heat-inactivated FBS + 10% DMSO + 5% of a 45% glucose solution (MilliporeSigma). Cryovials containing 1 mL cell suspension were transferred to a pre-chilled Mr. Frosty and stored at −80°C for no longer than 48 h. Vials were then transferred to liquid nitrogen for long-term storage. To thaw PBMCs, pre-warmed complete medium was supplemented with 0.2 μL/mL of Benzonase HC (MilliporeSigma). Cryovials containing frozen PBMCs were quickly thawed in a 37°C water bath. After thaw, 1 mL cell medium + 0.2 μL/mL Benzonase was added to each cryovial and the volume was transferred to a 15 mL conical tube. Cells were centrifuged at 250 x g for 7 min at room temperature. Cells were then resuspended in complete cell medium and counted for downstream analyses.
PBMC migration to skin tissue homogenates
PBMC migration was determined using the CytoSelect 96-Well Cell Migration Assay (5 μm pore, Fluorometric Format; Cell Biolabs, Inc.), according to the manufacturer's instructions. Briefly, skin tissue homogenates with known protein content according to BCA assay were prepared at 50 μg protein in a final volume of 150 μL serum-free medium (1X RPMI 1640 supplemented with 25 mM HEPES, 1% HyClone, 1% L-glutamine, and 1% MEM Non-Essential Amino Acids). Homogenates were added to the bottom chamber of the cell migration plate according to the homogeneous (Adult to Adult; Aged to Aged) or heterogeneous (Adult to Aged; Aged to Adult) experimental design.
Then 500,000 PBMCs in serum-free media were added to the top chamber of the cell migration plate and incubated for 6 h at 37°C. At the end of the incubation period, cell detachment solution was added to the cell harvesting plate. Next, the media in the top-chamber wells of the cell migration plate, containing non-migrating cells, was discarded, and the cell migration top-chamber plate was inserted into the cell harvesting tray for 30 min at 37°C to detach cells. The bottom chambers of the cell migration plate were set aside. After the cell detachment incubation, 75 μL of detachment media (from the harvesting plate) and 75 μL of media (from the bottom chambers of the cell migration plate) were added to a clear 96-well plate. Lysis buffer and dye solution (from the kit) were added to each well of the clear 96-well plate and incubated for 20 min at room temperature. After the incubation, 150 μL from the clear 96-well plate was moved to a clear-bottom black plate and fluorescence was read in a plate reader with a 485 nm/538 nm filter and a 530 nm cutoff.
Statistical analyses
Data analysis, graphing, and statistical analysis were performed using GraphPad Prism versions 7-9 (La Jolla, CA). For statistical analysis, the following tests were used as described in the figure legends: one-way ANOVA with Tukey's post hoc correction for multiple testing, and Student's t test for comparison of means between two groups. Statistical differences between groups were considered significant when the p-value was less than or equal to 0.05. The data are presented as mean ± SEM for n = 2 animals per age group.
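A minimal sketch of these comparisons with SciPy is given below; the numerical values are placeholders rather than study data, and the grouping (adult vs aged, several biopsy time points) is only illustrative of how the two-group t test and the ANOVA plus Tukey workflow would be applied.

import numpy as np
from scipy import stats

# Placeholder measurements (e.g., a skin readout) for illustration only.
adult = np.array([1.2, 1.4, 1.1, 1.3])
aged = np.array([2.1, 2.4, 2.0, 2.3])

# Mean ± SEM, as reported for each group
print(f"adult: {adult.mean():.2f} ± {stats.sem(adult):.2f}")

# Student's t test for a two-group comparison (adult vs aged)
t_stat, p_two_group = stats.ttest_ind(adult, aged)
print(f"t test: t = {t_stat:.2f}, p = {p_two_group:.3f}")

# One-way ANOVA across several groups (e.g., biopsy time points),
# followed by Tukey's HSD post hoc correction (available in recent SciPy versions)
day0, day3, day7 = adult, aged, np.array([1.6, 1.8, 1.5, 1.7])
f_stat, p_anova = stats.f_oneway(day0, day3, day7)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(stats.tukey_hsd(day0, day3, day7))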
Narrowing the Digital Divide: Framework for Creating Telehealth Equity Dashboards
Telehealth presents both the potential to improve access to care and to widen the digital divide, contributing to health care disparities and obliging health care systems to standardize approaches to measure and display telehealth disparities. Based on a literature review and the operational experience of clinicians, informaticists, and researchers in the Supporting Pediatric Research on Outcomes and Utilization of Telehealth (SPROUT)–Clinical and Translational Science Awards (CTSA) Network, we outline a strategic framework for health systems to develop and optimally use a telehealth equity dashboard through a 3-phased approach of (1) defining data sources and key equity-related metrics of interest; (2) designing a dynamic and user-friendly dashboard; and (3) deploying the dashboard to maximize engagement among clinical staff, investigators, and administrators. (Interact J Med Res 2024;13:e57435) doi: 10.2196/57435
Telehealth Equity
The COVID-19 pandemic catalyzed a surge in telehealth adoption [1,2]. However, disparities in access to and adoption of digital health care persist among Black, Hispanic, public-insured, low-income, and rural populations [3,4]. This "digital divide" risks worsening health disparities in these populations [5]. As such, Crawford and Serhal [6] created the Digital Health Equity Framework (DHEF) to guide the equitable design and implementation of future digital health interventions. The DHEF takes into consideration how individuals' sociocultural and economic contexts influence intermediate factors, such as environmental stressors and health behaviors, which then drive the digital determinants of health (eg, acceptability of or access to digital health and digital health literacy) at the root of these disparities. While health systems can use the DHEF to implement equity-minded telehealth strategies, understanding and bolstering the quality of the digital infrastructure within the communities they care for are critical steps to ensuring equitable access to telehealth [7]. Unfortunately, digital analytics are still lacking in understanding patterns of use for those underserved by technology infrastructure. Dashboards that showcase key performance indicators in real time have become valuable tools to track health care access, understand disparities, and apply interventions. Yet, there are no consensus guidelines for the creation of telehealth-specific equity dashboards, which can apply the nuanced considerations for telehealth equity outlined through the DHEF to existing standards for data monitoring. To standardize such dashboards, the Supporting Pediatric Research on Outcomes and Utilization of Telehealth (SPROUT)-CTSA Network formed the Telehealth Equity Workgroup. Evidence on best practices for the collection and use of equity-related data continues to evolve. Based on the review of the existing literature and the operational experience of clinicians, informaticists, and researchers in this workgroup, we aim to describe a strategic framework for adult- and pediatrics-serving health systems to execute telehealth equity dashboards through 3 phases: define, design, and deploy (Figure 1). In addition, we offer a checklist for framework navigation (Figure 2) to motivate more critical monitoring and evaluation of health systems' current telehealth practices and ultimately identify service delivery gaps.
Engaging Interested Parties
Before beginning to create a telehealth equity dashboard, health systems must identify all interested parties to balance diverse perspectives and priorities. This should include all potential dashboard users such as clinical staff, investigators, and administrators as well as dashboard experts and patient advocates. Early engagement facilitates institutional buy-in to both the development and use of a dashboard. In addition, as there is notable variation in data privacy regulations based on patient age, type of medical problem, local health system policy, and federal laws, early involvement of senior leadership can help ensure dashboards are implemented appropriately. Once identified, interested parties must be continuously engaged throughout all phases of the framework process to ensure these dashboards are developed with the intended users in mind.
Phase 1: Define
First, health systems should consider what data sources to leverage. Data source mapping is one useful technique to identify usable sources for dashboard development. This inventory process involves cataloging all available sources and describing potentially relevant data to allow teams to consider the feasibility, reliability, and quality of these sources [9]. Poor data quality can have negative downstream impacts, as inaccurate or incomplete data can mask disparities [10]. First, patient and caregiver demographics can often be conflated in pediatric and elderly care settings. In addition, previous research found that non-White patients were less likely to have the correct race in their health records and were often mislabeled as White, skewing disparity estimates [11]. Several strategies can mitigate the limitations of missing or inaccurate data [12]. Imputation or Bayesian modeling techniques can help bolster existing data by addressing missingness with inferred values. For example, imputing race and ethnicity identified greater disparities in the COVID-19 pandemic compared with only excluding missing data [13]. Health systems can also enhance existing data by linking their databases to external sources to conduct area-based monitoring [14]. To illustrate, health systems could integrate regional-level population data from national datasets (eg, the National Survey of Children's Health or the American Community Survey for United States health systems) with internal patient data by census tract. Inequities can then be tracked between geographic regions to further support patients from medically underserved areas. Unfortunately, these methods fail to address the root of data inaccuracy. Improvement of data collection processes is the best long-term solution. Staff training, patient education, and options for self-reporting outside of clinical encounters are the key to improved collection [10]. Greater transparency regarding the purpose of data collection and improved framing of questions to reduce discomfort in sharing sensitive data could also increase self-reporting [11].
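As a minimal, hedged sketch of the area-based linkage described above (not a prescribed implementation), the snippet below joins a hypothetical internal encounter extract to hypothetical tract-level broadband estimates by census tract; all column names and values are illustrative assumptions.

import pandas as pd

# Hypothetical internal telehealth encounter extract; column names are illustrative.
encounters = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "census_tract": ["48029110100", "48029110100", "48029190200", "48029190200"],
    "visit_type": ["video", "phone", "video", "in_person"],
})

# Hypothetical area-level context from an external survey (ACS-style broadband
# estimates), keyed by the same census-tract identifier.
area_context = pd.DataFrame({
    "census_tract": ["48029110100", "48029190200"],
    "pct_households_broadband": [62.0, 91.0],
})

# Link patient-level use to area-level context for area-based monitoring.
linked = encounters.merge(area_context, on="census_tract", how="left")

# Simple area-level metric: share of encounters delivered by video, by tract.
video_share = (
    linked.assign(is_video=linked["visit_type"].eq("video"))
          .groupby("census_tract")[["is_video", "pct_households_broadband"]]
          .mean()
)
print(video_share)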
Once data sources are established, health systems can select metrics from the domains of the SPROUT Telehealth Evaluation and Measurement Framework [8], including health outcomes (ie, disease-specific measures), health delivery (ie, quality and cost), individual experience (ie, patient experience data), and key performance indicators (ie, implementation measures), as well as equity stratifiers (ie, environmental and patient attributes). In addition, defining each metric's performance target is critical. Targets can be based on peer organizations' performance, past institutional achievements, national-, state-, or county-wide standards, and public policy goals.
Phase 2: Design
Next, health systems should carefully consider the design of their dashboards, as the literature demonstrates how data aggregation and visualization influence the ability to detect disparities. Common broad racial or ethnic categories such as Black or Hispanic obscure within-group differences that can have significant clinical implications [15]. For example, when Asian is grouped with Native Hawaiian and Other Pacific Islanders, such aggregated statistics conceal meaningful differences between subpopulations [16]. Thus, it is important to present data disaggregated by equity stratifiers to the extent possible, acknowledging that some level of aggregation is necessary given data quality limitations. A recent proposal for revised federal government standards for race or ethnicity classification may guide new best practices [17]. We recommend, at a minimum, comparing data from medically underserved populations tailored to each health system with an aggregated "catch-all" category. Health systems may consider including a reference, which is often the total population, or the group with the largest population, the most favorable health outcomes, or the greatest socioeconomic advantage [18]. However, there are risks of identifying a "reference" group. Selecting White, for example, as the "reference" population may inherently imply that "nonreference" populations require assimilation or acculturation or are generally "abnormal." In addition, designing dashboards with filter functionality across multiple metrics can provide more robust analytics and displays. Irrespective of the population that a health system serves, intersectionality, or the connection between personal identities, is another key attribute of dashboard design, allowing for a more in-depth look at identified disparities. Race as a stratifier on its own could be a proxy for other variables underlying why these disparities exist. However, through filter functionality, users might consider assessing telehealth equity across races together with another key attribute such as social determinants of health or internet access [18]. Designers should follow best practices for data visualization [19], including maximizing data-ink ratios and selecting the appropriate software for desired displays. Commercial visualization tools can be found in Figure 2.
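The following small pandas sketch illustrates the disaggregation idea: a telehealth metric is computed for each level of an equity stratifier, together with group sizes and the share of visits for which the stratifier is missing (a data quality signal flagged in Phase 1). Column names, category labels, and values are illustrative assumptions, not a recommended schema.

import pandas as pd

# Hypothetical visit-level extract; race_ethnicity uses illustrative, disaggregated labels.
visits = pd.DataFrame({
    "race_ethnicity": ["Black", "Hispanic", "Asian", None, "White", "Black", "Hispanic", "White"],
    "completed_video_visit": [1, 0, 1, 1, 1, 0, 1, 1],
})

# Metric disaggregated by the equity stratifier (video-visit completion rate), with group sizes.
by_group = visits.groupby("race_ethnicity", dropna=False)["completed_video_visit"].agg(
    completion_rate="mean", n="size"
)

# Report missingness explicitly so users can judge uncertainty in each estimate.
missing_pct = visits["race_ethnicity"].isna().mean() * 100
print(by_group)
print(f"race_ethnicity missing for {missing_pct:.1f}% of visits")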
When choosing visualizations, it is essential to consider ease of interpretation and potential risks of misrepresentation. Tables explicitly lay out comprehensive information but can be difficult to digest. Interpretation can be supported through bolding or color-coding. Graphs can simplify data presentation and draw attention to specific insights, but this simplicity can be misleading [18]. It is essential to include missing data percentages to illustrate uncertainty and to incorporate features that convey the context of the data for accurate interpretation. For instance, when interpreting a narrowed disparity, the availability of hover functionality to display numerators, denominators, and count breakdowns for each data point can help users understand the source of this change. In addition to reporting current statistics, the ability to view metrics over time permits the detection of trends and postintervention changes in disparities, which is an essential dashboard function. Once a preliminary design has been determined, teams can develop a draft dashboard. From this point forward, design and development should proceed concurrently. The draft dashboard should undergo pretesting with sample end users, which can subsequently inform alterations to the design. Keep in mind, multiple designs are likely needed to accommodate different audiences, from frontline staff implementing care and monitoring day-to-day activity to administrators interested in quarterly or annual trends.
Phase 3: Deploy
Finally, intentional deployment of a telehealth equity dashboard is critical to increase use, inform and monitor operational and clinical interventions, preserve institutional buy-in, and create a data-driven culture to improve health equity. Socialization, the process of organizations adjusting to, learning about, and buying into a new initiative, is a key aspect of successful dashboard deployment. Socializing with leadership and clinical providers allows teams to create relationships for long-term reporting and inspires clinicians to use the dashboard in day-to-day operations. Normalizing the use of equity dashboards at all levels can stimulate sustained awareness and action to improve telehealth equity, hence laying the foundation for a culture of accountability and quality data collection to address disparities in telehealth and beyond. In this phase, it is also essential to identify a cadence of dashboard review and updates, given the likely differing preferences among users. For example, leadership may expect a quarterly update on high-level telehealth equity experience, while interpreter services may desire monthly check-ins to monitor progress on their practice changes. Socialization with regular review allows for opportunities for feedback, which studies have shown improves data quality [20]. By recognizing the appropriate set of interested parties, health systems can continue to enhance their dashboards with the right feedback from a broader and inclusive user group.
Once the dashboard has been deployed, data can be used and updated to advocate for new programs or workflows supporting medically underserved populations. The implementation of a dashboard is an ongoing, iterative process through each phase. For example, the telehealth equity dashboard may highlight a disparity that motivates the creation of a new intervention. The implementation of a new intervention may then require new metrics to be added to the existing dashboard or identify other ways to track performance. The dashboard development team may thus return to phase 1 to re-evaluate their sources and metrics. In addition, periodic usability testing by end users can allow for the identification of these key areas of improvement for subsequent iterations. This process, akin to the plan-do-study-act cycle in improvement science, can ensure the adaptability and continual advancement of a dashboard to meet the demands of a dynamic health system [21].
Call to Action
Dashboards offer an avenue to improve data transparency. Data sharing, especially as it relates to equity, may be limited due to lack of incentives, fear of public scrutiny, or perceived opportunity costs if data are used for research by external parties [22]. However, this creates silos between and even within health systems. Data sharing has the potential to establish shared standards and cross-institutional efforts to improve health on the population level. Therefore, as technology use in health care advances, we must pay close attention to what the data are telling us, be transparent with our progress and shortcomings, and push for change in our care models to ensure equitable quality of and access to care for all patients.
Conclusions
The COVID-19 pandemic laid bare the implications of the digital divide on health disparities. Nevertheless, telehealth continues to serve as a potential cost-effective care model and promising access point for patients with barriers to in-person services. As such, our strategic framework for developing a telehealth equity dashboard offers a valuable means to track patterns of use and outcomes to provide the evidence needed to support continued investment in an equitable telehealth offering. Telehealth equity dashboards present a promising means to build a culture of data transparency, equity-centered implementation, and continuous improvement to narrow the digital divide and improve access to care for all patients in this expanding world of digital health care.
Model-Free Extremum Seeking Control of Bioprocesses: A Review with a Worked Example
Uncertainty is a common feature of biological systems, and model-free extremum-seeking control has proved to be a relevant approach to avoid the typical problems related to model-based optimization, e.g., time- and resource-consuming derivation and identification of dynamic models, and lack of robustness of optimal control. In this article, a review of the past and current trends in model-free extremum seeking is proposed with an emphasis on finding optimal operating conditions of bioprocesses. This review is illustrated with a simple simulation case study which allows a comparative evaluation of a few selected methods. Finally, some experimental case studies are discussed. As usual, practice lags behind theory, but recent developments confirm the applicability of the approach at the laboratory scale and are encouraging a transfer to industrial scale.
Extremum Seeking: A Real-Time Output Feedback Optimization Technique
Real-time optimization (RTO) of steady-state plants stems from computer-aided production engineering (CAPE) and aims at improving process performance through the optimization of a measurable criterion or objective function under economical, safety or quality constraints [1]. One of the most critical limitations of earlier RTO methods lies in model accuracy, and this has led to adapted RTO techniques [2] which take possible plant-model mismatches into account. The resulting model adaptations may impact the optimization sequentially as in run-to-run methods developed by [3], or also impact the optimization criterion and the related constraint gradients which are modified to match the true plant [4,5]. Direct input adaptation methods transforming the RTO into a feedback control problem constitute a third alternative exploiting the invariance properties of the steady-state plant. These latter methods include:
• Self-optimizing control (SOC), which tracks specific invariants or set-points of the considered process to achieve an indirect optimization [6].
• Extremum seeking control (ESC), which aims at driving the system to optimal operating conditions corresponding to the extremum of a measurable convex objective function [7][8][9].
• Necessary conditions of optimality (NCO) tracking, which uses the NCO as invariants to enforce optimality conditions [10][11][12], and gradient-based optimization. They are therefore considered the run-to-run optimization parent branches of both SOC and ESC.
• Dynamic real-time optimization (DRTO) [1,13,14], which originates from the integration of RTO strategies, achieving process operating condition updates, in a model predictive control (MPC) framework. The recent addition of economic costs and constraints has led to the introduction of DRTO solutions in the form of nonlinear MPC strategies with economic objectives [14].
Figure 1 shows the extremum seeking control (ESC) scheme as a simple output feedback where the objective function y is assumed to be measurable and the controller drives the system to an optimum y* of the steady-state function y = h(u).
The Origins of ESC
Before delving into the specific subject of ESC of bioprocesses, this section gives a brief historical perspective of the development of ESC in a more general context. For more details and references, we direct the reader to the excellent review of Tan [8], which covers 90 years of developments starting in the 1920s, and the recent research monograph of Scheinker and Krstic [15].
The idea of using a perturbation signal at the system input and observing its effect at the output to estimate the slope of a nonlinear static map can be traced back to the work of the French engineer Leblanc in 1922 [16] in the context of power transfer from a transmission line to a tram car. However, it seems that it was only during World War II that the technique was investigated again by Russian scientists [17]. In 1951, Draper was the first to report research on the subject in the English literature [18]. The 1950s and 1960s then flourished with various works, such as [19,20], essentially contributing to the algorithm's performance without truly developing a systematic design framework. In this period, Blackman [21] published an interesting survey of those methods. Basically, classical ESC methods can be subdivided into methods that involve an external perturbation (dither) signal and those which do not require one and resort to relay switches [18,22] or sliding-mode techniques [23]. In 1971, a first Lyapunov stability analysis of a so-called peak-holding scheme, in which the gradient of the performance index is not estimated, but the variation of a parameter allows sensing the direction in which the performance index is evolving, was developed by Luxat and Lees [22]. Three decades were needed for a general stability analysis by Wang and Krstic [24], following the impetus of [25] in developing the framework of ES. An important milestone in fostering the popularity of ES is the publication in 2003 of the research monograph [7] by Ariyur and Krstic. From a (real-life) application point of view, ES was applied successfully in various fields including maximum power point tracking in photovoltaic systems [26,27], drag reduction in formation flight [28], continuously variable transmissions [29], ignition systems in combustion engines [30], vapor compression systems [31,32], wind energy conversion systems [33], cavity resonators [34,35], and braking systems in the automotive industry [36]. The classical perturbation-based ES scheme suffers from the time-scale separation between the process, which is considered as "fast" (as the process has to settle down quickly and be viewed almost as a static map by the controller), and the perturbation signal, which must be slow with respect to the process dynamics. Successful applications mostly consider relatively fast electrical and mechanical systems where the slow convergence of ES is not detrimental. Applications with large time constants and long transient phases are more problematic, as is the case for bioprocesses, which are the subject of the next sections. One way around the slow convergence is to include some prior knowledge about the model, leading to model-based ES, or some form of parameter estimation, which allows one to retain a model-free extremum seeking approach. This distinction is also addressed in the following.
ESC of Bioprocesses
The classical bioprocess optimization problem [37,38] considers a culture of microorganisms or cells in a continuous bioreactor where the following generic reaction takes place: k1 S → X + k2 P, where X, S, and P denote, respectively, the biomass (microorganism/cell population), the limiting substrate, and the product of interest, and k1 and k2 are stoichiometric coefficients.
The reaction kinetics can be described by various laws, among which the most popular are the Monod law (2a), μ(S) = μmax S/(KS + S), describing substrate activation; the Contois law (2b), μ(S, X) = μmax S/(KC X + S), describing substrate activation and biomass inhibition; and the Haldane law (2c), μ(S) = μmax S/(KS + S + S²/KI), describing substrate activation at low concentrations and inhibition at higher concentrations. Mass balances lead to a set of ordinary differential equations (3): dX/dt = μX − DX, dS/dt = −k1 μX + D(S_in − S), dP/dt = k2 μX − DP, where D is the dilution rate and S_in the substrate concentration in the feed. The measurable (via direct measurements or state estimation) performance index can be linked to the productivity of the biomass, the product of interest, or the reaction rate (4), e.g., y = DX, y = DP, or y = μX. Optimization of bioprocesses was first addressed in 1999 by [37] in a numerical study using a classical ES scheme with a sinusoidal dither signal and a combination of high-pass and low-pass filters (further developed in [7]). In the same period, the experimental work of Akesson [39,40] applied a simple and practical probing method in the spirit of the peak-holding algorithm of Luxat and Lees [22], the idea being to send pulses of substrate and sense whether it is consumed by the micro-organisms. This work can be considered as the seed of a more general ES theory using nonlinear programming and developed in [41].
Extremum Seeking Approaches
Depending on the available knowledge about the model structure, different extremum seeking approaches have been developed. Logically, the better the system is known, the more efficient the controller may be. Roughly, the ES controllers can be classified into two main streams:
• Model-based strategies, wherein a model-based controller takes advantage of the knowledge about the model structure and either estimates the unknown parameters from online information, or exploits some robustness properties.
• Model-free strategies, wherein online information is directly exploited to estimate the gradient of the measurable cost function, and to drive the process to the optimum using feedback control. No prior knowledge about the kinetic model is required, and the optimum is simply reached by driving the process to where the estimated cost function gradient vanishes.
Of course, the two streams are not strictly disjoint, and for instance, one can imagine taking advantage of any prior knowledge to improve a model-free strategy. While this paper mostly focuses on the second category of methods, the first stream has brought a number of important results that are briefly reviewed in the next section.
Model-Based Strategies: Uncertain Trajectory Tracking
Model-based ES strategies rely on a hypothetical kinetic structure and aim at driving the system to an optimal operating point or along an optimal operating trajectory. Since kinetic parameters are unknown or at least uncertain, the control law requires their estimation by adaptive laws or the consideration of model uncertainties and robust design.
Adaptive Output Feedback Control
The concept of adaptive ESC of nonlinear systems with uncertain parameters has been developed by Guay and Zhang [42], who introduced an "inverse optimal" approach to solve the optimization problem. The application of this method to continuous bioreactors has been proposed in the wake of this latter publication in [43], starting from the simple microbial growth model (3a), (3b), and (4c). These equations can be rewritten in a compact form, replacing the kinetic term by its measurement. Define the dilution rate as the system input (u = D); then an extremum seeking control law is proposed to drive and stabilize the system in the neighborhood of y* = f(µ*).
Analyze the behavior of the output time derivative ẏ = h(x, s, u, θ), where θ contains the stoichiometric parameters k1 and k2 but also the kinetic parameters involved in the hypothetical structure of µ. Since these parameters are unknown, parametric adaptive laws are inferred from the overall loop, including the extremum search combined with the adaptive controller. Stability and convergence studies resort to Lyapunov stability theory coupled with the LaSalle-Yoshizawa theorem, LaSalle's invariance principle, or Barbalat's lemma [44][45][46]. Further application results can be found in [38,47], which propose model-based extremum seeking controllers optimizing either the reaction rates or the biomass production of continuous and fed-batch processes, characterized by substrate activation and saturation following Monod kinetics (2a). Further, Refs. [48][49][50][51] have extended these studies to Haldane kinetics (2c), which represents the additional effect of substrate inhibition at high concentrations. More complex kinetic structures have been considered in [52], where biomass productivity is optimized under the assumption of microbial overflow metabolism [53]. In this case, growth inhibition results from the accumulation of by-products reducing the cell respiratory capacity, and the kinetics involve switching between different models. As discussed in the survey [47], adaptive extremum seeking can be generalized to situations where no prior knowledge is assumed, using universal approximators such as neural networks [54].
Sliding-Mode Control
Another approach consists of optimizing the cost function by sliding-mode control along the cost function manifold, choosing its gradient as the switching function σ = ∂y/∂u. Based on this idea, Lara-Cisneros et al. [55] proposed the optimization of fed-batch bioreactors modeled by (3) using a sliding-mode extremum seeking control combined with a high-gain observer estimating the uncertain kinetics. This study was extended to a two-stage anaerobic digestion model in [56]. In the same spirit, Vargas et al. [57] tackle the problem of biomass productivity optimization in bacterial cultures presenting overflow metabolism, using a two-level controller with super-twisting observers as process output estimators. Further, an input-output linearizing sliding-mode control is derived for well-defined relative degree systems with minimum phase in [58,59].
Integration of ES to Model Predictive Control
The optimization problems introduced in Sections 2.1.1 and 2.1.2 do not consider constraints, which can be unrealistic in the context of real-life applications. Indeed, industrial processes are run under specific conditions involving media compositions, minimum and maximum authorized production rates, and energy consumption costs. Application of ES control to constrained optimization problems of control-affine nonlinear systems such as (3) was first proposed in [60] using a flatness-based approach. The optimal trajectory is generated by adaptive ES, and a state- and input-constrained dynamic nonlinear programming problem is solved using an interior-point technique. This approach is applied to batch processes in [61]. The study of [13] generalizes the approach by integrating ES to model predictive control (MPC), considering a time-varying economic cost function based on a model presenting parametric uncertainty sets and state constraints.
Uncertainty issues are solved by considering min-max optimization to determine the worst/best solutions with respect to the parameter uncertainty sets and a Lipschitz-based adaptive approach imposing Lipschitz bounds. This method is extended to economic model predictive control (EMPC) in [62], which does not track a specific trajectory but directly solves the time-varying economic optimization problem.
Model-Free Extremum Seeking Control
Model-free ES strategies rely on the existence of a measurable and convex objective function but do not require prior knowledge about the process model. Their principle is to estimate the gradient of the objective function and to force this estimate to zero (a vanishing gradient being the first necessary condition for an extremum of a differentiable function).
The Classical Perturbation-Based ES and the Use of a Filter Bank
One of the earlier forms of model-free ES is the perturbation-based ES of [7] shown in Figure 2 and described by the following equations:
dη/dt = −ω_h η + ω_h y, δ = (y − η) d, dξ̂/dt = −ω_l ξ̂ + ω_l δ, dû/dt = k_I ξ̂, with u = û + d and d = A sin(ωt),
where û is the estimated input; u = û + d stands for the input signal applied to the process, including a sinusoidal dither signal d = A sin(ωt); y is the measurable objective function; y − η is the high-pass filtered output, that is, without a continuous component; δ is the demodulated signal resulting from the multiplication of y − η by d; and ξ̂ ≈ (1/k_I) dû/dt is the gradient estimate, obtained as the output of an optional low-pass filter. The use of a bank of filters leads us to designate this scheme as "BOF" (a wink to the French-speaking reader, but there is no prejudice in this denomination) in the sequel of this review (see Figure 2). The design parameters of this ESC have to obey some rules: the cut-off pulsations of the low-pass filter ω_l and high-pass filter ω_h should be smaller than the dither signal pulsation ω, i.e., they should be located in the interval ]0, ω[. On the other hand, the dither signal pulsation ω should be small with respect to the process dynamics (for the process to be in periodic steady state), and its magnitude should be sufficiently large to overcome the information attenuation by the filters. These rules reflect the time-scale separation [7] already highlighted in the introduction to this review, where:
• The process has the fastest dynamics, which, in the limit, can be seen as a static map;
• The dither signal has intermediate dynamics;
• The gradient estimator has the slowest dynamics.
It is important to note that the issue of the time-scale separation worsens in the case of systems with multiple inputs [7]. In the particular case of multi-input single-output systems with r inputs, it is necessary to design (r + 2)/2 (rounded down to the next lower integer) dither frequencies [63], thereby requiring more time separations and slowing down the gradient estimator. A suitable choice of the design parameters ensures an exponential convergence of y to an O(ω + A) neighborhood of the optimum y* = f(u*) [64], where u* is the corresponding optimum input. Optimizing control of bioreactors using BOF-ES was first reported in [37], where a bifurcation analysis of model (3) with Monod and Haldane kinetics was achieved. This study highlights the operational limits and suggests the use of wash-out filters to guarantee the system's stability. Subsequently, Nguang and Chen [65] proposed an illustrative application of the same strategy to continuous fermentation processes, and [66] to anaerobic digestion.
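To make the BOF loop concrete, the short Python sketch below applies it to the continuous bioreactor model (3) with Monod kinetics (2a), maximizing the biomass productivity y = DX through the dilution rate. It is a minimal illustration written for this summary, not code from the cited studies: the kinetic parameters, dither settings, filter cut-offs and integral gain are all assumed values and would require problem-specific tuning.

import numpy as np

# Illustrative chemostat (model (3)) with Monod kinetics; values are assumptions for the sketch.
mu_max, K_S = 0.4, 1.0        # 1/h, g/L
k1, S_in = 2.0, 10.0          # yield coefficient, feed substrate concentration (g/L)

def chemostat_rates(X, S, D):
    mu = mu_max * S / (K_S + S)               # Monod law (2a)
    return (mu - D) * X, -k1 * mu * X + D * (S_in - S)

# BOF-ES tuning (illustrative): dither much slower than the process,
# filters and integrator slower than the dither.
A, omega = 0.01, 2 * np.pi / 200.0            # dither amplitude (1/h) and pulsation (rad/h)
omega_h, omega_l, k_I = omega / 5, omega / 10, 0.005
dt, T = 0.05, 6000.0                          # explicit Euler step and horizon (h)

u_hat = 0.15                                  # initial dilution-rate estimate (1/h)
S = K_S * u_hat / (mu_max - u_hat)            # start at the corresponding steady state
X = (S_in - S) / k1                           # to limit start-up transients
eta, xi = u_hat * X, 0.0                      # wash-out filter state initialized at y(0)

for step in range(int(T / dt)):
    t = step * dt
    D = max(u_hat + A * np.sin(omega * t), 1e-3)   # applied input u = u_hat + dither
    dX, dS = chemostat_rates(X, S, D)
    X, S = X + dt * dX, S + dt * dS
    y = D * X                                       # measured performance index (4)
    eta += dt * omega_h * (y - eta)                 # y - eta is the high-pass filtered output
    delta = (y - eta) * np.sin(omega * t)           # demodulation by the dither
    xi += dt * omega_l * (delta - xi)               # low-pass filtered gradient estimate
    u_hat += dt * k_I * xi                          # slow integrator (the extremum seeker)

print(f"dilution rate ~ {u_hat:.3f} 1/h (optimum ~ 0.28 1/h for these parameters)")
print(f"productivity  ~ {u_hat * X:.3f} g/(L*h)")

Note how the tuning encodes the time-scale separation discussed above: the dither period (200 h) is long with respect to the reactor residence time (1/D of a few hours), while the filter cut-offs and the integral gain are slower still.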
However, perturbation-based techniques and the BOF-ESC are only successful if the system can be seen as almost static, that is, slowly perturbed by the dither signal. The slow convergence resulting from this time-scale separation can be a serious drawback in biological applications where the dominant time constants can be very large. Moreover, the characterization of the effects of the dither signal frequency and magnitude on the convergence, as presented in [64], is a significant additional issue for general nonlinear systems. This has motivated further studies, such as [67], which concluded that the convergence accuracy of BOF-ES depends only on the magnitude of the dither signal for a specific class of nonlinear systems represented by Wiener and Hammerstein block-oriented models. Furthermore, Tan et al. [68] considered several dither signals and discussed their impacts on the domains of attraction, convergence speed, and accuracy. To alleviate the undesirable time-scale separation effect, Krstic [69] suggested introducing controllers to speed up the closed-loop dynamics or to compensate for the phase shift introduced by the system dynamics at the perturbation frequency. This latter idea has been proven achievable by [70], depending on the system relative degree, and thus for a restricted class of nonlinear systems. In the last decade, several practical design guidelines have been proposed to improve the convergence rate, without a true analysis, based on the selection of the operating conditions [71], or the use of several dither signals to enrich the frequency content [72,73]. In [74], an application of BOF-ESC to yeast fed-batch cultures with overflow metabolism is presented, where the convergence speed was enhanced thanks to an appropriate manipulation of the substrates presenting fast dynamics. The first experimental validation of BOF-ES in the field of bioprocess optimization can be attributed to [75], optimizing microalgae growth at pre-industrial scale by manipulating the pH. Overall, BOF-ES can definitely be considered as a useful, practical, and easy-to-implement solution to optimize bioprocesses as long as operating conditions, dithers and controlled variables are carefully selected with respect to the system dynamics.
Recursive Parameter Estimators
A simple solution to improve the convergence speed of perturbation-based ES algorithms is to replace the bank of filters, that is, one part of the time-scale separation, by a recursive estimator, as shown in Figure 3. The convergence of this ES structure is now only dependent on the system dynamics and the dither signal parameter selection. An illustration of this recursive strategy, using a continuous recursive least-squares (RLS) algorithm with forgetting factor [25,76], can be found in [74], which discusses the application of BOF-ES and RLS-ES to biomass productivity optimization in fed-batch cultures. The main conclusion of this study is that the recursive estimator provides a more robust behavior and a faster convergence. The continuous ES-RLS scheme involves the following quantities: K, the positive adaptation gain; R, the inverse covariance matrix acting as a directional adaptation gain; and λ, the forgetting factor, used in order to avoid covariance wind-up issues due to the absence of bounds on the growth of R (if λ = 0, Ṙ ≥ 0 [76]). Later, Chioua et al.
[77] characterized the convergence rate and proved the stability of the ES loop with RLS as the gradient estimator for nonlinear systems, independently of their possible decomposition into block-oriented models (Wiener/Hammerstein). In the same time period, Guay and Dochain [78] reformulated the ES description as a time-varying control problem for general nonlinear systems, using the combination of a continuous parameter estimator based on a generic prediction model of the output y (not requiring a priori knowledge of the process), and an original integral control law. The main improvement of the method concerns the dither signal, which, thanks to a favorable contribution from the natural time-variation of the unknown parameter (the gradient), is no longer necessary. Guay and Dochain [78] therefore show that the suppression of the intermediate dither dynamics is also possible, so that the convergence speed is limited only by the process dynamics and the estimator performance.
Empirical ESC: Probing Control
The model-free methods presented in Sections 2.2.1 and 2.2.2 rely on the existence of a convex and measurable objective function. When the available process instrumentation does not allow this latter condition to be verified, one may resort to more practical or even empirical software solutions. For instance, optimizing the biomass productivity of cells exhibiting overflow metabolism requires one to measure an objective function related to the cell respiratory capacity, which should be maximal in order to optimize substrate consumption [53]. This critical measurement can, however, only be achieved indirectly [74], and the convex objective function is reconstructed using measurement signals and some prior knowledge about the reaction stoichiometry. The performance of the ES technique therefore depends on the stoichiometric coefficient estimation accuracy, which may become critical. In the early 2000s, Akesson et al. [39] proposed an interesting empirical probing control method detecting the possible saturation of the respiratory capacity of Escherichia coli, and in turn, inhibitory by-product (acetate) accumulation [39], by measuring the oxygen uptake following a sudden change in the substrate (glucose) uptake caused by feed pulses. Indeed, when the cell's respiratory capacity is saturated, the dissolved oxygen concentration (the measured variable) does not vary any longer, since cells are not able to assimilate more oxygen. In analogy with classical ESC, probing control uses these pulse sequences as a dither whose magnitude and frequency are designed in accordance with the detection sensitivity of dissolved oxygen variations, and the combined maximization of respiratory capacity and substrate uptake defines the extremum of an unknown objective function. Probing control became popular during the first decade of the 21st century, promoted by several experimental studies in various fields such as human health [79] and agriculture [80]. The characterization of overflow metabolism for yeasts (S. cerevisiae, P. pastoris) and bacteria (E. coli) was also provided by [81] using probing control, and a scaling-up approach of the method was proposed by [82].
Convergence Issues and Acceleration
The ESC problem, as defined in the previous sections, assumes basic practical constraints such as fast process dynamics (with respect to the dither and the optional filters) and a single measurable objective function.
However, several issues may arise when the process natural dynamics are detrimental to objective-related time restrictions or when several competing objectives must be handled. In the following, some practical ways to manage and accelerate the ESC loop convergence are addressed, starting with a simple feedback form.
Extremum Seeking Control: A Simple Output Feedback Form
A minimal ESC loop description, as illustrated in Figure 4, considers two blocks: the gradient estimator and the integral controller pushing this gradient to a reference value ξ_ref (in the case of extremum seeking, zero) such that the error ξ_ref − ξ̂ is zero. A simple pole-placement procedure can be used to define the ES closed-loop dynamics, i.e., to design the integral constant k_I. As discussed in [83], a controllable state-space representation of the gradient evolution can be inferred, where w is a function of the input reference u_ref and the parameter θ estimated by the recursive estimator from Figure 3, both assumed to provide the desired gradient ξ_ref. By comparing Figure 4 with Figure 3, we can easily infer that A = 0, B = 1, and C = 1. By introducing an integral action and applying classical pole-placement procedures, the closed-loop dynamics can be specified in the time domain, yielding the integral (extremum seeking) control gain. However, it must be noticed that some unexpected transient dynamics may not be taken into account, and u_ref must therefore be chosen in accordance with the domain of attraction and stability analysis, often restricting the input operating range to a specific interval [u_min, u_max]. Consequently, if the closed-loop response presents transients due to neglected system dynamics, such that the input is out of this permissible range, the overall system may fall into unstable regions. To minimize the risks of loop destabilization, a sufficient level of robustness should be imposed, with large gain margins and limited overshoots.
Figure 4. Extremum seeking as a simple output feedback.
Proportional-Integral Extremum Seeking
In an alternative attempt to accelerate the convergence of the ESC, Guay and Dochain [84] first proposed to combine the time-varying estimation method from [78] with a proportional-integral controller, as shown in Figure 5. This method generalizes standard ESC, using the integral action as the "extremum seeker" and the proportional action as a fast optimizer of the objective function, amplifying the estimated gradient descent or ascent effect. A general formalism of the proportional-integral extremum seeking control (PIESC) scheme has been provided by the same authors in [85], where k_p and k_i are, respectively, the proportional and integral gains of the closed-loop PIES controller. A second PIESC approach has also been proposed by [86], using the classical perturbation-based BOF scheme. In this study, the parameter (gradient) estimator block in Figure 5 is replaced by an output derivative (ẏ) estimator modeled by a high-gain high-pass filter block with transfer function ω_h s/(s + ω_h), with ω_h chosen large. The tuning rules proposed by [86] may be summarized as follows:
• The PIESC with BOF converges to a 1/ω neighborhood of the optimum, and ω should therefore be chosen sufficiently high;
• ν(0) should be initialized in accordance with the initial output measurement to avoid sudden initial jumps arising from ν̇;
• ω_h should be of a greater order of magnitude than ω to favor the derivative effect.
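As a rough structural illustration of this proportional-integral idea (and not the exact formulations of [85] or [86]), the sketch below augments a perturbation-based gradient estimate with a proportional term acting directly on the input, while the integral term remains the extremum seeker. The quadratic static map, the gains and the filter settings are illustrative assumptions chosen for this sketch.

import numpy as np

# Illustrative concave static map standing in for the quasi-steady-state
# productivity curve, with its maximum at u* = 0.3.
def h(u):
    return 1.0 - 10.0 * (u - 0.3) ** 2

A, omega = 0.1, 1.0             # dither amplitude and pulsation (arbitrary time units)
omega_h, omega_l = 0.3, 0.3     # wash-out and low-pass cut-offs
k_p, k_i = 0.01, 0.002          # proportional and integral ES gains (illustrative)
dt, T = 0.01, 2000.0

u_hat, g_hat, xi = 0.1, 0.0, 0.0
eta = h(u_hat)                  # initialize the wash-out filter at the initial output
for step in range(int(T / dt)):
    t = step * dt
    u = u_hat + k_p * g_hat + A * np.sin(omega * t)    # proportional action speeds up the move
    y = h(u)                                            # measured objective (static-map assumption)
    eta += dt * omega_h * (y - eta)                     # y - eta: high-pass filtered output
    xi += dt * omega_l * ((y - eta) * np.sin(omega * t) - xi)
    g_hat = (2.0 / A) * xi                              # rescaled gradient estimate
    u_hat += dt * k_i * g_hat                           # integral action: the extremum seeker
print(f"u_hat ~ {u_hat:.3f} (optimum at u* = 0.300)")

The proportional path moves the input along the estimated gradient faster than the integrator alone would, which is the acceleration effect sought by PIESC; the price is a greater sensitivity to ripple and estimation errors, so the gains here are deliberately conservative.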
An application of the PIESC to the optimization problem of compressor discharge temperature setpoint selection for a vapor compression system, minimizing power consumption, has recently been achieved by [87], who showed the performance of the method under realistic operating conditions (noise, input/sensor quantization) through experimental validation on a room air conditioner. As a result, the PIESC is able to achieve the optimizing control on a time scale of the order of the process dynamics. Since no PIESC design for bioprocesses has been made available in the literature, a first attempt is provided in the current review in Section 4.1, and numerical results are given in Section 4.4.
Dynamic Modeling with Block-Oriented Representations
As stated in Sections 2.2.1 and 2.2.2, the convergence accuracy of BOF-ES and RLS-ES applied to a specific class of nonlinear systems represented by Wiener/Hammerstein (W/H) block-oriented models is only impacted by the dither signal magnitude. Semi-global stability results for the application of observer-based extremum seeking to W/H plants are presented as an extension of the work of [88] in [89,90], showing that the ES scheme requires the knowledge of the relative orders of both the input and output dynamics of the plant. From the latter results, a discrete version of the method for W/H plants with knowledge limited to the linear dynamics is also proposed, suggesting guidelines for the choice of the ES parameters, first setting the dither signal magnitude and then adapting the ES integral gain. The latter reference also presents an experimental validation of the method on a spark-ignition engine. Finally, the same authors proposed to directly add some model knowledge to the adaptation scheme of the fast extremum seeking method in [91]. Interestingly, this allows improving convergence not by completely removing the time-scale separations of, in decreasing speed order, the optimizer and the estimator, but by reversing them, that is, by accelerating the estimation and subsequently the optimization. Another fast algorithm using proportional-integral phasor extremum seeking was proposed by [92] for W/H systems; its semi-global, practical asymptotic stability was shown. The principle is to drive the system to its optimum by considering the measured output phasor, from which a proportional approximation of the static cost function gradient is obtained, under the assumption of a priori known relative orders of the input and output dynamic blocks. It should be noticed that considering the phasor allows taking both the gradient and the phase shift, respectively related to the static map and the plant dynamics, into account. This estimator is then combined with a PI-ES scheme, in the spirit of [86], with an original amplitude update mechanism reducing the perturbation amplitude as the algorithm approaches the vicinity of the optimum, in the spirit of [93], which will be discussed in the next subsection. In the field of biosystems, Feudjio et al. [83] proposed an advanced auxiliary model-recursive prediction error method (AM-RPEM)-based ES algorithm for systems represented by block-oriented models. In this algorithm's structure, represented in Figure 6, a slope generator is coupled to the controller to reach any steady state belonging to the static map, making ES a particular case of the slope-seeking strategy.
The considered class of bioprocesses is characterized by the following assumptions:
• The bioprocess is operated in continuous mode with a constant volume (the inflow is equal to the outflow);
• The measurable performance index is the biomass production y = DX, i.e., the product of the dilution rate D (imposed by the inflow and outflow pumps) and the biomass concentration X;
• The bioprocess can be approximated by a Hammerstein model [94] presenting a quadratic static nonlinearity of the form v = M₁u² + M₂u + M₃ (13), where u = D and v represent the static map input and output signals, respectively. This representation assumes that the performance index can be approximated, at least locally, by a quadratic form. For a maximum, it is required that M₁ < 0. The Hammerstein model also includes dynamics, which can be described by a transfer function.
• A discrete second-order transfer function may suffice to represent these dynamics: where z⁻¹ is the backward shift operator such that z⁻¹y(k) = y(k − 1), and K is a gain which can be chosen so as to ensure a unitary steady-state gain, i.e., G(1) = 1, which implies K = (1 + β₁ + β₂)/(1 − α). The two poles defined by the parameters β₁ and β₂ represent the actuator and process dynamics.
Under these assumptions, the unknown model parameters can be estimated using a simple recursive least-squares (RLS) method, as originally proposed in [74], or, in order to improve the convergence and precision of the estimates, the recursive prediction error method (RPEM) from [95,96], where, unlike classical RLS algorithms which provide as estimates linear combinations of the unknown parameters, the Hammerstein model parameters are directly inferred. However, since analytically computing parametric sensitivities can be tedious, an auxiliary model is used to simplify the evaluation of the sensitivities. Another option is also provided, considering an output-error auto-regressive model [97], which is mostly useful in the presence of relatively large process/measurement noise.

Multivalued Cost Functions and Competing Objectives

An intrinsic property of the previously discussed systems is the existence of a sole extremum. For complex (bio)processes, it is legitimate to consider possible multiple local extrema of the steady-state objective function. In [93], this problem is tackled using an adaptive dither magnitude update law, specifying the conditions of convergence to the global extremum. The proposed solution consists of designing and initializing the dither signal magnitude law to allow the bifurcation towards the submap of the objective function where the global extremum is located. Since ES convergence is affected by the dither signal magnitude, the latter is adaptively decreased with time as the algorithm approaches the global extremum. Starting from a simplified ES scheme drawn in Figure 7, the closed-loop representation can be written, denoting σ = ωt: where f stands for the nonlinear state-space function of the system, x is the state vector, δ is a positive design parameter, A(t) is the varying magnitude of the dither signal with A(0) > 0, and g(A) is a locally Lipschitz function that is zero at zero and positive otherwise. Fast and slow dynamics are inferred by considering x in equilibrium, that is, x_eq = ℓ(û + A sin(σ)), which is assumed to be asymptotically stable, leading to the following reduced system: Considering the corresponding average system over one 2π period and introducing τ = δσ, (16) becomes:
Tan et al. [93] have shown that there exists a continuous function p(u, A) such that Q_av = A p(u, A). The standard singular perturbation technique is unfortunately not applicable (for more details about this statement, see [93]), and it is therefore recommended to analyze the convergence properties of the ES scheme for a specific initialization A(0) = A₀, assuming that there exists an isolated and unique real solution u = ū(A) to Q_av = 0 such that ū is continuous, ∂p/∂u < 0 for all A ≥ 0, and u* = ū(0) is the global optimum. Under this latter assumption, it may be demonstrated that the extremum seeking scheme of Figure 7 converges to u* semi-globally and practically. The global extremum of a multimodal objective function can therefore be reached by the perturbation-based ES following judicious choices of the initial dither signal magnitude and the function g(A). A practical interpretation of the latter results in optimal operating mode seeking for bioprocesses is discussed in [98], where a multivalued cost function originating from the combination of both yield and production objective functions is derived from model (3), where only the biomass and the substrate are considered. The kinetics are modeled by a simple Monod law and an exponential factor representing growth inhibition by the substrate (which, physically, can be compared to the Haldane kinetics (2c)). Consequently, the resulting cost functions present two distinct quasi-steady-state maps between which the ES algorithm must be able to operate a switch when initialized on the map which does not contain the global extremum. Considering only the stable branch, the set-valued cost function Q(u, A) is, in this particular case, defined as a set of two continuous single-valued functions such that, omitting u and A for the sake of clarity, Q = {Q₁, Q₂}, with the condition that for each value of u₁ and u₂ there are asymptotically stable equilibria x = ℓ₁(u₁) and x = ℓ₂(u₂). To enable switching conditions from one single-valued cost to the other, in addition to the application of the results from [93], it is concluded that restrictive conditions involving the initial value A₀ and g(u, A) should be used in order to make the strategy successful, also implying a loss of performance. The existence of several extrema may also result from a dynamic separation interpretation. Even if a process presents a single and a priori global optimum on the steady-state map, the isolated fast dynamic analysis may reveal the existence of a transient extremum. An illustration based on the CANON process, used to remove ammonium from concentrated wastewater streams, is provided by [99], where the fast dynamics, called the boundary layer model, are represented by the substrate and flow dynamics, while biomass growth is assumed to be the slow dynamics. In this example, even when the biomass is not at steady state, there exists a transient optimum on the boundary layer, and more interestingly, it turns out that locally optimizing the transient objective leads to the global optimization of the steady-state objective function, which amounts to optimizing an instantaneous objective without taking the future trajectory into account. A phasor-based extremum seeking controller inspired by [100] is therefore applied and the resulting algorithm is called "greedy" ES in analogy to greedy algorithms in computer science [101] that select the best instantaneous options without regard to future decisions.
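The following Python fragment sketches only the adaptive dither-amplitude mechanism discussed above: a standard perturbation-based ES update combined with an amplitude law of the form Ȧ = −δ g(A), here with the assumed choice g(A) = A. The static map, the decay law, and all numerical values are illustrative assumptions; the sketch shows the structure of the scheme, not the convergence conditions established in [93].

```python
import numpy as np

def q(u):
    """Illustrative bimodal static map (assumed): a local maximum near u = -1
    and a larger, global maximum near u = 2."""
    return np.exp(-(u + 1) ** 2) + 2.0 * np.exp(-(u - 2) ** 2)

dt, T = 0.01, 600.0
omega, delta = 1.0, 0.02       # dither frequency and slow design parameter (assumed)
A = 3.0                        # large initial amplitude A(0) > 0, as the method requires
g = lambda a: a                # locally Lipschitz, g(0) = 0, g(a) > 0 otherwise (assumed)

u_hat, y_lp = -1.0, q(-1.0)    # deliberately initialized near the local extremum
for k in range(int(T / dt)):
    t = k * dt
    d = A * np.sin(omega * t)
    y = q(u_hat + d)
    y_lp += dt * 0.1 * (y - y_lp)                      # DC removal
    grad = (2.0 / max(A, 1e-3)) * (y - y_lp) * np.sin(omega * t)
    u_hat += dt * delta * grad                         # slow gradient ascent
    A += dt * (-delta * g(A))                          # amplitude decay: dA/dt = -delta*g(A)

print(f"final input ~ {u_hat:.2f}, final dither amplitude ~ {A:.3f}")
```

The large initial amplitude lets the averaged update "see" beyond the local bump, while the decay law progressively shrinks the perturbation as time goes on; whether the global extremum is actually reached depends on the conditions on A₀ and g discussed above.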
An Illustrative Example: Optimization of Biomass Productivity

The aim of this section is to provide a worked illustrative example of the respective performances of the model-free ES algorithms introduced in Sections 2.2 and 3, where biomass production (4a) is optimized using model (3) and considering three classical kinetic structures: Monod (2a), Haldane (2c), and Contois (2b) laws. All the parameter values used in the following simulations are included in Table 1. Regarding the bifurcation analysis of system (3) with Monod and Haldane kinetics, the reader may refer to [37] for detailed results, which can be summarized as follows (the wash-out equilibrium (x_e, s_e) = (0, S_in) is not considered): under the assumption of Monod kinetics, wherein the "e" index denotes equilibrium values and the "*" index denotes values at the optimum, and under the assumption of Haldane kinetics, for which there exist two equilibrium trajectories. A similar analysis using the Contois kinetic law (2b) can be achieved and provides the following original results (a numerical illustration of this analysis is sketched below). The new equilibrium reads: and the equilibrium definition condition is D_e < µ_max. The steady-state extremum y*_e can be calculated considering ∂y_e/∂D_e = 0, which leads to the following optimal dilution rate: The stability analysis of the equilibrium requires one to determine the Jacobian of system (3) while considering the Contois kinetic law, which yields: and in steady state, we obtain: The eigenvalues of (23) are: and the stability conditions on D_e are then:

Bank of Filters with PI Control

A PIESC with BOF tuning is suggested, based on [86], as follows: where d = A sin(ωt) is the dither signal and the output derivative ẏ is computed by an additional high-pass filter (12), where η is an intermediate variable, the image of the output derivative, and ω_h is the filter cut-off frequency, which should be chosen to be greater than ω, the dither signal angular frequency. Even if some tuning rules are proposed in [86] to design ω and ω_h, there is unfortunately no indication concerning the tuning of k_p, k_i, and the dither signal magnitude A, which must be designed by trial and error. In order to illustrate the advantages of the PI method with respect to the classical BOF-ES from [7], two parametrizations are considered in the following and are summarized in Table 2. The first case corresponds to the simple BOF-ES strategy without proportional action (k_p = 0), for which the parameters are selected by trial and error to provide the best transient behavior (fastest convergence). For an accurate analysis of the choice of the dither signal magnitude and frequency, the reader may refer to [88] for semi-global stability results and [67] for the particular cases of Wiener/Hammerstein approximations. The second case corresponds to the BOF-ES PI control application with a specific parametrization respecting the tuning rules of [78].

Recursive Least-Square Estimation with PI Control

A simple recursive least-square extremum seeking algorithm (RLS-ES) combined with the PI control structure introduced in Section 4.1 is proposed in the same spirit as [85]. The PIRLS-ES controller slightly differs from (26), which is rewritten: where γ̂ replaces ẏ and denotes the estimator of the cost function gradient ∂y/∂u. Indeed, taking the partial derivative of y = ux (u = D) with respect to u yields: Multiplying (29) by u, we obtain: where ϕ = [u 1], θ = [θ₁ θ₂]ᵀ with θ₁ = γ and θ₂ = −u² ∂x/∂u, and γ = ∂y/∂u.
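As a numerical illustration of the Contois steady-state analysis above, the following Python fragment computes the non-trivial equilibrium and locates the dilution rate maximizing biomass production. The chemostat equations, the Contois expression µ = µ_max s/(K_c x + s), the yield-based substrate balance, and all parameter values are assumptions made for this sketch; they are not the exact model (3) or the values of Table 1.

```python
import numpy as np

# Assumed chemostat model:  dx/dt = (mu - D) x,   ds/dt = D (S_in - s) - mu x / Y
# with Contois kinetics     mu = mu_max * s / (Kc * x + s)
mu_max, Kc, Y, S_in = 0.5, 2.0, 0.5, 10.0   # illustrative values (1/h, -, g/g, g/L)

def equilibrium(D):
    """Non-trivial steady state for a given dilution rate D < mu_max."""
    # At equilibrium mu = D and x = Y (S_in - s); eliminating x gives s explicitly.
    s = D * Kc * Y * S_in / (mu_max - D + D * Kc * Y)
    x = Y * (S_in - s)
    return x, s

D_grid = np.linspace(1e-3, mu_max - 1e-3, 2000)   # respects the condition D_e < mu_max
y_grid = np.array([D * equilibrium(D)[0] for D in D_grid])   # biomass production y = D x

i_opt = np.argmax(y_grid)
print(f"D* ~ {D_grid[i_opt]:.4f} 1/h,  y* ~ {y_grid[i_opt]:.4f} g/(L.h)")
```

The grid search stands in for the analytical condition ∂y_e/∂D_e = 0; under the assumed model the two approaches locate the same optimal dilution rate.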
It must be noticed that when reaching the extremum steady state y*, γ = 0, and from (29), y* = θ₂* = −u*² (∂x/∂u)|_{u=u*, x=x*}. This time, the cost function gradient is assumed to be estimated by a continuous RLS with forgetting factor [76], which reads: The chosen parametrization, which is also set by trial and error, is reported in Table 3.

Table 3. PIRLS-ES controller parameters.

Hammerstein Model and Pole-Placement

The recursive least-square method of Section 4.2 is now considered under the assumption of a block-oriented model representation as in [83]. The microbial growth model (3) is approximated by a Hammerstein model with a static map x(t) = F(u(t)) followed by a first-order (strictly proper and stable) transfer function describing the system/sensor dynamics G(s) = 1/(1 + τs). Since measurements are collected at discrete times, a discrete form of G(s) is considered using the matched pole-zero method, leading to G(z) = K₁/(z − α) with K₁ = 1 − α and α = e^(−T_s/τ), T_s being the sampling period. The corresponding scheme will be denoted BOM-ES (block-oriented model extremum seeking) in the following. The static map is approximated by the linear form (31), where x is an intermediate state variable, u = D the input, ξ the gradient, and c a constant. The first-order dynamics reads: Combining (31) and (32) results in: The regression (33) and associated RLS problem provide the gradient estimate ξ̂ = θ₃/(1 − θ₂) of ξ, where ξ̂ ≈ ∂y/∂u; this gradient estimate is driven to zero on average by the extremum seeking control loop. A dither signal d = A sin(ωt) is used to ensure persistency of excitation [25]. As a general rule, a minimum of n/2 distinct sinusoids is necessary for the identification of n parameters [63]. As suggested in Section 3.1, a simple pole-placement procedure is applied. The BOM-ES parameters are summarized in Table 4. A good compromise between convergence speed and robustness is to impose a rise time of 20 h on the closed-loop system, resulting in an integral gain k_I = 5 × 10⁻⁴. Since 3 parameters of (33) are estimated, the dither signal is chosen as the sum of two sinusoids (which allows identifying up to 4 parameters), d = A₁ sin(ω₁t) + A₂ sin(ω₂t), and the measurement sampling time is set to T_s = 0.1 day.

Numerical Results

The several ES algorithms designed in Sections 4.1-4.3 are now compared when optimizing the biomass production y = Dx in the presence of different kinetic structures, namely Monod, Haldane, and Contois. First, Figures 8 and 9 show the results of the application of the BOF, PI-BOF, PI-RLS, and BOM extremum seeking methods to the Monod case. Not surprisingly, the BOM and PI-BOF algorithms achieved the fastest convergences of, respectively, 50 and 25 h; PI-RLS and BOF converged within, respectively, 300 and more than 1000 h. However, Figures 8 and 9 also show that PI-BOF reaches the optimum neighborhood with a significant offset (see the zoom box of Figure 9), while the convergence of BOM, even if slightly slower, is more accurate. These first results are confirmed in the Haldane case in Figures 10 and 11, even if the BOM algorithm performs better this time in terms of convergence time and accuracy, and PI-RLS and BOF respectively converged within 200 and 600 h. Nevertheless, the algorithm behaviors are interestingly emphasized when using the Contois kinetic law, as shown in Figure 12.
While the performances of the BOM and BOF strategies are identical to the previous cases, PI-BOF and PI-RLS show larger offsets with respect to the optimal state and dilution rate equilibria. This can be attributed to the estimated gradient values, which are, however, still very close to 0. In the particular case of Contois kinetics, the convergence diagram of Figure 13 shows that the equilibrium slopes are much smoother compared to the previous Monod and Haldane cases. A small shift in a close neighborhood of the optimum, keeping y ≈ y*, may then lead to significant offsets in the dilution rate and therefore in the states. Observations based on these several numerical illustrations can be summarized as follows:
• The BOM strategy is by far the most accurate since it allows taking unknown output dynamics into account.
• The PI-BOF strategy allows solving the extremum seeking problem almost instantaneously and therefore provides the fastest convergence, but, as a drawback, has a technically inherent and significant offset (see [78]).
• The PI-RLS strategy presents a smaller offset and is overall easy to implement while achieving reasonable convergence times.
• The BOF strategy, even if it is the most intuitive and easiest-to-implement method in its original form, presents several drawbacks, such as long convergence time, tedious parameter tuning, and possible offsets.

Real-Life Applications: Microalgae Production Optimization

Most of the publications discussed in this review propose simulation results in order to provide some practical assessment and recommendations in view of future experimental implementation. Among the very few experimental validations reported in the literature, two investigations aiming at maximizing microalgae culture productivity by extremum seeking have been proposed in [75,102].

Microalgae Growth Rate Optimization

In [75], the method consists of modulating the pH to optimize the measured Nannochloropsis oculata microalgae biomass growth rate, using the CO₂ flow as actuator in an internal loop regulating the pH. In this regard, a perturbation-based filter bank ES was first designed in simulation and some tuning guidelines were suggested as follows:
• Quantification of the system dynamics and uncertainties;
• Selection of the dither frequency, and to some extent, its structure;
• Trial and error optimization routine defining the high-pass filter cut-off frequency;
• If the level of noise requires it, determination of the low-pass filter cut-off frequency.
The use of a low-pass filter should, however, be avoided as much as possible since it is a source of convergence slow-down (the reader may refer to the time-scale separations explained in Section 2.2.1). Following the initial simulation results, an experimental validation with a 27-liter tubular bubble column photobioreactor reproducing a down-scaled industrial process was achieved. The main results show that the ES scheme managed to bring the system to an acceptable pH range, as predicted by other works from the literature.

Microalgae Productivity Optimization

In a recent experimental investigation achieved by the authors [102], an RLS-ES controller was designed, assuming a Hammerstein representation between the biomass X of the strain Dunaliella tertiolecta, as measured output, and the dilution rate D, assumed to be the sole input. The biomass productivity, defined as P = DX, is then maximized, and its gradient estimate is obtained by recursive least squares assuming the static mapping X(k) = m D(k) + b (a sketch of this estimation and of the resulting optimum is given below).
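A compact sketch of this estimation step follows: a recursive least-squares update with forgetting factor fits the assumed affine static map X ≈ mD + b from (D, X) measurements, after which the productivity gradient ∂P/∂D = 2mD + b and the corresponding optimum D* = −b/(2m), P* = −b²/(4m) follow directly. The synthetic data, forgetting factor, and initialization are illustrative assumptions, not the settings used in [102].

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements" from an assumed affine static map X = m*D + b (+ noise);
# m < 0 so that the productivity P = D*X has an interior maximum.
m_true, b_true = -20.0, 12.0
D = rng.uniform(0.05, 0.5, 200)                       # dilution rates [1/d]
X = m_true * D + b_true + rng.normal(0, 0.1, D.size)  # biomass concentrations [g/L]

# Recursive least squares with forgetting factor lam (values assumed).
theta = np.zeros(2)               # parameter vector [m, b]
P_cov = 1e3 * np.eye(2)           # covariance matrix
lam = 0.98
for Dk, Xk in zip(D, X):
    phi = np.array([Dk, 1.0])
    k = P_cov @ phi / (lam + phi @ P_cov @ phi)       # gain vector
    theta = theta + k * (Xk - phi @ theta)            # parameter update
    P_cov = (P_cov - np.outer(k, phi) @ P_cov) / lam  # covariance update

m_hat, b_hat = theta
D_opt = -b_hat / (2 * m_hat)                          # from 2*m*D + b = 0
P_opt = -b_hat ** 2 / (4 * m_hat)
print(f"m ~ {m_hat:.2f}, b ~ {b_hat:.2f}, D* ~ {D_opt:.3f} 1/d, P* ~ {P_opt:.3f} g/(L.d)")
```

The same recursive update, with the regressor adapted accordingly, is the estimation building block used by the PIRLS-ES and BOM-ES schemes of the illustrative example.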
It must be noticed that a filtered biomass variable X_f may also be considered, leading to similar performance. The estimated optimal productivity and dilution rate can then be inferred from the identified static map parameters. The experimental set-up was a 13-liter flat photobioreactor, shown in Figure 14. For more details regarding the monitoring devices, the reader may refer to [102]. A ten-day experiment was performed, leading to the results presented in Figures 15-17, wherein the gradient was indeed pushed to zero on average within 4 days, leading to the maximum productivity level. Figure 17 compares the predicted static map with the convergence results, confirming the method's accuracy. Following these encouraging first results, further experiments were scheduled, using the BOM-ES method from Section 4.3, and are currently under validation.

Conclusions

During the last two decades, extremum seeking (ES) has become a very popular research subject and has led to an increasing number of real-life applications to electro-mechanical systems. The application of ES to biosystems is, however, quite tedious due to the uncertain nature of the "living" in a broad sense and the slow dynamics of bioprocesses. In view of the model uncertainties, model-free ES appears as an appealing approach to achieve "plug-and-play" process optimization. Even if the seminal perturbation-based techniques based on the use of high- and low-pass filtering provide guaranteed results if a correct parameterization is selected, the three time-scale separations entail very lengthy convergence times. Most of the recent ES approaches therefore focus on ES convergence issues, acting on the controller structure and design, on parameter estimation, or on the dither signal definition. This review paper discussed some of these techniques using an illustrative simulation example, where the biomass production of a simple microbial growth model was maximized considering several kinetic structures with different complexity levels and corresponding static maps. Only a few real-life applications of ES to bioprocesses are described in the open literature, namely, those to continuous microalgae cultures in photo-bioreactors. Hopefully, these experimental studies will pave the way for future studies at pilot and industrial scales.

Conflicts of Interest: The authors declare no conflict of interest.
Reduced Renal Mass, Salt-Sensitive Hypertension Is Resistant to Renal Denervation

Aim: Activation of the sympathetic nervous system is common in resistant hypertension (RHT) and also in chronic kidney disease (CKD), a prevalent condition among resistant hypertensives. However, renal nerve ablation lowers blood pressure (BP) only in some patients with RHT. The influence of loss of nephrons per se on the antihypertensive response to renal denervation (RDNx) is unclear and was the focus of this study. Methods: Systemic hemodynamics and sympathetically mediated low frequency oscillations of systolic BP were determined continuously from telemetrically acquired BP recordings in rats before and after surgical excision of ∼80% of renal mass and subsequent RDNx. Results: After reduction of renal mass, rats fed a high salt (HS) diet showed sustained increases in mean arterial pressure (108 ± 3 mmHg to 128 ± 2 mmHg) and suppression of estimated sympathetic activity (∼15%), responses that did not occur with HS before renal ablation. After denervation of the remnant kidney, arterial pressure fell (to 104 ± 4 mmHg), estimated sympathetic activity and heart rate (HR) increased concomitantly, but these changes gradually returned to pre-denervation levels over 2 weeks of follow up. Subsequently, sympathoinhibition with clonidine did not alter arterial pressure while significantly suppressing estimated sympathetic activity and HR. Conclusion: These results indicate that RDNx does not chronically lower arterial pressure in this model of salt-sensitive hypertension associated with substantial nephron loss, but without ischemia and increased sympathetic activity, thus providing further insight into conditions likely to impact the antihypertensive response to renal-specific sympathoinhibition in subjects with CKD.

INTRODUCTION

The frequent association between chronic kidney disease (CKD) and treatment-resistant hypertension (RHT) leads to a high incidence of adverse renal and cardiovascular outcomes, reflecting the likely, although not well studied, reciprocal potentiation of these conditions on the severity of the hypertension and its progression (Calhoun et al., 2008;De Beus et al., 2015;Rossignol et al., 2015;Thomas et al., 2016;Wolley and Stowasser, 2016). High levels of renal sympathetic nerve activity (RSNA), as found in many patients with RHT and normal kidney function (Grassi et al., 2015), may further diminish the excretory capacity of the injured kidneys and, therefore, exacerbate sodium retention, volume overload, and hypertension. While several studies have reported increased sympathetic activity to the skeletal muscle, RSNA has not been directly assessed in patients with CKD (Converse et al., 1992;Schlaich et al., 2009;Grassi et al., 2011, 2015;De Beus et al., 2014). Furthermore, although catheter-based renal denervation (RDNx) appears promising for the treatment of RHT, at present, the clinical results are inconclusive, revealing the need to better understand the determinants for a favorable blood pressure (BP) response to this novel treatment (Symplicity HTN-2 Investigators Esler et al., 2010;Bhatt et al., 2014;Persu et al., 2014;Iliescu et al., 2015). A particularly neglected area of investigation is the impact of CKD on the antihypertensive response to RDNx.
Despite initial encouraging results from small-scale studies (Schlaich et al., 2013;Wallbach et al., 2014;Beige et al., 2015;Ott et al., 2015;Kiuchi et al., 2016;Hering et al., 2017), the efficacy and safety of RDNx in patients with RHT and CKD remain uncertain, as large clinical trials using this non-pharmacological approach for BP control have excluded patients with impaired renal function, for fear of worsening renal injury (Symplicity HTN-2 Investigators Esler et al., 2010;Bhatt et al., 2014;De Beus et al., 2014;Persu et al., 2014;Grassi et al., 2015;Wolley and Stowasser, 2016). The overarching impetus for conducting the present study was to investigate the role of the renal sympathetic nerves in mediating the hypertension associated with reduced renal function. The remnant kidney model used in the present study has been the mainstay in the investigation of the pathogenesis of CKD and hypertension associated with reductions in functional nephron number (Koletsky, 1959;Brenner, 1985;Griffin et al., 1994;Ibrahim and Hostetter, 1998;Griffin et al., 2000;Hildebrandt et al., 2016). Surgical reduction of renal mass (RRM) by ∼80% (not to be confused with the model of renal infarction-induced hypertension caused by tying off branches of the renal artery) causes minimal injury to the remnant nephrons in the early stages of the hypertension (Koletsky, 1959;Griffin et al., 1994;Ibrahim and Hostetter, 1998;Griffin et al., 2000;Hildebrandt et al., 2016). Although the role of the sympathetic nervous system in mediating the hypertension in this model has not been established, the resulting phenotype of salt-sensitive, volume overload hypertension associated with loss of functional nephrons mimics the clinical situation of patients with CKD and RHT who commonly have inappropriately high levels of salt intake and hypertension that is frequently resistant to pharmacological therapy (Calhoun et al., 2008;Pimenta et al., 2009;Nerbass et al., 2015;Rossignol et al., 2015;Wolley and Stowasser, 2016). In contrast to the uncertainty regarding the role of the sympathetic nervous system in the RRM model of hypertension, there is a consensus that increased sympathetic activity likely contributes to the development of the salt-insensitive hypertension that follows infarction of two-thirds of the remnant kidney (Campese and Kogosov, 1995;Augustyniak et al., 2010;Veiga et al., 2016). In the infarction model of CKD, local ischemia has been postulated to trigger renal afferent sympathoexcitatory reflexes (De Beus et al., 2014;Grassi et al., 2015;Glassock and Rule, 2016). However, renal ischemia may not be uniformly present in patients with CKD (Michaely et al., 2012;De Beus et al., 2014;Glassock and Rule, 2016) and it is unclear whether reduced renal function itself may influence sympathetic activity. Therefore, from a mechanistic perspective, the RRM-salt model of hypertension used in the present study provides an untainted understanding of the fundamental impact of reduced baseline renal function per se on the antihypertensive response to RDNx in those instances in which there is limited renal parenchymal injury. Furthermore, a particular advantage of this model is that it is devoid of comorbidities (obesity, sleep apnea, hyperaldosteronism) and use of antihypertensive medications that alter sympathetic activity and, therefore, undoubtedly contribute to the variable BP response to RDNx in clinical trials. 
Thus, we tested the hypothesis that region-specific abrogation of sympathetic outflow to the kidneys by RDNx chronically lowers BP in this salt-sensitive model of hypertension. A unique experimental approach in this study was that BP was measured continuously over several weeks to allow precise longitudinal determinations of changes in BP throughout the development of the hypertension and, moreover, during the immediate and subsequent days after RDNx. Furthermore, from the analysis of sympathetically mediated BP oscillations, along with inhibition of central autonomic outflow with clonidine, we determined whether increased sympathetic drive to non-renal targets plays a role in mediating the salt-sensitive hypertension. These approaches allowed for a mechanistic evaluation of the role of the sympathetic nervous system in mediating salt-induced hypertension associated with reductions in the number of functional nephrons.

MATERIALS AND METHODS

Eight adult male Sprague-Dawley rats (28-32 weeks of age) were obtained from the National Research Institute "Cantacuzino" (Bucharest, Romania). Rats were housed in a temperature- (21-23 °C) and humidity-controlled environment with a 12 h light/dark cycle, with ad libitum access to food and water, and were acclimatized for at least 3 weeks before the experimental protocols. Animals were fed a standard rodent diet containing either 0.8% NaCl (normal salt, NS), 0.1% NaCl (low salt, LS), or 4% NaCl (high salt, HS) (National Research Institute "Cantacuzino", Bucharest, Romania). Surgical procedures were conducted under isoflurane anesthesia (2-3%). Tetracycline (2 mg/mL in drinking water) was administered for 3 days after each surgical procedure. All experiments were performed in accordance with the European Directive 2010/63/EU on the Protection of Animals Used for Scientific Purposes, the European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes (Council of Europe No. 123, Strasbourg, 1985) and the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Research Ethics Committee of the Grigore T. Popa University of Medicine and Pharmacy, Iași.

Placement of Telemeters for Continuous BP Recording

For implantation of telemeters, incisions were made in the midline abdominal and left inguinal regions. The body of the telemetry transmitters (TRM54PB, Millar, Inc., Houston, TX, United States) was secured to the right flank of the inner abdominal wall with silk sutures (3-0; Ethicon, NJ, United States). A 2 cm piece of PE90 polyethylene tubing (Intramedic™, Becton Dickinson, Sparks, MD, United States) was guided from the left iliac fossa through a 1 mm incision of the abdominal muscle layer toward the inguinal area. The solid-state pressure sensor was tunneled through the tubing (which was then removed) and a length of ∼3 cm was advanced into the femoral artery with the pressure sensing tip in the aorta, below the emergence of the renal arteries. The pressure sensor was then secured with silk threads placed around the femoral artery and at the emergence from the abdominal wall. Finally, the muscle layers were closed with silk sutures and the skin with stainless steel clips (Stoelting Co., Wood Dale, IL, United States).

Surgical Reduction of Renal Mass

The left and right kidneys were accessed via two longitudinal paravertebral incisions extending 1.5-2 cm caudally from the last rib.
This retroperitoneal approach was chosen as it allows access to the kidney with minimal interference with the perirenal fat pads containing the renal nerve fibers. Following right nephrectomy, the upper and lower poles of the left kidney were cut with a scalpel blade so that a total of 75-80% of renal mass was removed. Hemostasis was obtained by gentle application of Gelaspon® strips (Chauvin Ankerpharm GmbH, Berlin, Germany). Throughout the procedure, care was taken to avoid any manipulation of the left renal hilum containing the renal nerves. Incisions were closed with silk sutures and wound clips after visual assessment to verify hemostasis.

Renal Denervation of the Remnant Kidney

The left kidney stump was accessed through a 1.5-2 cm ventral abdominal paramedian incision. Fibrous tissue and intestinal loops adherent to the remnant kidney were carefully removed using fine tweezers under magnification by a stereomicroscope (Zeiss, Jena, Germany). The renal artery and vein were exposed and cleared of surrounding tissue. All visible nerves running along the renal artery and vein within an area extending from the aorta to the renal hilum were cut. Thereafter, the adventitial layer was gently stripped off the renal vessels and a 10% phenol/ethanol (v/v) solution was applied for at least 5 min to the vessels using damp fine cotton tips. Extensive care was taken to prevent any leakage of the phenol solution onto the remnant kidney or into the peritoneal cavity. Finally, the surgical area was flushed with warm saline to remove any residual phenol. Wounds were closed with silk sutures and metal clips.

Continuous Recording of BP Waveform

Rats were placed on TR181 smartpads (Millar, Inc., Houston, TX, United States) for signal acquisition and wireless charging of the implanted telemeters. The individual 24-h BP waveforms were acquired continuously at a sampling frequency of 2000 Hz using a PowerLab 16/35 acquisition system (ADInstruments, Bella Vista, NSW, Australia) connected to a PC for storage and subsequent analysis. All individual cardiac cycles identified using the LabChart® built-in BP module (ADInstruments, Bella Vista, NSW, Australia) were averaged to compute mean arterial pressure (MAP) and heart rate (HR) every day from the continuous 7:00 AM to 6:00 AM recordings, excluding the 1 h necessary for daily animal care.

Estimation of Sympathetic Activity

Spectral analysis was performed using the Fast Fourier Transform, as we previously described (Iliescu et al., 2013). Briefly, the original daily BP signal sampled at 2000 Hz was analyzed in the frequency domain using the implementation in the LabChart® software. Power spectra were calculated for all artifact-free segments ∼1 min in duration (131,072 data points) within every 23-h period, overlapping by 50% and windowed using a Hamming function, and finally averaged to yield daily BP spectra. Analysis of the direct BP signal was preferred over extraction of cardiac cycle-related variables such as the systolic BP, as it avoided the inherent issues related to resampling of unequally spaced time series. Since the frequency band between 0.25 and 0.75 Hz (low frequency, LF) contains BP oscillations originating from sympathetically driven variations in arterial vascular tone (Oliveira-Sales et al., 2014), the power in this band was integrated and expressed as a percentage of the total power below the HR (0.01-3 Hz).
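As a schematic of the spectral procedure described above, the following Python fragment computes the LF fraction of BP power from a sampled waveform using averaged, Hamming-windowed, 50%-overlapping segments. The synthetic input signal and its duration are assumptions for illustration; the study itself used the LabChart® implementation on full daily recordings.

```python
import numpy as np
from scipy.signal import welch

fs = 2000                          # sampling frequency of the BP waveform [Hz]
t = np.arange(0, 600, 1 / fs)      # 10 min of synthetic data (a full day in practice)
# Synthetic BP-like signal (assumed): mean level, a cardiac component (~5 Hz in rats),
# a low-frequency vasomotor component (~0.4 Hz), and measurement noise.
bp = (110 + 15 * np.sin(2 * np.pi * 5 * t)
          + 3 * np.sin(2 * np.pi * 0.4 * t)
          + np.random.normal(0, 1, t.size))

# Averaged periodogram: ~1 min Hamming-windowed segments with 50% overlap,
# mirroring the 131,072-point segments described above.
nper = 2 ** 17
f, pxx = welch(bp, fs=fs, window="hamming", nperseg=nper, noverlap=nper // 2)

lf_band = (f >= 0.25) & (f <= 0.75)     # sympathetically mediated LF oscillations
total_band = (f >= 0.01) & (f <= 3.0)   # total power below the heart rate
lf_fraction = np.trapz(pxx[lf_band], f[lf_band]) / np.trapz(pxx[total_band], f[total_band])
print(f"LF power fraction: {100 * lf_fraction:.1f} %")
```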
Pharmacological Blockade

While on HS and after RRM and RDNx, global sympathetic activity was assessed from the hemodynamic and BP power spectral responses during administration of the centrally acting sympatholytic drug, clonidine. Clonidine (300-150 µg/kg/day; Sintofarm S.A., Bucharest, Romania) was administered in the drinking water for 6 and 3 days, respectively, in doses reported to reduce BP under conditions of sympathetic activation (Abdel-Rahman, 1994; Thomas et al., 2003;El-Mas and Abdel-Rahman, 2010). The lower dose was administered to avoid the potentially deleterious rebound effect from clonidine withdrawal. Fluid intake was monitored daily during clonidine administration and adjustments of the drug concentration were performed when necessary to achieve constant drug intakes.

Experimental Design

After implantation of the telemeters, the rats were maintained on NS and monitored until the circadian rhythmicity of BP and HR was fully restored (10-14 days). Then, control hemodynamic variables were recorded while on NS. To assess the hemodynamic and neural responses to variations in salt intake during normal renal function, the rats were then fed LS for 1 week, followed by 1 week of HS, before returning to LS. After 5 days on LS, the right kidney was removed and the poles of the left kidney were excised to produce a total reduction in renal mass of ∼75-80%. Then the rats were allowed to recover for 10 days before HS was initiated and maintained for 6 weeks. RDNx of the remnant kidney was performed after 2 weeks on HS, when BP levels were stable. Two weeks after RDNx, responses to clonidine were assessed. Finally, rats were again placed on LS until the end of the study. At weekly intervals throughout the study, the rats were placed in metabolism cages and urine was collected for 24 h in chilled containers at 4 °C (Tecniplast S.p.A., Italy). Sodium concentration was determined in urine using standard techniques, as previously described (Hildebrandt et al., 2016). Urinary protein concentration was determined based on the absorbance at 280 nm, using bovine serum albumin as the standard (NanoDrop, Wilmington, DE, United States). At the end of the study, the remnant kidneys and normal kidneys from seven additional age-matched rats housed in similar conditions were harvested after cervical dislocation and immediately flash frozen in liquid nitrogen. Renal tissue norepinephrine content was measured by HPLC by the Hormone Assay and Analytical Services Core at Vanderbilt University Medical Center (Goldstein et al., 1981).

Statistical Analysis

Results are expressed as mean ± SE. One-way repeated measures ANOVA followed by Dunnett's multiple comparison test was used (Prism 6.01, GraphPad Software) to compare the following experimental periods. Normal renal function: daily hemodynamic responses during varying salt intake (Days 5-24) were compared to NS control (Days 2-3), and weekly urine responses during varying salt intake were compared to NS control.

Responses to Varying Salt Intake With Normal Renal Function

As shown in Figure 1A, MAP remained at NS control levels (105 ± 2 mmHg) during both LS and HS, although MAP was 5 mmHg below control during LS recovery. This indicates little or no BP salt sensitivity when renal function was normal. While transient increases in HR (Figure 2A) occurred during the initial days of LS, LF power (Figure 3A) remained unchanged from control levels for the entire week of LS.
In contrast, HS led to significant and sustained increases of HR and estimated sympathetic activity, as reflected by high levels of LF power. After returning the rats to LS in preparation for unilateral nephrectomy and partial surgical ablation of the remaining kidney, all variables returned to values indistinguishable from control except MAP, as indicated above. Urinary sodium and volume excretion closely paralleled the varying salt intakes while urinary protein excretion remained unchanged (Table 1).

Responses to Salt Loading After Reduction of Renal Mass

In contrast to the absence of changes in MAP with varying salt intakes when renal function was normal, after RRM, MAP increased progressively during the first week of salt loading and plateaued during week 2 at levels ∼20 mmHg higher than LS baseline, clearly indicating salt sensitivity (Figure 1B). This hypertensive response was associated with a fall in HR (Figure 2B) and estimated sympathetic activity (Figure 3B), likely due to autonomic responses caused by pressure-induced baroreflex activation. Urinary sodium and volume excretion were significantly higher during the 2 weeks of HS compared to LS baseline while urinary protein excretion did not change significantly (Table 1).

Responses to Renal Denervation After Salt-Induced Hypertension

During the first day after RDNx, MAP fell dramatically by ∼24 mmHg, to levels similar to those recorded during LS intake (Figure 1B), while HR (Figure 2B) and estimated sympathetic activity (Figure 3B) increased concomitantly. However, these acute hemodynamic changes after RDNx were only transient, as MAP recovered back to hypertensive levels, while HR and LF power decreased toward pre-denervation levels within a week and were stable thereafter. Urinary sodium and protein excretion remained at levels similar to the first week of HS + RRM while urinary volume excretion increased by ∼35% after RDNx (Table 1).

Responses to Global Sympathoinhibition During Salt Loading Hypertension

As expected, administration of clonidine (300 µg/kg) led to sustained sympathoinhibition, as reflected by marked suppression of sympathetically mediated oscillations of BP in the LF band (Figure 3B) and significant bradycardia (Figure 2B). However, MAP increased slightly (Figure 1B), albeit not significantly. During the 3 days of clonidine dose tapering (150 µg/kg) and the following washout period, estimated sympathetic activity gradually returned to control levels while MAP remained stable at post-denervation hypertensive values. Rebound tachycardia occurred for the first 2 days after cessation of clonidine, before eventual recovery by the end of the washout period. No further changes occurred in urinary variables during clonidine administration when compared to the days preceding drug treatment (Table 1).

Responses to Salt Restriction After Reduced Renal Mass

During the initial 2 days of LS, there were precipitous reductions in MAP (Figure 1B). Along with the abrupt fall in MAP, estimated sympathetic activity and HR increased during this time (Figures 2B, 3B). By the end of the LS recovery period, MAP and HR were significantly lower than during the days preceding salt restriction while estimated sympathetic activity returned to baseline levels, once again showing salt sensitivity.
Renal Tissue Levels of Norepinephrine

Renal levels of norepinephrine measured at the end of the experiment (∼6 weeks after RDNx) were 21.7 ± 5.9 ng/g, approximately 90% lower than those of age-matched normal animals (231.1 ± 26.1 ng/g, P < 0.001), indicating completeness of RDNx.

DISCUSSION

The major findings of the present study are: (1) RDNx lowered BP transiently but not chronically in this model of salt-sensitive hypertension, indicating that the renal nerves have little role in the long-term maintenance of the hypertension. (2) Global suppression of sympathetic activity with clonidine did not attenuate the hypertension. (3) Indirect measures of sympathetic activity are consistent with these BP responses in that they indicate that sympathetic activity is actually suppressed during RRM and HS. Because there is independent activation of the sympathetic nervous system in many patients with CKD and RHT, sympathetic overactivity is believed to play a causative role in hypertension and disease progression (Converse et al., 1992;Schlaich et al., 2009;Grassi et al., 2011, 2015;De Beus et al., 2014). Therefore, inhibition of adrenergic drive is expected to be an effective therapy in these heterogeneous populations, especially when CKD and RHT coexist. However, not all patients with RHT have a satisfactory BP response to sympatholytic device-based therapies and the conditions for a favorable response are unclear, especially if CKD is present, as this has been an exclusion criterion in the large clinical trials (Symplicity HTN-2 Investigators Esler et al., 2010;Bisognano et al., 2011;Bhatt et al., 2014;Iliescu et al., 2014;Persu et al., 2014;Iliescu et al., 2015;De Leeuw et al., 2017). Furthermore, this understanding has been confounded by the multiple associated comorbidities and the variable antihypertensive medications given to these patients to control BP. These factors were not confounders in the present study, which was designed to evaluate the impact of renal-specific sympathoinhibition by RDNx in a model of salt-sensitive hypertension and CKD associated with loss of functional nephrons by surgical reduction of kidney mass. Experimental studies suggest an important role of heightened sympathetic activity in the development of hypertension following infarction of two-thirds of the remnant kidney (Campese and Kogosov, 1995;Augustyniak et al., 2010;Veiga et al., 2016) and are consistent with the frequently invoked contention that activation of afferent sensory nerves originating in the injured kidney in CKD triggers reflex increases in sympathetic outflow (Schlaich et al., 2009;De Beus et al., 2014;Grassi et al., 2015). Renal ischemia is the mainstay of the infarction model and its putative role as the trigger for sympathoexcitatory renal reflexes is further supported by the finding that even milder restriction of blood supply to the kidneys, as found in renovascular hypertension, results in sympathetic activation (Oliveira-Sales et al., 2014). However, renal ischemia may not be uniformly associated with CKD (Michaely et al., 2012;De Beus et al., 2014;Glassock and Rule, 2016) and it is not clear whether hypertensive patients with reduced renal function not accompanied by ischemia have sympathetic activation and stand to benefit from RDNx.
We used the surgical excision approach for RRM, which avoids direct injury to the remnant nephrons (Griffin et al., 1994), and denervated the remnant kidney after salt-sensitive hypertension had developed but within a time frame in which there is at best only moderate glomerulosclerosis (Koletsky, 1959;Griffin et al., 1994, 2000;Ibrahim and Hostetter, 1998). Thus, our data suggest that the condition of salt-sensitive hypertension associated with non-ischemic nephron loss may be a predictor of a "nonresponder" phenotype to renal-specific sympathoinhibition by RDNx. Because hypertensive patients and animals characterized by increased sympathetic activity have the most consistent antihypertensive response to RDNx, the most likely explanation for the failure of RDNx to chronically lower BP in the present study is the absence of sympathetic activation. In contrast to the findings in ischemic models of CKD, we found evidence of suppressed, rather than increased, estimated sympathetic outflow, as LF BP power and HR actually decreased following RRM and HS. Sympathoinhibition may be due to sustained baroreflex activation during established salt-sensitive hypertension, as found in other models of hypertension. Accordingly, we found no evidence for increased sympathetic activity in dogs subjected to the same model of RRM-salt-induced hypertension used in the present study (Hildebrandt et al., 2016). Furthermore, although we did not directly measure RSNA, the lack of sustained BP lowering after RDNx indicates that the prevailing level of renal sympathetic outflow makes a minor contribution, at best, to the maintenance of salt-sensitive hypertension in this non-ischemic model of RRM. These results are consistent with findings in experimental studies in which RSNA is not elevated, including studies reporting no sustained hypotensive response to RDNx in normotensive animals (Lohmeier et al., 2007;Kandlikar and Fink, 2011) and in animals in the early stages of salt-sensitive hypertension associated with mineralocorticoid excess and devoid of distinct renal parenchymal injury (Kandlikar and Fink, 2011). The hemodynamic responses on the days immediately following RDNx in the present study are consistent with the possibility that abolition of renal sympathetic outflow may still have acute effects to increase renal excretory function, even in the absence of overt sympathoexcitation. At that time, there was a pronounced fall in BP, consistent with increased fluid excretion and reduced extracellular fluid volume, although an effect of post-surgical stress could not be discounted. However, these hemodynamic responses did not persist chronically after RDNx. These acute and chronic changes in BP are consistent with observations in rats and dogs without increased RSNA subjected to RDNx and may reflect time-dependent antinatriuretic compensations that eventually oppose any expected sustained natriuretic effects due to loss of the tonic influence of the renal nerves on excretory function (Lohmeier et al., 2007;Kandlikar and Fink, 2011;Lohmeier and Iliescu, 2015). Thus, increased sympathetic activity appears to be an obligatory requirement for a long-term reduction in BP in response to RDNx. Activation of renal afferent nerve fibers has been implicated in the sympathetic activation and hypertension associated with kidney disease (Schlaich et al., 2009;De Beus et al., 2014;Grassi et al., 2015).
However, if RDNx had central sympathoinhibitory effects caused by interruption of afferent renal reflexes in the present study, vasodilatory and cardioinhibitory responses to sympathetic suppression would have occurred concomitantly with the acute reduction in BP after denervation. On the contrary, during the first days after RDNx, sympathetically driven oscillations in BP (as reflected by LF BP power) and HR increased markedly, presumably due to baroreceptor unloading. Thus, we found no evidence for a contribution of renal afferent reflexes to the modulation of sympathetic outflow in this model. A number of studies have shown that salt-sensitive hypertension is associated with increased sodium concentration in plasma and/or cerebrospinal fluid, and acute studies report sympathetic activation occurs when similar increases in sodium concentration are achieved at these sites and in areas of the brain that are critical determinants of central sympathetic outflow (Stocker et al., 2013, 2015;Kinsman et al., 2017). Furthermore, several acute studies show that the increased central sympathetic outflow induced by increased sodium concentration is confined to non-renal districts (Stocker et al., 2013, 2015;Kinsman et al., 2017). Taken together, these acute studies support the contention that increases in peripheral resistance, and not reductions in renal excretory function, are causal in the genesis of salt-sensitive hypertension, a hypothesis not shared by many investigators, but nevertheless a recurring subject of debate (Hall, 2016;Kurtz et al., 2016). We therefore considered whether sympathetic activation to non-renal territories contributes to the hypertension in the absence of the renal nerves. Since we considered the possibility that the spectral analytical methods used may have been insufficiently sensitive to detect increases in global sympathetic activity after RDNx, we determined the sympathetic and hemodynamic responses to central sympathoinhibition by clonidine, administered chronically in doses that previously have been shown to lower BP and sympathetic activity in several experimental models of sympathetically mediated hypertension (Abdel-Rahman, 1994;Thomas et al., 2003;El-Mas and Abdel-Rahman, 2010). As expected, clonidine clearly lowered post-denervation LF, indicating sustained sympathoinhibition. HR decreased as well but, most significantly, BP did not. This shows that in the absence of sympathoexcitation, global reductions in sympathetic activity to regions exclusive of the kidneys do not chronically lower BP. However, although we used doses of clonidine reported to reduce BP under conditions of sympathetic activation (Abdel-Rahman, 1994;Thomas et al., 2003;El-Mas and Abdel-Rahman, 2010), it should be noted that the central actions of clonidine to lower BP in the present study, in the absence of increased sympathetic activity, may have been masked by direct stimulation of vasoconstrictor peripheral alpha-2 adrenoreceptors (Francis, 1988;Lohmeier et al., 2009). These observations are especially relevant to clinical practice, as the BP response to clonidine has been proposed to determine the dependency of hypertension on elevated central sympathetic activity and predict the response to RDNx in RHT patients (Katholi et al., 2010).
Taken together, these data indicate that in the non-ischemic CKD model of salt-sensitive hypertension, global sympathetic activity is suppressed, likely via a baroreflex-mediated mechanism, thus limiting the antihypertensive efficacy of sympatholytic approaches.

Limitations

First, if progressive loss of function in hyperfiltering remnant nephrons were to lead to a time-dependent increase in BP, this could possibly account for the inability to demonstrate a sustained antihypertensive response to RDNx in the present study. However, this possibility is unlikely for the following reasons: (a) this study was performed during the early stage of salt-sensitive hypertension, well before the time frame necessary for appreciable hypertensive nephrosclerosis in this model (Griffin et al., 1994, 2000;Ibrahim and Hostetter, 1998); (b) there is no progressive decrease in glomerular filtration rate in remnant nephrons over the time course of this study (Griffin et al., 1994); (c) urinary protein excretion was unchanged throughout the 5 weeks of HS; and (d) BP promptly returned to control levels at the end of the experiment when HS was replaced by LS. Thus, based on the above facts, it is unlikely that the addition of a sham control group for RDNx would have significantly altered interpretation of the current results or would have been justified, based on experimental, ethical, and financial considerations. Second, another issue common to all RDNx studies is verifying that the extent of denervation is sufficient to abolish the functional responses to activation of renal sympathetic nerves, including during the chronic follow-up period, which may be associated with functional renal reinnervation. In the present study, these were unlikely confounders, as we used a standard surgical-pharmacological approach for extensive RDNx and showed that renal tissue norepinephrine levels at the end of the study were at an accepted level of suppression (a reduction of ∼90% relative to control animals) for interruption of functional responses. Furthermore, and consistent with our findings, morphological evidence of nerve regrowth is not accompanied by an increase of renal norepinephrine content in rats, at least until 12 weeks post-denervation, well beyond the timeframe of our study (Rodionova et al., 2016).

CONCLUSION

These findings in a classical model of kidney disease (Koletsky, 1959;Brenner, 1985;Griffin et al., 1994, 2000;Ibrahim and Hostetter, 1998;Hildebrandt et al., 2016) indicate that neither renal nor global sympathetic activation contributes to the salt-sensitive hypertension attributed solely to loss of functional nephrons. Furthermore, these data support the contention that increased sympathetic activity is an obligatory requirement for a sustained reduction in BP in response to RDNx. Since it is not feasible to expeditiously determine the level of sympathetic activation in a clinical setting, identification of the conditions responsible for sympathoactivation is crucial in the context of clinical trials showing that not all hypertensive patients have a favorable response to RDNx (Symplicity HTN-2 Investigators Esler et al., 2010;Bhatt et al., 2014;Persu et al., 2014;Iliescu et al., 2015). In this regard, a corollary to the present observations is that rather than simple reductions in renal function per se, signals from the injured kidney, such as ischemia, may underlie the reflex sympathetic activation, which promotes hypertension in CKD.
Therefore, systematic exploration of the degree of ischemic renal injury may provide a mechanistic basis for identification of CKD patients with sympathetically mediated hypertension, who are likely to benefit the most from sympatholytic therapies such as RDNx.

AUTHOR CONTRIBUTIONS

IT designed and performed the experiments, analyzed the data, and drafted the manuscript. TL designed the experiments, performed the analytical measurements, analyzed the data, and reviewed the manuscript. BA performed the analytical measurements. DP designed the experiments, reviewed the manuscript, and provided logistical support. DS designed the experiments, analyzed the data, reviewed the manuscript, and provided logistical support. RI designed the experiments, performed the experiments, analyzed the data, ensured the funding, and drafted and reviewed the final manuscript.